
DiskSpd, PowerShell and storage performance: measuring IOPs, throughput and latency for both local disks and SMB file shares

 

1. Introduction

 

I have been doing storage-related demos and publishing blogs with storage performance numbers for a while, and I commonly get questions such as “How do you run these tests?” or “What tools do you use to generate IOs for your demos?”. While it’s always best to use a real workload to test storage, sometimes that is not convenient. In the past, I frequently used and recommended a free tool from Microsoft to simulate IOs called SQLIO. However, there is a better tool, recently released by Microsoft, called DiskSpd. It’s a flexible tool that can simulate many different types of workloads, and you can run it against several configurations, from a physical host or a virtual machine, using all kinds of storage, including local disks, LUNs on a SAN, Storage Spaces or SMB file shares.

 

2. Download the tool

 

To get started, you need to download DiskSpd. You can get the tool from http://aka.ms/DiskSpd. It comes in the form of a ZIP file that you can open and copy to a local folder. There are actually 3 subfolders with different versions of the tool included in the ZIP file: amd64fre (for 64-bit systems), x86fre (for 32-bit systems) and armfre (for ARM systems). This allows you to run it on pretty much every Windows version, client or server.

In the end, you really only need one of the DiskSpd.exe files included in the ZIP (the one that best fits your platform). If you’re using a recent version of Windows Server, you probably want the version in the amd64fre folder. In this blog post, I assume that you copied the correct version of DiskSpd.exe to the C:\DiskSpd local folder.
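If you prefer to do that step from PowerShell, here’s a minimal sketch of the extraction. It assumes you already saved the download as C:\Downloads\DiskSpd.zip (a placeholder path) and that you have PowerShell 5.0 or later for Expand-Archive:

# Extract the ZIP and copy the 64-bit binary to C:\DiskSpd
# (C:\Downloads\DiskSpd.zip is just a placeholder for wherever you saved the download)
Expand-Archive -Path C:\Downloads\DiskSpd.zip -DestinationPath C:\Downloads\DiskSpd
New-Item -ItemType Directory -Path C:\DiskSpd -Force | Out-Null
Copy-Item C:\Downloads\DiskSpd\amd64fre\diskspd.exe C:\DiskSpd\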

If you're a developer, you might also want to take a look at the source code for DiskSpd. You can find that at https://github.com/microsoft/diskspd.

 

3. Run the tool

 

When you’re ready to start running DiskSpd, you want to make sure there’s nothing else running on the computer. Other running processes can interfere with your results by putting additional load on the CPU, network or storage. If the disk you are using is shared in any way (like a LUN on a SAN), you want to make sure that nothing else is competing with your testing. If you’re using any form of IP storage (iSCSI LUN, SMB file share), you want to make sure that you’re not running on a network congested with other kinds of traffic.
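If you want a quick sanity check before you start, here’s a minimal sketch that samples overall CPU and physical disk activity for a few seconds. The counter paths assume an English-language Windows installation:

# Sample total CPU and disk transfers a few times to confirm the system is mostly idle
Get-Counter '\Processor(_Total)\% Processor Time','\PhysicalDisk(_Total)\Disk Transfers/sec' `
    -SampleInterval 2 -MaxSamples 3 |
    ForEach-Object { $_.CounterSamples | Select-Object Path, CookedValue }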

WARNING: You could be generating a whole lot of disk IO, network traffic and/or CPU load when you run DiskSpd. If you’re in a shared environment, you might want to talk to your administrator and ask permission. This could generate a whole lot of load and disturb anyone else using other VMs in the same host, other LUNs on the same SAN or other traffic on the same network.

WARNING: If you use DiskSpd to write data to a physical disk, you might destroy the data on that disk. DiskSpd does not ask for confirmation. It assumes you know what you are doing. Be careful when using physical disks (as opposed to files) with DiskSpd.

NOTE: You should run DiskSpd from an elevated command prompt. This makes sure file creation is fast; otherwise, DiskSpd falls back to a slower method of creating files. In the examples below, which use a 1TB file, that could take a very long time.
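If you’re not sure whether your prompt is elevated, here’s a quick convenience check you can run first (this is just a sketch, not part of DiskSpd itself):

# Returns True if the current PowerShell session is running elevated
$identity  = [Security.Principal.WindowsIdentity]::GetCurrent()
$principal = New-Object Security.Principal.WindowsPrincipal($identity)
$principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)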

From a regular command prompt or a PowerShell prompt, issue a single command line to start getting some performance results. Here’s a first example, using 8 threads of execution, each generating 8 outstanding random 8KB unbuffered read IOs:

PS C:\DiskSpd> C:\DiskSpd\diskspd.exe -c1000G -d10 -r -w0 -t8 -o8 -b8K -h -L X:\testfile.dat

Command Line: C:\DiskSpd\diskspd.exe -c1000G -d10 -r -w0 -t8 -o8 -b8K -h -L X:\testfile.dat

Input parameters:

        timespan:   1
        -------------
        duration: 10s
        warm up time: 5s
        cool down time: 0s
        measuring latency
        random seed: 0
        path: 'X:\testfile.dat'
                think time: 0ms
                burst size: 0
                software and hardware cache disabled
                performing read test
                block size: 8192
                using random I/O (alignment: 8192)
                number of outstanding I/O operations: 8
                stride size: 8192
                thread stride size: 0
                threads per file: 8
                using I/O Completion Ports
                IO priority: normal

Results for timespan 1:
*******************************************************************************

actual test time:       10.01s
thread count:           8
proc count:             4

CPU |  Usage |  User  |  Kernel |  Idle
-------------------------------------------
   0|   5.31%|   0.16%|    5.15%|  94.76%
   1|   1.87%|   0.47%|    1.40%|  98.19%
   2|   1.25%|   0.16%|    1.09%|  98.82%
   3|   2.97%|   0.47%|    2.50%|  97.10%
-------------------------------------------
avg.|   2.85%|   0.31%|    2.54%|  97.22%

Total IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |        20480000 |         2500 |       1.95 |     249.77 |   32.502 |    55.200 | X:\testfile.dat (1000GB)
     1 |        20635648 |         2519 |       1.97 |     251.67 |   32.146 |    54.405 | X:\testfile.dat (1000GB)
     2 |        21094400 |         2575 |       2.01 |     257.26 |   31.412 |    53.410 | X:\testfile.dat (1000GB)
     3 |        20553728 |         2509 |       1.96 |     250.67 |   32.343 |    56.548 | X:\testfile.dat (1000GB)
     4 |        20365312 |         2486 |       1.94 |     248.37 |   32.599 |    54.448 | X:\testfile.dat (1000GB)
     5 |        20160512 |         2461 |       1.92 |     245.87 |   32.982 |    54.838 | X:\testfile.dat (1000GB)
     6 |        19972096 |         2438 |       1.90 |     243.58 |   33.293 |    55.178 | X:\testfile.dat (1000GB)
     7 |        19578880 |         2390 |       1.87 |     238.78 |   33.848 |    58.472 | X:\testfile.dat (1000GB)
-----------------------------------------------------------------------------------------------------
total:         162840576 |        19878 |      15.52 |    1985.97 |   32.626 |    55.312

Read IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |        20480000 |         2500 |       1.95 |     249.77 |   32.502 |    55.200 | X:\testfile.dat (1000GB)
     1 |        20635648 |         2519 |       1.97 |     251.67 |   32.146 |    54.405 | X:\testfile.dat (1000GB)
     2 |        21094400 |         2575 |       2.01 |     257.26 |   31.412 |    53.410 | X:\testfile.dat (1000GB)
     3 |        20553728 |         2509 |       1.96 |     250.67 |   32.343 |    56.548 | X:\testfile.dat (1000GB)
     4 |        20365312 |         2486 |       1.94 |     248.37 |   32.599 |    54.448 | X:\testfile.dat (1000GB)
     5 |        20160512 |         2461 |       1.92 |     245.87 |   32.982 |    54.838 | X:\testfile.dat (1000GB)
     6 |        19972096 |         2438 |       1.90 |     243.58 |   33.293 |    55.178 | X:\testfile.dat (1000GB)
     7 |        19578880 |         2390 |       1.87 |     238.78 |   33.848 |    58.472 | X:\testfile.dat (1000GB)
-----------------------------------------------------------------------------------------------------
total:         162840576 |        19878 |      15.52 |    1985.97 |   32.626 |    55.312

Write IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | X:\testfile.dat (1000GB)
     1 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | X:\testfile.dat (1000GB)
     2 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | X:\testfile.dat (1000GB)
     3 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | X:\testfile.dat (1000GB)
     4 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | X:\testfile.dat (1000GB)
     5 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | X:\testfile.dat (1000GB)
     6 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | X:\testfile.dat (1000GB)
     7 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | X:\testfile.dat (1000GB)
-----------------------------------------------------------------------------------------------------
total:                 0 |            0 |       0.00 |       0.00 |    0.000 |       N/A

  %-ile |  Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
    min |      3.360 |        N/A |      3.360
   25th |      5.031 |        N/A |      5.031
   50th |      8.309 |        N/A |      8.309
   75th |     12.630 |        N/A |     12.630
   90th |    148.845 |        N/A |    148.845
   95th |    160.892 |        N/A |    160.892
   99th |    172.259 |        N/A |    172.259
3-nines |    254.020 |        N/A |    254.020
4-nines |    613.602 |        N/A |    613.602
5-nines |    823.760 |        N/A |    823.760
6-nines |    823.760 |        N/A |    823.760
7-nines |    823.760 |        N/A |    823.760
8-nines |    823.760 |        N/A |    823.760
    max |    823.760 |        N/A |    823.760

NOTE: -w0 is the default, so you could skip it. I’m keeping it here to be explicit about the fact that we’re doing all reads.

For this specific disk, I am getting 1,985 IOPS, 15.52 MB/sec of average throughput and 32.626 milliseconds of average latency. I’m getting all that information from the “total” line of the Total IO section above.

NOTE: The -h parameter makes sure we’re not using any hardware caching or software caching.

That average latency looks high for small IOs (even though this is coming from a set of HDDs), but we’ll examine that later.

Now let’s try another command, using sequential 512KB reads on that same file. I’ll use 2 threads with 8 outstanding IOs per thread this time:

PS C:\DiskSpd> C:\DiskSpd\diskspd.exe -c1000G -d10 -w0 -t2 -o8 -b512K -h -L X:\testfile.dat

Command Line: C:\DiskSpd\diskspd.exe -c1000G -d10 -w0 -t2 -o8 -b512K -h -L X:\testfile.dat

Input parameters:

        timespan:   1
        -------------
        duration: 10s
        warm up time: 5s
        cool down time: 0s
        measuring latency
        random seed: 0
        path: 'X:\testfile.dat'
                think time: 0ms
                burst size: 0
                software and hardware cache disabled
                performing read test
                block size: 524288
                number of outstanding I/O operations: 8
                stride size: 524288
                thread stride size: 0
                threads per file: 2
                using I/O Completion Ports
                IO priority: normal

Results for timespan 1:
*******************************************************************************

actual test time:       10.00s
thread count:           2
proc count:             4

CPU |  Usage |  User  |  Kernel |  Idle
-------------------------------------------
   0|   4.53%|   0.31%|    4.22%|  95.44%
   1|   1.25%|   0.16%|    1.09%|  98.72%
   2|   0.00%|   0.00%|    0.00%|  99.97%
   3|   0.00%|   0.00%|    0.00%|  99.97%
-------------------------------------------
avg.|   1.44%|   0.12%|    1.33%|  98.52%

Total IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |       886046720 |         1690 |      84.47 |     168.95 |   46.749 |    47.545 | X:\testfile.dat (1000GB)
     1 |       851443712 |         1624 |      81.17 |     162.35 |   49.497 |    54.084 | X:\testfile.dat (1000GB)
-----------------------------------------------------------------------------------------------------
total:        1737490432 |         3314 |     165.65 |     331.29 |   48.095 |    50.873

Read IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |       886046720 |         1690 |      84.47 |     168.95 |   46.749 |    47.545 | X:\testfile.dat (1000GB)
     1 |       851443712 |         1624 |      81.17 |     162.35 |   49.497 |    54.084 | X:\testfile.dat (1000GB)
-----------------------------------------------------------------------------------------------------
total:        1737490432 |         3314 |     165.65 |     331.29 |   48.095 |    50.873

Write IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | X:\testfile.dat (1000GB)
     1 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | X:\testfile.dat (1000GB)
-----------------------------------------------------------------------------------------------------
total:                 0 |            0 |       0.00 |       0.00 |    0.000 |       N/A

  %-ile |  Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
    min |      9.406 |        N/A |      9.406
   25th |     31.087 |        N/A |     31.087
   50th |     38.397 |        N/A |     38.397
   75th |     47.216 |        N/A |     47.216
   90th |     64.783 |        N/A |     64.783
   95th |     90.786 |        N/A |     90.786
   99th |    356.669 |        N/A |    356.669
3-nines |    452.198 |        N/A |    452.198
4-nines |    686.307 |        N/A |    686.307
5-nines |    686.307 |        N/A |    686.307
6-nines |    686.307 |        N/A |    686.307
7-nines |    686.307 |        N/A |    686.307
8-nines |    686.307 |        N/A |    686.307
    max |    686.307 |        N/A |    686.307

With that configuration and parameters, I got about 165.65 MB/sec of throughput with an average latency of 48.095 milliseconds per IO. Again, that latency sounds high even for 512KB IOs and we’ll dive into that topic later on.

 

5. Understand the parameters used

 

Now let’s inspect the parameters on those DiskSpd command lines. I know it’s a bit overwhelming at first, but you will get used to it. And keep in mind that, for DiskSpd parameters, lowercase and uppercase mean different things, so be very careful.

Here is the explanation for the parameters used above:

PS C:\> C:\DiskSpd\diskspd.exe -c1G -d10 -r -w0 -t8 -o8 -b8K -h -L X:\testfile.dat

Parameter | Description | Notes
----------|-------------|------
-c | Size of the file used. | Specify the number of bytes or use suffixes like K, M or G (KB, MB or GB). You should use a large size (ideally the whole disk) for HDDs, since small files will show unrealistically high performance (short stroking).
-d | Duration of the test, in seconds. | You can use 10 seconds for a quick test. For any serious work, use at least 60 seconds.
-w | Percentage of writes. | 0 means all reads, 100 means all writes, 30 means 30% writes and 70% reads. Be careful when running write tests against SSDs for a long time, since they can wear out the drive. The default is 0.
-r | Random IOs. | Random is common for OLTP workloads. Sequential (when -r is not specified) is common for Reporting and Data Warehousing.
-b | Size of each IO. | Specify the number of bytes or use suffixes like K, M or G (KB, MB or GB). 8K is the typical IO size for OLTP workloads. 512K is common for Reporting and Data Warehousing.
-t | Threads per file. | For large IOs, a couple is enough (sometimes just one). For small IOs, you could need as many as the number of CPU cores.
-o | Outstanding IOs, or queue depth (per thread). | In RAID, SAN or Storage Spaces setups, a single volume can be made up of multiple physical disks. You can start with twice the number of physical disks used by the volume where the file sits. A higher number will increase your latency, but can get you more IOPs and throughput.
-L | Capture latency information. | Always important, so you know the average time to complete an IO, end-to-end.
-h | Disable hardware and software caching. | No hardware or software buffering. Buffering plus a small file size would give you the performance of the memory, not the disk.

 

For OLTP workloads, I commonly start with 8KB random IOs, 8 threads, 16 outstanding per thread. 8KB is the size of the page used by SQL Server for its data files. In parameter form, that would be: -r -b8K -t8 -o16. For reporting or OLAP workloads with large IO, I commonly start with 512KB IOs, 2 threads and 16 outstanding per thread. 512KB is a common IO size when SQL Server loads a batch of 64 data pages when using the read-ahead technique for a table scan. In parameter form, that would be: -b512K -t2 -o16. These numbers will need to be adjusted if your machine has many cores and/or if your volume is backed by a large number of physical disks.
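To make those starting points concrete, here are two example command lines built from the parameters above. The file size, duration and target path are placeholders that you should adjust for your own environment:

# OLTP-style starting point: 8KB random reads, 8 threads, 16 outstanding IOs per thread
C:\DiskSpd\diskspd.exe -c100G -d60 -r -w0 -t8 -o16 -b8K -h -L X:\testfile.dat

# Reporting/OLAP-style starting point: 512KB sequential reads, 2 threads, 16 outstanding IOs per thread
C:\DiskSpd\diskspd.exe -c100G -d60 -w0 -t2 -o16 -b512K -h -L X:\testfile.dat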

If you’re curious, here are more details about parameters for DiskSpd, coming from the tool’s help itself:

PS C:\> C:\DiskSpd\diskspd.exe

Usage: C:\DiskSpd\diskspd.exe [options] target1 [ target2 [ target3 ...] ]
version 2.0.12 (2014/09/17)

Available targets:
       file_path
       #<physical drive number>
       <partition_drive_letter>:

Available options:
  -?                 display usage information
  -a#[,#[...]]       advanced CPU affinity - affinitize threads to CPUs provided after -a
                       in a round-robin manner within current KGroup (CPU count starts with 0); the same CPU
                       can be listed more than once and the number of CPUs can be different
                       than the number of files or threads (cannot be used with -n)
  -ag                group affinity - affinitize threads in a round-robin manner across KGroups
  -b<size>[K|M|G]       block size in bytes/KB/MB/GB [default=64K]
  -B<offs>[K|M|G|b]     base file offset in bytes/KB/MB/GB/blocks [default=0]
                          (offset from the beginning of the file)
  -c<size>[K|M|G|b]     create files of the given size.
                          Size can be stated in bytes/KB/MB/GB/blocks
  -C<seconds>           cool down time - duration of the test after measurements finished [default=0s].
  -D<milliseconds>      Print IOPS standard deviations. The deviations are calculated for samples of <milliseconds> duration.
                          <milliseconds> is given in milliseconds and the default value is 1000.
  -d<seconds>           duration (in seconds) to run test [default=10s]
  -f<size>[K|M|G|b]     file size - this parameter can be used to use only the part of the file/disk/partition
                          for example to test only the first sectors of disk
  -fr                   open file with the FILE_FLAG_RANDOM_ACCESS hint
  -fs                   open file with the FILE_FLAG_SEQUENTIAL_SCAN hint
  -F<count>             total number of threads (cannot be used with -t)
  -g<bytes per ms>      throughput per thread is throttled to given bytes per millisecond
                          note that this cannot be specified when using completion routines
  -h                    disable both software and hardware caching
  -i<count>             number of IOs (burst size) before thinking. must be specified with -j
  -j<milliseconds>      time to think in ms before issuing a burst of IOs (burst size). must be specified with -i
  -I<priority>          Set IO priority to <priority>. Available values are: 1-very low, 2-low, 3-normal (default)
  -l                    Use large pages for IO buffers
  -L                    measure latency statistics
  -n                    disable affinity (cannot be used with -a)
  -o<count>             number of overlapped I/O requests per file per thread
                          (1=synchronous I/O, unless more than 1 thread is specified with -F)
                          [default=2]
  -p                    start async (overlapped) I/O operations with the same offset
                          (makes sense only with -o2 or greater)
  -P<count>             enable printing a progress dot after each <count> completed I/O operations
                          (counted separately by each thread) [default count=65536]
  -r<alignment>[K|M|G|b]  random I/O aligned to <alignment> bytes (doesn't make sense with -s).
                          <alignment> can be stated in bytes/KB/MB/GB/blocks
                          [default access=sequential, default alignment=block size]
  -R<text|xml>          output format. Default is text.
  -s<size>[K|M|G|b]     stride size (offset between starting positions of subsequent I/O operations)
  -S                    disable OS caching
  -t<count>             number of threads per file (cannot be used with -F)
  -T<offs>[K|M|G|b]     stride between I/O operations performed on the same file by different threads
                          [default=0] (starting offset = base file offset + (thread number * <offs>))
                          it makes sense only with -t or -F
  -v                    verbose mode
  -w<percentage>        percentage of write requests (-w and -w0 are equivalent).
                          absence of this switch indicates 100% reads
                          IMPORTANT: Your data will be destroyed without a warning
  -W<seconds>           warm up time - duration of the test before measurements start [default=5s].
  -x                    use completion routines instead of I/O Completion Ports
  -X<filepath>          use an XML file for configuring the workload. Cannot be used with other parameters.
  -z                    set random seed [default=0 if parameter not provided, GetTickCount() if value not provided]

Write buffers:
  -Z                        zero buffers used for write tests
  -Z<size>[K|M|G|b]         use a global buffer filled with random data as a source for write operations.
  -Z<size>[K|M|G|b],<file>  use a global buffer filled with data from <file> as a source for write operations.
                              If <file> is smaller than <size>, its content will be repeated multiple times in the buffer.

  By default, the write buffers are filled with a repeating pattern (0, 1, 2, ..., 255, 0, 1, ...)

Synchronization:
  -ys<eventname>     signals event <eventname> before starting the actual run (no warmup)
                       (creates a notification event if <eventname> does not exist)
  -yf<eventname>     signals event <eventname> after the actual run finishes (no cooldown)
                       (creates a notification event if <eventname> does not exist)
  -yr<eventname>     waits on event <eventname> before starting the run (including warmup)
                       (creates a notification event if <eventname> does not exist)
  -yp<eventname>     allows to stop the run when event <eventname> is set; it also binds CTRL+C to this event
                       (creates a notification event if <eventname> does not exist)
  -ye<eventname>     sets event <eventname> and quits

Event Tracing:
  -ep                   use paged memory for NT Kernel Logger (by default it uses non-paged memory)
  -eq                   use perf timer
  -es                   use system timer (default)
  -ec                   use cycle count
  -ePROCESS             process start & end
  -eTHREAD              thread start & end
  -eIMAGE_LOAD          image load
  -eDISK_IO             physical disk IO
  -eMEMORY_PAGE_FAULTS  all page faults
  -eMEMORY_HARD_FAULTS  hard faults only
  -eNETWORK             TCP/IP, UDP/IP send & receive
  -eREGISTRY            registry calls

Examples:

Create 8192KB file and run read test on it for 1 second:

  C:\DiskSpd\diskspd.exe -c8192K -d1 testfile.dat

Set block size to 4KB, create 2 threads per file, 32 overlapped (outstanding)
I/O operations per thread, disable all caching mechanisms and run block-aligned random
access read test lasting 10 seconds:

  C:\DiskSpd\diskspd.exe -b4K -t2 -r -o32 -d10 -h testfile.dat

Create two 1GB files, set block size to 4KB, create 2 threads per file, affinitize threads
to CPUs 0 and 1 (each file will have threads affinitized to both CPUs) and run read test
lasting 10 seconds:

  C:\DiskSpd\diskspd.exe -c1G -b4K -t2 -d10 -a0,1 testfile1.dat testfile2.dat

 

6. Tune the parameters for large sequential IO

 

Now that you have the basics down, we can spend some time looking at how you can refine the number of threads and queue depth for your specific configuration. This might also help us figure out why we got those higher-than-expected latency numbers in the initial runs. You basically need to experiment with the -t and -o parameters until you find the combination that gives you the best results. You first want to find the latency for a given system with a queue depth of 1. Then you can increase the queue depth and check what happens in terms of IOPs, throughput and latency.

Keep in mind that many logical (and “physical”) disks may have multiple IO paths.  That’s the case in the examples mentioned here, but also true for most cloud storage systems and some physical drives (especially SSDs).  In general, increasing outstanding IOs will have minimal impact on latency until the IO paths start to saturate. Then latency will start to increase dramatically.

Here’s a sample script that measures queue depth from 1 to 16, parsing the output of DiskSpd to give us just the information we need. The results for each DiskSpd run are stored in the $result variable and parsed to show IOPs, throughput, latency and CPU usage on a single line. There is some fun string parsing going on there, first to find the line that contains the information we’re looking for, and then using the Split() function to break that line into the individual metrics we need. DiskSpd has the -Rxml option to output XML instead of text, but for me it was easier to parse the text.

1..16 | % { 
   $param = "-o $_"
   $result = C:\DiskSpd\diskspd.exe -c1000G -d10 -w0 -t1 $param -b512K -h -L X:\testfile.dat
   foreach ($line in $result) {if ($line -like "total:*") { $total=$line; break } }
   foreach ($line in $result) {if ($line -like "avg.*") { $avg=$line; break } }
   $mbps = $total.Split("|")[2].Trim()
   $iops = $total.Split("|")[3].Trim()
   $latency = $total.Split("|")[4].Trim()
   $cpu = $avg.Split("|")[1].Trim()
   "Param $param, $iops iops, $mbps MB/sec, $latency ms, $cpu CPU"
}

Here is the output:

Param -o 1, 61.01 iops, 30.50 MB/sec, 16.355 ms, 0.20% CPU
Param -o 2, 140.99 iops, 70.50 MB/sec, 14.143 ms, 0.31% CPU
Param -o 3, 189.00 iops, 94.50 MB/sec, 15.855 ms, 0.47% CPU
Param -o 4, 248.20 iops, 124.10 MB/sec, 16.095 ms, 0.47% CPU
Param -o 5, 286.45 iops, 143.23 MB/sec, 17.431 ms, 0.94% CPU
Param -o 6, 316.05 iops, 158.02 MB/sec, 19.052 ms, 0.78% CPU
Param -o 7, 332.51 iops, 166.25 MB/sec, 21.059 ms, 0.66% CPU
Param -o 8, 336.16 iops, 168.08 MB/sec, 23.875 ms, 0.82% CPU
Param -o 9, 339.95 iops, 169.97 MB/sec, 26.482 ms, 0.55% CPU
Param -o 10, 340.93 iops, 170.46 MB/sec, 29.373 ms, 0.70% CPU
Param -o 11, 338.58 iops, 169.29 MB/sec, 32.567 ms, 0.55% CPU
Param -o 12, 344.98 iops, 172.49 MB/sec, 34.675 ms, 1.09% CPU
Param -o 13, 332.09 iops, 166.05 MB/sec, 39.190 ms, 0.82% CPU
Param -o 14, 341.05 iops, 170.52 MB/sec, 41.127 ms, 1.02% CPU
Param -o 15, 339.73 iops, 169.86 MB/sec, 44.037 ms, 0.39% CPU
Param -o 16, 335.43 iops, 167.72 MB/sec, 47.594 ms, 0.86% CPU

For large sequential IOs, we typically want to watch the throughput (in MB/sec). There is a significant increase until we reach 6 outstanding IOs, which gives us around 158 MB/sec with 19 milliseconds of latency per IO. You can clearly see that if you don’t queue up some IO, you’re not extracting the full throughput of this disk, since the disks sit idle waiting for more work while the data is being processed. If we queue more than 6 IOs, we really don’t get much more throughput; we only increase the latency, as the disk subsystem is unable to deliver much more. You can queue up 10 IOs to reach 170 MB/sec, but that increases latency to nearly 30 milliseconds (a latency increase of about 50% for a gain of only 8% in throughput).

At this point, it is clear that using multiple outstanding IOs is a great idea. However, using more outstanding IOs than your target application can drive will be misleading, since you’ll measure throughput the application isn’t architected to achieve. Using fewer outstanding IOs than the application can drive may lead to the incorrect conclusion that the disk can’t deliver the necessary throughput, because the full parallelism of the disk isn’t being utilized. You should try to find out what your specific application does, to make sure that your DiskSpd simulation is a good approximation of your real workload.

So, looking at the data above, we can conclude that 6 outstanding IOs is a reasonable number for this storage subsystem. Now let’s see if we can gain anything by spreading the work across multiple threads. What we want to avoid here is bottlenecking on a single CPU core, which is very common when doing lots and lots of IO. A simple experiment is to double the number of threads while reducing the queue depth by half. Let’s now try 2 threads instead of 1.

1..8 | % { 
   $param = "-o $_"
   $result = C:\DiskSpd\diskspd.exe -c1000G -d10 -w0 -t2 $param -b512K -h -L X:\testfile.dat
   foreach ($line in $result) {if ($line -like "total:*") { $total=$line; break } }
   foreach ($line in $result) {if ($line -like "avg.*") { $avg=$line; break } }
   $mbps = $total.Split("|")[2].Trim()
   $iops = $total.Split("|")[3].Trim()
   $latency = $total.Split("|")[4].Trim()
   $cpu = $avg.Split("|")[1].Trim()
   "Param –t2 $param, $iops iops, $mbps MB/sec, $latency ms, $cpu CPU"
}

Here is the output with 2 threads and a queue depth of 1:

Param -t2 -o 1, 162.01 iops, 81.01 MB/sec, 12.500 ms, 0.35% CPU
Param -t2 -o 2, 250.47 iops, 125.24 MB/sec, 15.956 ms, 0.82% CPU
Param -t2 -o 3, 312.52 iops, 156.26 MB/sec, 19.137 ms, 0.98% CPU
Param -t2 -o 4, 331.28 iops, 165.64 MB/sec, 24.136 ms, 0.82% CPU
Param -t2 -o 5, 342.45 iops, 171.23 MB/sec, 29.180 ms, 0.74% CPU
Param -t2 -o 6, 340.59 iops, 170.30 MB/sec, 35.391 ms, 1.17% CPU
Param -t2 -o 7, 337.75 iops, 168.87 MB/sec, 41.400 ms, 1.05% CPU
Param -t2 -o 8, 336.15 iops, 168.08 MB/sec, 47.859 ms, 0.90% CPU

Well, it seems like we were not bottlenecked on CPU after all (we sort of knew that already). So, with 2 threads and 3 outstanding IOs per thread, we effectively get 6 total outstanding IOs, and the performance numbers match what we got with 1 thread and a queue depth of 6, both in throughput and in latency. That pretty much proves that 1 thread was enough for this kind of configuration and workload, and that increasing the number of threads yields no gain. This is not surprising for large IO. However, for smaller IO sizes, the CPU is more taxed and we might hit a single-core bottleneck. We can look at the full DiskSpd output to confirm that no single core is pegged with 1 thread:

PS C:\DiskSpd> C:\DiskSpd\diskspd.exe -c1000G -d10 -w0 -t1 -o6 -b512K -h -L X:\testfile.dat

Command Line: C:\DiskSpd\diskspd.exe -c1000G -d10 -w0 -t1 -o6 -b512K -h -L X:\testfile.dat

Input parameters:

        timespan:   1
        -------------
        duration: 10s
        warm up time: 5s
        cool down time: 0s
        measuring latency
        random seed: 0
        path: 'X:\testfile.dat'
                think time: 0ms
                burst size: 0
                software and hardware cache disabled
                performing read test
                block size: 524288
                number of outstanding I/O operations: 6
                stride size: 524288
                thread stride size: 0
                threads per file: 1
                using I/O Completion Ports
                IO priority: normal

Results for timespan 1:
*******************************************************************************

actual test time:       10.00s
thread count:           1
proc count:             4

CPU |  Usage |  User  |  Kernel |  Idle
-------------------------------------------
   0|   2.03%|   0.16%|    1.87%|  97.96%
   1|   0.00%|   0.00%|    0.00%|  99.84%
   2|   0.00%|   0.00%|    0.00%| 100.15%
   3|   0.00%|   0.00%|    0.00%| 100.31%
-------------------------------------------
avg.|   0.51%|   0.04%|    0.47%|  99.56%

Total IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |      1664614400 |         3175 |     158.74 |     317.48 |   18.853 |    21.943 | X:\testfile.dat (1000GB)
-----------------------------------------------------------------------------------------------------
total:        1664614400 |         3175 |     158.74 |     317.48 |   18.853 |    21.943

Read IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |      1664614400 |         3175 |     158.74 |     317.48 |   18.853 |    21.943 | X:\testfile.dat (1000GB)
-----------------------------------------------------------------------------------------------------
total:        1664614400 |         3175 |     158.74 |     317.48 |   18.853 |    21.943

Write IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | X:\testfile.dat (1000GB)
-----------------------------------------------------------------------------------------------------
total:                 0 |            0 |       0.00 |       0.00 |    0.000 |       N/A

  %-ile |  Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
    min |      7.743 |        N/A |      7.743
   25th |     13.151 |        N/A |     13.151
   50th |     15.301 |        N/A |     15.301
   75th |     17.777 |        N/A |     17.777
   90th |     22.027 |        N/A |     22.027
   95th |     29.791 |        N/A |     29.791
   99th |    102.261 |        N/A |    102.261
3-nines |    346.305 |        N/A |    346.305
4-nines |    437.603 |        N/A |    437.603
5-nines |    437.603 |        N/A |    437.603
6-nines |    437.603 |        N/A |    437.603
7-nines |    437.603 |        N/A |    437.603
8-nines |    437.603 |        N/A |    437.603
    max |    437.603 |        N/A |    437.603

This confirms we’re not bottlenecked on any of the CPU cores. You can see above that the busiest CPU core is only at around 2% usage.

 

7. Tune queue depth for small random IOs

 

Performing the same tuning exercise for small random IOs is typically more interesting, especially when you have fast storage. For this one, we’ll continue to use the same PowerShell script. However, for small IOs, we’ll try a larger range of queue depths. This might take a while to run, though… Here’s a script that you can run from a PowerShell prompt, trying out many different queue depths:

1..24 | % { 
   $param = "-o $_"
   $result = C:\DiskSpd\DiskSpd.exe -c1000G -d10 -w0 -r -b8k $param -t1 -h -L X:\testfile.dat
   foreach ($line in $result) {if ($line -like "total:*") { $total=$line; break } }
   foreach ($line in $result) {if ($line -like "avg.*") { $avg=$line; break } }
   $mbps = $total.Split("|")[2].Trim()
   $iops = $total.Split("|")[3].Trim()
   $latency = $total.Split("|")[4].Trim()
   $cpu = $avg.Split("|")[1].Trim()  
   "Param $param, $iops iops, $mbps MB/sec, $latency ms, $cpu CPU"
}

As you can see, the script runs DiskSpd 24 times, using different queue depths. Here’s the sample output:

Param -o 1, 191.06 iops, 1.49 MB/sec, 5.222 ms, 0.27% CPU
Param -o 2, 361.10 iops, 2.82 MB/sec, 5.530 ms, 0.82% CPU
Param -o 3, 627.30 iops, 4.90 MB/sec, 4.737 ms, 1.02% CPU
Param -o 4, 773.70 iops, 6.04 MB/sec, 5.164 ms, 1.02% CPU
Param -o 5, 1030.65 iops, 8.05 MB/sec, 4.840 ms, 0.86% CPU
Param -o 6, 1191.29 iops, 9.31 MB/sec, 5.030 ms, 1.33% CPU
Param -o 7, 1357.42 iops, 10.60 MB/sec, 5.152 ms, 1.13% CPU
Param -o 8, 1674.22 iops, 13.08 MB/sec, 4.778 ms, 2.07% CPU
Param -o 9, 1895.25 iops, 14.81 MB/sec, 4.745 ms, 1.60% CPU
Param -o 10, 2097.54 iops, 16.39 MB/sec, 4.768 ms, 1.95% CPU
Param -o 11, 2014.49 iops, 15.74 MB/sec, 5.467 ms, 2.03% CPU
Param -o 12, 1981.64 iops, 15.48 MB/sec, 6.055 ms, 1.84% CPU
Param -o 13, 2000.11 iops, 15.63 MB/sec, 6.517 ms, 1.72% CPU
Param -o 14, 1968.79 iops, 15.38 MB/sec, 7.113 ms, 1.79% CPU
Param -o 15, 1970.69 iops, 15.40 MB/sec, 7.646 ms, 2.34% CPU
Param -o 16, 1983.77 iops, 15.50 MB/sec, 8.069 ms, 1.80% CPU
Param -o 17, 1976.84 iops, 15.44 MB/sec, 8.599 ms, 1.56% CPU
Param -o 18, 1982.57 iops, 15.49 MB/sec, 9.049 ms, 2.11% CPU
Param -o 19, 1993.13 iops, 15.57 MB/sec, 9.577 ms, 2.30% CPU
Param -o 20, 1967.71 iops, 15.37 MB/sec, 10.121 ms, 2.30% CPU
Param -o 21, 1964.76 iops, 15.35 MB/sec, 10.699 ms, 1.29% CPU
Param -o 22, 1984.55 iops, 15.50 MB/sec, 11.099 ms, 1.76% CPU
Param -o 23, 1965.34 iops, 15.35 MB/sec, 11.658 ms, 1.37% CPU
Param -o 24, 1983.87 iops, 15.50 MB/sec, 12.161 ms, 1.48% CPU

As you can see, for small IOs, we got consistently better performance as we increased the queue depth for the first few runs. After a certain number of outstanding IOs, adding more started giving us very little improvement until things flatten out completely. As we kept adding more queue depth, all we had was more latency with no additional benefit in IOPS or throughput. If you have a better storage subsystem, you might need to try even higher queue depths. If you don’t hit an IOPS plateau with increasing average latency, you did not queue enough IO to fully exploit the capabilities of your storage subsystem.

So, in this setup, we seem to reach a limit at around 10 outstanding IOs and latency starts to ramp up more dramatically after that. Let’s see the full output for queue depth of 10 to get a good sense:

PS C:\DiskSpd> C:\DiskSpd\DiskSpd.exe -c1000G -d10 -w0 -r -b8k -o10 -t1 -h -L X:\testfile.dat

Command Line: C:\DiskSpd\DiskSpd.exe -c1000G -d10 -w0 -r -b8k -o10 -t1 -h -L X:\testfile.dat

Input parameters:

        timespan:   1
        -------------
        duration: 10s
        warm up time: 5s
        cool down time: 0s
        measuring latency
        random seed: 0
        path: 'X:\testfile.dat'
                think time: 0ms
                burst size: 0
                software and hardware cache disabled
                performing read test
                block size: 8192
                using random I/O (alignment: 8192)
                number of outstanding I/O operations: 10
                stride size: 8192
                thread stride size: 0
                threads per file: 1
                using I/O Completion Ports
                IO priority: normal

Results for timespan 1:
*******************************************************************************

actual test time:       10.01s
thread count:           1
proc count:             4

CPU |  Usage |  User  |  Kernel |  Idle
-------------------------------------------
   0|   8.58%|   1.09%|    7.49%|  91.45%
   1|   0.00%|   0.00%|    0.00%| 100.03%
   2|   0.00%|   0.00%|    0.00%|  99.88%
   3|   0.00%|   0.00%|    0.00%| 100.03%
-------------------------------------------
avg.|   2.15%|   0.27%|    1.87%|  97.85%

Total IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |       160145408 |        19549 |      15.25 |    1952.47 |    5.125 |     8.135 | X:\testfile.dat (1000GB)
-----------------------------------------------------------------------------------------------------
total:         160145408 |        19549 |      15.25 |    1952.47 |    5.125 |     8.135

Read IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |       160145408 |        19549 |      15.25 |    1952.47 |    5.125 |     8.135 | X:\testfile.dat (1000GB)
-----------------------------------------------------------------------------------------------------
total:         160145408 |        19549 |      15.25 |    1952.47 |    5.125 |     8.135

Write IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | X:\testfile.dat (1000GB)
-----------------------------------------------------------------------------------------------------
total:                 0 |            0 |       0.00 |       0.00 |    0.000 |       N/A

  %-ile |  Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
    min |      3.101 |        N/A |      3.101
   25th |      3.961 |        N/A |      3.961
   50th |      4.223 |        N/A |      4.223
   75th |      4.665 |        N/A |      4.665
   90th |      5.405 |        N/A |      5.405
   95th |      6.681 |        N/A |      6.681
   99th |     21.494 |        N/A |     21.494
3-nines |    123.648 |        N/A |    123.648
4-nines |    335.632 |        N/A |    335.632
5-nines |    454.760 |        N/A |    454.760
6-nines |    454.760 |        N/A |    454.760
7-nines |    454.760 |        N/A |    454.760
8-nines |    454.760 |        N/A |    454.760
    max |    454.760 |        N/A |    454.760
 

Note that there is some variability here. This second run with the same parameters (1 thread, 10 outstanding IOs) yielded slightly fewer IOPS. You can reduce this variability by running with longer duration or averaging multiple runs. More on that later.

With this system, we don’t seem to have a CPU bottleneck. The overall CPU utilization is around 2% and the busiest core is at under 9% usage. This system has 4 cores, so anything under 25% (1/4) overall CPU utilization is probably not an issue. In other configurations, you might run into CPU core bottlenecks, though… 

 

8. Tune queue depth for small random IOs, part 2

 

Now let’s perform the same tuning exercise for small random IOs on a system with better storage performance and less capable cores. We’ll continue to use the same PowerShell script, but this time it runs on a system using an SSD for storage and 8 slower CPU cores. Here’s that same script again:

1..16 | % { 
   $param = "-o $_"
   $result = C:\DiskSpd\DiskSpd.exe -c1G -d10 -w0 -r -b8k $param -t1 -h -L C:\testfile.dat
   foreach ($line in $result) {if ($line -like "total:*") { $total=$line; break } }
   foreach ($line in $result) {if ($line -like "avg.*") { $avg=$line; break } }
   $mbps = $total.Split("|")[2].Trim()
   $iops = $total.Split("|")[3].Trim()
   $latency = $total.Split("|")[4].Trim()
   $cpu = $avg.Split("|")[1].Trim()  
   "Param $param, $iops iops, $mbps MB/sec, $latency ms, $cpu CPU"
}

Here’s the sample output from our second system:

Param -o 1, 7873.26 iops, 61.51 MB/sec, 0.126 ms, 3.96% CPU
Param -o 2, 14572.54 iops, 113.85 MB/sec, 0.128 ms, 7.25% CPU
Param -o 3, 23407.31 iops, 182.87 MB/sec, 0.128 ms, 6.76% CPU
Param -o 4, 31472.32 iops, 245.88 MB/sec, 0.127 ms, 19.02% CPU
Param -o 5, 32823.29 iops, 256.43 MB/sec, 0.152 ms, 20.02% CPU
Param -o 6, 33143.49 iops, 258.93 MB/sec, 0.181 ms, 20.71% CPU
Param -o 7, 33335.89 iops, 260.44 MB/sec, 0.210 ms, 20.13% CPU
Param -o 8, 33160.54 iops, 259.07 MB/sec, 0.241 ms, 21.28% CPU
Param -o 9, 36047.10 iops, 281.62 MB/sec, 0.249 ms, 20.86% CPU
Param -o 10, 33197.41 iops, 259.35 MB/sec, 0.301 ms, 20.49% CPU
Param -o 11, 35876.95 iops, 280.29 MB/sec, 0.306 ms, 22.36% CPU
Param -o 12, 32955.10 iops, 257.46 MB/sec, 0.361 ms, 20.41% CPU
Param -o 13, 33548.76 iops, 262.10 MB/sec, 0.367 ms, 20.92% CPU
Param -o 14, 34728.42 iops, 271.32 MB/sec, 0.400 ms, 24.65% CPU
Param -o 15, 32857.67 iops, 256.70 MB/sec, 0.456 ms, 22.07% CPU
Param -o 16, 33026.79 iops, 258.02 MB/sec, 0.484 ms, 21.51% CPU

As you can see, this SSD can deliver many more IOPS than the previous system, which used multiple HDDs. We got consistently better performance as we increased the queue depth for the first few runs. As usual, after a certain number of outstanding IOs, adding more gave us very little improvement until things flattened out completely and all we did was increase latency. This is coming from a single SSD. If you have multiple SSDs in a Storage Spaces pool or a RAID set, you might need to try even higher queue depths. Always make sure you increase the -o parameter until you reach the point where IOPS hit a peak and only latency increases.

So, in this setup, we seem to start losing steam at around 6 outstanding IOs and latency starts to ramp up more dramatically after queue depth reaches 8. Let’s see the full output for queue depth of 8 to get a good sense:

PS C:\> C:\DiskSpd\DiskSpd.exe -c1G -d10 -w0 -r -b8k -o8 -t1 -h -L C:\testfile.dat

Command Line: C:\DiskSpd\DiskSpd.exe -c1G -d10 -w0 -r -b8k -o8 -t1 -h -L C:\testfile.dat

Input parameters:

    timespan:   1
    -------------
    duration: 10s
    warm up time: 5s
    cool down time: 0s
    measuring latency
    random seed: 0
    path: 'C:\testfile.dat'
        think time: 0ms
        burst size: 0
        software and hardware cache disabled
        performing read test
        block size: 8192
        using random I/O (alignment: 8192)
        number of outstanding I/O operations: 8
        stride size: 8192
        thread stride size: 0
        threads per file: 1
        using I/O Completion Ports
        IO priority: normal

Results for timespan 1:
*******************************************************************************

actual test time:    10.00s
thread count:        1
proc count:        8

CPU |  Usage |  User  |  Kernel |  Idle
-------------------------------------------
   0|  99.06%|   2.97%|   96.09%|   0.94%
   1|   5.16%|   0.62%|    4.53%|  94.84%
   2|  14.53%|   2.81%|   11.72%|  85.47%
   3|  17.97%|   6.41%|   11.56%|  82.03%
   4|  24.06%|   5.16%|   18.91%|  75.94%
   5|   8.28%|   1.56%|    6.72%|  91.72%
   6|  16.09%|   3.91%|   12.19%|  83.90%
   7|   8.91%|   0.94%|    7.97%|  91.09%
-------------------------------------------
avg.|  24.26%|   3.05%|   21.21%|  75.74%

Total IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |      2928967680 |       357540 |     279.32 |   35753.26 |    0.223 |     0.051 | C:\testfile.dat (1024MB)
-----------------------------------------------------------------------------------------------------
total:        2928967680 |       357540 |     279.32 |   35753.26 |    0.223 |     0.051

Read IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |      2928967680 |       357540 |     279.32 |   35753.26 |    0.223 |     0.051 | C:\testfile.dat (1024MB)
-----------------------------------------------------------------------------------------------------
total:        2928967680 |       357540 |     279.32 |   35753.26 |    0.223 |     0.051

Write IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | C:\testfile.dat (1024MB)
-----------------------------------------------------------------------------------------------------
total:                 0 |            0 |       0.00 |       0.00 |    0.000 |       N/A

  %-ile |  Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
    min |      0.114 |        N/A |      0.114
   25th |      0.209 |        N/A |      0.209
   50th |      0.215 |        N/A |      0.215
   75th |      0.224 |        N/A |      0.224
   90th |      0.245 |        N/A |      0.245
   95th |      0.268 |        N/A |      0.268
   99th |      0.388 |        N/A |      0.388
3-nines |      0.509 |        N/A |      0.509
4-nines |      2.905 |        N/A |      2.905
5-nines |      3.017 |        N/A |      3.017
6-nines |      3.048 |        N/A |      3.048
7-nines |      3.048 |        N/A |      3.048
8-nines |      3.048 |        N/A |      3.048
    max |      3.048 |        N/A |      3.048

 

Again, note that there is some variability here. This second run with the same parameters (1 thread, 8 outstanding IOs) yielded a few more IOPS. We’ll later cover some tips on how to average out multiple runs.

You can also see that apparently one of the CPU cores is being hit harder than others. There is clearly a potential bottleneck. Let’s look into that…

 

9. Tune threads for small random IOs with CPU bottleneck

 

In this 8-core system, any overall utilization above 12.5% (1/8 of the total) means a potential core bottleneck when using a single thread. You can actually see in the CPU table in our last run that our core 0 is pegged at 99%. We should be able to do better with multiple threads. Let’s try increasing the number of threads with a matching reduction of queue depth so we end up with the same number of total outstanding IOs.

$o = 8
$t = 1
While ($o -ge 1) { 
   $paramo = "-o $o"
   $paramt = "-t $t"
   $result = C:\DiskSpd\DiskSpd.exe -c1G -d10 -w0 -r -b8k $paramo $paramt -h -L C:\testfile.dat
   foreach ($line in $result) {if ($line -like "total:*") { $total=$line; break } }
   foreach ($line in $result) {if ($line -like "avg.*") { $avg=$line; break } }
   $mbps = $total.Split("|")[2].Trim()
   $iops = $total.Split("|")[3].Trim()
   $latency = $total.Split("|")[4].Trim()
   $cpu = $avg.Split("|")[1].Trim()
   "Param $paramo $paramt, $iops iops, $mbps MB/sec, $latency ms, $cpu CPU"
   $o = $o / 2
   $t = $t * 2
}

Here’s the output:

Param -o 8 -t 1, 35558.31 iops, 277.80 MB/sec, 0.225 ms, 22.36% CPU
Param -o 4 -t 2, 37069.15 iops, 289.60 MB/sec, 0.215 ms, 25.23% CPU
Param -o 2 -t 4, 34592.04 iops, 270.25 MB/sec, 0.231 ms, 27.99% CPU
Param -o 1 -t 8, 34621.47 iops, 270.48 MB/sec, 0.230 ms, 26.76% CPU

As you can see, on my system, adding a second thread improved things a bit, reaching our best result yet of about 37,000 IOPS without much of a change in latency. It seems like we were a bit limited by the performance of a single core. We call that being “core bound”. See below for the full output of the run with two threads:

PS C:\> C:\DiskSpd\DiskSpd.exe -c1G -d10 -w0 -r -b8k -o4 -t2 -h -L C:\testfile.dat

Command Line: C:\DiskSpd\DiskSpd.exe -c1G -d10 -w0 -r -b8k -o4 -t2 -h -L C:\testfile.dat

Input parameters:

    timespan:   1
    -------------
    duration: 10s
    warm up time: 5s
    cool down time: 0s
    measuring latency
    random seed: 0
    path: 'C:\testfile.dat'
        think time: 0ms
        burst size: 0
        software and hardware cache disabled
        performing read test
        block size: 8192
        using random I/O (alignment: 8192)
        number of outstanding I/O operations: 4
        stride size: 8192
        thread stride size: 0
        threads per file: 2
        using I/O Completion Ports
        IO priority: normal

Results for timespan 1:
*******************************************************************************

actual test time:    10.00s
thread count:        2
proc count:        8

CPU |  Usage |  User  |  Kernel |  Idle
-------------------------------------------
   0|  62.19%|   1.87%|   60.31%|  37.81%
   1|  62.34%|   1.87%|   60.47%|  37.66%
   2|  11.41%|   0.78%|   10.62%|  88.75%
   3|  26.25%|   0.00%|   26.25%|  73.75%
   4|   8.59%|   0.47%|    8.12%|  91.56%
   5|  16.25%|   0.00%|   16.25%|  83.75%
   6|   7.50%|   0.47%|    7.03%|  92.50%
   7|   3.28%|   0.47%|    2.81%|  96.72%
-------------------------------------------
avg.|  24.73%|   0.74%|   23.98%|  75.31%

Total IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |      1519640576 |       185503 |     144.92 |   18549.78 |    0.215 |     0.419 | C:\testfile.dat (1024MB)
     1 |      1520156672 |       185566 |     144.97 |   18556.08 |    0.215 |     0.404 | C:\testfile.dat (1024MB)
-----------------------------------------------------------------------------------------------------
total:        3039797248 |       371069 |     289.89 |   37105.87 |    0.215 |     0.411

Read IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |      1519640576 |       185503 |     144.92 |   18549.78 |    0.215 |     0.419 | C:\testfile.dat (1024MB)
     1 |      1520156672 |       185566 |     144.97 |   18556.08 |    0.215 |     0.404 | C:\testfile.dat (1024MB)
-----------------------------------------------------------------------------------------------------
total:        3039797248 |       371069 |     289.89 |   37105.87 |    0.215 |     0.411

Write IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | C:\testfile.dat (1024MB)
     1 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | C:\testfile.dat (1024MB)
-----------------------------------------------------------------------------------------------------
total:                 0 |            0 |       0.00 |       0.00 |    0.000 |       N/A

  %-ile |  Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
    min |      0.088 |        N/A |      0.088
   25th |      0.208 |        N/A |      0.208
   50th |      0.210 |        N/A |      0.210
   75th |      0.213 |        N/A |      0.213
   90th |      0.219 |        N/A |      0.219
   95th |      0.231 |        N/A |      0.231
   99th |      0.359 |        N/A |      0.359
3-nines |      0.511 |        N/A |      0.511
4-nines |      1.731 |        N/A |      1.731
5-nines |     80.959 |        N/A |     80.959
6-nines |     90.252 |        N/A |     90.252
7-nines |     90.252 |        N/A |     90.252
8-nines |     90.252 |        N/A |     90.252
    max |     90.252 |        N/A |     90.252

You can see now that cores 0 and 1 are being used, with both at around 62% utilization. So we have effectively eliminated the core bottleneck that we had before.

For systems with more capable storage, it’s easier to get “core bound”, and adding more threads can make a much more significant difference. As I mentioned, it’s important to keep an eye on the per-core CPU utilization (not only the total CPU utilization) to look out for these bottlenecks.
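If you want to watch the per-core numbers outside of DiskSpd while a test is running, here’s a minimal sketch using performance counters (the counter path assumes an English-language Windows installation; DiskSpd’s own CPU table gives you the same view for each run):

# Show the busiest CPU cores every couple of seconds while DiskSpd runs in another window
Get-Counter '\Processor(*)\% Processor Time' -SampleInterval 2 -MaxSamples 5 |
    ForEach-Object {
        $_.CounterSamples | Sort-Object CookedValue -Descending |
            Select-Object -First 4 InstanceName, CookedValue
    }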

 

10. Multiple runs are better than one

 

One thing you might have noticed with DiskSpd (or any other tool like it) is that the results are not always the same given the same parameters. Each run is a little different. For instance, let’s run our “-b8K -o4 -t2” configuration with the very same parameters a few times to see what happens:

1..8 | % { 
   $result = C:\DiskSpd\DiskSpd.exe -c1G -d10 -w0 -r -b8k -o4 -t2 -h -L C:\testfile.dat
   foreach ($line in $result) {if ($line -like "total:*") { $total=$line; break } }
   foreach ($line in $result) {if ($line -like "avg.*") { $avg=$line; break } }
   $mbps = $total.Split("|")[2].Trim()
   $iops = $total.Split("|")[3].Trim()
   $latency = $total.Split("|")[4].Trim()
   $cpu = $avg.Split("|")[1].Trim()
   "Run $_, $iops iops, $mbps MB/sec, $latency ms, $cpu CPU"
}

Here are the results:

Run 1, 34371.97 iops, 268.53 MB/sec, 0.232 ms, 24.53% CPU
Run 2, 37138.29 iops, 290.14 MB/sec, 0.215 ms, 26.72% CPU
Run 3, 36920.81 iops, 288.44 MB/sec, 0.216 ms, 26.66% CPU
Run 4, 34538.00 iops, 269.83 MB/sec, 0.231 ms, 36.85% CPU
Run 5, 34406.91 iops, 268.80 MB/sec, 0.232 ms, 37.09% CPU
Run 6, 34393.72 iops, 268.70 MB/sec, 0.214 ms, 33.71% CPU
Run 7, 34451.48 iops, 269.15 MB/sec, 0.232 ms, 25.74% CPU
Run 8, 36964.47 iops, 288.78 MB/sec, 0.216 ms, 30.21% CPU

The results have a good amount of variability. You can look at the standard deviations by specifying the -D option to check how stable things are. But, in the end, how can you tell which measurements are the most accurate? Ideally, once you settle on a specific set of parameters, you should run DiskSpd a few times and average out the results. Here’s a sample PowerShell script to do it, using the last set of parameters we used for the 8KB IOs:

$tiops=0
$tmbps=0
$tlatency=0
$tcpu=0
$truns=10
1..$truns | % {
   $result = C:\DiskSpd\DiskSpd.exe -c1G -d10 -w0 -r -b8k -o4 -t2 -h -L C:\testfile.dat
   foreach ($line in $result) {if ($line -like "total:*") { $total=$line; break } }
   foreach ($line in $result) {if ($line -like "avg.*") { $avg=$line; break } }
   $mbps = $total.Split("|")[2].Trim()
   $iops = $total.Split("|")[3].Trim()
   $latency = $total.Split("|")[4].Trim()
   $cpu = $avg.Split("|")[1].Trim()
   "Run $_, $iops iops, $mbps MB/sec, $latency ms, $cpu CPU"
   $tiops += $iops
   $tmbps += $mbps
   $tlatency += $latency
   $tcpu  += $cpu.Replace("%","")
}
$aiops = $tiops / $truns
$ambps = $tmbps / $truns
$alatency = $tlatency / $truns
$acpu = $tcpu / $truns
"Average, $aiops iops, $ambps MB/sec, $alatency ms, $acpu % CPU"

The script essentially runs DiskSpd 10 times, totaling the numbers for IOPs, throughput, latency and CPU usage, so it can show an average at the end. The $truns variable represents the total number of runs desired. Variables starting with $t hold the totals. Variables starting with $a hold averages. Here’s a sample output:

Run 1, 37118.31 iops, 289.99 MB/sec, 0.215 ms, 35.78% CPU
Run 2, 34311.40 iops, 268.06 MB/sec, 0.232 ms, 38.67% CPU
Run 3, 36997.76 iops, 289.04 MB/sec, 0.215 ms, 38.90% CPU
Run 4, 34463.16 iops, 269.24 MB/sec, 0.232 ms, 24.16% CPU
Run 5, 37066.41 iops, 289.58 MB/sec, 0.215 ms, 25.14% CPU
Run 6, 37134.21 iops, 290.11 MB/sec, 0.215 ms, 26.02% CPU
Run 7, 34430.21 iops, 268.99 MB/sec, 0.232 ms, 23.61% CPU
Run 8, 35924.20 iops, 280.66 MB/sec, 0.222 ms, 25.21% CPU
Run 9, 33387.45 iops, 260.84 MB/sec, 0.239 ms, 21.64% CPU
Run 10, 36789.85 iops, 287.42 MB/sec, 0.217 ms, 25.86% CPU
Average, 35762.296 iops, 279.393 MB/sec, 0.2234 ms, 28.499 % CPU

As you can see, it’s a good idea to capture multiple runs. You might also want to run each iteration for a longer time, like 60 seconds instead of just 10 seconds.
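
For example, to move to 60-second runs, only the -d parameter needs to change; everything else in the command line stays the same:

C:\DiskSpd\DiskSpd.exe -c1G -d60 -w0 -r -b8k -o4 -t2 -h -L C:\testfile.dat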

Using 10 runs of 60 seconds (10 minutes total) might seem a little excessive, but that was the minimum recommended by one of our storage performance engineers. The problem with shorter runs is that they often don’t give the IO subsystem time to stabilize. This is particularly true when testing virtual file systems (such as those in cloud storage or virtual machines) where files are allocated dynamically. Also, SSDs exhibit write degradation and can sometimes take hours to reach a steady state (depending on how full the SSD is). So, for these configurations, it’s a good idea to run the test for a few hours on a brand new system, since write degradation alone could drop your initial IOPs number by 30% or more.

  

11. DiskSpd and SMB file shares

 

You can use DiskSpd to get the same type of performance information for SMB file shares. All you have to do is run DiskSpd from an SMB client with access to a file share.

It is as simple as mapping the file share to a drive letter using the old “NET USE” command or the new PowerShell cmdlet “New-SmbMapping”. You can also use a UNC path directly in the command line, instead of using drive letters.
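
For example, here is a quick sketch of both approaches (the share name matches the example below; the X: drive letter is just an arbitrary choice):

# Map the share to a drive letter with the classic command...
NET USE X: \\jose1011-st1\Share1

# ...or with the newer PowerShell cmdlet
New-SmbMapping -LocalPath X: -RemotePath \\jose1011-st1\Share1

# Then point DiskSpd at the mapped drive (or skip the mapping and use the UNC path directly)
C:\DiskSpd\DiskSpd.exe -c1G -d10 -w0 -r -b8k -o4 -t2 -h -L X:\testfile.dat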

Here is an example using the HDD-based system we used in our first few examples, now accessed remotely:

PS C:\diskspd> C:\DiskSpd\DiskSpd.exe -c1000G -d10 -w0 -r -b8k -o10 -t1 -h -L \\jose1011-st1\Share1\testfile.dat

Command Line: C:\DiskSpd\DiskSpd.exe -c1000G -d10 -w0 -r -b8k -o10 -t1 -h -L \\jose1011-st1\Share1\testfile.dat

Input parameters:

        timespan:   1
        -------------
        duration: 10s
        warm up time: 5s
        cool down time: 0s
        measuring latency
        random seed: 0
        path: '\\jose1011-st1\Share1\testfile.dat'
                think time: 0ms
                burst size: 0
                software and hardware cache disabled
                performing read test
                block size: 8192
                using random I/O (alignment: 8192)
                number of outstanding I/O operations: 10
                stride size: 8192
                thread stride size: 0
                threads per file: 1
                using I/O Completion Ports
                IO priority: normal

Results for timespan 1:
*******************************************************************************

actual test time:       10.01s
thread count:           1
proc count:             4

CPU |  Usage |  User  |  Kernel |  Idle
-------------------------------------------
   0|  12.96%|   0.62%|   12.34%|  86.98%
   1|   0.00%|   0.00%|    0.00%|  99.94%
   2|   0.00%|   0.00%|    0.00%|  99.94%
   3|   0.00%|   0.00%|    0.00%|  99.94%
-------------------------------------------
avg.|   3.24%|   0.16%|    3.08%|  96.70%

Total IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |       158466048 |        19344 |      15.10 |    1933.25 |    5.170 |     6.145 | \\jose1011-st1\Share1\testfile.dat (1000GB)
-----------------------------------------------------------------------------------------------------
total:         158466048 |        19344 |      15.10 |    1933.25 |    5.170 |     6.145

Read IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |       158466048 |        19344 |      15.10 |    1933.25 |    5.170 |     6.145 | \\jose1011-st1\Share1\testfile.dat (1000GB)
-----------------------------------------------------------------------------------------------------
total:         158466048 |        19344 |      15.10 |    1933.25 |    5.170 |     6.145

Write IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | \\jose1011-st1\Share1\testfile.dat (1000GB)
-----------------------------------------------------------------------------------------------------
total:                 0 |            0 |       0.00 |       0.00 |    0.000 |       N/A

  %-ile |  Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
    min |      3.860 |        N/A |      3.860
   25th |      4.385 |        N/A |      4.385
   50th |      4.646 |        N/A |      4.646
   75th |      5.052 |        N/A |      5.052
   90th |      5.640 |        N/A |      5.640
   95th |      6.243 |        N/A |      6.243
   99th |     12.413 |        N/A |     12.413
3-nines |     63.972 |        N/A |     63.972
4-nines |    356.710 |        N/A |    356.710
5-nines |    436.406 |        N/A |    436.406
6-nines |    436.406 |        N/A |    436.406
7-nines |    436.406 |        N/A |    436.406
8-nines |    436.406 |        N/A |    436.406
    max |    436.406 |        N/A |    436.406

This is an HDD-based storage system, so most of the latency comes from the local disk, not the remote SMB access. In fact, we achieved numbers similar to what we had locally before.

 

12. Conclusion

 

I hope you have learned how to use DiskSpd to perform some storage testing of your own. I encourage you to use it to look at the performance of the storage features in Windows Server 2012, Windows Server 2012 R2 and Windows Server Technical Preview. That includes Storage Spaces, SMB3 shares, Scale-Out File Server, Storage Replica and Storage QoS. Let me know if you were able to try it out and feel free to share some of your experiments via blog comments.

 

Thanks to Bartosz Nyczkowski, Dan Lovinger, David Berg and Scott Lee for their contributions to this blog post.


100 Technical Things Non-Technical People Can Learn To Make Their Lives Easier

My lovely wife has an MBA, speaks 5 languages, and is currently in school to get a third (fourth?) degree. Point is, she's smarter than me (I? See?) and I'm lucky she even speaks to me.

It seems to run on some sort of electricity

However, she's in a class right now and wanted to record the hour-long lectures. After trying it on her Windows Phone, her iPad, and her laptop with OneNote, she got very frustrated. I gave her a portable handheld recorder and she returned from class with 800 megs of WAV files, then asked me how to share them with the rest of the class.

This started a long talk about WAV files vs. mp3s, Dropbox vs. email, megabytes vs. gigabytes and developing a sense of "digital scale." We talked about pixels and dpi, about 40 megapixels vs. a 400x400 picture.

She said to me "there's all this stuff that techies know that makes normals feel bad. Where does one learn all this?"

  • Why does this picture print out blurry?
  • Why is that file too big to email?
  • I deleted Angry Birds but my computer is still slow!

"Knowing computers" today is more than just knowing Office, or knowing how to attach a file. Today's connected world is way more complex than any of us realize. If you're a techie, you're very likely forgetting how far you've come!

The #1 thing you can do when working with a non-techie is to be empathetic. Put yourself in their shoes. Give them the tools and the base of knowledge they need.

I honestly don't know HOW we learn these things. But, I figured I could help. If you've ever answered questions like this from your non-technical-partner or relative, then here's a list of

100 Technical Things Non-Technical People Can Learn To Make Their Lives Easier

Ok, perhaps not 100 exactly. I will add more good tips if you suggest them in the comments!

Size

  • A gigabyte is big. It's not something that is easily emailed.

    • A gigabyte might be a whole movie! If you want to get a gigabyte to someone you could either compress/squish it with some software and send a smaller version of the file, or put it on a USB drive and snail mail (post) it.

  • One to five megabytes are reasonable sizes. You can have pictures this size, documents, and small videos.

  • MB means Megabyte. GB means Gigabyte. A gigabyte (1GB) is roughly 1000 megabytes (1000MB), so always double check which unit you're looking at.

  • Back up everything. Is your entire company on your 10 year old computer’s desktop? Look for backup options like CrashPlan, Dropbox, OneDrive, etc. Literally ANYTHING is better than leaving documents on your computer’s desktop.

Files

  • Think about where your files are. Are they in a folder on your Desktop? Are they in a folder called My Documents? Keep your files collected in one location (and below) so that you can easily make backups.

  • Learn to use search to find your files. Press the Windows key and just start typing on Windows, or use Spotlight (Command-Spacebar) on Mac.

  • Don’t forget to hover over things and right-mouse-click on things. It may not be initially intuitive, but right clicking often answers all your questions.

  • If you double click a file and it doesn’t do what you want, in Windows, right click the file, choose Open With, then Choose Default Program to pick a new program.

Privacy

Email

  • Assume that your email isn't private.

  • Don’t try to email more than 10 megabytes. Or 5 even. Many of your recipients won’t get the files. They will “bounce back.”

  • Don’t CC more than 10 of your friends or neighbors. At that point, consider another way to talk to them. Some of your friends may not want their email given to the world. Perhaps this is a BCC situation?

  • Always think twice when replying. Did you want to Reply To All?

  • It’s a good idea to check for hoaxes before forwarding bulk emails. For instance, look at snopes.com if you think Bill Gates may send you money.

  • If you get an email out of the blue that’s telling you to click on a link to “verify”
    your password, credit card, or other information, it’s a good idea not to click on the link. Instead, open a browser and navigate to your account on the site in question.

  • Never ever send your private credit card number, social security number, or anything personal in email. Ever. Really. Never.

Searching

  • If you put your search term - or parts of it - in quotes, you’ll get more specific results. For instance, “mark hamill” “star wars” would probably get better results than mark hamill star wars.

  • Your search term should sound like the answer you’re looking for rather than the question. So search for “2000 academy award winner” instead of “who was that guy who won that film award in 2000”?

  • If you want to google within a single site, try “site:thatsite.com mysearch” to search ONLY thatsite.com.

  • If you get an error message or code when a program stops working, just search for that number, like “0x8000abcd”

  • Be LESS specific. Every new word you add is narrowing your results!

  • You can search for the original source of a picture using Google Image search and uploading the image. It will find other places on the internet that picture lives!

  • If you don’t want someone to know that you’re searching for something that’s either secret or naughty, use your browser’s “Incognito Mode” or “Private Browsing Mode.” Note that while this may hide your browsing from a specific computer, that computer still has to talk to other computers to talk to the internet. Incognito Mode won’t hide your surfing from your boss.

Sound Files

  • MP3s are squished audio files. Remember the rules of thumb around file sizes when emailing.

  • WAV files are big audio files. You can use a program like Audacity to take an uncompressed WAV and “Save As” it into a compressed MP3.

Documents

  • PDFs are Portable Documents. They are made by Adobe and work pretty much everywhere. This is a good format for Resumes. You can often Save As your document and create a PDF. Also, note that PDFs are almost always considered read-only.

  • Word has doc files and newer docx files. When working with a group, select a format that is common to everyone’s version of Word. Some folks may have old versions!

  • Big documents are hard to move around the internet. Rather than emailing that giant document, instead put it in a shared location like Dropbox, OneDrive or Google Drive, then using the Sharing feature of your chosen service, email a LINK to the document.

  • Collaborating with others by e-mailing documents around doesn’t work very well. If you’re sharing a document as recommended above, you can take advantage of their realtime collaboration features.

  • Don’t use a document when you need something bigger. Your small business’ records should probably go in a database rather than an Excel file.

  • In Windows, files end with a file extension like “.docx”, “.mp3” or “.jpg” that determines what program it’s associated with. If you save a file with the wrong extension, it might open with another program or not at all.

  • Sometimes you can’t see file extensions. In Windows Explorer, on the View menu, check File name extensions to show them if they’re hidden.

Scanning and Faxing

  • An easy way to scan documents without a scanner is to use a scanning app on your phone. This means you’ll take a picture of the document. There are apps that can make your camera like a scanner.

  • If you need to fax but don’t have a fax machine, there are apps online that can take a photo from your phone and fax it. You can also receive faxes as photos or PDFs.

USB Keys

  • Never put a random USB key in your computer. You have no idea where it’s been.

  • USB keys can do all kinds of things to your computer the second you plug them in, without you even opening a file. Only use trusted USB keys.

My computer is slow

  • Think about what specifically is slow. Often the thing that’s slow is your internet. Are you on wireless? Is the signal weak?

  • Running a lot of programs at once can slow things down.

  • When your hard drive is almost completely full, your computer can slow down. Watch for warnings!

  • The cheapest, simplest way to speed up a slow computer is usually by adding more RAM (memory).

Bandwidth

  • Not everyone has super fast internet. Some people have a quota for the month. For example, my brother can download only 5 gigabytes (remember that’s 5,000 megabytes) every month. I avoid sending him big files and YouTube links.

Pictures

  • The funny pictures you find on the internet are usually small in “dimension” - they have a small number of total dots or “pixels.”

    • A picture that is 400x400 in pixel dimension will look really blurry when it’s printed out on a full piece of paper.

    • For a photo to look nice when printed, it should ideally have more than 200 dots per inch. So for a 4 inch by 6 in photo, you’ll want a picture that’s at LEAST 800x1200, and even larger is better.

    • Megapixels are not megabytes. One megapixel is one million pixels. A “3.1 megapixel” camera will actually make a photo that is 2048x1536 in dimension. This is a nice size for printing small photos! A photo like this will be about one megabyte and suitable for emailing.

  • Photos matter. Back them all up.

Security and The Evil Internet

  • Most of the internet is out to get you. If a website looks wrong, it’s likely not somewhere you want to be. The more ads and popups, the worse the neighborhood.

  • If you go looking for things you shouldn’t, like bootleg movies, you’ll be more likely to end up in a bad part of the internet.

  • Bad parts of the Internet will always try to trick you.

  • Be aware of advertisements that are actually pictures of download buttons. These download buttons might literally be next to the actual download button you need to press.

  • Always think three times before clicking on a link that’s been emailed to you. If you have to install something or a message tells you that your computer is missing something, it may be a trick.

  • Consider if a web site’s domain ends with a far away country code you weren’t expecting. Did I mean to be looking for a link in China (.cn) or Kazakhstan (.kz)?

  • Microsoft and Apple will never call your house to tell you personally that you have a virus.

  • Consider turning on “Two Factor Authentication.” That means that in addition to your password you’ll also need your phone with you to log in. That might sound like a hassle, but it stops the bad guys in their tracks.

"Space" - Disks and Memory

  • Memory is like the top of your desk - it’s what you’re working on right now. Disk space is like your filing cabinet, where you store things for later. When you turn off your computer, your memory is cleared but your hard drive isn’t.

  • If your computer is “just slow” there could be a few things going on. Are a lot of things running right this moment? Close running programs, just like taking things off your desk to clear your mind.

  • Rarely will uninstalling applications “free up space.” If your computer is filling up, it’s likely with photos, videos or movies. Uninstalling Angry Birds from either your computer or phone likely won’t free up the large amounts of space you want.

Pictures

  • JPGs are image files that are great for photos. They are squished pictures, and the compression is optimized for pictures of people and nature.

  • PNGs are image files that are great for diagrams and screenshots.

  • Learn how to take screenshots.

    • Press the PrintScreen key to put a screenshot of the current screen in your clipboard on a Windows PC.

    • On a Windows 8 machine, press the Windows Key and the PrintScreen key to capture the screen to a Folder in your Pictures folder called Screenshots.

    • Press Command-Shift-3 to capture your screen to the desktop on a Mac

  • Resizing images can be hard and frustrating. On Windows, try the Image Resizer Utility to make large images smaller.

Surfing and Links

  • Often you’ll search for something on a site, then end up on a page called “searchresults.asp.” You’ll want to share that link with your friend so you copy paste it and send them somesite.com/searchresults.asp. But you need to look at that URL (URL is a link). Does the link contain the thing you searched for? If not, your friend won’t see anything. Look for links like somesite.com/searchresults.asp?q=baby%20groot%20doll when emailing.

  • Search results often have a “share” link that will either get you a good sharing link or send via e-mail for you.

  • Always check for the lock in your browser address bar when you’re about to enter your password. Are you where you think you are? Does the address bar look correct? Is it green? The green address bar gives you more information about the company you’re talking to.

  • Is your password “Password”? Consider getting a password manager like 1Password or LastPass. Don’t put your password on a post-it note on your monitor. Try not to reuse passwords between sites. Don’t share your password with others.

  • Don’t reuse your passwords. If you give a tech support person a password and it's also the password you use for your bank, it’s like giving a parking attendant the keys to your house!

Big thanks to Jon Galloway for his help with this list!

(Image: Tech Support Cheat Sheet comic)

What did we miss?



Windows 10 Preview available for review

Good morning AskPerf!  It’s been a while since our last post, and we apologize for that.  We’ve been quite busy here on the Support side knocking out customer issues…

Any who, we have some upcoming blogs in the oven that need a little more time to bake.  One of which is a short series on Windows Event Forwarding which I am very excited about.  Look for that to come out in the coming months.

Even though we are commonly known as the Performance team, internally we are known as the Reliability team.  Some of the technologies we support are as follows:

Windows Client/Server OS

  • Printing
  • RDS / TS
  • Performance which includes System Hangs, High CPU, Memory issues, etc.
  • Base WMI functionality
  • COM/DCOM – base functionality
  • Explorer (Shell)
  • Desktop Search
  • MUI and IME
  • MSI – basic functionality
  • Themes/Fonts/Screen Savers/Wallpaper
  • Task Scheduler
  • WinRM – basic functionality
  • Windows PowerShell – install and basic functionality
  • ACT

There are many other smaller technologies, but these are the main ones.

Now back to our original topic:  The Windows 10 Preview is available for download/testing.  To get it, click the following link:

Windows 10 Preview

Finally, we always welcome feedback on topics you would like for us to blog about here on the AskPerf blog site.

-Blake

MSRT October 2014 – Hikiti

The October release of the Malicious Software Removal Tool (MSRT) is directly related to a Coordinated Malware Eradication (CME) initiative led by Novetta with the help of many other security partners: F-Secure, ThreatConnect, ThreatTrack Security, Volexity, Symantec, Tenable, Cisco, and iSIGHT. Collaboration across private industry is crucial to addressing advanced persistent threats.

The target in this campaign is an advanced persistent threat that served as the infrastructure of actors that launched targeted attacks against multiple organizations around the world.  This month, the MSRT along with all of the partners in our Virus Information Alliance program are releasing new coverage for this infrastructure: Win32/Hikiti and some of the related malware families, Win32/Mdmbot, Win32/Moudoor, Win32/Plugx, Win32/Sensode, and Win32/Derusbi.

Novetta has released an executive summary on this threat, which contains the initial findings of the impact of these families. It will be followed up with a more detailed report in a few  weeks as our partners in this CME campaign work together to assess the overall impact of the operation.

A bit of history about Hikiti. We first detected the Hikiti family in 2012. The name Hikiti is associated with the Hikit string usually found as a part of a PDB file:

Figure 1: Hikiti is associated with the Hikit string usually found in a PDB file

Hikiti is usually installed after a machine has been compromised through an exploit. For instance, we’ve seen the vulnerability discussed in CVE-2013-3893 being exploited to install Hikiti as a payload.

Once this threat successfully enters a system it can install other malware. In some cases, other malware is installed first and then installs other members of the group. This can include the following, mostly backdoor, malware families:

  • Mdmbot
  • Moudoor
  • Plugx
  • Sensode
  • Derusbi

Like these families, Hikiti’s main payload acts as a backdoor, giving a malicious hacker the ability to download and run remote commands, control the system, and steal sensitive information.

Some Hikiti versions drop an encrypted configuration file (.conf) that contains the hosts that the malware tries to connect to. The encryption is usually XOR and the key is a DWORD. Figure 2 shows an example of this .conf file being decrypted:

Figure 2: Hikiti .conf file being decrypted
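
If you are curious what that decryption boils down to, here is a minimal PowerShell sketch of XOR decryption with a 4-byte (DWORD) key; the key value and file paths below are made up purely for illustration:

# Minimal sketch of XOR decryption with a 4-byte (DWORD) key.
# The key value and file paths are made up for illustration only;
# a real sample would use whatever key the analyst recovered.
$key      = 0x12345678
$keyBytes = [BitConverter]::GetBytes([uint32]$key)
$cipher   = [System.IO.File]::ReadAllBytes("C:\samples\sample.conf")
$plain    = New-Object byte[] ($cipher.Length)
for ($i = 0; $i -lt $cipher.Length; $i++) {
   $plain[$i] = $cipher[$i] -bxor $keyBytes[$i % 4]
}
[System.IO.File]::WriteAllBytes("C:\samples\sample.decoded.conf", $plain)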

To help protect yourself from Hikiti and other threats, run up-to-date, real-time security software, such as Microsoft Security Essentials or another trusted security software product. For more information about Hikiti and related threats, be sure to review the executive summary released by Novetta.

In a few weeks, we will follow up with an update on this campaign and provide more details on how it came together, its impact, and what we learned during the process.

If you are interested in working with Microsoft and other trusted security researchers in the industry by participating in a campaign, or, better yet, leading one like Novetta did, please read our CME page to find out more about the program, including how to apply.

Francis Tan Seng & Holly Stewart
MMPC

Distributed Cloud-Based Machine Learning

This post is authored by Dhruv Mahajan, Sundararajan Sellamanickam and Keerthi Selvaraj, Researchers at Microsoft’s Cloud & Information Services Lab (CISL) and at Microsoft Research.

Enterprises of all stripes are amassing huge troves of data assets, e.g. logs pertaining to user behavior, system access, usage patterns and much more. Companies will benefit enormously by using the power of cloud services platforms such as Microsoft Azure not merely to host such data or perform classic “look-in-the-rear-view mirror” BI, but by applying the power and scale of cloud-based predictive analytics. Using modern tools such as Azure Machine Learning, for instance, companies can obtain actionable insights about how the future of their businesses might evolve – insights that can give them a competitive edge.

Gathering and maintaining “big data” is becoming a common need across many applications. As data sizes explode, it becomes necessary to store data in a distributed fashion. In many applications, the collection of data itself is a decentralized process, naturally leading to distributed data storage. In such situations it becomes necessary to build machine learning (ML) solutions over distributed data using distributed computing. Examples of such situations include click-through rate estimation via logistic regression in the online advertising universe, or deep learning solutions applied to huge image or speech training datasets, or log analytics to detect anomalous patterns.

Efficient distributed training of ML solutions on a cluster, therefore, is an important focus area at the Microsoft Cloud & Information Services Lab (CISL, that’s pronounced “sizzle” :-)) to which the authors belong. In this post, we delve a bit into this topic, discuss a few related issues, and describe our recent research that tries to address some of them. Some of the details presented here are rather technical, but we attempt to explain the central ideas in as simple a manner as possible. Anybody interested in doing distributed ML on big data will gain by understanding these ideas, and we look forward to your comments and feedback too.

Choosing the Right Infrastructure

In a recent post, John Langford described the Vowpal Wabbit (VW) system for fast learning, where he briefly touched on distributed learning over terascale datasets. Most ML algorithms being iterative in nature, choosing the right distributed framework to run them is crucial.

Map Reduce and its open source implementation, Hadoop, are popular platforms for distributed data processing. However, they are not well-suited for iterative ML algorithms as each iteration has large overheads – e.g. job scheduling, data transfer and data parsing.

Better alternatives would be to add communication infrastructure such as All Reduce, which is compatible with Hadoop (as in VW), or to employ newer distributed frameworks such as REEF which support efficient iterative computation.

SQM

Current state-of-the-art algorithms for distributed ML such as the one in VW are based on the Statistical Query Model (SQM). In SQM, learning is based on doing some computation on each data point and then accumulating the resulting information over all the data points. As an example, consider linear ML problems where the output is formed by doing a dot product of a feature vector with the vector of weight parameters. This includes important predictive models such as logistic regression, SVMs and least squares fitting. In this case, at each iteration, the overall gradient of the training objective function is computed by summing the gradients associated with individual data points. Each node forms the partial gradient corresponding to the training data present in that node and then an All Reduce operation is used to get the overall gradient.

Communication Bottleneck

Distributed computing often faces a critical bottleneck in the form of a large ratio of computation speed to communication bandwidth. For example, it is quite common to see communication being 10x to 50x slower than computation.

Let Tcomm and Tcomp denote the per iteration time for communication and computation respectively. Thus, the overall cost of an iterative ML algorithm can be written as:

Toverall = (Tcomm + Tcomp) * #iterations

Tcomp typically decreases linearly with increasing number of nodes while Tcomm increases or remains constant (in best implementations of All Reduce). ML solutions involving Big Data often have a huge number of weight parameters (d) that must be updated and communicated between the computing nodes of a cluster in each iteration. Moreover, there are other steps like gradient computation in SQM that also require O(d) communication. The situation is even worse in Map Reduce where each iteration requires a separate Map Reduce job. Hence, Tcomm is large when d is large. SQM does not place sufficient emphasis on the inefficiency associated with this.
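
To make that concrete with some made-up numbers: if Tcomm is 1 second per iteration and Tcomp is 0.1 seconds, an algorithm that needs 1,000 iterations spends about 1,100 seconds overall, and roughly 90% of that time is pure communication. Doubling the number of nodes might halve Tcomp to 0.05 seconds, but it barely moves the overall time, because Tcomm dominates.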

Overcoming the Communication Bottleneck

Our recent research addresses this important issue. It is based on the following observation: Consider a scenario in which Tcomm, the time for communicating the weight parameters between nodes, is large. In each iteration, what happens with a standard approach such as SQM is that Tcomp, the time associated with computations within each node, is a lot less than Tcomm. So we ask the following question: Is it possible to modify the algorithm and its iterations in such a way that Tcomp is increased to come closer to Tcomm, and, in the process, make the algorithm converge to the desired solution in fewer iterations?

Of course, answering this question is non-trivial since it requires a fundamental algorithmic change.

More Nitty-Gritty Details

Consider the ML problem of learning linear models. In our algorithm, the weight updates and gradients in the nodes are shared in a way similar to the SQM based method. However, at each node, the gradient (computed using All Reduce) and the local data in the node are used in a non-trivial way to form a local approximation of the global problem. Each node solves its approximate problem to form local updates of weight variables. Then the local updates from all nodes are combined together to form a global update of the weight variables. Note that solving the approximate problem leads to increased computation in each node, but it does not require any extra communication. As a result Tcomp increases and, since Tcomm is already high, the per-iteration cost is not affected significantly. However, since each node now is solving the approximate global view of the problem, the number of iterations needed to solve the problem is reduced significantly. Think of a case where the amount of data is so large that the data present within each node is itself sufficient to do good learning. For this case, the approximate problem formed in each node is close to the global problem; the result is that our algorithm requires just one or two iterations while SQM based methods need hundreds or even thousands of iterations. In addition, our approach is flexible and allows a class of approximations rather than a specific one. In general, our algorithm is almost always faster than SQM and, on average, about two to three times faster.

One could also think of distributing the weight vector over many cluster nodes and setting up the distributed data storage and computation in such a way that all updates for any one weight variable happen only in one cluster node. This turns out to be attractive in some situations, for example when one is interested in zeroing out irrelevant weight variables in linear ML problems or for doing distributed deep net training. Here again, we have developed specialized iterative algorithms that do increased computation in each node while decreasing the number of iterations.

Evaluation

We focused above on algorithms suited to communication-heavy situations. But not all problems solved in practice are of this nature. For general situations, there exists a range of good distributed ML algorithms in the recent academic literature, but a careful evaluation of these methods has not yet been performed. The best of these methods are finding their way into cloud ML libraries.

Automating Distributed ML to Suit User Needs

There is also another important side to the whole story. Users of distributed ML on the cloud have a variety of needs. They may be interested in minimizing the total solution time, or the cost in dollars associated with the solution. Users may be willing to sacrifice accuracy a bit while optimizing the above mentioned variables. Alternatively, they may be keen to get the best accuracy irrespective of time and cost. Given a problem description, a varied set of such user specifications, and details of the system configuration available for use, it is important to have an automatic procedure for choosing the right algorithm and its parameter settings. Our current research focuses on this aspect.

Automated distributed ML solutions will be one of the important areas/considerations for Azure ML as we evolve our product and expand our offering in the future.

Dhruv, Sundar and Keerthi

Mobile Apps for Web Developers

The path of a mobile app developer often begins with a choice: develop for iOS, Android or Windows? It’s a choice that instantly diminishes the size of your potential audience, but developers often hold their nose and reluctantly make a decision. Those who need to reach all three app stores choose to rewrite the application for each platform.

Visual Studio enables you to have maximum reach while achieving significant code re-use. With Xamarin, C# developers can share business logic across iOS, Android, and Windows applications. With Apache Cordova, web developers can achieve maximal code re-use by building cross-platform mobile applications using HTML, CSS, and JavaScript.

In this post, we’ll take a close look at how you can use Visual Studio’s extension for Multi-Device Hybrid App Development to build a cross-platform app using HTML, JS, and CSS. To follow along in the IDE, install the extension first.

Once you’ve installed the tools, create a project for “Multi-Device Hybrid Apps.”

Create new project in Visual Studio for Mobile Device Hybrid Apps

Access Device Capabilities on any Platform Using the Same JS API

Before we explore the tools, let’s take a moment to look at the architecture of a Cordova app. The application itself is implemented as an HTML application (e.g. Single Page Application) hosted inside a webview control (or on Windows, as a WWA) that gives your app access to native device APIs. Most developers prefer to synchronize data with a server via RESTful web services (e.g. Azure Mobile Services), but all file assets like HTML, CSS, JS, and media are packaged with the application so that users can continue to use the app offline.

To access native device capabilities (e.g. camera, contacts, file system, accelerometer) from JavaScript, Cordova uses a construct called plugins. Plugins typically encapsulate two components: native code to invoke capabilities for each of the three platforms (i.e. Objective-C, Java and C#) and a normalized JavaScript API available for your app to use.

Plugin

To use the API, you make an asynchronous call from within your JavaScript. The native code returns a response to the callback function. In the example below, the camera plugin returns the URI of a photo pointing to the file system on the mobile device.

// Retrieve image file location from the mobile device photo library
function getPhotoURI() {
    navigator.camera.getPicture(onPhotoSuccess, onPhotoFail, {
        quality: 50,
        destinationType: Camera.DestinationType.FILE_URI,
        sourceType: Camera.PictureSourceType.PHOTOLIBRARY
    });
}
// Callback from a successful photo library selection
function onPhotoSuccess(imageURI) {
    // Add img to div#album
    var img = document.createElement('img');
    img.setAttribute('src', imageURI);
    document.getElementById('album').appendChild(img);
}

Designed to Converge with Web Standards

Cordova plugins are generally designed to expose JavaScript APIs that will converge with web standards over time. The goal is for the plugins to eventually evaporate leaving the implementations of the W3C standards in their place. For example, the Web API for activating device vibration, navigator.vibrate(time), is already implemented by Cordova, Chrome, and Firefox. Over time all the mobile devices and browsers will use the same API, thereby making plugins obsolete as a polyfill. The ultimate goal is for Cordova to serve as a temporary bridge until the standard web platform supports the device capability.

JavaScript or TypeScript: Your Choice

Once you get started, a large part of your time will be spent writing code. Whether it’s HTML, CSS, JavaScript or TypeScript, we aim to provide our developers with help in context for the task at hand. For example, many developers depend on IntelliSense to avoid common syntax errors and quickly explore new APIs. Would you like to know what native device capabilities are available to your app? Visual Studio’s Tools for Apache Cordova include IntelliSense support for common Cordova plugins using both JavaScript and TypeScript.

IntelliSense support for common Cordova plugins

If you write a custom plugin, you might want to enable IntelliSense for your component as well. To support the common Cordova plugin APIs, we use a JavaScript IntelliSense extension for the JavaScript editor. For TypeScript, we simply wrote TypeScript d.ts files to describe each API. You can see the d.ts files in the public home for open source d.ts files: DefinitelyTyped. Each d.ts file provides the meta-data necessary to provide rock-solid, accurate IntelliSense for Cordova plugins without executing JavaScript code in the background.

Three Ways to Preview Your App

To gain the highest productivity benefit, most developers choose to use the same code - 95% or more - amongst all deployment targets: iOS, Android, and Windows.

Since most developers choose to deploy a single shared HTML/CSS/JS codebase to all platforms, it’s important to be sure your apps look and behave as expected across the platforms you care about. We made sure that previewing your app would be as painless and efficient as possible by providing three options to test your app: (1) a Chrome-based simulator called Ripple, (2) native emulators provided by the platform vendors, and (3) deployment to an actual tethered device.

Previewing your app

Unless you’re an otherworldly developer who can get an app running perfectly without ever running it, you’ll eventually need to deploy and test it on a device or emulator for each platform. However, that’s not necessarily where you want to start. Our general guidance is as follows:

  1. For basic layout and early-stage debugging, use Ripple. Ripple is an open-source simulator that runs inside Chrome. Visual Studio automatically downloads and installs both Ripple and Chrome when you install our tools. Because Ripple uses Google’s V8 engine and blink-based rendering, it is ideal for simulating behavior on an iOS or Android device. Realistically, there are only a small number of substantial rendering differences between Chrome and IE11 these days, so it’s also a good proxy for Windows platforms. It’s nice to do your early development in Ripple because, quite frankly, it’s fast and familiar to web developers. Ripple benefits from all the CPU resources of your desktop and thousands of tiny performance optimizations designed to make desktop browsing snappy.
  2. For final validation and full-fidelity debugging, use a device. As much as we love to debug in the desktop browser, there are some minor, but significant differences between it and mobile browsers. Unfortunately, tiny differences in CSS rendering or JavaScript interpretation can have a big impact, so it’s important to test your app on the real thing. The real source of truth will always be the device. Using the native build systems (i.e. Xcode, the Android and Windows SDKs), Visual Studio can build and deploy to devices tethered to your dev machine via USB.
  3. If a device isn’t available, use an emulator. Given the range of devices and platform versions out there — especially Android versions — it’s not always possible to have a complete library of test devices. In our office, we keep a small library of representative devices including: iPods running iOS7-8, a Samsung Galaxy running Android 4.0, a Nexus 7 running Android 4.4, a Nokia 1520 running Windows Phone 8.1 and our dev machines running Windows 8.1. For everything else, we use an emulator.

For more about the previewing options available and their level of support on Android, iOS, and Windows, check out our documentation.

Find and Fix Bugs Before Your Customers Do

Finally, there will be times when you have some tough or hard-to-find bugs in your JavaScript or TypeScript code. During these times, you will need to call in your trusty friend, the debugger.

Debugging

You get all the debugging tools already familiar to Windows Store developers including the DOM Explorer, JavaScript Console, breakpoints, watches, locals, Just My Code, and more. Other diagnostic tools are not yet available.

In our initial release, we focused debugging support on Android 4.4 and Windows Store. But after hearing from developers like you, this summer we added debugging support for Android 2.3.3 and above. Debugging support for versions below Android 4.4 requires you to use a debug proxy, the most popular of which is jsHybugger.

That’s it. Now go try the tools!

If you haven’t already, please download and install the tools or try one of the trial VMs hosted in Azure. Sample apps are available using three of today’s popular frameworks: AngularJS, Backbone and WinJS + TypeScript. Once you get rolling:

Until next time, happy coding!
Ryan J. Salva

Ryan J. Salva, Principal Program Manager, Visual Studio Client Tools team
Twitter: @ryanjsalva

Ryan is a Principal Program Manager working in the Visual Studio Client Tools team where he looks after HTML, CSS and JavaScript development. He comes from a 15 year career in web standards development & advocacy as an entrepreneur, developer and graphic designer. Today, he focuses primarily on mobile app development using web technologies and Apache Cordova.

Improving Outlook Web App options and settings

Will Holmes is a senior program manager on the Exchange engineering team.

We offer you lots of options and settings so you can manage Outlook Web App (OWA) the way you want to.  But what good are all of these controls if it takes too much time to find them? The OWA engineering team is always looking for ways to make features and settings more discoverable and intuitive. With that in mind, we’re excited to announce that, beginning this month, you’ll see that OWA options and settings have moved into a new navigation tree.

Early feedback overwhelmingly indicates that the new view improves discoverability, provides a cleaner user interface, enhances the experience for tablet users and delivers a scalable framework to let us add future OWA settings with minimal disruption.

What’s changing?

As has always been the case, to find the options and settings you need to customize your OWA experience, you click the gear icon in the upper corner of OWA (pictured below).

Improving Outlook Web App options and settings 1

As you can see in the drop-down menu, multiple options are available. Today, clicking an individual option in the list, or clicking the Options link, opens a pane (pictured below) that shows the item you clicked along with the rest of the options and settings available to you. Different users might see different options and settings depending on how their administrators have configured their experience.

Improving Outlook Web App options and settings 2

Going forward, when you click an item in the drop-down menu or the Options link, under the gear icon, you’re taken to a new interface (pictured below), where all the options and settings are now available in a single, streamlined, navigation tree on the left.

Improving Outlook Web App options and settings 3

Click any of the options in the left navigation pane to return to the corresponding settings in the right settings pane. You can see an example of this when you click Automatic replies (pictured below).

Improving Outlook Web App options and settings 4

In the coming months, these improvements will be extended to the Outlook Web App for Devices mobile apps—both on tablets and smart phones. On tablets, you’ll have access to the full set of options that are currently only available to users accessing OWA through the browser. On smart phones, you’ll have access to the options and settings best suited to that layout.

When will I see these improvements?

We’ll start rolling out these changes over the next few months. At first, you’ll see the new view with all the Mail, Calendar and People settings. The few options that we haven’t added to the new left navigation tree will still be accessible through the drop-down menu, accessed when you click the gear icon. They will also be accessible through an Other link that you’ll find if you scroll all the way to the bottom of the new left hand navigation (pictured below).  Clicking Other will open the old user interface, where everything you have access to is still available.

Improving Outlook Web App options and settings 5

In the near future, we’ll take advantage of the new navigation model and add general settings that work across the Office 365 suite—this will include settings like contact information, themes and notifications.

We considered waiting until the end of the year to move everything at once, but feedback from users about these changes was so positive that we decided to begin rolling them out now.

We’re excited about this improvement and hope you’ll find the new OWA options and settings easier to find and more streamlined than ever.

—Will Holmes

The post Improving Outlook Web App options and settings appeared first on Office Blogs.

Simpler, better instant messaging options in Lync 2013

Today’s post was written by Nikolay Muravlyannikov, program manager on the Lync Team.

You asked, we listened.  Today, we released an update to the Lync 2013 Windows client, which gives users the option to eliminate the images of senders and receivers in IM conversations without disabling them in the contact list.  We also grouped all of the IM options in the Options dialog to make them easier to find and made a number of other improvements to our media stack to improve call quality wherever you are.

Turn off the images of senders and receivers in the IM conversation window

Roughly a year ago, we added a feature that displays the pictures of participants in Lync IM conversations.  The goal was to make it easier to visually identify who had just sent an IM, which is particularly useful in the context of group conversations.  The downside is that including pictures takes space and, as a result, a number of people asked us to provide an option to disable the pictures.  In February of this year, we took a first step, which disabled pictures throughout Lync—including both the contact list and the conversation window.

We’ve further refined this option by adding the ability to disable pictures in the IM conversation window only.  The results are shown below, with IM pictures enabled on the left and disabled on the right. Even in this simple example, you can see the screen real estate that is saved by looking at the final message from “Garrett,” which wraps and takes an extra line when pictures are enabled.

Lync IM Options v3

New IM tab in Lync – Options dialog

I can imagine some people already wondering, "Great! How do I set the option?" With this release we combined all IM-related settings on their own IM tab under Lync – Options, shown below, including the new Hide pictures in IM setting. This simplifies the General tab and provides an easy way to find all IM window-related features.

Lync Options IM tab

Use the following link to see the complete set of details and to download the new update, and please keep providing us with feedback!

Thank you for communicating on Lync.

The post Simpler, better instant messaging options in Lync 2013 appeared first on Office Blogs.


October 2014 updates and a preview of changes to out-of-date ActiveX control blocking

This post describes the October updates for Internet Explorer that we are releasing today and provides a preview of updates to out-of-date ActiveX control blocking coming in November 2014.

October Updates

Microsoft Security Bulletin MS14-056 - This critical security update resolves one publicly disclosed vulnerability and fourteen privately reported vulnerabilities in Internet Explorer.  For more information see the full bulletin.

Security Update for Flash Player (3001237) - This security update for Adobe Flash Player in Internet Explorer 10 and 11 on supported editions of Windows 8, Windows 8.1, Windows Server 2012 and Windows Server 2012 R2 is also available. The details of the vulnerabilities are documented in Adobe security bulletin APSB14-22. This update addresses the vulnerabilities in Adobe Flash Player by updating the affected Adobe Flash binaries contained within Internet Explorer 10 and Internet Explorer 11. For more information, see the advisory.

Updates to out-of-date ActiveX control blocking coming in November

As we shared back in September, and as part of our ongoing commitment to delivering a more secure browser, we want to help you stay up-to-date with the latest versions of popularly installed ActiveX controls. Today, we’d like to share two exciting updates to the out-of-date ActiveX control blocking feature: updates to our supported operating system and browser combinations and out-of-date Silverlight blocking.

Out-of-date ActiveX control blocking on Windows Vista SP2 and Windows Server 2008 SP2

Beginning January 12, 2016, we’re going to support the following operating system and browser combinations (for more info, see this announcement):

Windows operating system        Internet Explorer version
---------------------------------------------------------
Windows Vista SP2               Internet Explorer 9
Windows Server 2008 SP2         Internet Explorer 9
Windows 7 SP1                   Internet Explorer 11
Windows Server 2008 R2 SP1      Internet Explorer 11
Windows 8.1                     Internet Explorer 11
Windows Server 2012             Internet Explorer 10
Windows Server 2012 R2          Internet Explorer 11

Right now, the out-of-date ActiveX control blocking feature works on all of these combinations except Windows Vista SP2 and Windows Server 2008 SP2 with Internet Explorer 9. Support for these combinations is expected to start on November 11, 2014.

Out-of-date Silverlight blocking

Starting on November 11, 2014, we’re expanding the out-of-date ActiveX control blocking feature to block outdated versions of Silverlight. This update notifies you when a Web page tries to load a Silverlight ActiveX control older than (but not including) Silverlight 5.1.30514.0.
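
If you want to check ahead of time whether a machine would be affected, one option (a sketch that assumes the standard registry location written by the Silverlight installer) is:

# Check the locally installed Silverlight version against the blocking threshold
# (assumes the standard HKLM\SOFTWARE\Microsoft\Silverlight registry location)
$sl = Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Silverlight" -ErrorAction SilentlyContinue
if ($sl -and [version]$sl.Version -lt [version]"5.1.30514.0") {
   "Silverlight $($sl.Version) is installed and would be flagged as out of date."
} elseif ($sl) {
   "Silverlight $($sl.Version) is installed and is recent enough."
} else {
   "Silverlight does not appear to be installed."
}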

You can continue to view the complete list of out-of-date ActiveX controls being blocked by this feature here.

Enterprise testing for out-of-date Silverlight ActiveX control blocking

Remember, out-of-date ActiveX controls aren’t blocked in the Local Intranet Zone or the Trusted Sites Zone, so your intranet sites and trusted line-of-business apps should continue to use ActiveX controls without any disruption.

If you want to see what happens when an employee goes to a Web page with an out-of-date Silverlight ActiveX control after November 11, 2014, you can run this test.

  • On a test computer, install the most recent cumulative update for Internet Explorer.
  • Open a command prompt and run this command to stop downloading updated versions of the versionlist.xml file:
reg add "HKCU\Software\Microsoft\Internet Explorer\VersionManager" /v DownloadVersionList 
/t REG_DWORD /d 0 /f
Important: After you’re done testing, delete this registry key. If you don’t, this computer will stop receiving the updated VersionList.xml file with all of the out-of-date ActiveX controls. Because of this, we don’t recommend setting this registry key in your production environment.

  • Copy the test versionlist-TEST.xml file from here to
    %LOCALAPPDATA%\Microsoft\Internet Explorer\VersionManager\
  • Rename this file to versionlist.xml. Make sure you agree to overwrite any existing file.
  • Important: After you’re done testing, replace this file with its production version from here. We don’t recommend manually changing the versionlist.xml file in your production environment.

  • Restart Internet Explorer.
  • You’ll now get an out-of-date ActiveX control blocking notice when a Web site tries to load an outdated Silverlight ActiveX control.

    Out-of-date Silverlight blocking prompt

    If you need more time to minimize your reliance on outdated Silverlight controls, see the Out-of-date ActiveX control blocking on managed devices section of the Out-of-date ActiveX control blocking topic.

    Additional resources

    — Cassie Condon, Senior Program Manager, Internet Explorer

    — Jasika Bawa, Program Manager, Internet Explorer

    Getting Started with the Office 365 APIs

    This weekend I had the pleasure of speaking on a couple of Office Development topics at Silicon Valley Code Camp, as well as the East Bay.NET user group meeting on Thursday (with special Halloween guest). It was great to pack three talks into one week as I’ve been doing so much internal-facing work lately, that I have been really itching to get back out to speak in front of the developer community.

    One of the areas I’ve been working in for a while is building SharePoint Apps. Office and SharePoint Apps let you customize the Office and SharePoint experiences. Apps are web-based, and you use HTML and JavaScript to customize Office (Outlook, Word, Excel, PowerPoint) and SharePoint itself.

    For more info on apps, see the MSDN Library: Apps for Office and SharePoint

    We’ve also been working on another programming model that I’m really jazzed about. It allows you to build your own custom apps and consume data from Office 365 (Sites, Mail, Calendar, Files, Users). They are simple REST OData APIs for accessing SharePoint, Exchange and Azure Active Directory from a variety of platforms and devices. You can also use these APIs to enhance custom business apps that you may already be using in your organization.

    To make it even easier, we’ve built client libraries for .NET, Cordova and Android. The .NET libraries are portable so you can use them in WinForms, WPF, ASP.NET, Windows Store, Windows Phone 8.1, and Xamarin Android/iOS. There are also JavaScript libraries for Cordova and an Android (Java) SDK available.


    If you have Visual Studio, this gets even easier: install the Office 365 API Tools for Visual Studio extension. The tool streamlines app registration and permissions setup in Azure, and adds the relevant client libraries to your solution via NuGet for you.

    Before you begin, you need to set up your development environment.


    Note that the tools and APIs are currently in preview but they are in great shape to get started exploring the possibilities. Read about the client libraries here and the Office 365 APIs in the MSDN Library. More documentation is on the way!

    Let’s see how it works. Once you install the tool, right-click on your project in the Solution Explorer and select Add – Connected Service...


    This will launch the Services Manager where you log into your Office 365 developer site and select the permissions you require for each of the services you want to use.


    Once you click OK, the client libraries are added to your project along with sample code files to get you started. The client libraries help you perform the auth handshake and provide strong types that make it easier to work with the services.

    The important bits:

    const string MyFilesCapability = "MyFiles";
    static DiscoveryContext _discoveryContext;

    public static async Task<IEnumerable<IFileSystemItem>> GetMyFiles()
    {
        var client = await EnsureClientCreated();
        // Obtain files in folder "Shared with Everyone"
        var filesResults = await client.Files["Shared with Everyone"]
            .ToFolder().Children.ExecuteAsync();
        var files = filesResults.CurrentPage.OrderBy(e => e.Name);
        return files;
    }

    public static async Task<SharePointClient> EnsureClientCreated()
    {
        if (_discoveryContext == null)
        {
            _discoveryContext = await DiscoveryContext.CreateAsync();
        }
        var dcr = await _discoveryContext.DiscoverCapabilityAsync(MyFilesCapability);
        var ServiceResourceId = dcr.ServiceResourceId;
        var ServiceEndpointUri = dcr.ServiceEndpointUri;
        // Create the MyFiles client proxy:
        return new SharePointClient(ServiceEndpointUri, async () =>
        {
            return (await _discoveryContext.AuthenticationContext.AcquireTokenSilentAsync(
                ServiceResourceId, _discoveryContext.AppIdentity.ClientId,
                new Microsoft.IdentityModel.Clients.ActiveDirectory.UserIdentifier(dcr.UserId,
                    Microsoft.IdentityModel.Clients.ActiveDirectory.UserIdentifierType.UniqueId))).AccessToken;
        });
    }

    This code uses the Discovery Service to retrieve the REST endpoints (DiscoverCapabilityAsync). When we create the client proxy, the user is presented with an Office 365 sign-in and then asked to grant permission to our app. Once they authorize, we can access their Office 365 data.
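
    As a quick illustration (my own addition, not from the original post), here’s what calling the GetMyFiles helper above might look like from a console app. It assumes the method lives in the same class, that the preview client libraries are referenced, and that using System; is at the top of the file.

    static void Main()
    {
        // The first call triggers the Office 365 sign-in and consent prompts
        // described above, then we simply enumerate the returned files.
        var files = GetMyFiles().Result;   // block for brevity in this sketch
        foreach (var file in files)
        {
            Console.WriteLine(file.Name);
        }
    }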

    If we look at the request, this call:

    var filesResults = await client.Files["Shared with Everyone"].
            ToFolder().Children.ExecuteAsync();

    translates to (in my case):

    GET /personal/beth_bethmassi_onmicrosoft_com/_api/Files('Shared%20with%20Everyone')/Children

    The response will be a feed of all the file (and any sub-folder) information stored in the requested folder.
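
    If you want to see the raw exchange yourself, here’s a minimal sketch (my own illustration, not part of the client library) that issues the same Files/Children request directly with HttpClient. It assumes you already have the service endpoint URI and an OAuth access token, for example from the discovery and authentication code earlier in this post.

    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class RawRestSketch
    {
        // serviceEndpointUri and accessToken are assumed to come from the
        // discovery and auth code shown earlier.
        public static async Task<string> GetSharedFolderFeedAsync(string serviceEndpointUri, string accessToken)
        {
            using (var http = new HttpClient())
            {
                // Attach the OAuth bearer token to the request
                http.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Bearer", accessToken);

                // Same Files/Children call the client library issued above
                var url = serviceEndpointUri.TrimEnd('/') +
                          "/Files('Shared%20with%20Everyone')/Children";

                // Returns the OData feed describing the folder contents
                return await http.GetStringAsync(url);
            }
        }
    }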

    Play around and discover the capabilities. There’s a lot you can do. I encourage you to take a look at the samples available on GitHub:

    Also check out these video interviews I did this summer to learn more:

    Enjoy!

    October 2014 Updates

    Today, as part of Update Tuesday, we released eight security updates – three rated Critical and five rated Important – to address 24 Common Vulnerabilities & Exposures (CVEs) in Windows, Office, .NET Framework, ASP.NET, and Internet Explorer...(read more)

    KB: "HostAgentBadSharePathname" error message when you try to install System Center 2012 Virtual Machine Manager


    KB3004796

    When you try to install System Center 2012 Virtual Machine Manager (VMM 2012 R2 or VMM 2012), the installation fails and you receive an error message that resembles the following:

    Virtual Machine Manager cannot process the request because an error occurred while authenticating Server-SCVMM-001.Contoso.com. Possible causes are:

    1) The specified user name or password are not valid.
    2) The Service Principal Name (SPN) for the remote computer name and port does not exist.
    3) The client and remote computers are in different domains and there is not a two-way full trust between the two domains.

    Log in by using an account on the same domain as the VMM management server, or by using an account on a domain that has a two-way full trust with the domain of the VMM management server, and then try the operation again. If this does not work, purge the Kerberos tickets on the VMM management server by using kerbtray.exe, available at http://www.microsoft.com/en-us/download/details.aspx?id=17657. Then, reset the SPN for Server-SCVMM-001.Contoso.com by using setspn.exe. If this still does not fix the problem, make Server-SCVMM-001.Contoso.com a member of a workgroup instead of a domain, restart the computer, rejoin the domain, and then try the operation again.

    For all the details and a resolution, please see the following:

    KB3004796 - "HostAgentBadSharePathname" error message when you try to install System Center 2012 Virtual Machine Manager (http://support.microsoft.com/kb/3004796)

    J.C. Hornbeck | Solution Asset PM | Microsoft GBS Management and Security Division

    Get the latest System Center news on Facebook and Twitter:


    System Center All Up: http://blogs.technet.com/b/systemcenter/
    System Center – Configuration Manager Support Team blog: http://blogs.technet.com/configurationmgr/
    System Center – Data Protection Manager Team blog: http://blogs.technet.com/dpm/
    System Center – Orchestrator Support Team blog: http://blogs.technet.com/b/orchestrator/
    System Center – Operations Manager Team blog: http://blogs.technet.com/momteam/
    System Center – Service Manager Team blog: http://blogs.technet.com/b/servicemanager
    System Center – Virtual Machine Manager Team blog: http://blogs.technet.com/scvmm

    Windows Intune: http://blogs.technet.com/b/windowsintune/
    WSUS Support Team blog: http://blogs.technet.com/sus/
    The AD RMS blog: http://blogs.technet.com/b/rmssupp/

    App-V Team blog: http://blogs.technet.com/appv/
    MED-V Team blog: http://blogs.technet.com/medv/
    Server App-V Team blog: http://blogs.technet.com/b/serverappv

    The Forefront Endpoint Protection blog: http://blogs.technet.com/b/clientsecurity/
    The Forefront Identity Manager blog: http://blogs.msdn.com/b/ms-identity-support/
    The Forefront TMG blog: http://blogs.technet.com/b/isablog/
    The Forefront UAG blog: http://blogs.technet.com/b/edgeaccessblog/

    RDP Protocol Documentation Update


    Hello Everyone,

    Periodically we receive questions about how RDP fits into the protocols that Microsoft produces. The Remote Desktop Protocol (RDP) is part of the Open Specifications initiative: through the Open Specifications program, developers can build their own RDP client implementations using the provided technical documentation. A list of RDP protocol documentation can be found here.

    Documentation Errata

    We are now publishing errata for the protocol documentation, with RSS/Atom feeds enabled so that developers can stay up to date on any changes. Below are links to these newly available resources:

    Windows 10 Technical Preview Protocol Documentation

    For the Windows 10 Technical Preview we have published Preview documentation for new and updated protocols in PDF Diff format. The Diff versions contain revision marks showing changes specific to the Windows 10 Technical Preview, as well as document updates made since those documents were last published on May 15, 2014. The documents can be found at http://msdn.microsoft.com/en-us/library/ee941641.aspx

    We are making these changes to give ISVs easier access to the information they need to develop RDP clients.

    Note: Questions and comments are welcome. However, please DO NOT post a request for troubleshooting by using the comment tool at the end of this post. Instead, post a new thread in the RDS & TS forum. Thank you!

    Success with Enterprise Mobility: Identity



    Throughout this series I’ve written quite a bit about identity management and its pivotal role in any enterprise mobility strategy. While I don’t want to be too repetitious on this topic, I do think it’s important to continually emphasize its ongoing value.

    Any strategy that attempts to enable device usage anywhere with any platform has to give you the tools to set policies about how corporate data is accessed and used. This seemingly simple (but incredibly difficult) process is all based on your infrastructure’s ability to identify the individuals and devices accessing your network. Identity management helps keep your data in the right hands at the right times.

    As enterprises continue to consume more and more SaaS offerings (the workforce in an average enterprise uses more than 300 SaaS apps!), IT has to take an active position when it comes to extending identity management to each of these SaaS apps. Today the majority of SaaS apps that are being used are completely unmanaged by IT – and this puts corporate reputation and assets at risk.

    When we look at the big trends and challenges the IT industry is facing, identity management is the key element at play in all of them. For example: The device-based consumerization of IT would be impossible if we couldn’t quickly and easily verify and manage a user’s identity and their devices. A move to a cloud-based or hybrid cloud-based IT infrastructure would be impossible if there wasn’t a way to manage access, and compounding that problem, all your carefully gathered data would be worthless if there wasn’t a simple way to identify who should (and should not) be able to access it.

    Identity management is an area where Microsoft excels because it is a big part of our DNA as a company. Today, over 90% of businesses around the world (and 95% of the Fortune 1000) use Active Directory for their identity management. We have spent millions of person-hours building and fine tuning software that enables enterprises to expand their on-prem investments to the cloud – and now we have optimized our solutions for device management with Azure Active Directory (you can read about AAD in depth here).

    Whenever I get the opportunity to look at the scale and usage of Azure Active Directory I am really impressed. AAD is the premier enterprise identity solution that’s delivered as a cloud service. To give you an idea of its scale and power, it services up to 18 billion authentication requests every day. There are 4 million organizations using AAD to manage access to their Microsoft enterprise services (e.g. Azure, Office 365, EMS, etc.), and it is time to extend AAD’s trusted, reliable functionality to all of the SaaS apps your organization uses.

    Considering the massive install base of AD, it is safe to say that the industry would prefer not to reinvent the wheel or manually recreate all of their identities in the cloud. The good news is that this kind of reinvention is unnecessary since this is exactly what Azure Active Directory (AAD) provides in a secure and comprehensive way. AAD combines directory services, advanced identity governance, application access management, and a developer’s identity management platform.  Impressive, right?

    Using Azure Active Directory to Set Your Organization Apart

    When building your enterprise mobility solution, there is a small handful of critical capabilities that I believe you should list as requirements around identity:

    • Integration into your existing infrastructure.
    • Easy syncing of your internal AD identities with 3rd party SaaS apps – bringing them under common management.
    • Easy syncing with your on-prem directories (aka Active Directory).
    • Self-service capabilities like password reset, group management, user profile management, etc.

    These areas are where Azure Active Directory really shines – especially the AAD Premium capabilities that are a part of the Enterprise Mobility Suite.

    As noted earlier in this series, one of the key benefits AD has been providing for years is centralized identity management and access control across the enterprise, plus a great SSO experience for the end users consuming enterprise services. Now, as organizations use more and more SaaS offerings (e.g. salesforce.com, Office 365, Workday, etc.), a centralized identity management solution is more important than ever: it is critical if you want to manage SaaS apps, protect the corporate information stored and accessed in those apps, and provide an SSO experience to end users.

    One possible way to deliver this kind of functionality is to federate each user with each and every cloud-based app. The challenge, however, is that not all apps use the same protocols or standards when it comes to identity management. This can make federation a very complex and costly operation. What organizations really need is a hub that can do six key things:

    1. Connect SaaS identities with their on-prem Active Directory users.
    2. Seamlessly connect with a variety of cloud applications.
    3. Integrate with various web protocols.
    4. Scale around the globe to authenticate users in any location, from any device, in a way that integrates simply with their existing identities.
    5. Provide SSO to all these apps for users.
    6. Do all of this integration for you, so that you do not have to build it yourself.

    These are the most common scenarios that organizations of all sizes will face as they manage identities in the public cloud:

    • Many applications, one identity repository.
    • Managing identities and access to cloud applications.
    • Monitoring and protecting access to enterprise applications.
    • Personalizing access and self-service capabilities.

    You need to insist that your mobility partners/vendors provide comprehensive solutions for these four scenarios – and those solutions need to connect seamlessly to the on-prem investments you’ve already made.

    These four areas are places where, I’m proud to say, AAD can consistently deliver at enterprise grade.

    Sync & Federation with AAD

    AAD allows you to sync with on-prem Windows Server Active Directory using DirSync, combined with either Active Directory Federation Services (AD FS) or password hash sync. This setup helps you configure SSO and, to make SSO even easier, the most popular cloud apps are already pre-integrated in the application gallery – no matter which public cloud is doing the hosting.

    This kind of integration goes way beyond simple compatibility. Remember that, in every scenario, you are in complete control of what is synchronized from AD into AAD. Our services (like Office 365 and the Enterprise Mobility Suite) only need the users’ identity and four attributes in AAD. The users’ password is not one of those attributes, so you can keep all the passwords in your local directory if you so choose.

    We have already done the work to integrate more than 2,400 of the most popular SaaS apps with AAD, and this fully enables the scenarios described above. We’ve also preconfigured all the parameters needed to federate with these clouds so that an administrator can select the cloud applications their enterprise is already using and configure SSO accordingly. With your identities and apps under control, the Azure Management portal allows for super-efficient management, with a section specifically for AAD administration where you can take your custom LOB apps (or the ones you’ve bought from a vendor) and enable them for SSO.

    Dollars and Cents: The Value of Cloud-based Identity Management

    Once you’re operating your identity management solution from the cloud, your ability to manage a growing number of users and SaaS apps from the same console with the same processes becomes an invaluable advantage.

    Access isn’t the only element that benefits from a top-tier identity management solution, however. Your ability to govern the creation, publishing, and usage of SaaS apps (which can be used via single sign-on) is a huge productivity booster for both you and your end users.

    There’s not an IT team in the world that goes more than a few minutes without thinking about security – and this is something we think a lot about, too. This is why AAD is built on Trustworthy Computing principles, with security as a foundational part of its architecture. I recommend reading that site’s information about just how secure that data is. It is really impressive stuff.

    This is all delivered through Azure Active Directory Premium (a component of the Enterprise Mobility Suite) and it is an incredibly high quality foundation for any Enterprise Mobility strategy.

    When it comes down to authentication and control of corporate resources, not only should your enterprise mobility identity solution require the user to correctly authenticate, it should also know about all the devices being used to access corporate resources. This is exactly what Domain Join has done for Windows devices over the past 15 years. In our Enterprise Mobility solution we have added what you can think of as a modern Domain Join – what we call Workplace Join. Workplace Join enables users to register their personal devices with AAD, which allows IT to express policy on both the user and the device.

    The “Managed Everything” Model

    I previously wrote about the “Managed Everything” approach to infrastructure, and identity plays a crucial role here. Simply put, too many IT teams are saddled with one set of tools for PC management, another set for device management, yet another for server-based computing scenarios, and then something else for identity management.

    Common? Yes. Smart? No.

    This approach makes a lack of integration/interoperability and compromised agility a foundational part of your infrastructure, and it guarantees a fragmented experience that is more expensive and more difficult to operate. Instead, start with a solution that can manage identity no matter where the person or their hardware travels, and then build around this carefully managed structure.

    To get a lot of additional information about Microsoft’s cloud-based identity management solutions, check out this very helpful Hybrid Identity Management site.

    Give Skype Qik a try


    Skype Qik is a new video messaging app that runs alongside Skype on your device and gives you a fun and easy way to capture and share quick moments and thoughts with friends and family. While I regularly have Skype calls with my dad (who lives in a different state), this app can be used in between those calls for quick conversations and to share moments with him. It helps make my dad feel more involved in my life. OK, mushy stuff aside, it can also be used to send quick, funny, and annoying messages to your best friend.


    Download it from the Windows Phone Store and give it a try!


    Security Advisory 3009008 released

    Today, we released Security Advisory 3009008 to address a vulnerability in Secure Sockets Layer (SSL) 3.0 which could allow information disclosure. This is an industry-wide vulnerability that affects the protocol itself, and is not specific to Microsoft’s...(read more)
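
    As general background (my own illustration, not guidance from the advisory itself), a .NET Framework client application can avoid negotiating SSL 3.0 for its own outbound HTTPS calls by restricting ServicePointManager to TLS versions only. Follow the advisory’s recommended mitigations for the authoritative steps.

    using System.Net;

    class TlsOnlyClientConfig
    {
        // Restrict outbound connections made through ServicePointManager to
        // TLS 1.0/1.1/1.2, leaving SSL 3.0 out of the allowed protocols.
        // The Tls11/Tls12 values require .NET Framework 4.5 or later.
        public static void Apply()
        {
            ServicePointManager.SecurityProtocol =
                SecurityProtocolType.Tls |
                SecurityProtocolType.Tls11 |
                SecurityProtocolType.Tls12;
        }
    }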

    Microsoft Virtual Machine Converter 3.0 Now Available


    Yesterday we released the Microsoft Virtual Machine Converter 3.0.  You can download it here: http://www.microsoft.com/en-us/download/details.aspx?id=42497

    It has a staggering number of features, including the ability to convert physical computers to virtual machines and to convert VMware virtual machines to Hyper-V virtual machines, both online and offline.

    Cheers,
    Ben

    New Windows Server containers and Azure support for Docker


    In June, Microsoft Azure added support for Docker containers on Linux VMs, enabling the broad ecosystem of Dockerized Linux applications to run within Azure’s industry-leading cloud. Today, Microsoft and Docker Inc. are jointly announcing that we are bringing the Windows Server ecosystem to the Docker community, through 1) investments in the next wave of Windows Server, 2) open-source development of the Docker Engine for Windows Server, 3) Azure support for the Docker Open Orchestration APIs and 4) federation of Docker Hub images into the Azure Gallery and Portal.

    Many customers are running a mix of Windows Server and Linux workloads, and Microsoft Azure offers customers the most choice of any cloud provider. By supporting Docker containers on the next wave of Windows Server, we are excited to make Docker open solutions available across both Windows Server and Linux. Applications can themselves be mixed, bringing together the best technologies from the Linux ecosystem and the Windows Server ecosystem. Windows Server containers will run in your datacenter, your hosted datacenter, or any public cloud provider – and of course, Microsoft Azure.

     


     

    Windows Server Containers

    Windows Server containers provide applications an isolated, portable and resource controlled operating environment. This isolation enables containerized applications to run without risk of dependencies and environmental configuration affecting the application. By sharing the same kernel and other key system components, containers exhibit rapid startup times and reduced resource overhead. Rapid startup helps in development and testing scenarios and continuous integration environments, while the reduced resource overhead makes them ideal for service-oriented architectures.

    The Windows Server container infrastructure allows for sharing, publishing and shipping of containers to anywhere the next wave of Windows Server is running. With this new technology, millions of Windows developers familiar with technologies such as .NET, ASP.NET, PowerShell and more will be able to leverage container technology. No longer will developers have to choose between the advantages of containers and using Windows Server technologies.


     

    Windows Server containers in the Docker ecosystem

    Docker has done a fantastic job of building a vibrant open source ecosystem based on Linux container technologies, providing an easy user experience to manage the lifecycle of containers drawn from a huge collection of open and curated applications in Docker Hub. We will bring Windows Server containers to the Docker ecosystem to expand the reach of both developer communities.

    As part of this, Docker Engine for Windows Server containers will be developed under the aegis of the Docker open source project, where Microsoft will participate as an active community member. Windows Server container images will also be available in the Docker Hub, alongside the more than 45,000 (and growing) Docker images for Linux already available.

    Finally, we are working on supporting Docker client natively on Windows Server. As a result, Windows customers will be able to use the same standard Docker client and interface on multiple development environments.


    You can find more about Microsoft’s work with the Docker open source project on the MS Open Tech blog here.

    Docker on Microsoft Azure

    Earlier this year, Microsoft released Docker containers for Linux on Azure, offering the first enterprise-ready version of the Docker open platform on Linux Virtual Machines on Microsoft Azure, leveraging the Azure extension model and Azure Cross Platform CLI to deploy the latest and greatest Docker Engine on each requested VM. We have seen lots of excitement from customers deploying Docker containers in Azure as part of our Linux support.

    As part of the announcement today, we will be contributing support for multi-container Docker applications on Azure through the Docker Open Orchestration APIs. This will enable users to deploy Docker applications to Azure directly from the Docker client. This results in a dramatically simpler user experience for Azure customers; we are looking forward to demonstrating this new joint capability at Docker’s Global Hack Day as well as at the upcoming Microsoft TechEd Europe conference in Barcelona.

    Furthermore, we hope to energize Windows Server and Linux customers by integrating Docker Hub into the Azure Gallery and Management Portal experience. This means that Azure customers will be able to interact directly with repositories and images on Docker Hub, enabling rich composition of content both from the Azure Gallery and Docker Hub.

    In summary, today we announced a partnership with Docker Inc. to bring Windows Server to the Docker ecosystem and improve Azure’s support for the Docker Engine and Orchestration APIs and to integrate Docker Hub with the Azure Gallery and Management Portal.

    Azure is placing a high priority on developer choice and flexibility, including first-class support for Linux and Windows Server. This expanded partnership builds on Azure’s current support for Docker on Linux and will bring the richness of the Windows Server and .NET ecosystem to the Docker community. It is an exciting time to be in the Azure cloud!

    Docker and Microsoft: Integrating Docker with Windows Server and Microsoft Azure


    I’m excited to announce today that Microsoft is partnering with Docker, Inc to enable great container-based development experiences on Linux, Windows Server and Microsoft Azure.

    Docker is an open platform that enables developers and administrators to build, ship, and run distributed applications. Consisting of Docker Engine, a lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments.

    Earlier this year, Microsoft released support for Docker containers with Linux on Azure.  This support integrates with the Azure VM agent extensibility model and Azure command-line tools, and makes it easy to deploy the latest and greatest Docker Engine in Azure VMs and then deploy Docker based images within them.  

    Docker Support for Windows Server + Docker Hub integration with Microsoft Azure

    Today, I’m excited to announce that we are working with Docker, Inc to extend our support for Docker much further.  Specifically, I’m excited to announce that:

    1) Microsoft and Docker are integrating the open-source Docker Engine with the next release of Windows Server.  This release of Windows Server will include new container isolation technology, and support running both .NET and other application types (Node.js, Java, C++, etc) within these containers.  Developers and organizations will be able to use Docker to create distributed, container-based applications for Windows Server that leverage the Docker ecosystem of users, applications and tools.  It will also enable a new class of distributed applications built with Docker that use Linux and Windows Server images together.


    2) We will support the Docker client natively on Windows.  Developers and administrators running Windows will be able to use the same standard Docker client and interface to deploy and manage Docker based solutions with both Linux and Windows Server environments.


     

    3) Docker for Windows Server container images will be available in the Docker Hub alongside the Docker for Linux container images available today.  This will enable developers and administrators to easily share and automate application workflows using both Windows Server and Linux Docker images.

    4) We will integrate Docker Hub with the Microsoft Azure Gallery and Azure Management Portal.  This will make it trivially easy to deploy and run both Linux and Windows Server based Docker images in Microsoft Azure.

    5) Microsoft is contributing code to Docker’s Open Orchestration APIs.  These APIs provide a portable way to create multi-container Docker applications that can be deployed into any datacenter or cloud provider environment. This support will allow a developer or administrator using the Docker command line client to launch either Linux or Windows Server based Docker applications directly into Microsoft Azure from his or her development machine.

    Exciting Opportunities Ahead

    At Microsoft we continue to be inspired by technologies that can dramatically improve how quickly teams can bring new solutions to market. The partnership we are announcing with Docker today will enable developers and administrators to use the best container tools available for both Linux and Windows Server based applications, and to run all of these solutions within Microsoft Azure.  We are looking forward to seeing the great applications you build with them.

    You can learn more about today’s announcements here and here.

    Hope this helps,

    Scott

    Visual Studio Online Update – Oct 14th


    Today we began releasing our Sprint 72 build on VS Online.  You can read the release notes on the site.  As usual, it will take a couple of days for the changes to propagate across all accounts.

    This was a pretty light sprint.  We released the “Test artifacts as work items” changes, previously made available in TFS 2013.3, that enable customization (eventually), auditing, permissioning, querying, etc.  We also introduced a small new feature that enables copying formatted query results to the clipboard.

    The truth is we spent much of this sprint making progress on some of the fundamentals underlying the live site incidents we had a couple of months ago.  We’ve been introducing a new lock manager, implementing a circuit breaker pattern, etc.  We’ve got enough code to go through that it’s a non-trivial undertaking.  I think we’re going to have some exciting stuff to talk about in the next month or so though.  Stay tuned…
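
    For readers unfamiliar with the pattern, here’s a minimal, generic sketch of a circuit breaker – my own illustration, not VS Online’s actual implementation. After a configurable number of consecutive failures the breaker “opens” and callers fail fast until a cool-down period elapses, which keeps a struggling dependency from being hammered during an incident.

    using System;

    public class CircuitBreaker
    {
        private readonly int _failureThreshold;
        private readonly TimeSpan _coolDown;
        private int _consecutiveFailures;
        private DateTime _openedAtUtc;

        public CircuitBreaker(int failureThreshold, TimeSpan coolDown)
        {
            _failureThreshold = failureThreshold;
            _coolDown = coolDown;
        }

        public T Execute<T>(Func<T> operation)
        {
            // Fail fast while the breaker is open and the cool-down has not elapsed
            if (_consecutiveFailures >= _failureThreshold &&
                DateTime.UtcNow - _openedAtUtc < _coolDown)
            {
                throw new InvalidOperationException("Circuit is open; failing fast.");
            }

            try
            {
                var result = operation();
                _consecutiveFailures = 0;   // a success closes the circuit again
                return result;
            }
            catch
            {
                if (++_consecutiveFailures == _failureThreshold)
                {
                    _openedAtUtc = DateTime.UtcNow;   // trip the breaker
                }
                throw;
            }
        }
    }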

    Thanks,

    Brian
