Sunday, June 19, 2016

What's new in Hyper-V Private Cloud on Windows Server 2016

1.    In-place cluster upgrade

Finally, we have an in-place cluster upgrade feature: you can now add a node running Windows Server 2016 to a Hyper-V cluster whose other nodes run Windows Server 2012 R2. The cluster continues to function at the Windows Server 2012 R2 feature level until all of the nodes have been upgraded, at which point you raise the cluster functional level with the Update-ClusterFunctionalLevel Windows PowerShell cmdlet.
After you update the cluster functional level, you can't downgrade it back to Windows Server 2012 R2. It appears to work much like the Active Directory functional level.

When the Hyper-V Cluster has a mix of both Windows Server 2012 R2 and Windows Server 2016 nodes, you can still move virtual machines between all of the nodes in the Hyper-V Cluster.
After you upgrade the cluster functional level to Windows Server 2016, the following applies:
·         To enable new virtual machine features, you need to manually upgrade the virtual machine configuration version of each virtual machine using the Update-VMVersion cmdlet (see the sketch after this list).
·         You can enable new Hyper-V features.
·         You can't add a node to the Hyper-V Cluster that runs Windows Server 2012 R2.
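
As a rough sketch, here is how the tail end of a rolling upgrade might look in PowerShell, run from one of the cluster nodes once every node is on Windows Server 2016 (a sketch under those assumptions, not required syntax):
# Raise the cluster functional level; note that this cannot be undone
Update-ClusterFunctionalLevel
# Upgrade the configuration version of the VMs on this node
# (each VM must be shut down before its version can be upgraded)
Get-VM | Update-VMVersion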

2.    Nested virtualization

Yes, this feature is finally available. It lets you use a virtual machine as a Hyper-V host and create virtual machines within that virtualized host, which can be especially useful for development and test environments. To use nested virtualization (see the sketch after this list), you'll need:
·         At least 4 GB RAM available for the virtualized Hyper-V host.
·         To run at least Windows Server 2016 or Windows 10 build 10565 on both the physical Hyper-V host and the virtualized host. Running the same build in both the physical and virtualized environments generally improves performance.
·         A processor with Intel VT-x (nested virtualization is available only for Intel processors at this time).
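
A minimal sketch for enabling it, assuming a VM named "NestedHost" (a placeholder) that is currently powered off:
# Expose the virtualization extensions to the guest so it can run Hyper-V
Set-VMProcessor -VMName "NestedHost" -ExposeVirtualizationExtensions $true
# Dynamic Memory is not supported on a nested host, so use static memory
Set-VMMemory -VMName "NestedHost" -DynamicMemoryEnabled $false
# Enable MAC address spoofing so nested guests can reach the network
Get-VMNetworkAdapter -VMName "NestedHost" | Set-VMNetworkAdapter -MacAddressSpoofing On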

3.    Hot add and remove for network adapters and memory

Yes, this is finally available for generation 2 virtual machines: you can now add or remove a network adapter while the virtual machine is running, without incurring downtime. It works with both Windows and Linux guest operating systems.
You can also adjust the amount of memory assigned to a virtual machine while it's running, even if you haven’t enabled Dynamic Memory. This works for both generation 1 and generation 2 virtual machines.
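
A minimal sketch of both operations, assuming a running VM named "VM01" and a virtual switch named "Switch1" (both placeholders):
# Hot-add a network adapter to a running generation 2 virtual machine
Add-VMNetworkAdapter -VMName "VM01" -SwitchName "Switch1"
# Resize the memory of a running VM that does not use Dynamic Memory
Set-VMMemory -VMName "VM01" -StartupBytes 8GB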

4.    Networking features

New networking features include:
·         Remote direct memory access (RDMA) and switch embedded teaming (SET). You can set up RDMA on network adapters bound to a Hyper-V virtual switch, regardless of whether SET is also used. SET provides a virtual switch with some of the same capabilities as NIC teaming (see the sketch after this list). For details, see Remote Direct Memory Access (RDMA) and Switch Embedded Teaming (SET).
·         Virtual machine multi queues (VMMQ). Improves on VMQ throughput by allocating multiple hardware queues per virtual machine. The default queue becomes a set of queues for a virtual machine, and traffic is spread between the queues.
·         Quality of service (QoS) for software-defined networks. Manages the default class of traffic through the virtual switch within the default class bandwidth.
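
As a rough sketch of SET, here is how a teamed virtual switch might be created over two physical adapters (the switch and adapter names are placeholders):
# Create a virtual switch with Switch Embedded Teaming across two NICs
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true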

5.    Linux Secure Boot

Linux operating systems running on generation 2 virtual machines can now boot with the Secure Boot option enabled. Supported distributions include Ubuntu 14.04 and later, SUSE Linux Enterprise Server 12 and later, Red Hat Enterprise Linux 7.0 and later, and CentOS 7.0 and later.
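
A minimal sketch, assuming a generation 2 Linux VM named "Linux01" (a placeholder); before first boot, point the firmware at the UEFI certificate authority that signs these distributions' boot loaders:
# Select the Microsoft UEFI Certificate Authority Secure Boot template
Set-VMFirmware -VMName "Linux01" -SecureBootTemplate MicrosoftUEFICertificateAuthority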

6.    Production checkpoints

Production checkpoints allow you to easily create "point in time" images of a virtual machine, which can be restored later on in a way that is completely supported for all production workloads. Rather than saving the running state as standard checkpoints do, they use backup technology inside the guest (the Volume Shadow Copy Service on Windows, file-system freeze on Linux) to create a data-consistent image.
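
A minimal sketch, assuming a VM named "VM01" (a placeholder):
# Prefer production checkpoints, falling back to a standard checkpoint
# if a production checkpoint cannot be taken
Set-VM -Name "VM01" -CheckpointType Production
# Take a checkpoint of the running VM
Checkpoint-VM -Name "VM01" -SnapshotName "Pre-update"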

7.    Guest Integration services through Windows Update

Yes, updates to the guest integration services are now delivered through Windows Update, rather than by inserting the integration services setup disk from the host.

8.    Shielded Guest virtual machines

Shielded virtual machines use several features to make it harder for datacenter administrators and malware to inspect, tamper with, or steal data and the state of these virtual machines. Data and state are encrypted.

9.    Windows PowerShell Direct

This is an easy and reliable way to run Windows PowerShell commands inside a virtual machine from the host operating system. There are no network or firewall requirements and no special configuration; it works regardless of your remote management configuration. To use it, you must run Windows 10 or Windows Server 2016 Technical Preview on both the host and the virtual machine's guest operating system.
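
A minimal sketch, assuming a local VM named "VM01" (a placeholder); you will be prompted for guest credentials:
# Run a one-off command inside the guest over PowerShell Direct
Invoke-Command -VMName "VM01" -Credential (Get-Credential) -ScriptBlock { Get-Service }
# Or open an interactive session inside the guest
Enter-PSSession -VMName "VM01" -Credential (Get-Credential)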

10. Windows Containers

Windows Containers allow many isolated applications to run on one computer system.

Sunday, May 15, 2016

How/Why: Hyper-V Server failover cluster

Microsoft has long offered Hyper-V Server, a free, standalone version of its hypervisor, but, for whatever reason, Hyper-V Server has gained a reputation for being appropriate only for use in a lab environment. Believe it or not, it is possible to deploy Hyper-V Server in a way that allows your Hyper-V virtual machines to be made highly available.
In order to build a fault-tolerant Hyper-V deployment, there are a few things that you'll need.
First, you will need a storage array that can be used for shared storage (Cluster Shared Volumes, or CSVs). These storage requirements are the same as for any other Hyper-V deployment. Next, you need a copy of Hyper-V Server, which can be downloaded from the Microsoft website.
Third, you require a basic understanding of how failover clustering is normally deployed and configured. Having some up-front knowledge of failover clustering will make it much easier to build a cluster based on Hyper-V Server.
Finally, to build a fault-tolerant Hyper-V deployment, you must have a general knowledge of PowerShell. If your PowerShell comprehension is a little rusty, I recommend taking advantage of the Sconfig.cmd utility. This utility provides a menu-driven interface for configuring a server. Employing this utility will minimize the amount of PowerShell that you will have to use.
The first step in building a failover cluster using the Hyper-V Server is to install Hyper-V Server on each server that will act as a cluster node. Once it has been installed, you will need to use the Sconfig.cmd to establish the initial configuration for each server. This means assigning an IP address to each network interface card (NIC), giving each node a unique and meaningful computer name, joining an Active Directory domain, and enabling remote management. All of these tasks can be easily completed using the Sconfig.cmd utility.
Once you complete the initial configuration process, you must make a few decisions regarding your failover cluster. You will need to choose a name and an IP address for the cluster. You will also need to figure out how you are going to connect the cluster nodes to the shared storage. The easiest solution is to create two Server Message Block file shares. One of these shares will be used as shared storage, while another is used as a File Share Witness.
For the sake of demonstration, let's pretend that you wanted to create a cluster with a cluster name of "Cluster1" and a cluster IP address of 192.168.0.1. Let's also assume that the NIC that you want to use for cluster communications on each cluster node is named "Ethernet 2" -- you can get the actual NIC name by using the Get-NetAdapter cmdlet. Now imagine that your cluster nodes are named "Hyper-V-1," "Hyper-V-2" and "Hyper-V-3". Finally, we will need a Universal Naming Convention path for our File Share Witness. We will also need to assign a name to the Hyper-V virtual switch. For the sake of this demonstration, I will use "Switch1" as the virtual switch name -- each node must use the same virtual switch name -- and I will use "\\storage\witness" as the File Share Witness path. Given those conventions, the commands used to build a failover cluster would be:
# Install the failover clustering feature (run on each node)
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
# Create the virtual switch, using the same switch name on each node
New-VMSwitch "Switch1" -NetAdapterName "Ethernet 2" -AllowManagementOS:$True
# Validate the prospective cluster configuration
Test-Cluster -Node Hyper-V-1,Hyper-V-2,Hyper-V-3
# Create the cluster itself
New-Cluster -Name Cluster1 -Node Hyper-V-1,Hyper-V-2,Hyper-V-3 -StaticAddress 192.168.0.1
# Point the quorum configuration at the File Share Witness
Set-ClusterQuorum -Cluster Cluster1 -NodeAndFileShareMajority \\Storage\Witness
The only thing left to do at this point is to connect your shared storage to the cluster. The method that you will use to do so will vary depending on the type of storage that you are using. You can use the Add-ClusterDisk cmdlet to get the job done, but I advise installing the Failover Cluster Manager onto another Windows Server (one that has a graphical user interface) and using that tool to add storage to the cluster. That way you won't have to worry about the complexities of configuring shared storage from the command line.
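
If you do want to stay in PowerShell, here is a minimal sketch, assuming the shared LUNs are already visible to every node and the disk resource name is the default (a placeholder):
# Add every disk that is visible to all nodes and eligible for clustering
Get-ClusterAvailableDisk -Cluster Cluster1 | Add-ClusterDisk
# Convert a clustered disk into a Cluster Shared Volume for VM storage
Add-ClusterSharedVolume -Cluster Cluster1 -Name "Cluster Disk 1"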

As you can see, it is possible to achieve high availability using Hyper-V Server. That being the case, you may be wondering why any organization would pay for a Windows Server license for its Hyper-V nodes. The answer usually comes down to VM licensing: Windows Server 2012 R2 Datacenter Edition, for instance, allows a properly licensed Hyper-V host to run an unlimited number of Windows Server 2012 R2 VMs. Without such a license, VM licensing must be handled separately, which can be costly and complicated.

Monday, May 25, 2015

How to check DC & RODC authentication issues

By default, a client will contact a DC in its own site, and site membership is configured in "Active Directory Sites and Services". We can simulate the DC locator process and gather a Network Monitor capture to see what happens while the client locates a DC. The action plan is as follows:

Download Network Monitor 3.3 and install it on the client (to the default path):
http://www.microsoft.com/downloads/details.aspx?FamilyID=983b941d-06cb-4658-b7f6-3088333d062f&displaylang=en

Nltest.exe is a command-line tool included in the Windows Support Tools. Download the Support Tools from:
http://www.microsoft.com/downloads/details.aspx?FamilyID=6ec50b78-8be1-4e81-b3be-4e7ac4f0912d&DisplayLang=en.

On the client, open a command prompt and run the following command to start Network Monitor:
"%ProgramFiles%\Microsoft Network Monitor 3\nmcap" /network * /capture /file %ComputerName%_test.cap:50M /DisableConversations /DisableLocalOnly

On the client, open another command prompt, navigate to the path in which nltest.exe resides, and run the following commands:
ipconfig /flushdns
nltest /dsgetdc:domainname /force > dsgetdc.txt
nltest /dsgetsite > dsgetsite.txt
set l > setl.txt
NOTE: replace "domainname" with your real domain name.
In the command prompt where Network Monitor is running, press CTRL+C to stop the capture. The log is saved in the current path.

You will find the dsgetdc.txt, dsgetsite.txt, and setl.txt results in the same place.
