
Automate Log Off of Old Sessions on Horizon View 7

Virtualization with VMWare Horizon 7 has been a wonderful thing for the world of public access computers. Users are free to do what they like, and at the end of the day the machines can be wiped clean.

Unfortunately, the web interface does not provide a convenient way to schedule refreshes at a particular time, so this has been left to scheduled logoff tasks within the VMs themselves. For some reason this approach has been unreliable in my environment, often leaving about a dozen of the approximately one hundred machines still running and requiring manual intervention.

In earlier versions of View or Horizon View such as 5.1, automating logoff would have been a simple task with the PowerCLI extensions, and example scripts could be found on a number of websites. VMWare has since changed their API, and newer versions of the Horizon View helper PowerShell scripts do not use the same method names – if a direct equivalent exists at all.

Thankfully, the helper scripts are open source and there are active contributors who have recently added some much needed functionality to make this task possible without directly interacting with the VMWare API.

First, if this task is to be automated, credentials must be provided.
This can be accomplished using built-in PowerShell XML serialization.
First, input the credentials required to connect to the Horizon View Connection Server – make sure to include the domain.
$credential = Get-Credential
Next, export the credential information to an XML file.
$credential | Export-CliXml -Path 'C:\Scripts\credential.xml'

Now that the credentials have been encrypted using a user- and machine-specific key, they can be imported for use in future Horizon View PowerShell scripts running locally on the same machine under the same account.

To load the XML and use it as credentials again, use the following:
$credential = Import-CliXml -Path 'C:\Scripts\credential.xml'

The Horizon View helper connection method can make use of the imported credentials.
$hvServices = Connect-HVServer localhost -Credential $credential

Now that the connection has been established, information about current sessions can be collected using the Get-HVLocalSession cmdlet. This returns session summary objects with a few properties, including Id, NamesData, ReferenceData, and SessionData.

The methods and properties can be listed for any of these by piping the output into Get-Member.
PowerCLI C:\> Get-HVLocalSession | Get-Member
PowerCLI C:\> (Get-HVLocalSession).Id | Get-Member

The portions of the session information required for this script to function are the session Id and the session StartTime. I will also use the machine name for some output that may be useful when running the script in a console window.

#Load the saved credential
$credential = Import-CliXml -Path 'C:\Scripts\credential.xml'
#Connect to the Horizon View connection server running on the local machine
$hvServices = Connect-HVServer localhost -Credential $credential
#Get the list of session summaries
$sessionSummaries = Get-HVLocalSession
#For each session in the list
foreach ($sessionSummary in $sessionSummaries) {
    #Record the session id
    $sessionId = $sessionSummary.Id
    #Record the machine name
    $machineName = $sessionSummary.NamesData.MachineOrRDSServerName
    #Get the difference in hours between the current time and the time the session was established
    $sessionTime = (New-TimeSpan -Start $sessionSummary.SessionData.StartTime).TotalHours
    #Only log the session off if the start time was more than 8 hours ago
    #Compare the fractional hours directly - a cast to [int] would round 8.4 down to 8 and skip that session
    if ($sessionTime -gt 8) {
        Write-Host "Machine with name $machineName has been connected for $([math]::Round($sessionTime, 1)) hours."
        #Uncomment this line to actually log the session off
        #$hvServices.ExtensionData.Session.Session_Logoff($sessionId)
    }
}
#Disconnect when finished
Disconnect-HVServer -Server $hvServices -Confirm:$false

The line to log the session off has been left commented out for testing; running the script as-is will output a list of machines with sessions that started more than 8 hours ago.

To use the script, first export the credential information to an XML file and update the file path as required, then adjust the number of hours used for comparison as desired. After a test run to verify that the desired results are produced, uncomment the logoff line.

Finally, the scheduled task can be created on the Horizon View connection server.
The trigger should be a particular time ( the one that would be used for comparison to the start time ) and the action should be to start a program.
The program command line is “C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe” with the arguments pointing at the script, e.g. -File “C:\Scripts\logoffOldSessions.ps1”
Make sure that the user account set to run this task is the same one used to create the credential XML.
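Alternatively, the scheduled task itself can be created from PowerShell using the ScheduledTasks module (available on Server 2012 and later). This is a minimal sketch; the task name, trigger time, and the DOMAIN\svc-horizon account are assumptions – substitute your own values, and use the same account that exported credential.xml.

```powershell
# Sketch: register the nightly logoff task without the Task Scheduler GUI.
# Task name, trigger time, and account are assumptions for illustration.
$action = New-ScheduledTaskAction `
    -Execute 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe' `
    -Argument '-ExecutionPolicy Bypass -File "C:\Scripts\logoffOldSessions.ps1"'
$trigger = New-ScheduledTaskTrigger -Daily -At '11:00PM'
# Run as the same account that exported credential.xml - the DPAPI-encrypted
# credentials only decrypt for that user on that machine.
Register-ScheduledTask -TaskName 'Logoff Old Horizon Sessions' `
    -Action $action -Trigger $trigger `
    -User 'DOMAIN\svc-horizon' -Password 'ServiceAccountPassword'
```

Passing -User and -Password makes the task run whether or not the account is logged on, which is what an unattended nightly job needs.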

Download VMWare PowerCLI: https://code.vmware.com/web/dp/tool/vmware-powercli/
VMWare Helper Scripts: https://github.com/vmware/PowerCLI-Example-Scripts/tree/master/Modules/VMware.Hv.Helper
Installation Instructions: https://blogs.vmware.com/euc/2017/01/vmware-horizon-7-powercli-6-5.html


Learning About Mining with Nicehash on Nvidia 9×0 series GPUs

I fairly recently decided to jump in on mining ‘altcoins’, and was looking for a straightforward, low-effort way to manage my hardware and the coins, wallets, etc. I started knowing very little, other than what a cryptocurrency was and that I could mine profitably with the spare last-generation graphics hardware I had.

Nicehash met this goal by providing a simple Windows client, payment in a single currency ( Bitcoin ), automatic algorithm switching, and some basic worker activity monitoring with notifications.

My current GPUs are two GTX 970 cards and one GTX 980 ti.

Initially I was mining with only the 970 cards, and letting the Nicehash client run with the default settings. Things were off to a promising start, but the performance suddenly tailed off, producing less than an estimated $0.65 per day combined.

After much research, I found that the Daggerhashimoto algorithm that was being selected as most profitable had changed epochs – up from the 140 used for benchmarking. On other cards with 4GB of VRAM this may not be an issue, but the controller on the GTX 970 seems to take issue with how the memory is used or accessed – even though the DAG file is still technically less than 3GB. I’m not sure if it is related to the problem with memory over 3.5GB for gaming, but it seems like a reasonable guess.

In the end, the only solution was to disable the algorithm.

Along the way while attempting to reach the mythical 22MH/s per card running Daggerhashimoto ( which I did hit once somehow, but never since ), I did learn that setting the P0 state for compute, combined with a decent memory ( +450 ) and core ( +160 ) overclock, makes a big difference for most of the other popular algorithms as well – the ones I’ve seen selected frequently are Equihash, Lyra2REv2, Nist5, and NeoScrypt.

Watching the graphs with near real time estimated hashrates and profitability along with the current exchange rate of Bitcoin to USD was a fun way to burn a few minutes.

Unfortunately, the time of easy profit seems to be over. Where the two 970 cards were generating .0005 BTC a day back in January, which paid for the purchase of a much-needed water cooling kit for the GTX 980 ti, I’m now lucky to average .0004 BTC a day with all three. Relatively weak exchange of Bitcoin to fiat ( compared to its peak in January ) and uncertainty about regulation or fraud across the market in general have dampened the short February rebound that boosted ZCash etc. ( Equihash algorithm ) and worked out very well on my Nvidia cards.

For now I have access to very cheap power and intend to HODL, hoping the market improves. It’s always tempting to pick up more hardware even with the payout continually dwindling and an estimated year-plus ROI, but I can’t bring myself to purchase with such ridiculous prices on the 10 series GPUs and Nvidia’s expected announcement of next-gen cards so close.

All said, I’ve been very happy with the two 970 cards on air at 65°C the vast majority of the time, but the 980 ti ran hot enough to make me uncomfortable. I can definitely recommend the Kraken G12 and Corsair H55 – the full-load temperatures are 20°C cooler, down from 75°C to 55°C.

So, readers; do you mine with your 970 and 980 cards? What’s your best overclock or benchmark?

Windows Server System State Backup on VMWare ESXi with VMWare Tools installed Fails

Problem Description:

Windows Server Backup may fail to complete a System State backup on Windows Server when VMWare Tools is installed, with the following error in the error log (located in C:\Windows\Logs\WindowsServerBackup\Backup_Error-{date}_{time}.log):

Error in backup of C:\windows\\systemroot\ during enumerate: Error [0x8007007b] The filename, directory name, or volume label syntax is incorrect.

Cause:

During installation, the path for one of the VMWare tools drivers, vsock.sys, is set incorrectly.

This can be verified using the diskshadow utility.

First, at an elevated command prompt, type the following, then press ENTER:

DiskShadow /L writers.txt

The prompt should change to DISKSHADOW>

Next, type the following, then press ENTER:

list writers detailed

This will take a while, eventually listing all of the writers and affected volumes.

Once complete, type EXIT, then press ENTER. Open the writers.txt file; a search for ‘windows\\’ should find the following:

– File List: Path = c:\windows\\systemroot\system32\drivers, Filespec = vsock.sys

Solution:

Run REGEDIT, then navigate to

HKLM\SYSTEM\CurrentControlSet\Services\vsock

Then change the ImagePath value string data from the incorrect ‘\systemroot\system32\DRIVERS\vsock.sys’ to

System32\DRIVERS\vsock.sys
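The same change can be made from an elevated PowerShell prompt instead of REGEDIT. This is a sketch, assuming ImagePath is a REG_EXPAND_SZ (ExpandString) value as service image paths normally are – verify the existing value type before overwriting.

```powershell
# Sketch: overwrite the incorrect vsock ImagePath with the corrected,
# SystemRoot-relative path. -Force replaces the existing value.
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\vsock' `
    -Name ImagePath -PropertyType ExpandString `
    -Value 'System32\DRIVERS\vsock.sys' -Force
```

As with any registry change, export the key first so the original value can be restored if needed.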

Unitrends Backup, FreeNAS, and VMWare iSCSI

In the process of upgrading from existing physical servers to an (almost) fully virtualized environment, a new backup system was required.

The existing backup system is USB, utilizing 1TB Tandberg RDX QuikStor cartridges managed by Backup Exec on one of the physical servers running Server 2008 R2.

Backups are run nightly, with one cartridge weekly taken off site.

There were two physical servers, Microsoft Exchange, and the Active Directory Domain Controller / Fileserver.

The new system has one physical Server 2016 domain controller, two VMWare ESXi hosts, an HP SAN, a virtual Server 2016 domain controller, the virtual Microsoft Exchange server, and a virtual MX integration box that emails voicemail from the VOIP system (moved P2V; previously not backed up).

The goal is to produce scheduled daily backups in the rack on storage separate from the SAN, then replicate the backup offsite to another datacenter that houses the public side servers over a 100Mbit fiber WAN link.

Unitrends’ backup solution seemed ideal for this; I had used the free version for testing and found that the backup fit within the 1TB storage limitation quite easily.

The FreeNAS box is an HP server with 20GB of RAM, two 8-core Intel Xeon processors, a 4-port 1Gbit Broadcom card, the integrated HP P410i SAS RAID controller, and four 600GB 10k SAS drives.

The HP P410i does not support HBA mode or passthrough, so we must configure the RAID volumes using the HP Array Configuration Utility (provided on the bootable HP Smart Start ISO) rather than creating software RAID with FreeNAS.

The configuration for testing was RAID 1+0, which produced a 1.2TB volume with very good write and read speeds – while this was fine for the free version, I did need the extra capacity that only RAID 5 could provide. Initially I encountered very slow write performance, a paltry 10MB per second. This was suspiciously close to the maximum throughput for a 100Mbit network link, so I suspected that something was wrong with my network configuration.

My initial configuration for the FreeNAS box used the Link Aggregation Protocol to group three 1Gbit ports on the box and switch; however, as I discovered, this is not supported for iSCSI. Link aggregation was only providing the bandwidth of a single 1Gbit link, but unfortunately it was not the cause of the slow writes.

I discovered that the controller cache was incorrectly allocated only to reads by default in the ACU, which amplified the already terrible write penalty for RAID 5 in spite of having the drive cache and every other option enabled.

The solution to make the best use of the multiple network links is iSCSI MPIO. For VMWare ESXi and FreeNAS, this can be accomplished by configuring each port on the FreeNAS box with its own subnet (for example 172.16.1.x, 172.16.2.x, 172.16.3.x, all /24) and assigning them each to the iSCSI portal.

On the VMWare side, a matching VMKernel adapter for each subnet on each host is required, bound to their own physical Ethernet ports. Once complete, bind the new network ports to the iSCSI adapter, add the target IP addresses for the portal and rescan, then set the multipathing policy for the device to Round Robin.
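The ESXi-side steps above can also be scripted with PowerCLI once the VMkernel adapters exist. This is a sketch under assumptions – the host name, the software iSCSI adapter name (vmhba64), and the vmk numbers are placeholders for your environment.

```powershell
# Sketch: bind the new VMkernel ports to the software iSCSI adapter,
# then set Round Robin multipathing on the discovered LUNs.
# Host, adapter, and vmk names below are assumptions.
$esxcli = Get-EsxCli -VMHost 'esx01.example.local' -V2
$esxcli.iscsi.networkportal.add.Invoke(@{adapter = 'vmhba64'; nic = 'vmk1'})
$esxcli.iscsi.networkportal.add.Invoke(@{adapter = 'vmhba64'; nic = 'vmk2'})
$esxcli.iscsi.networkportal.add.Invoke(@{adapter = 'vmhba64'; nic = 'vmk3'})
# After adding the portal targets and rescanning, switch the path policy
Get-VMHost 'esx01.example.local' | Get-ScsiLun -LunType disk |
    Where-Object { $_.MultipathPolicy -ne 'RoundRobin' } |
    Set-ScsiLun -MultipathPolicy RoundRobin
```

Repeat per host, and confirm in the vSphere client that each LUN shows one active I/O path per subnet.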

All of these network and cache configuration changes were made with the Unitrends VM shut down, without negatively affecting the Unitrends Backup storage volume. Once everything was powered back up, a quick 2GB test backup succeeded in writing 30MB/second, using approximately 150Mbit/second on each of two of the 1Gbit Ethernet ports. A rather dramatic improvement.

Windows 7 freezes at ‘Shutting Down’ or while restarting

If your machine is freezing during shutdown, check your graphics drivers.

The particular system where I encountered this problem had an AMD Radeon GPU and Intel integrated graphics, an optional Windows Update had updated the AMD graphics driver.

There were no events in the shutdown performance event log to help in troubleshooting and the problem persisted in Safe Mode.

To test, roll back the driver or use the Microsoft basic display driver. If the machine shuts down successfully try getting the latest drivers from AMD and Intel.

https://downloadcenter.intel.com/

http://support.amd.com/en-us/download

Failed to get full path for string error 0x8007007b ERROR_INVALID_NAME

Description:

Windows Updates will not function, SFC reports that Windows Resource Protection could not perform the requested operation.

Related Errors (CBS.log) :

2016-11-26 08:44:15, Info CBS Starting TrustedInstaller initialization.
2016-11-26 08:44:15, Info CBS Failed to get full path for string: [HRESULT = 0x8007007b – ERROR_INVALID_NAME]
2016-11-26 08:44:15, Info CBS Failed to expand path from onine store: C:\Windows\winsxs\amd64_microsoft-windows-servicingstack_31bf3856ad364e35_6.1.7601.17592_none_672ce6c3de2cb17f\ [HRESULT = 0x8007007b – ERROR_INVALID_NAME]
2016-11-26 08:44:15, Info CBS Failed to find servicing stack directory in online store. [HRESULT = 0x8007007b – ERROR_INVALID_NAME]
2016-11-26 08:44:15, Info CBS Must be doing offline servicing, using stack version from: C:\Windows\winsxs\amd64_microsoft-windows-servicingstack_31bf3856ad364e35_6.1.7601.17592_none_672ce6c3de2cb17f\cbscore.dll
2016-11-26 08:44:15, Info CBS Loaded Servicing Stack v6.1.7601.17592 with Core: C:\Windows\winsxs\amd64_microsoft-windows-servicingstack_31bf3856ad364e35_6.1.7601.17592_none_672ce6c3de2cb17f\cbscore.dll

Solution:

Check what value(s) are present in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\Version
This key should contain a single value; its name will be the current version of the servicing stack components, with the data providing the path to the servicing stack folder in WinSxS.

If more than one value is present, export the entire key to back it up and then remove all values but the most recent version. Verify that the version numbers in the value name and the servicing stack folder match and that the servicing stack folder exists.
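Before editing by hand, the values under the key can be listed from PowerShell along with a check that each path actually exists on disk. This is a sketch using the key path described above; the output format is just for inspection.

```powershell
# Sketch: enumerate the servicing stack Version values and verify that
# each value's data points at an existing WinSxS folder.
$key  = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\Version'
$item = Get-Item -Path $key
foreach ($name in $item.Property) {
    $path = $item.GetValue($name)
    # Value name = servicing stack version; data = path to its folder
    '{0} -> {1} (exists: {2})' -f $name, $path, (Test-Path -Path $path)
}
```

A stale entry will typically show a version whose folder no longer exists; that is the value to remove after exporting the key as a backup.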

VMWare Horizon View Linked Clones Stuck Customizing

Symptoms:

Linked clones from a parent VM snapshot finish provisioning but are not customized (machine name doesn’t change etc.).

Cause:

In my case, I had changed the parent VM’s disk configuration several times before taking the first snapshot. I don’t see any notes in VMWare’s documentation about this, but it is the only thing I had done differently with this parent VM.

Resolution:

Delete the parent VM and create a new one. DO NOT change the disk configuration after the VM has been created.