Wednesday, 29 July 2020

It’s a local file cache - just not as you know it.


When you design a serverless school there is always the option of leaving a little local storage in the mix, just to be on the safe side, but this is always a mistake.

To operate a local file server within a role-based security model you need local accounts, and cloud directories do not understand Kerberos unless you reintroduce a local domain controller and Active Directory on yet another server.

Once you’ve put Active Directory back into the mix and installed the device to run it on, the temptation will be to solve every problem using the old techniques, and before you know it you’ll have a rack of servers or, more likely, be suffering from 'virtualisation creep'. Nothing has changed and you're back to square one.

The common accusation against a cloud-first school is that you can’t work with cloud data without some form of local file storage or caching. When a class of 30 students opens a 10 GB media file stored in the cloud, everything will freeze as 300 GB of data is pulled down a 100 Mbps connection. Two years ago that was probably true. Now it doesn’t freeze, because there is a local cache, just not the one you might expect.
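To put a number on that worst case, here is a back-of-the-envelope calculation, assuming the whole class pulls the full file at once and nothing else is sharing the link:

# Worst case: 30 students x 10 GB = 300 GB pulled over a 100 Mbps link
$totalGB  = 30 * 10
$linkMbps = 100
$seconds  = ($totalGB * 8 * 1000) / $linkMbps   # GB -> megabits, then divide by link speed
"{0:N1} hours to pull the lot down" -f ($seconds / 3600)   # roughly 6.7 hours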


In a cloud-first school the local cache is distributed across all the workstations and managed directly by the OneDrive or Google File Stream client. This creates a distributed, fault-tolerant local cache with access to terabytes of local solid-state storage and almost limitless CPU cycles, all talking to a back-end that moves data to and from the site using predictive, on-demand techniques.
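As a sketch of how little there is to configure on the device side, the OneDrive sync client's Files On-Demand and Known Folder Move behaviour can be switched on with the documented policy registry values below. The tenant GUID is a placeholder, and in practice an Intune administrative template would normally deliver these settings for you.

# Documented OneDrive sync client policy values (normally delivered via Intune/GPO)
$path = "HKLM:\SOFTWARE\Policies\Microsoft\OneDrive"
New-Item -Path $path -Force | Out-Null

# Files On-Demand: keep lightweight placeholders locally, hydrate content on first access
New-ItemProperty -Path $path -Name "FilesOnDemandEnabled" -Value 1 -PropertyType DWord -Force | Out-Null

# Known Folder Move: silently redirect Desktop, Documents and Pictures into OneDrive
New-ItemProperty -Path $path -Name "KFMSilentOptIn" -Value "<your tenant GUID>" -PropertyType String -Force | Out-Null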

OneDrive supports delta-level file updates across a wide range of file types, including most graphics packages: a 90 KB update to a 10 GB file creates 90 KB of traffic. The system has its own built-in form of QoS, trickle-feeding updates back to the cloud while making sure commonly used files are served from the local cache.

Collaborative workflow is standard, as is file versioning and user on-demand recovery.  

If configured correctly, the data never moves outside the school's security boundary. DLP policies, combined with intelligent labelling and classification, control access based on content, so files are secured from any location and on any platform. The school's data protection strategy can be realised as an observable rule set applied to every device, personal or school-owned.
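As a rough illustration of what an observable rule set can look like, the sketch below uses the Security & Compliance PowerShell cmdlets to scope a DLP policy to OneDrive and SharePoint and block sharing of files containing a UK National Insurance number. The policy and rule names are illustrative only.

# Sketch only: a DLP policy expressed as code via Security & Compliance PowerShell
Connect-IPPSSession   # from the ExchangeOnlineManagement module

New-DlpCompliancePolicy -Name "School Data Protection" `
    -SharePointLocation All -OneDriveLocation All -Mode Enable

New-DlpComplianceRule -Name "Block NINO sharing" -Policy "School Data Protection" `
    -ContentContainsSensitiveInformation @{Name = "U.K. National Insurance Number (NINO)"} `
    -BlockAccess $true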

Technically the distributed replication approach backed by DLP is so superior to a local file server that it's like comparing a firework to a Falcon Heavy. This is the model both Google and Microsoft are betting their business on, and trying to retro-fit centralised file syncing to the cloud goes against the technological direction of both companies.

Distributed sync, cloud to device, no servers required is the way forward.


Friday, 1 May 2020

Win32 app lifecycle for Intune.


Microsoft's documentation on the format and deployment of Windows apps (Win32) within Intune is pretty comprehensive and is well supported by a number of technical blogs which take you through the packaging and the Intune Management Extension (IME) workflow.

What is less well explained is what happens next.

Your V1 app has been marked as Required and deployed successfully, but now the vendor has released V2. How do you get V2 onto the desktop?

The new V2 app clearly requires repackaging to create an updated .intunewin payload, and logic would suggest that if the V2 package replaces the old V1 version in the original Intune app definition, the change will roll out to the desktops - but it doesn’t.

As far as Intune is concerned the V1 app is marked as installed for the device or the user, and simply uploading an updated .intunewin file doesn’t change that fact. The only way to break the logjam is to convince Intune that the app isn’t installed any more, which forces a reinstall and, in effect, an upgrade.

The Win32 object has a number of ways to detect whether an app is installed. Again these are well documented in other technical blogs, but in summary they involve checking for files, folders or registry entries, or a combination of all three. This works for the initial deployment because it’s a fair bet that if the startup executable can’t be found in the install path the app probably isn’t installed. For an upgrade, however, this approach cannot be relied on. Unless the upgrade creates a new file or folder, or updates a registry entry that you can check for, the logic will always return ‘installed’ and assume there is nothing to do.
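For example, a simple file-presence rule like the sketch below (the path is hypothetical) is perfectly adequate for the first deployment, but after V2 ships the executable is still sitting in the same place, so the rule keeps reporting 'installed':

# Hypothetical file-based detection: fine for the first install, useless for upgrades
if (Test-Path "C:\Program Files\ExampleApp\ExampleApp.exe")
{
    Write-Host "Installed"   # STDOUT output plus exit code 0 = app treated as installed
}
exit 0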

Even if you can update the original app object and identify a feature to test for, you are not going to get much feedback on how the upgrade is progressing. The best you can hope for is a report that tells you 100 instances are installed and, at any later point, that 100 instances are still installed. There’s no feedback on the roll-out process because the app only reports whether it’s installed - which it is in all circumstances.

For this reason, best practice suggests creating a new Win32 object for each app version and retiring the old version by removing the assigned group or changing the status from Required to Available. This makes things nice and clean and gives you a good idea of how things are progressing but doesn’t solve the problem of triggering the install process in the first place.




Fortunately the Win32 object gives you the option of running a detection script instead of looking for files and folders, which allows you to check the installed version of the application using something like the script below.


# Read the file version from the installed executable (placeholder path)
$ver = (Get-Command "<< Path to the app.exe >>").FileVersionInfo.FileVersion

# Write to STDOUT when the new version is present to signal 'detected'
if ($ver -eq "<< Version Number to Test For >>")
{
    Write-Host "Updated Version Installed"
}
exit 0

The script must return zero in the exit code and write to STDOUT to signal that the application has been detected.

https://www.petervanderwoude.nl/post/working-with-custom-detection-rules-for-win32-apps/

This will force the update onto V1 machines, and since the check is also run at the end of the install process, it’s a sure-fire way of confirming the update has succeeded.

Once you start scripting you can embed any logic you like, but it’s best to keep it simple: once the code has been uploaded to Azure there’s currently no method within the GUI to recover the script, or even view its contents, so the detection logic has to be documented manually.
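If you do lose track of what was uploaded, one possible workaround is to read the app definition back through the Microsoft Graph beta endpoint, where the detection script is returned base64-encoded. The app ID below is a placeholder and the property names are taken from my reading of the beta win32LobApp resource, so treat this as an unverified sketch rather than a supported workflow.

# Unverified sketch: pull a Win32 app definition back via the Graph beta endpoint
Connect-MgGraph -Scopes "DeviceManagementApps.Read.All"

$appId = "<app object id>"   # placeholder
$app = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/deviceAppManagement/mobileApps/$appId"

# Detection scripts come back base64-encoded in scriptContent
$app.detectionRules |
    Where-Object { $_.'@odata.type' -match 'PowerShellScriptDetection' } |
    ForEach-Object { [System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($_.scriptContent)) }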

Clearly this is not an ideal situation, and it’s likely that Microsoft has a roadmap to make this process easier, possibly by introducing a version label or something similar. In the meantime it's worth giving some thought to how you intend to maintain Win32 apps before the initial install goes out.


Monday, 20 April 2020

Take a train ride to Azure.

For a while now Microsoft has been signalling its intention to move towards role-based training in an attempt to test real-world problem-solving skills rather than the simple accumulation of facts around a specific platform or technology. This reorganisation has resulted in the wholesale retirement of the old MCSx accreditation tracks, which have formed the cornerstone of Microsoft training since Windows NT Server 3.5 launched in 1994.

The original announcement fixed the retirement date as June 30, 2020. In response to the current situation this has been extended to January 31, 2021, but that still places the cut-off within the next nine months. Any exam passed prior to the retirement date will stand for one year after the exam is retired, but after that all current MCSx credentials will be stamped as inactive. From that point it’s over to the new role-based certification tracks.

Microsoft is well known for updating its training programs at regular intervals. Any network admin attempting to keep their CV up to date will know it’s pretty much a full-time job, so why is this change any different?




Well it’s down to the number of exams being retired and the wholesale shift to cloud technologies.

Consider this simple fact: there is no longer an exam that explicitly tests for proficiency in Windows Server 2019 administration.

The official line is that

 “Windows Server 2019 content will be included in role-based certifications on an as-needed basis for certain job roles in Azure”.

The Windows Server admin exams were the cornerstone of the old MCSE but now they don’t even exist. As far as Microsoft is concerned Windows Server knowledge is still important but only as it applies to Azure cloud services.

Looking for the update to the SQL Server admin exam? Much the same, I’m afraid, because you really should be using Azure SQL Database as a PaaS.

The new Microsoft accreditation tracks are wholly and unashamedly focused on Azure and the associated cloud services such as Modern Management and Desktop Analytics. On-premises is part of that, but only as far as it supports Azure.

This change will feed into the partner channels, which will need to re-skill rapidly before the cut-off date, so it might be a good time to invest in training companies or get that training budget signed off.

For the traditional Microsoft IT administrator who expects to be cramming facts about Windows Server 2019 installation procedures, scaling limitations and hardware requirements, it’s all going to look a little strange, but the plan to sit tight and wait for the cloud to blow over is no longer an option.

There’s a general rule that if you want to get an insight into the future direction of any tech company, check out its training program.