Friday 13 December 2019

Cloud and multimedia - another hurdle falls.

For those schools considering a move to cloud storage there’s always been one hurdle that's been difficult to overcome - multimedia.

Whether it’s graphics, music, art or material design, the nemesis of the serverless school has always been the manipulation of very large files by high-end software packages normally running on Apple Macs or Windows workstations.

As previous posts have pointed out, using local storage is hardly an ideal solution. The ever-increasing demand for storage has an impact on backup and disaster recovery plans, and multimedia can prove a challenge to a mobile-first, anywhere learning approach. Moving very large files across a network also requires a resilient high-speed backbone, but so long as you have the budget for fast switches, resilient storage, backup software and a box full of data tapes, it’s a proven strategy.

One approach that can work well is to keep both the data and the processing in the cloud. The next generation of SaaS platforms may not match the advanced functionality of locally installed applications but they are ideally suited to the anywhere learning model and have proved very successful when matched with devices such as Chromebooks.

However, if the lesson plan requires branded specialist software which only runs on a high-end workstation and generates files that are gigabytes in size, does that rule out cloud storage altogether?

Well maybe not.



The technology behind synchronising cloud storage to local drives has come a long way in the last few years and has now reached a point where it can take on the challenge of large multimedia files. This objective was made clear at the recent Microsoft Ignite conference where some key updates to OneDrive were announced.

The popular cloud storage service is gaining support for larger file sizes, differential sync and new preview features.

Starting with the most obvious requirement, users can now upload files up to 100GB across a range of formats and are no longer limited to the MS Office standards.

Differential sync is one of the most user-requested capabilities for OneDrive and has been on the roadmap since 2017. This feature transfers only the parts of a file that have changed, which is clearly a huge advantage when working with large datasets. Currently OneDrive supports differential sync for all the modern Office file formats but from December 2019 it will support all files stored in Microsoft 365.

Microsoft has written custom handlers for common image formats such as GIF, JPEG and TIFF, audio/video media files such as MP4, as well as AutoCAD DWG, which will substantially reduce upload times as well as the consumed bandwidth. Now that differential sync can be linked to local caching, large files and cloud storage become a practical combination for the first time.

One of the biggest advantages of cloud storage is the ability to share files between co-workers and external partners, but this can be hindered by processes such as CAD/CAM and image editing requiring their own viewing software. Fortunately the OneDrive web interface now comes with built-in preview tools which include AutoCAD and support for 360-degree images. Microsoft will be releasing new viewers at regular intervals to further extend the current range of graphics handlers which includes GIF, JPEG, JPG, JPE, MEF, MRW, NEF, NRW, ORF, PANO, PEF, PNG, SPM, TIF, TIFF, XBM, and XCF files.

Lastly, OneDrive's preview tools also offer basic editing features, including the ability to add comments and annotate PDF files, which is a useful plus for education.



Checking back on the blog history it’s been over five years since the first post appeared promoting the use of SaaS in education.

In 2014 a serverless deployment was possible but with certain technical challenges, such as the management of Microsoft desktops, print handling and third-party integration with cloud directory services. Over the intervening period each one of these restrictions has been removed. Now another hurdle falls with a workable solution for manipulating large files directly from cloud storage.

It’s still not perfect but neither was placing a mini-datacenter in every school.

Saturday 30 November 2019

Working with simple passwords in Azure AD.


Microsoft Office 365 (Azure AD) has a default configuration that requires complex passwords that are updated on a regular basis. Complex passwords are 8 to 256 characters that combine at least three of the following: uppercase letters, lowercase letters, numbers and symbols. This level of security is standard practice in a world where everybody has a responsibility to protect their digital identity.

However managing this type of policy in schools can be difficult, especially for early year groups. Complex passwords and enforced password changes are not something a teacher wants to face first thing on a Monday morning.

A policy that uses simple passwords, progressing to more complex passcodes for older pupils, often works better. Ever since its launch Google G Suite for Education has been using eight-character simple passwords with a non-expiry policy to protect over 70 million education users, so it's hardly an unproven approach.

Microsoft's support for simple passwords normally involves syncing or deferring to local Active Directory using mechanisms such as Azure AD Connect or Active Directory Federation Services (ADFS), but in a serverless world you don’t have these options; Azure AD is the only directory you have.

One approach is to turn off password complexity for all Azure accounts but that seems a bit drastic. So how do you work with simple passwords in this situation?
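For the record, the blanket approach would be a short PowerShell loop, sketched below on the assumption that the MSOnline module is installed and you are already connected with Connect-MsolService. The rest of this post looks at the more targeted options.

# Remove the strong password requirement from every account in the tenancy - drastic, as noted above
Get-MsolUser -All | ForEach-Object {
    Set-MsolUser -UserPrincipalName $_.UserPrincipalName -StrongPasswordRequired $false
}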


Schools that have adopted Microsoft's Modern Management framework can manage Azure user accounts through at least four different web interfaces.

Each platform has a different user interface and navigation model, supports a different subset of functions and works in subtly different ways. So the first task is to choose the right portal for the job.

If you plan to do a bulk upload of a new user group using a formatted CSV file then this feature is available in the Azure Portal and the Office 365 Admin portal but not InTune for Education. To add a little more complexity, the format of the import CSV file and the available options differ between the two portals.


Office 365 Admin - Users


As well as creating a user account, the wizard provided by Office 365 Admin gives you the option to specify a licence such as Office 365 A1, but neither the file format nor the wizard provides any control over the password format. All accounts are created with complex passwords. You are also limited to 200 accounts in each import run.

After the import you have an option to download a spreadsheet that lists the accounts created and the passwords supplied so you can inform the users. Don’t miss this step as there’s no way of regenerating the sheet.



If you are in this unfortunate position you can select multiple users in the Office 365 Admin - Users panel and then choose Reset Password, which gives you the option of emailing a new password list to an admin account, but the result is not a neatly structured spreadsheet that you can use in deployment.


Azure Portal - Azure Active Directory.



An extended format of the import file includes an option for the initial password but sadly not a licence allocation. Even though you can specify the password it still has to be in a complex format, otherwise the import fails with the error:

“The password in the uploaded file does not meet password complexity requirements.”

In summary neither option gives you the chance to bulk create accounts with simple passwords.

Removing password complexity for an account can be achieved using the PowerShell command listed below. Unfortunately none of the web interfaces gives you this option as a simple checkbox.

Set-MsolUser -UserPrincipalName <UPN of user account> -StrongPasswordRequired $false

However, even after removing the strong password requirement, updating the password using the web GUIs still fails. The Office 365 Admin - Users portal checks for a complex password on data entry and won’t let you move through the wizard, and the Azure Portal - Azure Active Directory forces a temporary password that you can’t control, which of course is complex.

Back to square one - well not quite.

Although the InTune for Education (IT4E) portal provides only a limited set of update features, one of these is Reset password, and this does allow you to set a simple password if the complexity rules allow.



However this option is only available for schools using IT4E and even then it’s hardly practical for a bulk update. Ideally IT4E would provide a simple import feature offering specialised options such as licence allocation, simple passwords, expiry control and a switch to choose whether “change at first logon” is applied.

Until that time, PowerShell is the answer.

The commands below will update a user account to remove password complexity, set a simple password and ensure the user is not prompted to change at first logon.

Set-MsolUser -UserPrincipalName user@myschool.com -StrongPasswordRequired $false
Set-MsolUserPassword -ForceChangePassword $false -UserPrincipalName user@myschool.com -NewPassword "ABC123abc"

Probably the best approach is to create the accounts using the Office 365 Admin - Users portal, which gives you the option to select a licence level, and then run the commands above on each account to prepare for student use. If you are working with a whole year group it is easier to create a CSV file with the user UPN and the password as a data pair and then run the commands as a batch job.


# One-time setup: install the MSOnline module and allow locally created scripts to run
Install-Module MSOnline
Set-ExecutionPolicy RemoteSigned

# Connect to Azure AD with an admin account
Import-Module MSOnline
$credential = Get-Credential
Connect-MsolService -Credential $credential

# For each user in the CSV: remove password complexity, stop expiry and set the simple password
Import-Csv students.csv | ForEach-Object {
    Set-MsolUser -UserPrincipalName $_.UserPrincipalName -StrongPasswordRequired $false -PasswordNeverExpires $true
    Set-MsolUserPassword -ForceChangePassword $false -UserPrincipalName $_.UserPrincipalName -NewPassword $_.Passcode
}

Example CSV File (students.csv)

UserPrincipalName,Passcode
user@myschool.com,ABC123abc
user2@myschool.com,123aaa321

The script also sets the password to never expire. Generally you don’t want students changing passwords 'en masse' during term time.
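To confirm the changes have been applied you can read the same flags back for an individual account (assuming the MSOnline session from the script above is still connected):

Get-MsolUser -UserPrincipalName user@myschool.com | Select-Object UserPrincipalName, StrongPasswordRequired, PasswordNeverExpires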



Any IT admin with experience of the simple yet powerful G Suite user import feature will probably find the multiple Microsoft options a bit confusing at first.  It would certainly make things a lot easier if the InTune for Education interface could offer some of the password management features mentioned above.

However the important point is you can get a simple password policy to work, it's just a shame it's so complex.

Sunday 27 October 2019

Chromebooks and Azure SSO revisited.


The post describing how to integrate Chromebook Single Sign-On (SSO) with Microsoft Azure AD (Office 365) remains a popular topic. It's been a permanent fixture at the top of the "You liked these" list for quite a while now.

Unfortunately, although the process is essentially the same, the Azure interface has received a complete overhaul during this period. As a result, anybody trying to follow the post as a step by step guide to setting up SSO for Chromebook is likely to be a little confused.

Therefore it’s probably a good time to revisit the procedure and take the opportunity to list some tips and tricks. To Microsoft's credit the new interface is far slicker and easier to navigate, so there's really no excuse for delaying getting your Chromebooks working with your Azure AD directory.

Note: At the point of writing Microsoft are in the process of rolling out another update which is currently in preview.

In this example we’ll set up SSO for a fictitious school (MA Academy) that has a Google organisation on the primary domain maacademy.org. The Azure directory must also hold maacademy.org as a registered subdomain.

Setting up SSO requires access to the Azure portal for your Office 365 tenancy but once logged in there are no additional licensing requirements. In the Azure portal search for “Enterprise” to bring up the Enterprise Applications blade and select + New Application.

This will take you to the Application Gallery where you have access to over 1000 templates for ready-to-use SaaS solutions. Google can be found by searching for “G Suite” and finding the application shown below.



Selecting the icon allows you to name your new application and then select Create. It can take a few minutes to build but once complete it will be displayed in the main list of all Enterprise Applications.

Note: If your organisation employs a large number of SaaS apps you may have to use the search option as the app list only displays the first 50.

Selecting the new app opens the Properties blade. Although there’s a lot of information here, most of the fields default to the correct values so there’s not much work left to be done. We’ll come back to this section later.

Next, select the option for Set up single Sign on and select the SAML card.


As you will notice, only a few fields are marked as required; in most cases the default values will suffice.

Two items that are required are Identifier (Entity ID) and Sign on URL.

The Entity ID is a URL passed by Google that identifies it to Azure. It’s possible to have many Google organisations linked to a single Azure tenancy. In this case each would have its own application with a unique Entity ID.

In most cases the format below works well for this purpose.

google.com/<your primary domain>

Example:   google.com/maacademy.org

It’s important to note that the domain MUST be the primary domain for the Google organisation even if you only plan to authenticate accounts in a secondary domain.

The other defaults can be accepted except for the Sign on URL, which must be updated to include the G Suite primary domain and perhaps be pointed away from mail or drive as the sign-in target. In schools that are adopting Chromebooks as platforms for Office 365 it is likely that the Google Gmail and Drive services will be disabled. In this case it makes sense to use the account info site as the Sign on URL as this will be available for all active Google accounts.


https://www.google.com/a/<your primary domain>/ServiceLogin?continue=https://apps.google.com/user/hub




The default settings in the user Attributes and Claims section are suitable for most installations. One exception is for schools that don’t use the user MAIL attribute to store the G Suite logon identifier. Where the user principal name attribute or some other custom field is used, the emailaddress claim will need updating, otherwise you will face an “email invalid” error on logon.


After this has been done you may need to review the value of User assignment at the base of the Properties dialog.


If you select ‘Yes’, users must have the app explicitly allocated to them, otherwise authentication will fail with the error “User is not allocated to this application”.

The relationship can be made through group membership or by applying the app to individual accounts. It’s sometimes useful to apply the app to a single user account when you are in a testing phase; however, it’s likely a school will want all Chromebook users to be authenticated with Azure, so this property is more commonly set to No.

That's pretty much it. All that remains is to pick up some information from the Set up G Suite Chromebook SSO card and head on over to the Google admin console.




You will need the data from the Login URL and Logout URL and the Certificate (base64) from the card above.

Fortunately the Google dialog for setting up SSO in the Security section of the admin console remains unchanged.





Paste the Login URL into Sign-in page URL, the Logout URL into Sign-out page URL and copy the data below into Change Password URL.

https://account.activedirectory.windowsazure.com/changepassword.aspx
As a final step upload the certificate file downloaded from the Azure portal, check the option for Use a domain specific issuer and save the dialog.

A word of warning here. The option to turn on SSO (Setup SSO with a third party identity provider) will apply to all accounts so it’s best to test it out first. The easiest way to do this is to limit the action of SSO to the IP address of your test Chromebook.  You will need to add the IP address into the Network masks field using the format below and substituting in your local value.

XXX.XXX.XXX.XXX/32

Example:   10.4.34.123/32

The device and user policies to enable the Chromebooks for SSO are well documented by Google and the previous blog post, but before you update these settings it’s a good idea to check the config by logging in through a Chrome browser session. Once you have that working you can set up your Chromebook.

One last tip. Don’t test this with a G Suite user that has admin rights as these do not respect any of the SSO settings by design.

Good luck with your Chromebook SSO project. Although keeping up to date with the user interface updates can be a bit of a chore the changes have resulted in an environment which is a lot easier to manage and configure.



Other Information:
Microsoft have a useful how-to document you can reference which goes into more detail and also covers user auto-provisioning.

Sunday 22 September 2019

Is the File Server EOL?

Previous posts have examined the problem of incorporating traditional file shares into a serverless solution, exploring such issues as:

  • How do you share files when you don’t have a local server?
  • How do you apply a cloud-based security model to a platform that only understands local Active Directory?

Although several solutions have been proposed it turns out there’s a simple answer - don’t use network file shares.

Before we examine the alternative it’s important to appreciate how long the F: drive has been with us and how deeply ingrained it is into the IT psyche. For most users file sharing is what a network does. This may be the reason why we have failed to appreciate how completely unsuited it has become to modern work practices.

The standard Windows file share has a list of limitations that is both long and varied.

  • No easy offline sync or mobile access.
  • No file versioning.
  • No recycle bin.
  • No retention policies.
  • No event logging.
  • Poor search functions.
  • Poor integration with cloud-based user directories.
  • Primitive document sharing using simple file locking.
  • Requires a high availability, backup and disaster recovery plan.


And of course file sharing depends on a server that requires patching, licencing, upgrading and monitoring.

Most, if not all, of these limitations can be eliminated by employing more software or additional hardware (which also requires patching, licencing, upgrading and monitoring) but in the end you’ll still be left with a complex, second-rate solution that’s not flexible enough to meet the needs of a mobile workforce. Let’s face it, the F: drive has had a good run but it’s time to search for an alternative. So how do you replace the file server in a modern workplace that has no servers?




The first clue was when Dropbox started to appear on work PCs, synchronizing cloud storage to the local drive. Since then this model has been refined and extended by both Microsoft and Google and now it’s ready for prime time.

In the future, Microsoft expects you to access files through Team channels which act as a front-end to Sharepoint document libraries. The Sharepoint engine provides all the elements missing from file shares including content search, a recycle bin and a comprehensive versioning journal. Event logging and retention policies are other quick wins. The close integration with the Windows 10 OneDrive client allows the user to control synchronisation to the local device.

Clearly the sharing capability, collaborative workflow and mobile integration are light years ahead of a simple file share, but perhaps the breakthrough feature is the ability of the OneDrive client to expose the Team channel within File Explorer. In many ways the user experience is identical to using file shares and therefore it can provide a clear migration path for organizations that want to move to a serverless solution without significant retraining around new applications and the use of a web interface.

Because Microsoft are using this framework as the foundation of M365 these capabilities are baked directly into Windows 10. So why not take the opportunity to trial these features as part of your Windows 10 migration and ditch those servers?

Coming from the Google world this will be very familiar - being completely cloud based, G Suite users have been working this way from day one. Versioning, collaborative working, extended search and mobile integration have always been part of the package. Google Vault provides the retention, legal hold and e-safety features while the G Suite admin console gives you the capability to report on file access.

In addition Google File Stream mirrors the features of the OneDrive client exposing Google Drive in File Explorer as a mounted drive and allowing direct access from native applications like MS Word and Photoshop.

The obvious conclusion is that both the major players are offering the same solution. You can have the Microsoft version or the Google version but it’s essentially the same vision.

You still have local files, they are just delivered in a different way.

Instead of one central device that needs managing, protecting and opening out to external access, you have a central cloud store backed by elastic storage that replicates data down to each personal device, regardless of location and platform, using intelligent algorithms that determine the precise requirements of the user. Data can be protected, ring-fenced and subject to centralised retention policies.

The contents of the files can now form a valuable data resource that can be subject to big data analysis rather than offering up a simple list of file names. Onsite backup and disaster recovery plans are a thing of the past. It’s a modern data management strategy.

It’s unlikely that file servers are going to disappear overnight. They’ll hang around in the same way that one-task physical servers persisted in the face of virtualization but in the end they’ll go because, like virtualization, the alternative is better.

Saturday 3 August 2019

Using Azure AD credentials with RDP.

As schools start to deploy InTune-managed Windows 10 desktops with an Azure-based user directory a few interesting challenges emerge.

One of these is remote support of Azure joined Windows10 desktops using the RDP client. Using the default settings, the Azure AD (AAD) account credentials are rejected by the client dialog which stops the process in its tracks.

So how can you RDP into an Azure joined Windows10 device using your AAD user permissions?

The first job is to disable Network Level Authentication (NLA) for Remote Desktop Connection on the target Windows 10 computer.

Open System Properties and navigate to the Remote tab. Under Remote Desktop make sure Allow remote connections to this computer is enabled, and that Allow connections only from computers running Remote Desktop with Network Level Authentication is unchecked.
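If you prefer to script this change, the same two settings map onto well-known registry values. The sketch below assumes the default RDP-Tcp listener and an elevated PowerShell session.

# Allow remote connections to this computer
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' -Name 'fDenyTSConnections' -Value 0
# Disable Network Level Authentication on the default RDP listener
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name 'UserAuthentication' -Value 0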



The second task is to edit the .rdp file you are using for the connection so it looks like this:

full address:s:<ip address>:3389
enablecredsspsupport:i:0
authentication level:i:2

These settings stop any credentials being sent to the host computer. As a result the host will be forced to present the logon screen rather than trying to create the session prior to connection. This way you have the opportunity to present your AAD account details.

Now you will find that entering the AAD details in the logon box will open up a desktop. This works fine with all accounts that have local admin rights, but what if you try with a standard user account? You’ll find you get a rejection notice informing you that the user does not have the rights to a remote session, which is expected as they are not a member of the Remote Desktop Users local group.

Opening the Local Users and Groups MMC console on the host doesn’t help as there’s no way of selecting Azure AD as a source. Fortunately you can add users from PowerShell.

Open up a PowerShell session with admin privileges and type:

net localgroup "Remote Desktop Users" /add "AzureAD\JoeShmoe@yourdomain.com"

You should get a confirmation that the command completed successfully and the user account will be listed as a member in the Local Users and Groups MMC console. The next RDP session will work as expected. The same technique works for any local group that needs to transfer Azure AD account rights.
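For example, to give the same Azure AD account local admin rights (the account name, as before, is just an illustration):

net localgroup "Administrators" /add "AzureAD\JoeShmoe@yourdomain.com"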


Sunday 7 July 2019

Using Google to authenticate Microsoft (p2).


This is the second part of an extended post that describes how to use Google as an identity source for Microsoft Azure Active Directory (Office 365).


The next stage involves configuring Azure to pass authentication requests through to Google rather than handling them locally. In technical terms we are creating a federation between the two platforms, with Google acting as the identity provider (idP) and Azure the service provider. As before we’ll work through this example using theserverlessschool.net, which is a custom domain in an Azure AD tenancy.

Note: Each federation object acts on a single domain so it’s possible to have many custom domains within an Azure AD tenancy, some federated to Google (or some other idP) and some locally authenticated.

All of the work is done from the PowerShell console so the first job is to install the modules listed below within a session running with local admin permissions. You’ll also need details of an admin account for Azure AD.

Microsoft Online Services Sign-In Assistant for IT Professionals RTW
Microsoft Azure Active Directory Module for Windows PowerShell.

The first module is a simple download. Once installed it’s loaded with the command:

PS C:\> Import-Module MSOnline

Then log in to Azure AD with an admin account using the commands:

PS C:\> $Msolcred = Get-Credential
PS C:\> Connect-MsolService -Credential $MsolCred

Creating the federation is a one-line PowerShell command, Set-MsolDomainAuthentication. Unfortunately the number and complexity of the arguments turn it into a bit of a monster.

To get round this problem the input variables are read from an XML file and, since most of the values are standardised, you only need to update two elements before running the command.

The template xml file (dfs-pf-samlp.xml) can be downloaded from the original GitHub repository or from here.

Before using the file you need to replace the string GOOGLESAMLID with your domain's unique idpid, recorded when setting up SSO in G Suite. The input line should be similar to this:

https://accounts.google.com/o/saml2/idp?idpid=C03idbhgle

Note that GOOGLESAMLID occurs three times in the file and needs to be replaced in each location.

Lastly you need to carefully copy the certificate string from the IDP metadata file saved from the SSO setup process and paste it into the location below.

<S N="SigningCertificate"> YOUR CERTIFICATE GOES HERE </S>

A shortened version will look like this:

<S N="SigningCertificate">MIIDdHCCAly…...BAkTD0dv</S>

Make sure you paste as a single line and do not introduce any line breaks or extra characters.
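The federation command below reads its values from this file, so the first step is to load it into a variable - named $wsfed here simply to match the command that follows.

PS C:\> $wsfed = Import-Clixml .\dfs-pf-samlp.xml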

Once that’s done we can create the federation using the command below, substituting theserverlessschool.net for your domain. The command is shown with backtick line continuations so it can be pasted as a block or issued as a single line.


Set-MsolDomainAuthentication -DomainName "theserverlessschool.net" `
  -FederationBrandName $wsfed.FederationBrandName `
  -Authentication Federated `
  -PassiveLogOnUri $wsfed.PassiveLogOnUri `
  -ActiveLogOnUri $wsfed.ActiveLogOnUri `
  -SigningCertificate $wsfed.SigningCertificate `
  -IssuerUri $wsfed.IssuerUri `
  -LogOffUri $wsfed.LogOffUri `
  -PreferredAuthenticationProtocol "SAMLP"


A couple of other useful commands:


Get-MsolDomainFederationSettings -DomainName "theserverlessschool.net" | Format-List
    Shows the federation settings for the domain in the  console.


Get-MsolDomainFederationSettings -DomainName "theserverlessschool.net" | Export-Clixml dfs-pf-samlp.xml
   Dumps the settings back into the config file.


Set-MsolDomainAuthentication -DomainName "theserverlessschool.net"  -Authentication Managed

   This command switches the authentication back to locally managed and effectively turns federation off. This is particularly useful as a fall back option but it also allows you to update the settings as the values cannot be changed while the federation is active.

Azure will now pass all authentication requests to Google for any non-admin account in the domain “theserverlessschool.net”. So what does that look like?

Open up a Chrome session in incognito mode, navigate to https://office.com and select Sign In. When asked for a user account enter an Azure username from your federated domain. After a short pause you’ll be directed to the standard Google dialog which will grant access to office.com.

So what happens if you try to log in to an Azure joined Windows 10 device with the same Google user account? You might expect the same behaviour but that’s not what you see. The Windows logon box remains and the Google password is refused. Nothing has changed. So why is that?

By default the conversation between the Windows 10 device and Azure doesn’t use SAML but WS-Fed (WS-Federation), a protocol supported by IBM and Microsoft but not Google, therefore the hand-off falls at the first hurdle.

What you need is a Windows logon process that understands SAML not WS-Fed and with Windows 10 1809 edition you have exactly that.

The feature is off by default but can be turned on using a custom policy in InTune.


OMA-URI
 ./Device/Vendor/MSFT/Policy/Config/Authentication/EnableWebSignIn    
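When creating the custom policy the setting is normally delivered as an integer, with a value of 1 enabling web sign-in; treat the exact values below as an assumption to check against the current Policy CSP documentation.

Name:       Enable Web Sign-In
OMA-URI:    ./Device/Vendor/MSFT/Policy/Config/Authentication/EnableWebSignIn
Data type:  Integer
Value:      1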


After deployment you are offered an additional option on a globe icon - Web Sign-In.



Select this option and sign in using your federated account. This time you will see the redirection and you’ll be presented with the familiar Google login dialog which will grant access to the Windows desktop.

You have to enter the logon account name twice because Azure needs to know the username before it can offer the redirection. No doubt it will be slicker in the future, and remember this is still a preview feature.


Summary.
In an earlier blog post I predicted it would be a while before we saw a Windows device using a Google account to authenticate.

A year later, here we are.
This could be used as an example of how wrong I can be but I’d prefer to think it reflects the pace of change and the general move towards open standards.


The new framework is becoming clearer by the day.

Cloud based user directories controlling MDM managed devices that host SaaS applications delivered through a store model backed by subscription licencing.

Google, Microsoft take your pick - they both offer the same vision. You can even mix and match.

The only thing you can be sure of is the fact that not a single element will require a local server.


Thursday 4 July 2019

Using Google to authenticate Microsoft (p1).

In a previous series of posts we examined how you could defer Google logons to Azure Active Directory (Office 365). In this scenario a user logging onto a chromebook would be presented with a logon dialog from Microsoft and any subsequent actions accessing SaaS resources would be checked against the Microsoft user database rather than Google. Using specific terminology, the Google database was using Azure as the identity provider (idP) and was federated to that service to allow single sign-on (SSO). This setup is ideal for organizations that have an established Azure directory system and use the service with other SaaS providers. In this case Google is just another SaaS client for the Azure identity service.

While this suits some enterprises it’s not always the best solution for a new school ‘Going Google’.

Consider a school that has made a big investment in chromebooks and uses G Suite and Classroom for learning but still requires a Microsoft platform for front desk administration. These admin users will need an Azure AD account to authenticate with their new Azure joined Windows 10 workstations. You could operate two separate directories but this is messy, especially if you plan to link both directories to a MIS system. You could also fall back to the first approach, using Google to defer to Azure and maintaining all the user accounts in Azure, but this seems to be the wrong way round when the majority of the users access Google resources. It’s a bit like the tail wagging the dog.

What you really want to do is make Azure use Google as its idP and maintain all the accounts using the G Suite admin console. You can then set up user provisioning for just the administration team and link the MIS system directly to Google. Much simpler.

The secret to all of this is SAML (Security Assertion Markup Language), an open standard for exchanging authentication and authorization data between an identity provider and a service provider.

In this case Azure is the service provider and Google the identity provider. It’s always been technically possible to configure Azure as a service provider. There are a number of commercial platforms such as Ping Identity and Okta that depend on that fact. The problem with using Google as the idP is the procedure isn’t well documented or even widely deployed. The Google documentation explains the Google side but stops short at explaining how you prepare Azure which, not surprisingly, turns out to be the tricky bit.

In this two-part post we’ll walk through the process step by step. The end game will be the administration and provisioning of Azure AD accounts via the G Suite console and controlling access to Office 365 using the Google accounts database. Finally we show a Windows 10 device presenting a Google logon screen on boot, which is a bit of a mind-bender when you first see it.

It’s a bit of a long haul but stick with it. It should be noted that information has been drawn from a number of sources with a few blanks filled in and a deeper explanation provided. There are also quite a few prerequisites that need to be in place before that Google login box appears, so if you are planning to try this out please read this first.

Lastly, credit must go to Roger Nixon who had the nerve to test the setup at his site (Wheatley Park School) and then place it at the centre of a new school opening this summer. That means it's already in the wild - so let’s get started.


Domains, Organisations and Tenancies.
If you’d like to try this out you will need an Office 365 tenancy, a Google G Suite organization and a spare DNS domain. This example uses theserverlessschool.net as a secondary domain on the Google organization and a custom domain added to an Office 365 tenancy. The status of the domain is not significant so long as both platforms have it verified. Both platforms must use the same domain and the same form of the user logon address.

A couple of points worth noting at this stage.

The action operates at the domain level within Azure so once it’s turned on all non-admin accounts using theserverlessschool.net are passed to Google for verification. You can’t isolate a sub-set of accounts for local authentication.  If you need Azure service accounts you can always fall back to another custom domain or the built-in .onmicrosoft.com domain.

You can only have a single instance of the SAML object for Office 365 in the Google admin console. Once the service is up and running it will accept requests from multiple Azure tenants so long as they have the correct authentication. However the associated user provisioning object only connects to a single AD tenant. Therefore while G Suite could act as an idP for many Office 365 tenancies it can only provision users into one. Therefore if you need auto-provisioning it’s best to keep to a simple one-to-one relationship between the Google organisation and the Office 365 tenancy.

Note: This is unlike the action of Microsoft Azure. Because you can create multiple instances of the provisioning object each with its own login details and scope functions you can auto-provision users into many different G Suite domains.


Setting up Google.
When a user attempts to log on to Microsoft the request has to be passed to Google for authentication. Therefore there are two parts to the solution. The first involves setting up Google to handle the request from Microsoft and the second is making Microsoft hand the request to Google instead of just checking with the local Azure database. The first task is fairly straightforward, the second not so.


Creating the SAML object.
The process uses Security Assertion Markup Language (SAML) to handle the authentication request so the configuration is managed using the SAML apps section of the Apps Marketplace in the console.



This section contains templates that allow Google to accept authentication requests from a whole range of SaaS providers including Office 365.

Because most of the information is already filled in, setting one up is pretty easy.

Enter Office 365 in the filter input box and select the object returned.



The details are presented as shown in the example below. They will be unique for your domain.



You need to do two things at this stage. Make a note of the idpid that is at the end of the SSO URL. It will be unique to your domain and we’ll need it later. Also download the IDP metadata as we require some information from that file.

Click through the next step.



In step 4 of 5 check the signed response option as shown below. Keep all other entries as default.



In step 5 of 5 use the drop down list boxes to set the fields as shown below.

Step through the last dialog and select finish. That’s it. You will now find an Office 365 object in the SAML section of the apps that you can update and assign to users.



The Microsoft SAML app is assigned in the same way you manage any other app in the console by setting it to ON against elements of the OU tree.




This part of the process is quite important as it defines the application's ‘scope’ within G Suite. Once it’s assigned to an OU, users will find the Office 365 icon appearing in their waffle menu in Chrome. It also defines the subset of users that Google will respond to when prompted by an SSO request from Office 365. In our original scenario we would assign the SAML app to the OU containing the user accounts for the administration team and be confident that no other users would be allocated the Office 365 icon or the SSO service.

User Auto-provisioning.
At this stage you are also given the option to set up user auto-provisioning. This feature will create a user account in Azure AD to match a new account in Google. There are also rules to delete or suspend an account that shadow the same action in Google.

I don’t intend to describe user provisioning in detail as the process hasn’t changed much since it was documented in an earlier post. A couple of points are worth noting.

First, the user account that is created in Azure has no licences attached. To make it productive you will need to allocate licences manually and update Azure groups as they are not part of the process either.

Second, auto-provisioning had a tendency to suspend Office 365 accounts if they weren’t recognized. Thankfully this annoying characteristic is now fixed. If you manually create a user account in Office 365 within the same domain the service doesn’t automatically suspend it just because it can’t find the same account in Google.  This feature made the earlier version almost unusable except for the most basic situations.

Lastly, if you choose to deploy user auto-provisioning you have the option to use a Google group to control the user set. This feature is very useful.




The scoping rule now extends to users who are under the OU structure AND members of the provisioning group. If the user is removed from either the group or the OU the deprovisioning rules apply. This gives the admin close control over which users have access to Office 365 and will be authenticated by Google. In our example we would probably create a Google group (Office 365 Users) that only contains members of the Admin team that need an Azure logon. That way you could create accounts under the Admin OU without creating a user account in Azure. Similarly you could suspend an Azure user by removing the account from the group while still keeping the Google account active. Nice.



Summary.
To conclude, we now have an Office 365 SAML app created in Google that will accept requests from Office 365 for authentication on any domain hosted by the organization. However Google will only return a positive response to a user account covered by the OU that has the Office 365 SAML app turned on AND is a member of the provisioning group.

Any new Google account placed in the OU and the provisioning group will automatically be created in Office 365 and suspended if removed. For a single account the update action seems to act on a back-end trigger rather than a schedule and can take place in under a minute.


What’s left to do.
We still have to persuade Azure to pass logon requests to Google rather than handling them locally, which does involve a few tricks. After setting an InTune device policy we should have a Windows 10 device displaying a Google logon screen.



We'll cover all of that in the follow-up post.

For those who can't wait here's a preview provided by Roger Nixon.

Monday 10 June 2019

Microsoft Licencing for a serverless school.

Going serverless challenges every preconception regarding networking and application delivery and that includes Microsoft licencing.  Working on the premise that nobody really understands this subject let’s try and keep things simple.

First we must assume that the adoption of Office 365 or G Suite for Education and associated SaaS applications has removed the requirements for SQL Server, Exchange and Remote Desktop Services (RDS). It’s hard to believe that in 2019 a school would still be licencing a local Exchange server rather than using the free Office 365 A1 tier for faculty and students. There’s really no excuse for this capital crime against the school finances.

However, while the school still runs Windows OS with local file and print services and MS Office 2019 installed on the desktop you still require a base set of licences.
  • A licence to install and update the Windows desktop OS.
  • A licence to install and upgrade the Microsoft Office suite.
  • A licence to install and upgrade the Windows Server OS.
  • Client Access Licences (CALS) for either devices or users to access resources on the server(s).
While it’s possible to licence each item individually most schools choose to purchase a ‘bundle’ under one of the Microsoft educational plans such as Open Value Subscription Agreement for Education Solutions (OVS-ES).

This approach has one major advantage. In addition to simplifying ordering, the bundled set carries a ‘student benefit’ option. This means that if the school buys licences for all staff members (defined as Full Time Employees or FTEs) the same licence covers the student body as well.

Clearly this scheme has the potential for significant cost savings. In fact for any school operating a shared deployment of Windows devices, Mac OS ICT suites running Office, teaching and administrative departments embedded with Microsoft Office and a multi-node Hyper-V Server farm, it’s a deal not to be missed - but what if you’ve gone serverless?


In this situation the annual renewal cost for the bundle can look pretty steep, especially if the school needs to purchase a licence for each staff member regardless of whether they even have a network logon. If the school derives value from the licence bundle that’s fine, but with a serverless school that's unlikely to be true.

For example the bundle includes a server user CAL which is not much use to a school that doesn’t run file and print services or host local Active Directory.

Similarly, if pupils access a large shared pool of Windows 10 devices with Office 2019 installed the student benefit option makes financial sense, but if they are using Chromebooks or iPads to access G Suite or Office 365 web apps you're paying for something you don't need.

So what would Microsoft licensing look like for a serverless school that still needs to maintain a Windows 10 environment?

This will vary from school to school but there are some pointers.

Manage Windows devices using InTune.
Schools need to move to Windows 10 by January 2020 as support for Windows 7 ends. Take this opportunity to place the new image under InTune control rather than falling back to a legacy process that requires local licensing. InTune offers a simple device licensing model which can be factored into the price of the device. Alternatively InTune licences can be allocated to individual users.

Only licence what you need.
Why purchase Office 2019 licences for every staff (and student) member when only the SLT team actually need it to fulfil their duties? Office can be licenced as an add-on subscription to the free A1 tier and then allocated to those members of staff who need it. Everybody else can use the Office web apps, which cover most situations.

If you ask a member of staff if they need Office 2019 the answer will always be yes. If you then ask for a contribution towards the licencing costs you’ll probably find that, after due consideration, the Office 365 web apps will be fine after all.

Google has recently announced a new facility that allows the native editing of Office documents directly from G Drive. This may not meet the needs of the office macro ninja but it could reduce the need to licence Office for ad-hoc access and will certainly allow students to edit legacy document sets.

Make use of the new initiatives.
Microsoft is aware that the current EDU licence model is not ideal for the SaaS based school and is working hard to make it more attractive. One of these initiatives is the Shape the Future K-12 education program.

The major benefit of this scheme is the ability to order devices from Microsoft Partners with Windows 10 Pro preloaded. Units can then be manually enrolled into InTune without requiring an additional upgrade licence (Home -> Pro) or delivered direct to the user in a ready to run state through the AutoPilot scheme.

Schools with new hardware stock not covered by the scheme can still purchase an upgrade licence through OVS-ES but it’s far easier to absorb the discounted cost in the initial purchase price and enjoy the benefits of a more streamlined deployment process.

Run an appliance not a server
If your students and staff are taking resources from the cloud (including directory services) your local appliance should be reduced to running network support processes (DHCP, DNS). File storage will be managed using OneDrive / Teams / Google Drive / Shared Drives while the print server role moves to a SaaS platform such as Printix that integrates closely with both InTune and G Suite.

If you are running Windows Server you still need to licence the server cores but so long as your clients are not accessing local resources (application/file/print) there is no requirement for CALs. And of course you don't have to run Windows Server 2019 to provide basic network support.


Conclusion
Microsoft licencing becomes simple again. If you need a Windows device the total six-year cost** is

Cost of device + Cost of the InTune device license + Cost of upgrade licence (if required).

       ** An InTune device licence is valid for six years.

A new staff member becomes

Cost of an annual Office Pro subscription licence (if required for role) otherwise use the free A1 Faculty.

The Office Pro licence, like the InTune user licence, is transferable so you can work with a small free pool to make it more flexible.

Reevaluating Microsoft licensing in this way can bring substantial savings. There may be good operational reasons why this might not be immediately possible but in the future the serverless school shouldn’t be bundling up.

Friday 10 May 2019

Managed Chrome from the Cloud (p2)

A first look at the Chrome Cloud managed browser.


Once enrolled, the Chrome browser turns up as a fully managed object in the G Suite admin console and shares with users and chromebooks the ability to take policy based on a position in the organisational structure. Like Chromebooks, managed browsers get a dedicated section in the Chrome management dialog, joining an ever increasing list of device types.

Opening the new Managed Browsers section displays a page layout similar to Chromebooks but without the filter options. Each machine hosting a Chrome browser creates an inventory entry with the Machine name as the key.



Selecting the Machine Name opens a dialog that presents a wealth of information about the browser and the device hosting it.



If your dialog looks a little empty it might be because the data collection feature needs to be enabled for the device. Under the User and browser setting for the OU controlling the browser object you need to set Cloud Reporting to Enable managed browser cloud reporting.



This policy pushes out a small Chrome extension that handles the data collection and reports back to the console. Once enabled you can see the extension load on the browser.



There’s little point listing all the reporting elements as they are well documented by Google and fairly obvious but some of the actions are worth a mention.

In the Installed apps and extensions card you get the option to select an element that has been reported and then Block the object from all other browsers.




The option allows you to select the root OU object for the action and remove the extension from all browsers in scope, blocking all future installs. This may be of limited use in schools where the policy is restrictive by default but for organizations with a looser structure this could be a useful feature.


The machine policy section gives a centralised view of the information you would see on the local chrome://policy page, which is very useful. As you might expect, CloudManagementEnrollmentToken shows up as the only Local Machine Policy, but for other policies the status flag seems a bit misleading. The fault code is

“More than one source is present for this policy, but the values are the same” 

which is hardly surprising as most of the policies will have two sources, one taken from the user OU and one from the browser OU. Since the policy is actually applied it hardly seems to merit a large red exclamation mark and an Invalid status.

User Profile policies are broken out in the section below and list those policies that override the browser settings by being Locally applied for the user object. If you don’t see this section it’s because you have no valid user policies set.



Policy information is updated by a reboot of the device but there doesn’t seem to be a way to control it directly through the extension directory.

Browser Extension List.
Curious admins that scroll all the way down to the bottom of the Chrome Management dialog will be rewarded with a new option, Browser extension list.

This allows the admin to view an aggregated list of all Chrome browser extensions organised by extension rather than device.



Apart from selecting the extension name and moving directly to the Chrome Store page there’s also the same Block / Force install feature that you can access from the action menu at the end of each entry.

What might not be so obvious is the fact that the data items themselves are selectable.




This action creates a pull-out from the right-hand side that contains extended information, including the rights granted to the extension and the extension ID.

This seems to be a new navigation type in the console that we might see in other locations in the future.

Saturday 27 April 2019

Managed Chrome from the Cloud (p1)

A first look at the Chrome Cloud managed browser.


In a previous series of posts we looked at how you could manage Chrome and other Google products such as File Stream using Microsoft InTune in a school that has no servers.

At Next 19 Google announced a cloud managed browser capability which makes the whole process much simpler.  The two approaches are not mutually exclusive and using both together can bring dividends to the network admin.

In this post we’ll take a look at what the new managed browser feature brings to the table, show how the technology integrates with InTune and then follow up with a more in-depth look at the G Suite management console in a later post.



The introduction of a managed Chrome browser is a big event for the Google administrator.

Up until now the G Suite organizational structure could only contain two types of object, a user or a chromebook. Now it has a third - an instance of Chrome running on a Windows, MacOS or Linux platform. The Chrome browser has finally taken its rightful place as a fully managed object within the console.

In the past it was possible to manage Chrome on the desktop but it always involved a third party system such as Microsoft Active Directory, Managed Preferences on Mac or, as we have seen, an MDM like InTune.

This approach has two major weaknesses.

First, there isn’t a unified policy set for the Chrome browser. For organisations adopting Chromebooks the user environment is governed through the G Suite admin console while the Windows/MacOS Chrome desktop experience is controlled through an entirely different mechanism.

Second, there isn’t a single reporting console for Chrome. The G Suite admin console can access Chromebook information but it’s completely blind to Chrome browser metrics. Tasks like examining what extensions are installed or reporting on the versions of Chrome deployed across various platforms were simply not possible.

For schools this might be an irritation but for enterprise it can be a deal breaker so Google have solved the problem in a pretty elegant fashion. There are no licencing implications and the feature is free across all recent G Suite offerings. It’s also very easy to get started and there are only a couple of prerequisites.
  • The admin console must have the feature pushed out.  More than likely you already have the update. If your Chrome management tab reads User and browser instead of just User and you can see the new group icon shown above you’re good to go.
  • You must have Chrome V73 or later running on Windows, Mac and Linux computers. Android and iOS are not currently supported.

Preparing the Console.
Managed Chrome needs to be turned ON as it’s OFF by default. It’s important to realise that none of these actions will affect Chromebooks. User-level policies apply to Chrome devices even if Managed Chrome is turned off. Also you can turn this on at the root without any immediate effect as the browser has to be enrolled before anything happens.

Note: If you’d like to play around first, create an OU called “Managed Browsers” and set it here.

From the G Suite admin console Home page, go to Device management and then Chrome management.  Click on User and browser settings.

Go to the Chrome Management for Signed-in Users section and change the setting to Apply all user policies when users sign into Chrome.



This policy is a little misleading. It should really read Chrome Management for Signed-in users and managed browsers. As we'll see, quite a few of the console policy descriptions need to be updated to reflect the new responsibility.


Preparing Desktop Chrome.
Just like Chromebooks, desktop browsers have to be enrolled into G Suite and to do that you need to create an enrollment key.

From the Admin console home page, go to Device management and select Managed Browsers.

At the bottom, click  +  to generate an enrollment token.



A couple of important points to note about the key.

It’s only useful at the point of enrollment and has no function after the event. Once the key is presented to the browser (we’ll come to that later) and the browser taken under management, the key can be deleted or updated without affecting devices already enrolled. In some respects it works like the joining code in Google Classroom.

The generated key is specific to the OU level. Any browser enrolled with this key will be placed at the same point in the tree and take policy from that level. In this case the analogy would be the ‘follow the user’ feature for chromebook enrollment.

Note: If you are just testing at this stage, create the key for the new “Managed Browsers” OU and set your policies there.

A browser under management will display a new information tag at the base of the About dialog.





Managed Chrome Browser Policies.

For organisations running Chromebooks it’s worth noting that none of the policies under the Device section have any bearing on Managed Chrome. Device policies are strictly limited to Chromebooks; no red line has been crossed. All policies relating to Managed Chrome are to be found in the User and browser section, hence the change of name.

So how does Managed Chrome differ from a simple synchronised Chrome user session?

Put simply, once a browser is enrolled, user policies can be applied to it without the user having to authenticate with Google. The policies are set in the User and browser settings of the OU that the browser object lives in.

If a user attempts to access a resource such as Gmail they will be presented with a logon box in the normal way, but at this point the browser is already under cloud user policy control, so you’re able to define things such as valid logon domains - but not using the policy you might expect.
The domain filter for Chromebook authentication is set in the Device policy but, as we know, Device policies have nothing to do with managed browsers. The policy you need to set is therefore User and Browser - Sign-in Within the Browser, in the OU that the managed Chrome browser resides in.



Using the settings shown above, the user will only be able to authenticate with a stmichaels.org.uk account, the value being controlled by a Google cloud policy rather than a local platform policy.

Note: Clearly the description of this policy isn’t entirely accurate and neither is the little light bulb which says the policy only applies to Chrome devices.  Some housekeeping is required on the console to bring everything into line.
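For comparison, this is roughly what the old local-platform approach to the same restriction looks like - the Chrome RestrictSigninToPattern policy pushed out through the Windows registry (or a GPO). The .reg sketch below is purely illustrative and the pattern is only an example:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome]
"RestrictSigninToPattern"=".*@stmichaels\\.org\\.uk"

With Managed Chrome the equivalent restriction lives entirely in the cloud policy, so there is nothing extra to deploy or maintain on the local machine.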

Policy Precedence.
You now have three policy sets for the Chrome user experience. Using Google's own terminology they are:
  • Platform Policy - These are the old policies pushed out locally through GPO or MDM.
  • Cloud Policy - Managed Browser. User and Browser settings applied at the browser OU.
  • Cloud Policy - Authenticated User. User and Browser settings applied at the user OU.

So what happens if you set the same policy in all three areas - which one applies?

The rule is (pretty) simple.

The Cloud Policy - Managed Browser policy will always apply except for two situations.
  1. A policy set at the Platform level will trump any Cloud Policy.
  2. Any policy explicitly set at the User level and not explicitly set at the Managed Browser level will be applied.
In plain English what happens is this.

The browser takes the full user policy set from the Cloud Policy - Managed Browser OU with the understanding that any Local Platform Policies will overwrite individual settings.

When a user signs into Chrome, any policies that are locally applied on the user OU will override policies inherited at the Managed Browser level, so long as they don’t clash with a Local Platform Policy, which always wins out.

If a policy is Locally applied in both the user OU and the Managed Browser OU, the Managed Browser version is used. At no stage are list-based policies merged.
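If it helps to picture the rules, here is a minimal sketch of the per-policy decision expressed in Python. This is purely illustrative - it is not how Chrome implements policy resolution, just the logic described above written as code:

def resolve(policy, platform, browser_ou, user_ou):
    # platform: {name: value} pushed locally via GPO / MDM
    # browser_ou / user_ou: {name: (value, locally_applied)}
    if policy in platform:                      # 1. a local platform policy always wins
        return platform[policy], "Platform"
    browser = browser_ou.get(policy)
    user = user_ou.get(policy)
    if browser and browser[1]:                  # 2. locally applied at the Managed Browser OU
        return browser[0], "Cloud - Managed Browser"
    if user and user[1]:                        # 3. locally applied at the user OU
        return user[0], "Cloud - Authenticated User"
    if browser:                                 # 4. otherwise the browser OU's inherited value
        return browser[0], "Cloud - Managed Browser (inherited)"
    if user:
        return user[0], "Cloud - Authenticated User (inherited)"
    return None, "Not set"

So, for example, a homepage locally applied on the user OU beats one the browser OU merely inherits, but loses to one locally applied at the browser OU or pushed from the platform.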

You can clearly see the effect of these policy interactions if you open chrome://policy.




The first thing to note is that the enrollment token is now shown as part of the machine policy. The only platform policy is the one that applies the Cloud Management Enrollment Token; all the others come directly from the cloud, from the User and browser policy of the OU that holds the browser object.

The only exception is one policy that has come down as part of the user sync process: it’s set as ‘Locally applied’ in the user OU but only ‘Inherited’ in the browser OU, and so it takes precedence.


Enrolling a Chrome browser.
As mentioned earlier, enrolling a Chrome browser is very straightforward - you just need to present the enrollment token, either as a platform policy or by running a .reg file.
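If you’d rather go the .reg route, the token just needs to land in the Chrome policy area of the registry. A minimal sketch, reusing the sample token shown later in this post - substitute your own:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome]
"CloudManagementEnrollmentToken"="1f5e6ab4-3ebe-4e0c-b959-86a3eeb4e0c"

Run (or script) the file on the target machine, restart Chrome and the browser should enrol itself against the OU the token was generated for.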

In our serverless school the platform policy will be delivered by InTune so let’s see how easy this is.

Note: If you need a refresher on using InTune to push policies to Chrome, please refer to the earlier posts.

Managing Chrome on Windows is now reduced to the deployment of a single platform policy which tells Chrome to look to your Google organization for all other settings.


In this case the OMA-URI value is:

./Device/Vendor/MSFT/Policy/Config/Chrome~Policy~googlechrome/CloudManagementEnrollmentToken

with the value set to:

<enabled/> <data id="CloudManagementEnrollmentToken" value="1f5e6ab4-3ebe-4e0c-b959-86a3eeb4e0c"/>

Obviously you need to insert your own value for the enrollment token, but after that you can manage Chrome through G Suite using the same policy set that controls Chromebooks.

Now you have Chrome on a Windows 10 desktop, fully managed from the cloud using a unified policy set within G Suite. Cool or what!

Next we’ll take a closer look at the features of the admin console that relate to managed browsers and see how the story gets even better for the serverless school.


Addendum

Taking feedback from other early users (Kim Nilsson and Roger Nixon) I've checked back on the policy application logic and there's one point that needs clarifying.

As described, the process does indeed ‘loop back’ on the user policy looking for the Locally Applied status, but only if the User OU has

Chrome Management for Signed-in Users set to Apply all user policies when users sign into Chrome.



This has to be set on both the browser OU and the user account OU. If not, you only ever see browser policies.






Friday 12 April 2019

Managing Chrome in a serverless school (p2)

Deploying Google File Stream.


Anybody following this sequence of posts and expecting a simple deployment procedure along the lines of the Chrome browser is in for a bit of a surprise.

Unfortunately Intune for Education only recognises three native deployment types (web link, Windows Store item and packaged MSI) and Google Drive File Stream doesn’t fall into any of those categories, so it’s time to roll up your sleeves and log into the Azure InTune portal.

The Azure InTune portal supports a wider range of application types, including generic Win32 apps, but these need to be converted to a specialised file format (.intunewin) before they can be used. As it turns out, converting the Drive File Stream executable into a .intunewin file is very straightforward.

The first step is to use the Microsoft Intune Win32 App Packaging Tool to pre-process the Google File Stream executable. The packaging tool wraps the application installation files into the .intunewin format and detects the parameters required by Intune to determine the application installation state.

Although this post references File Stream the process is much the same for any .exe installer.

Download the packaging tool and extract the executable (IntuneWinAppUtil.exe) into a local directory - C:\Intune, for example.

Download and copy the Google Drive File Stream .exe installer into the same directory.

Open a Command Prompt as administrator, navigate to C:\Intune, run IntuneWinAppUtil.exe and provide the following information when requested.

Please specify the source folder: C:\Intune
Please specify the setup file: googledrivefilestream.exe
Please specify the output folder: C:\Intune

Once the process is complete a file named googledrivefilestream.intunewin is created alongside the original file. This is your new “InTune friendly” install package for File Stream.
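If you’d rather skip the interactive prompts, the packaging tool also accepts the same values as command-line switches (see the readme that ships with the tool), so the whole step can be scripted - something along these lines:

IntuneWinAppUtil.exe -c C:\Intune -s googledrivefilestream.exe -o C:\Intune -q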

In Azure InTune navigate to Client Apps - Apps - Add.



From the App Type dropdown select Windows app (Win32).



In the next section select the googledrivefilestream.intunewin file you have just created. In the App Information section fill out the relevant fields. Load up a logo file if you like; it serves no real purpose but it looks nice in the console.


The Program section requires an install and uninstall command. Fortunately these are listed on the Google support site.

GoogleDriveFSSetup --silent --desktop_shortcut

%PROGRAMFILES%\Google\Drive File Stream\<VERSION>\uninstall.exe --silent --force_stop


In the Requirements section check both the 32-bit and 64-bit operating system architectures and set a minimum OS version of Windows 10 1607, or whatever suits your deployment plan.



The detection rule is the only tricky thing left. This is the logic that InTune uses to determine if the software is installed. In this case we’ll look for a registry entry.

Select Rule Format - Manually Configure... and then Add.

Rule Type: Registry

Key Path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\{6BBAE539-2232-434A-A4E5-9A33560C6283}

Value Name: <blank>

Detection Method: Key Exists
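If you want to sanity-check the rule before relying on it, you can confirm the key exists on a machine that already has File Stream installed. A quick illustrative check from an elevated command prompt, using the same GUID as above:

reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\{6BBAE539-2232-434A-A4E5-9A33560C6283}"

If the command returns the key, the detection rule will report the app as installed; if it reports that the key cannot be found, InTune will treat the app as missing.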



The Tag section is not required. Select Save when complete and the installer file will be uploaded and the application prepared for deployment.




This normally takes a few minutes, after which Google File Stream will be presented in the list of applications (type Win32) to be allocated in the same manner as the other apps.



It's important to realise that this process can only be managed using Azure InTune. Even after the app is created don’t expect it to appear within the InTune for Education portal.

In the next post we'll take a look at the new Chrome Managed Browser feature in G Suite and explain why things are about to get a whole lot easier for the serverless school.

Acknowledgements to Roger Nixon at Wheatley Park School UK for working through this example.