Saturday 1 December 2018

InTune for Chromebook Admins (p1).

Enrolling Devices.

One of the advantages of deploying Chromebooks into schools is the ease of management. With no dependency on local infrastructure and a simple-to-use web management console, installing and configuring Chromebooks couldn’t be more straightforward.

Wouldn’t it be great if your Windows devices worked in the same way? Well with Microsoft’s new SaaS based framework maybe they can.

In this series of posts I take a look at Windows Modern Management from the point of view of the Chromebook administrator and, just to make it interesting, let's jumble things up a bit and assume Chromebooks are the established technology.

In this alternative reality Microsoft Windows is the new kid on the block and you have the job of incorporating this feisty newcomer into your serverless, SaaS based school. So let's take the red pill and wander around this particular wonderland just to see how far we can go.


Windows Version.
Fortunately our imaginary school has invested in a batch of Windows 10 laptops all running the 1809 feature update. You can try this trick on your IT suite running Windows 8 but I suspect you won’t get very far. After all you wouldn’t expect a set of Chromebooks running version 55 to support all the functions of the admin console and Windows is no different. So the first step is to make sure you have the latest Windows 10 release.


User Accounts.
Currently Windows devices only authenticate through a Microsoft user directory. So as much as the Chromebook admin would like to see a Google logon dialog on boot, that’s not going to happen any time soon.

Note: Since this post was written this has become possible. Details are given here.

For this reason all your users will need a Windows Azure account to control access rights and licensing. This really shouldn’t be a surprise because all SaaS services work by referencing some form of local account even if authentication is managed by another party. Although you can force Chromebooks to use Azure AD as an authentication source, you still have to maintain a separate Google user account to apply policy.

Web Management.
Google provides a web based management console that controls all aspects of the user and device policy for Chromebooks. Microsoft's counterpart is the InTune for Education portal. The first thing any G Suite administrator will notice moving into Microsoft’s wonderland is that many of the management functions are split between various web portals, all with different navigation styles and UI layouts.

The InTune for Education portal is itself a simplified ‘skin’ that rests on top of the InTune blade in the Azure console. In this walkthrough we’ll stick with InTune for Education unless we’re forced out for some function. Right on cue, the first one of these is user and licence assignment, which is managed not by InTune for Education but by the Office365 portal.

For the record, the Windows administrator will spend most of their time moving between the following portals.

Office 365 - User management and Licencing
InTune for Education - General Device Management
Azure Portal - Advanced Device Management
Microsoft Store for Education - Application selection and authorization.


Licencing.
Licencing Chromebooks is easy. It’s a one-off device licence that’s valid for the life of the Chromebook. This simple relationship has not been lost on Microsoft and they have a direct equivalent, an InTune device licence that lasts for five years. The cost is roughly equivalent and it gives you the ability to apply policy to the device and to any organisational user account logging onto the same PC/laptop. It’s a very good deal and highly recommended.

Unfortunately that’s the end of the good news because although the device licence exists and can be purchased there’s no way of applying it to a device. It’s almost as if the licence has been released to meet a marketing need before the software exists to support it. No doubt this will be fixed in time but currently the fallback is to apply InTune licences to individual user accounts, and that’s done through the Office365 portal.


Enrolment and DEM accounts.
We now have user accounts set up in Azure and InTune licences applied to those accounts, so let's start enrolling some devices. Google admins will be familiar with the fact that enrolling a Chromebook out of the box is pretty easy and a small cottage industry has grown up to support the mass deployment of Chromebooks. In comparison the InTune enrolment experience takes a bit longer but on the whole is pretty slick and surprisingly straightforward.

Microsoft have a new facility called AutoPilot, which we don’t cover here. It’s the equivalent of a Chromebook white-glove service working through the partner channels.

The first thing you will need is a device enrolment manager (DEM) account. Unlike G Suite where any user account can enroll a Chromebook, you need a user account with a special deployment flag set to enroll devices in bulk. Once created, and with an InTune user licence applied, you can enroll up to one thousand mobile devices using a single DEM account. The InTune for Education portal provides a simple dialog to create and manage Azure enrolment accounts.



Once we have our DEM account created we are ready to enroll.

Power on your new Windows 10 device and move through the OOBE inputs. Complete any dialogs regarding language and network access and select Set up for an organisation, which is the Windows equivalent of Ctrl-Alt-E.



Signing in using the DEM account adds the device to the Azure directory and places it under InTune management. Once rebooted you can logon using any organisational account with both device and user policies applied. Pretty simple.

Opening the InTune for Education portal you’ll see the device listed in the All Devices section along with some basic system information. The managed by field should read MDM.



The check-in time records the last time the device took policy from Azure. A sync can be forced at any time and is a useful way of getting changes out to devices on a short schedule. The time taken to apply a policy update can vary from seconds to a long coffee break, so the ability to force a sync is a useful tool.



Opening the device record displays further information and options that will be familiar to the Chromebook manager. The Retire action is the equivalent of deprovisioning in Chrome with one major difference: the licence is returned to the pool. The admin also has the option to force a restart of the device, or to wipe the PC of personal data and return it to factory default settings - a sort of remote Esc-Refresh-Power.



Hidden under the More button are actions to force a virus scan of the device and update Windows Defender, duties that the Chromebook admin doesn't have to worry about but essential tasks for the Windows admin.



The device action panel records the status and result of all active tasks.

Now we have the device enrolled we can take a look at policy management in the next post.

Next: InTune for Chromebook Admins (p2).

Also: Windows Delivery Optimisation for the G Suite admin.



Thursday 1 November 2018

Managing G Suite & Office 365 in a MAT (p3).

Single Sign On (SSO).

In the last of a series of posts outlining a cloud based approach to managing Google G Suite and Microsoft Office365 in a multi-domain setup, we take a closer look at single sign on (SSO).

Before we go into details it’s worth outlining the project objectives with respect to SSO.

  • The Trust uses Office365/Azure AD to deliver a unified user account database across forty schools. Each school has its own DNS domain namespace and local AD accounts are synced to Azure AD from each site using the Azure Active Directory Connect tool.
  • Some of the larger schools also operate Active Directory Federation Services (ADFS) onsite. This allows a user to authenticate directly with local AD and then access Office365 without being prompted for a second set of credentials.
  • The objective is to extend SSO from Google to Azure to support the adoption of Chromebooks and related G Suite services across all of the schools in the Trust without any reliance on the local ADFS service.


Unlike user provisioning, implementing SSO in such a complex set-up can break things and make users very unhappy.

For this reason it’s important to run a small Proof-of-Concept (PoC) to test the configuration before going live on thousands of accounts.  The situation is further complicated by the fact that SSO can only be turned on or off for a single Google organisation which means that testing affects every account in that domain, not just a subset of users. This is bad for one school but seriously bad if your Google organisation holds forty schools.

Fortunately the Trust had access to a G Suite organisation that was marked for deletion so this provided an ideal environment to run a PoC.

Creating the SSO object in Azure proved fairly simple. Unlike user provisioning, which requires an object for each domain, SSO is covered by a single item which takes about ten minutes to configure. Once it’s built you have an ADFS instance in the cloud that’s configured to handle Google authentication requests for all your Azure AD accounts. Fortunately there’s little risk at this stage because, until the Google domain is redirected to the new service, everything works as normal.

This is where the spare domain comes in useful. By setting up the G Suite PoC domain to point to the new Azure service you create an environment that can be used to prove the whole solution, including the Chromebooks if you have a few spare management licences.

Initially it’s worth testing using a standard Chrome browser session. Opening https://drive.google.com with a PoC organisation account should present an Azure logon page to complete the authentication. Similarly logging out and requesting a password change should direct to the appropriate Azure web form.

Once you have this working you can update the Chrome device policy to pass through the logon request to Azure. This process is well documented by Google as well as in the earlier post.

At this point the power of a cloud based federation service becomes apparent. The endpoint will authenticate any G Suite account against Azure if there's a matching account in the tenancy. Interestingly this even works if the school runs local ADFS. The request is accepted by Azure and then passed through to the local ADFS, with Azure acting a bit like a request router.

During the PoC phase one unexpected problem emerged that's worth closer examination.

Chromebooks require Forms Based Authentication (FBA) when they talk to a federation service; after all, it’s the ‘form’ that provides the logon dialog box.

When connecting directly to Azure that’s not a problem, but local ADFS instances can be configured to replace FBA with Windows Integrated Authentication (WIA). WIA provides a much smoother experience for internal users moving between Office 365 and local resources.

A standard ADFS configuration does not list Chrome as one of the browsers that support WIA (even though it does) so it’s fairly common for ADFS administrators to add Chrome to the list of supported User Agents so that desktop Chrome users can have the same slick WIA experience. Unfortunately this action makes a Chromebook very unhappy. After failing to respond to the WIA request it throws up a default form to capture the user logon information, which looks and behaves in a very ugly manner.

Fortunately the fix is fairly simple. The User Agent entry that is added to ADFS has to be altered slightly from “Mozilla/5.0” to "Mozilla/5.0 (Windows NT”.

The second version only identifies Chrome running on Windows as having WIA support, so ADFS defaults to sending a form to the Chromebook and order is restored. The fix is documented in more detail in this code snippet.
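As a rough guide, here's a minimal sketch of that change using the standard ADFS PowerShell cmdlets; it assumes the over-broad "Mozilla/5.0" entry is present and should be tested on a non-production ADFS server first.

    # Swap the catch-all Chrome entry for one scoped to Windows, so
    # Chromebooks fall back to Forms Based Authentication.
    $agents = (Get-AdfsProperties).WIASupportedUserAgents
    $agents = $agents | Where-Object { $_ -ne "Mozilla/5.0" }
    $agents += "Mozilla/5.0 (Windows NT"
    Set-AdfsProperties -WIASupportedUserAgents $agents

    # Restart the ADFS service so the new agent list takes effect.
    Restart-Service adfssrv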

Conclusion.
The Trust has been running with this configuration from the beginning of the new school year in September 2018.

The Azure tenancy and Google organisation are linked using the SSO object, with all G Suite logon requests handled by Azure or routed back to the local ADFS instance. Google accounts are auto-provisioned by Azure without running Google Cloud Directory Sync on each site. The Trust is slowly developing the capability of the provisioning script to add more intelligence into account management. Students now use Chromebooks to access the Office 365 webapps along with a number of other SaaS services.

If you're interested in trying this out yourself, so long as you have access to a test Google organisation that lets you make mistakes without affecting production users, you can’t go far wrong. Contact me for any further information.

Lastly, many thanks to Sumit Tank for handling this project with the Trust IT team and having the courage to believe that it might even work.

Tuesday 2 October 2018

Managing G Suite & Office 365 in a MAT (p2).

User Provisioning.


An earlier post outlined an approach to managing both Google G Suite and Microsoft Office365 in the type of multi-domain setup commonly found as part of a MAT (Multi Academy Trust).

The objective was to create a single organization/tenancy for both G Suite and Office365 in which each school existed as a sub-organization/domain beneath the parent body. In this situation Azure AD acts as the authentication authority and the primary directory for the user accounts. G Suite provides the collaborative workflow for the teaching teams (Classroom) and the device management platform for both G Suite and Office365 (Chromebooks).

In the first of two posts we’ll take a closer look at the technical challenges of this technique, starting with user provisioning and finishing with single sign on (SSO).



In a standard approach the role of user provisioning between Active Directory and Google G Suite would normally be handled by multiple local instances of Google Cloud Directory Sync (GCDS). However since the MAT already has a single unified user directory in Office365/Azure AD it makes far more sense to sync directly from that source.

Fortunately Azure provides user provisioning as a function of the Enterprise Application SaaS connector for Google G Suite, the details of which are described in the earlier post. However scaling up to a MAT deployment with over forty domains requires a few tricks, so here are a few of the lessons learnt during the process.

Sorting out DNS and installing GAM.
To provision users into G Suite each Office365 domain must be created as a sub-domain in the G Suite organisation. Although this isn’t a particularly difficult task, it requires administrative rights on each DNS account to support the verification process, so now is a good time to check that you have that level of access. This task always throws up a few surprises.

The GAM toolset proved particularly useful when creating the verification codes and running updates using a command similar to the one below.

    gam update verify greyfriarsschool.org txt
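For reference, the companion command that generates the verification token in the first place looks something like the hedged example below; check the exact syntax against your installed GAM version.

    # Request the TXT record value for the domain, add it to DNS,
    # then run the update command above to trigger verification.
    gam create verify greyfriarsschool.org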

GAM was also employed later in the project, so installing the toolset against the target G Suite organization proved a key requirement.

Azure provisioning uses a strict mapping of the user ID in Office365 to the same user account in G Suite. There is no way of providing a transformation such that user@schoolA.com relates to user@schoolB.com. If for any reason the accounts don’t map across both platforms this has to be fixed first.


Creating the provisioning objects.
Each G Suite sub-domain is controlled by its own provisioning object in Azure so the end solution has many provisioning objects but only a single object controlling SSO. Since creating the provisioning objects is a manual task it’s a good idea to use a naming schema so you can match each one to the target domain. The provisioning objects can be created at an early stage and then disabled.

At this point you need to decide how to identify or ‘scope’ the user objects in Azure. In our situation we used a new Azure group for each school - “Google Users-Domain A” for example - which was referenced by the domain's provisioning object. Moving an Azure user into this group has the effect of creating a Google account; removing it suspends the account. Testing is very easy as it’s filtered through the group membership, so there’s little risk of flooding your Google domain with thousands of false accounts. Simply enable the object and add a test account into the scoping group.
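For anyone scripting the setup, a minimal sketch using the AzureAD PowerShell module might look like the following; the group and user names are purely illustrative and Connect-AzureAD is assumed to have been run already.

    # Create the scoping group for one school (names are examples only).
    $group = New-AzureADGroup -DisplayName "Google Users-Domain A" `
        -MailEnabled $false -SecurityEnabled $true `
        -MailNickName "GoogleUsersDomainA"

    # Dropping a test user into the group brings them into scope, which
    # triggers the provisioning object to create the Google account.
    $user = Get-AzureADUser -ObjectId "testuser@schoola.com"
    Add-AzureADGroupMember -ObjectId $group.ObjectId -RefObjectId $user.ObjectId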

The Azure provisioning object requires a user with admin rights to be created in the target G Suite organization to allow it to manage the accounts. Whether you use a single account or one specific to each sub-domain is largely a data security decision. Using one for each site requires somebody to maintain a password list as there is no service account (OAuth) option for this process.

Account post-processing.
One feature that becomes immediately obvious when using Azure AD is that the cloud directory doesn't have a hierarchical structure. Local AD accounts exist in a directory ‘tree’ which can be mapped to a similar OU tree in G Suite to control policy. Azure AD doesn’t have this facility so all G Suite accounts are created in root. While this situation might be manageable for a small school, a MAT with over forty domains has the capacity to dump thousands of accounts into the G Suite root container - which is not a pretty sight.

The solution used a post-processing batch job to search for new accounts in the root container. Once identified, the script reads the domain information from the logon account name and moves the account to a “New Accounts” holding area under each sub-domain. The process was written as a PowerShell script with embedded GAM commands and hosted on a local server, although there’s no reason why it couldn't be moved to an Azure server instance, making it location independent.
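The production script isn't reproduced here, but a minimal sketch of the idea, assuming GAM is installed and authorised against the organisation and that each school keeps a "New Accounts" OU, might look like this:

    # Find every account still sitting in the G Suite root container.
    $rootUsers = gam print users query "orgUnitPath='/'" | ConvertFrom-Csv

    foreach ($user in $rootUsers) {
        $email  = $user.primaryEmail
        # Derive the school from the domain part of the logon name.
        $domain = ($email -split '@')[1]
        # Illustrative OU convention: a holding area under each sub-domain.
        $target = "/$domain/New Accounts"
        gam update user $email ou "$target"
    }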

During testing it was clear that the script could be extended to support a more complex set of processing rules. The final production script checks back into Azure AD to pick up additional data which allows it to place student accounts into the correct sub-OU for the year group.

The script needs to be manually updated as new schools come online but this is a far simpler process than integrating another instance of GCDS.

A generic example of the provisioning script without the Azure integration can be found here.

Next we’ll take a look at Single Sign On.

Tuesday 18 September 2018

Managing G Suite & Office365 in a MAT (p1)

A cloud based approach to Single Sign On and user provisioning.


An IT setup that combines both Microsoft Office365 and Google G Suite is more common than you might expect, especially in UK schools. Managing both platforms is fairly straightforward for a single site but things start to get interesting when you try to incorporate multiple schools, all feeding back into a central organization - a situation commonly found within regional districts and trusts.

In a series of posts we’ll examine one approach to management based on a recent engagement with a Multi-Academy Trust (MAT) that planned to incorporate Google G Suite across a number of schools currently using Microsoft Office365. The goal was not to replace Microsoft Office365 but to provide each school with open access to both services. The trust was looking to manage both Office365 and G Suite as a single tenancy/organisation with each school existing as an independent policy unit (sub-organisation).

As a prime objective the trust intended to employ Azure AD as the main directory and authentication source across all the sites. User management would use on-site AD but the actions would drive the automatic creation and updating of accounts into both Azure AD and G Suite.  The plan also included the adoption of Chromebooks as the preferred student device for both Office365 and G Suite, using Azure AD as the authentication source.

An earlier proposal involved synchronising data from each remote site. In this model each school would run Azure Active Directory (AD) Connect to provision user accounts into Office 365 (Azure AD) while the complementary service, Google Cloud Directory Sync (GCDS) maintained the accounts in G Suite. Each site would also run the Active Directory Federation service (ADFS) to authenticate users against the local AD database.

During the planning stage it was clear that this approach had a number of problems.

  • The rollout involved over 40 locations, some of which were small primary schools with limited space and resource. Some sites required additional investment in onsite hardware as well as software upgrades, which only added to the technical complexity and the ongoing support burden. The timescales for the deployment did not allow for an upgrade program.
  • A Google organisation can only direct SSO towards a single external source. Therefore the plan to have a unified Google organisation talking directly to multiple on-site ADFS sources couldn’t be supported.
  • Google Cloud Directory Sync (GCDS) was never designed to work in a multi-site setup. Without a complex set of exclusion rules there was a real risk that accounts from one school could be suspended by a second copy of GCDS running at another site. In a smaller deployment this might be manageable but the trust required a solution that could be scaled up without hitting a configuration bottleneck. Although running multiple G Suite organisations was one option, this didn’t fit with the trust's overall strategy.

After a review period it was decided to trial a second approach that provisioned and authenticated users directly from Azure AD using the techniques described in an earlier post. Although this had proved successful for a single school there was no published documentation that described a more complex deployment involving multiple sites and domains. While this presented a significant risk the general approach had a number of benefits for the trust.

In order to support Office365 each school synchronised to the central Office365 tenancy using Azure Active Directory (AD) Connect. Therefore a centralised directory that Google could query was already in place - Azure AD. However some of the larger schools maintained a local ADFS service while others simply synchronised local passwords into Azure AD. It was hoped that by pointing the G Suite organisation at Azure AD as the single target it would act as a broker, authenticating against cloud accounts and synchronised password accounts, or deferring to on-site ADFS as required.

Since Azure AD acts as an integrated directory for the entire trust it made sense to try to provision accounts into G Suite directly from this source rather than from each local AD. In this way configuration was centrally controlled and did not require onsite installations of GCDS. In fact the whole solution was zero-touch on the remote sites, which enabled a much faster rollout schedule.


So can this approach be made to work in a MAT? Yes it can.

The trust now has G Suite authentication running centrally from the Azure AD database while maintaining day-to-day user administration through local AD at each site. Chromebooks present an Azure AD logon challenge on startup, directed to the appropriate authentication source, either Azure or on-site ADFS. There’s no requirement for GCDS as user accounts are created and suspended in G Suite using the auto-provisioning objects provided by MS Azure.

Essentially the solution proved to be a scaled up version of the technique outlined in the earlier post with a few tweaks.

In a follow up post we'll take a closer look at the challenges provided by user provisioning.

If you’re interested in attempting a similar program in your organisation please drop me a line through the contact form and I’d be happy to link you up with the expertise.

Tuesday 28 August 2018

The problem of local file shares in a SaaS school.

For schools moving to a SaaS based model, the requirement for local files is a difficult problem to solve.

This is particularly true of established sites that have curriculum requirements and admin workflows that depend heavily on traditional Windows file shares. Copying the data to a cloud repository sounds like a good idea but when the Year 10 Media class try to open their video projects and 20 minutes later everybody is still looking at a spinning wheel, you’d better expect some negative feedback.

Whether you work in the cloud or on premises the golden rule when working with large data sets is always the same: both the data and the application have to be in the same place.

If the curriculum demands that the application is Adobe Suite or a video editing package hosted on a high end workstation then the data has to be delivered from a local store to ensure an acceptable level of performance. However after installing the file server, the failover host and the multi-tier tape backup system you may find that your bold serverless initiative is now in tatters.

The answer is to have the file shares stored in the cloud but accessed from on-site which doesn’t sound possible, but is.

One of the many storage options offered by the Microsoft cloud service is Azure Files. It’s a simple concept that allows you to define an area of storage and then advertise it like a standard Windows file share. You don’t have to worry about servers or redundancy; all of that is handled by the fine folks at Microsoft. Users can consume and interact with the share in exactly the same way as if it was hosted locally, which solves the first part of the puzzle but not the second.
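As a flavour of how little is involved, here's a hedged sketch of standing up a share with the Az PowerShell module; every name below is invented and Connect-AzAccount is assumed to have been run.

    # Create a resource group, a storage account and a 1TB file share.
    $rg = "school-files-rg"
    New-AzResourceGroup -Name $rg -Location "westeurope"

    New-AzStorageAccount -ResourceGroupName $rg -Name "schoolfilestore01" `
        -Location "westeurope" -SkuName Standard_LRS -Kind StorageV2

    New-AzRmStorageShare -ResourceGroupName $rg `
        -StorageAccountName "schoolfilestore01" `
        -Name "curriculum" -QuotaGiB 1024

    # Clients can then map it like any other share, for example:
    # net use Z: \\schoolfilestore01.file.core.windows.net\curriculum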

In response to this Microsoft have announced the general availability of a new service called Azure File Sync (AFS), which is a caching engine for Azure Files. The technical details are listed here but in simple terms an agent on a local server synchronizes the contents of an Azure Files store to local storage and then keeps the two stores in sync. Files now open with the speed of local storage but are hosted in the cloud.

That’s pretty clever but the advantages don’t stop there. Using another feature termed ‘cloud tiering’ it's possible to synchronize only the files that are currently active while keeping all the old stuff in the cloud. To any user browsing the share it would appear that all the files are stored locally. Users have access to all the file properties and permissions settings just as before; data is retrieved from Azure Files only when the file is opened. Access the same file a second time and the data is now considered ‘hot’ and will be maintained on the local drive with changes written back to Azure as a background process.
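For the curious, wiring up a tiered server endpoint looks roughly like the sketch below using the Az.StorageSync module; it assumes the sync service, sync group, cloud endpoint and server registration already exist, and every name is illustrative.

    # Look up the registered server, then create a cloud-tiered endpoint.
    $server = Get-AzStorageSyncServer -ResourceGroupName "school-files-rg" `
        -StorageSyncServiceName "school-sync"

    New-AzStorageSyncServerEndpoint -ResourceGroupName "school-files-rg" `
        -StorageSyncServiceName "school-sync" -SyncGroupName "curriculum" `
        -Name "site-cache" -ServerResourceId $server.ResourceId `
        -ServerLocalPath "D:\Shares\Curriculum" `
        -CloudTiering -VolumeFreeSpacePercent 20  # keep 20% of the volume free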

The Year 10 media class issue is solved. You have the fast response of  a local filestore while still maintaining the advantages of cloud storage.

Let's run through a few of the other advantages you get with Azure File Sync.

If implemented properly the local server is reduced to the status of a cache appliance. By only holding copies of active files it can host a multi-TB share while only having an 800 GB data drive, and none of the local data requires backing up. Azure File Sync comes with a useful facility that enables fast recovery of the file system onto any suitable hardware. By turning aging file servers into cache appliances their useful life could be significantly extended.

The Azure file store can act as a central repository for a group. By consolidating local file shares into a central store and then synchronizing data out to edge sites you get the benefits of a local share but without the problems of maintaining an on-site data silo. In this model all backups are done using Azure Backup services (no hardware required) and users have the ability to recover data directly using the facility built into the Windows UI.

With all this sweetness there has to be a little sour.

There are a number of technical limitations that still need ironing out. The service has a 5TB limit on a single Azure file share but this is expected to be increased to 100TB in the near future.

Azure File Sync uses a simple conflict-resolution strategy as it doesn’t pretend to be a fully featured collaboration platform. If a file is updated from two server endpoints at the same time the most recent change will be recorded in the original file. The older update will result in a new file with the name of the source server and a conflict number appended to the original file name. It’s up to the user to manually incorporate the changes if required.

However the biggest hurdle to general adoption within education might be the question “How much does it cost?”

Subscription charges based on usage apply to the central store and prices vary between Azure regions. At the time of writing a 1TB store using Local Redundancy in Europe West will cost around £46/month (£553 pa). Standard data egress and transaction rates apply to the central store, plus any additional capacity and transactions used by the share snapshots. There’s also a cost to implement Azure File Sync, which amounts to £5.50/month (£66 pa) per server endpoint, and a charge for any additional Azure Backup services you may choose to employ.

So the answer to the question is: for a 1TB store AFS will cost upwards of £553 pa, but the actual cost will depend on a whole bunch of factors which you can’t estimate until you start using it.

Let’s assume the cost is £700 pa. I think that’s a pretty good deal for a fully managed distributed file system but the problem is: £700 pa compared with what?

How many schools understand the cost of storing data locally? To the casual observer it might even appear free. After all, both solutions still require local hardware and Microsoft’s generous EDU licencing agreements make standing up another file server a cheap option. So the argument could be raised: why put your data in the cloud and then commit to an open ended charging scheme just to access it?

From a personal point of view I can think of a dozen technical reasons why it would be a good idea to implement AFS, but they could all be overridden by fears and concerns over pricing and lock-in. Microsoft's response to all this is simple: reduce the subsidy that education receives for local licencing until Azure looks like an attractive option. It’s a long term strategy that will win out in the end.

In the short term it would be useful if education could find a way to place a real cost on local data storage, particularly in the brave new world of GDPR, so that schools can properly evaluate SaaS offerings such as Azure File Sync. If not they could be missing out on a good deal.

Sunday 22 July 2018

Login as A but send email as B

Operating your G Suite inbox on a separate domain to your logon address.

In a simple world a school would create a Google organization using the internet domain employed by the external website and public email and that would be the end of it.

Unfortunately we don’t live in a simple world and the direct relationship between the email address and the user logon is often affected by a change in circumstance.

For instance, the school might be planning to consolidate under a standardised Trust or District logon as part of a rebranding exercise. It's also possible that the organisation was originally created using a domain that was less than ideal. After all east-walthamstow-college-of-arts.co.uk is a great descriptor but a tedious logon address.

Whatever the case, most organizations are reluctant to give up an email address that is printed on stationery, external signage or embedded in software. So for a variety of reasons a school may require a different logon address to the one that routes mail.

In this example our fictional college wants to move to ewca.co.uk as the logon identifier for the long suffering students and staff but maintain east-walthamstow-college-of-arts.co.uk as the primary email address on all G Suite accounts. So how can this be achieved?


The first thing to understand is that in order to use ewca.co.uk for any purpose it must be registered within G Suite as a secondary domain. This process is fairly straightforward and is well documented by Google so there’s little point repeating it here.

Once the secondary domain is verified the G Suite administrator has the ability to upgrade each user's primary address from student@east-walthamstow-college-of-arts.co.uk to student@ewca.co.uk. It’s a simple select and save action; Google takes care of all the backend housekeeping, with a few advisory notes that are listed later.

As soon as the account is updated the user can logon as student@ewca.co.uk while maintaining all the features and data of the original account. As a bonus the old student@east-walthamstow-college-of-arts.co.uk address is pinned to the account as an alias, which allows the user to continue to receive mail.

Job done. Well not quite.

By default the primary (logon) address is the one that GMail uses when it sends mail. Therefore although the user can receive mail on student@east-walthamstow-college-of-arts.co.uk they are sending on student@ewca.co.uk, which is not quite what we want.

Fortunately GMail has a way of fixing that.

In the 'Settings' dialog of the user's GMail navigate to the 'Accounts' tab where you'll find the 'Send mail as' option.


Selecting 'Add another email address' allows the user to add the new alias as a send address, after which it can be made the new default (below).


Once that’s been done the account will send mail from the student@east-walthamstow-college-of-arts.co.uk address by default. This default will also be used when replying to mail, regardless of the address the original message was sent to.

This seems pretty straightforward but there’s an obvious problem. This is a user driven process; none of the actions can be controlled or managed from the admin console. Even allowing for a well informed student and staff body, publishing a crib-sheet for each user is going to lead to issues.

Mistyping the alias or failure to do anything at all will result in mail moving in unexpected directions. A manual process might be manageable for 100 users but not 1000+.

Faced with a problem like this the solution is always the same…  send for GAM.

GAM is an open source project that exposes the Google APIs as a simple command line interface that you can use to build batch files. It’s an essential component of every G Suite admin's toolkit.

Fortunately GAM has a command for this, as it has for most functions.

    gam user student sendas student@east-walthamstow-college-of-arts.co.uk "Student" replyto student@east-walthamstow-college-of-arts.co.uk default treatasalias true

It should be pretty clear how each of the elements in the command relates to the options in the user dialog shown above. Of course GAM can also help out with the first part of the project, updating the user's primary logon address using a command similar to the one shown below.

gam update user student username student@ewca.co.uk

Therefore the process is reduced to running two GAM commands on each user account.
  • Change the user's primary logon account to the new secondary domain.
  • Update the default send address to the email alias created by the first command.

Note: GAM provides methods to draw data from CSV files that will update hundreds of users in a single command.
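As a sketch, assuming a CSV file named rename.csv with the columns oldEmail and newEmail, the two-step batch might look like this; double-check the syntax against your GAM version before running it on live accounts.

    # Step 1: move every account onto the new logon domain.
    gam csv rename.csv gam update user ~oldEmail username ~newEmail

    # Step 2: restore the original address as the default send-as alias.
    # (The display name "Student" could equally come from a CSV column.)
    gam csv rename.csv gam user ~newEmail sendas ~oldEmail "Student" replyto ~oldEmail default treatasalias true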

For a large school it’s not recommended to do this in the middle of the day. The process of renaming can take up to 10 minutes to propagate across all services and you must allow at least 24 hours for the directory to catch up. Like all major updates it should be handled using proper change control mechanisms and tested on a subset of users.

A couple of points worth noting.

The GMail user still has the ability to update the send address by changing the configuration in the settings dialog. GAM will make the change but it’s not locked down in any way. The more curious user can, and probably will, mess with this at some point.

Lastly, larger organisations may be using utilities such as Google Cloud Directory Sync to maintain G Suite user accounts, which adds an additional degree of complexity. In this case the update to the primary user logon needs to be matched with a change in the synchronization rules, otherwise you run the risk of creating duplicate accounts. Again it’s best to test on a group of test users first.

Happy GAM’ing.

Saturday 9 June 2018

Is your school's IT about to fall off a cliff?

This might be a UK phenomenon but about this time each year school IT admins start to give some thought to the summer IT upgrade.

This task is often approached with a degree of fear and trepidation and with good reason. In just under two decades IT has moved from being an interesting novelty to a core function for both administration and teaching. I’m not sure when the decision was made to compel schools to maintain an on-premise datacenter with a support network that wouldn’t shame a medium size business, but that's where we find ourselves today.

Finding the right skill set to support this complex on-premise setup is difficult. The level of technology found in schools is similar to what you would find in the commercial sector and includes server virtualization, shared storage, directory systems, wireless, tape backup and client management. Therefore education competes in the same salary space as businesses that are better placed to offer attractive deals to qualified support staff. A common response is to share resource or enter into a support contract with an external party but it’s often the case that this ongoing cost wasn’t allowed for when the shiny new boxes were first installed.

The other problem is that the “shiny new boxes” are no longer shiny or new. In fact they are now over five years old and a replacement program is on the agenda. Even if they’re still providing a reliable service they are likely to be out of support and warranty. If they go wrong, who are you going to call? Ghostbusters are not going to help you here. Extended support contracts can be an option but the costs are often in the same ballpark as a replacement program.


The problem is often compounded by the fact that the equipment was purchased as part of the same initial investment program and therefore all the hardware has come to the end of support at the same time. Servers are one problem but you may have networked storage, tape backup systems and networking equipment that are all facing the same issue.

Let's face it, your school's IT system is about to fall off a cliff.


There’s no simple answer to this problem. The short term solution might be to find the money to paper over the cracks, but in another five years' time you’ll have the same problem - nothing has changed.

The sustainable approach is to make use of Software as a Service (SaaS) and start to decommission local server and storage hardware. This is no longer a complex technical problem. There are plenty of toolsets and partners who can help you out and the task is probably no more onerous than replacing core systems.

The major sticking point is the fact that the school will have to change the way it uses IT, and this will impact teaching.

Software that has been part of the curriculum for the last 15 years will have to be reassessed and alternatives found. Familiar desktop systems may have to be upgraded to modern mobile friendly versions. Workflows and lesson plans based on email attachments, local storage and printed worksheets will need to be reevaluated in light of a system that encourages, and in some respects requires, collaborative practices.

This is the hard bit, but can it be any harder than emptying the coffers to pay for an on-premise system whose only promise is to deliver the same service as five years ago but with added cost and complexity?

Friday 25 May 2018

Off Hours Device Profiles - A First Look.

The rumour that Google planned to support Off Hours device profiles for Chromebooks has been around for a while. An earlier post proposed how they might be used to support a Bring Your Own Chromebook (BYOC) policy for schools.

The basic idea was that a student could purchase a Chromebook and then use it in school operating under a security profile, only to revert back to a standard consumer device after 4 PM.  The benefits seemed to be obvious but the details were a bit vague because nine months ago scheduled device profiles didn’t exist - but they do now.

This post gives a brief description of how they work. As with most things Google it’s a simple idea that's been well implemented and the implications for future 1:1 programs could be significant.

The setting is found in the Chromebook device area of the admin console in a new Off Hours policy.



The information required is pretty straightforward: the time zone for the schedule followed by a series of ON - OFF times.

In this example the policy is set to operate between 7:00 AM and 12:00 PM every Friday.

The dialog informs you that once set “some sign-in restrictions won’t apply”. The wording of the policy is a bit vague and the effect is not immediately obvious because, after saving the policy and rebooting, the Chromebook shows no change at all. Even though the policy should apply according to the time frame, the user is still limited to organizational accounts only.

The change is only apparent after the organizational user logs on and then signs out, which is a really nice feature. Effectively the Chromebook will only relax the security profile after being authenticated by an organisational account. Therefore if the Chromebook is left on the bus it’s not going to allow Guest Mode after 4 PM just because the policy applies at that time.

However after the organizational user signs in and out, it’s all change.

The organizational account requirement is lifted, guest mode is enabled and the user can log in with a standard consumer account.


Once logged in it's clear that the user session is set by a timer that's controlled by the Off Hours policy. In this case after 1 hour 50 minutes the session will terminate regardless of any internet access. Once the time is up it's goodbye Facebook and back to school for you!

So there you are, Off Hours device profiles. A very simple idea that provides a whole new way of putting Chromebooks into schools, and no other client technology can do this.

Game changer is an overused term but in this case I'm not so sure.


Note: tested on a Chromebook running Beta V67.0.3396.57.

Tuesday 8 May 2018

Why app licencing is a dead dog for Chromebooks.

Managing Android apps on Chromebooks creates a number of challenges for schools, one of which is licencing.

Software licencing is not really a Chromebook, or even an app issue but a general problem that’s been around for as long as IT admins have been installing packages onto local devices.

Over the years there have been numerous attempts to manage the process including dongles (remember DESkey), key servers, USB devices, block and site codes, up to and including a trust relationship with the customer backed up with the threat of a visit from FAST.

While you have a limited number of machines that can be matched with an equally limited number of software titles, the situation can be managed without too much stress. The problems start to emerge when you scale up to hundreds of devices and spiral out of control when you introduce customised application sets and shared device deployments. Unfortunately the majority of Chromebook/Android deployments fall squarely into the last category - many users, on shared devices, all requiring a custom app set that's linked to the curriculum.

The current model for software deployment onto mobile devices uses the store concept (Google Play Store, Apple App Store and Microsoft Store) and while this is well suited to an individual with a couple of devices, a personal credit card and a desperate need to play Candy Crush, it quickly falls apart when you apply it to large deployments of shared devices.

Apple have gone through a number of iterations to solve the bulk licencing problem while Google tried to adapt the store model with Google Play for Education, only to run into the same predictable set of issues - it was inflexible and it didn’t scale.

To make things worse Google have another problem to solve before they can make Android app licensing workable. The current deployment framework is based on a simple organisational tree which is ideal for general policy control but entirely unsuitable for paid applications. For this to work you need to deploy apps against user groups and currently that’s not possible.

So where does that leave us?



I think we have to accept that licencing at the point of install is a dead dog for app deployments onto shared Chromebooks (or any device).  We’ve tried it and it doesn't work - give it up.

I don't understand why local licencing is a problem that we are still trying to fix. It’s like an emotional attachment that we can’t quite shake off, the notion that the value of the app lies in the space it consumes on local storage rather than the service it provides. Perhaps it’s something deeply embedded in the IT psyche from installing applications on Microsoft Windows for the last 20 years.

It’s not 1998 anymore. The real value of the local app has migrated to backend services such as cloud based directories, remote storage, analytical dashboards and advanced APIs that expose a whole new range of functions that leave local computing in the Dark Ages. Surely a modern app is just a gateway onto these processes, a convenient way to consume a service through the native UI rather than the core proposition.

So why not make the installation free and charge for the value item - the backend service.

For example;
  • You can install the graphics app but to save the output to cloud storage and to gain access to the teacher dashboard you need to licence the Google account. 
  • You can install the language app for free but integration with Classroom and the features set that provides student metrics needs a subscription.
  • The science app needs a user licence to enable results to be shared with your project team and to work collaboratively.
All the licencing is handled by a supporting SaaS platform, hooking directly into the cloud directory. Providing a backend service adds an overhead but this shouldn’t be too much of an obstacle for a business that plans to sell tens of thousands of units into education. Doesn't everyone want to build a ‘platform’ rather than just an app?

What are the advantages?
  • Deploy the app using a simple whitelist. Users can install and uninstall as required. 
  • You don’t need groups and the Google OU model works just fine. A quick check against the directory will confirm user access. If not, you just get the try-before-you-buy option.
  • Licences are based on active user accounts rather than device installs and can be automated against the directory service.
  • Schools gain analytical data from the app instead of a simple desktop process. Surely this is where the true value lies for education.
  • The model works fine in a BYOD rollout while local licencing just creates a heap of issues.

Local licensing for shared devices forces the store model to support a scenario for which it’s completely unsuited. The result makes deployment far more complicated than it needs to be and only succeeds in placing a barrier between the app and the educational consumer. SaaS has been using the “freemium” model for years and it’s been very successful; why should local apps be any different?

I’m concerned that a lot of time and effort is going to be spent trying to make Play for Education V2 work for schools, only to see it fail for the same reasons as the initial attempt or simply become irrelevant. Without having any data on this, I suspect the most popular Android apps installed on Chromebooks are the productivity tools from both Google and Microsoft Office 365 - which use exactly this model.

With everybody on the information super-highway speeding towards on-demand access and subscription billing I have a feeling that the pay-to-install model might just end up as roadkill.

Friday 27 April 2018

SSO from Chromebooks to Azure AD.


********************************************************************
This post has been extended with a related article that references the Azure user interface for 2019 and also includes some tips and tricks gained from recent deployments.

You can find the new post here.

********************************************************************

Following on from a recent post showing how to auto-provision users from Azure to Google G Suite it seems like a good idea to complete the picture by describing Single Sign-On (SSO) from Google to Azure AD. The idea is you can pick up a Chromebook and be presented with a Microsoft dialog rather than the standard Google login challenge.

Like the user provisioning example this procedure does not require local federation (ADFS) but relies on the equivalent cloud based service bundled with the Azure AD Free tier. Under this arrangement users get SSO access for up to ten apps, which is pretty generous as the whole of Google G Suite is considered to be a single app.

For this example we’ll use the xmastaff@xmaacademy.org account that the Azure provisioning service created for us in the earlier blog. It follows that the xmaacademy.org domain must be configured in both Azure and Google and that the user account logon name is the same on both systems.

The configuration settings for both auto provisioning and SSO can be found in the Azure Portal under Enterprise Apps.



Clicking on the New Application icon presents you with a comprehensive list of SaaS applications that come with pre-built templates that allow them to integrate with Azure directory services. Again the best way of finding the right option is to simply search for “google apps”.

Choosing this option opens up a blade on the right that allows you to enter some basic config details and name the service. Once you are back at the main menu you can configure SSO, which turns out to be quite straightforward compared with working with ADFS.


From the drop-down menu select SAML-based Sign-on. This presents a whole range of input boxes.

The key fields are Sign on URL and Identifier which can be customized to suit your setup. The data below worked for me.

Sign on URL: https://apps.google.com/user/hub

Identifier: google.com


From here the default settings should suffice. Make sure that the User Identifier is set to user.userprincipalname.



In the SAML Signing Certificate section check the box Make certificate active and save the config. Download and store the certificate (*.crt) that’s created for you.




All that’s left to do is to collect some information and transfer it to Google; luckily Microsoft makes this pretty easy. Clicking on the section below provides a simple tutorial on how to update Google G Suite.



The key information you need is conveniently listed in the Quick Reference section towards the bottom of the help dialog.



The values will be different for your Azure tenancy but you will need both URLs and the certificate you downloaded earlier before heading off to the Google Admin console.


Configuring G Suite and Chromebooks for SSO.
Single Sign On is managed under the Security icon within the Set up single sign-on (SSO) section.
One note of warning: currently it's not possible to turn on SSO for a subset of Google users. Continuing down this route will require all your non-admin G Suite users to authenticate with their Azure AD credentials.


In the dialog shown above paste the URLs into the relevant fields and upload the certificate. The Change password URL is standard for all configs:

https://account.activedirectory.windowsazure.com/changepassword.aspx

If you encounter errors, try saving each entry in turn. In particular the certificate upload seems to work better as a single load and save action.

Optionally, check the Use a domain-specific issuer box to enable a domain-specific issuer. If you enable this feature, Google sends an issuer specific to your domain, google.com/a/your_domain.com, where your_domain.com is replaced with your actual domain name.

If you don't check the box to enable a domain-specific issuer when you set up SSO, Google sends the standard issuer, google.com as recorded in the Identifier field in the Azure section above.

When you are ready, check the Setup SSO option, but be aware this action will affect every account in your organisation. Unchecking the option turns off SSO without removing all the data.

As a simple test, logging into the Chrome browser will now pass the request to Azure for authentication. Once you have that working you can prepare the Chromebooks to accept third party authentication by setting some additional device and user policies.

In Chrome Management - User settings search for "SAML". Enable SAML-based SSO and set the logon frequency.



In Chrome Management - Device settings search for "SAML" again and allow users to go directly to the SAML SSO page.



So long as the user and device are within the scope of the new policies, the Chromebook will now present the Microsoft Azure login page instead of the standard Google dialog, which I must admit looks a little weird when you first see it.



Remember, authentication takes place against the Azure directory, so the user account needs to exist in Azure with a matching account in Google to provide the user policies. Typing in a valid Google account without an Azure account won’t pass the test. In the same way, trying to gain access using a valid Azure account that does not match an active G Suite account gives a Google error.



Conclusion.
The whole process is pretty simple and allows Chromebooks to authenticate against Azure AD (MS Office 365) without requiring local servers and the complexity of ADFS.

However the big advantage of running federation services in the cloud is fault tolerance. Creating a highly available cluster to support a tier one component like authentication is not something schools should be doing anymore. Federation is just a cloud service like everything else; you might as well make use of it.

So can you do it the other way round and use the Google SAML federation services to answer for Azure logins? Surprisingly, yes.

It's now possible, under specific circumstances, to logon to Windows using a G Suite account.

https://blog.theserverlessschool.net/2019/07/using-google-to-authenticate-microsoft.html




Thursday 5 April 2018

How to drop Windows apps on a Chromebook.

Here’s something to consider over a coffee: What does a Microsoft Windows computer actually do?

Since the majority of school computers in the UK run Windows and a lot of time and money is spent keeping those devices working you would hope it’s something pretty important.

Perhaps it’s keeping all of your files and data safe?

That’s true, but security and file storage are actually managed by the backroom server, while file management is a standard feature of every mainstream operating system, just like printing and browsing the web. These are important functions but any desktop (or mobile) operating system could fulfil those roles. So what’s the answer?

The fact is, you need a Windows computer to run Windows programs.

All the other stuff like virus protection, security patching, backup and all the fancy user interface features and dialogs just allow the operating system to run Windows programs in a secure and predictable manner.

It’s been a long time since the Windows operating system itself added anything useful to the user experience (Minesweeper?). In fact the main challenge for the school administrator is to lock down the desktop to ensure users have as little contact as possible with the underlying feature set.

I suspect the ideal configuration for a Windows desktop in schools would be a template of shortcuts linked to the main productivity apps with some additional icons for logging off and rebooting.

Even with this minimalist approach the network admin still has to deploy and update each application while making sure the installation of one application doesn't break all the others, as well as patching the underlying OS.

Over the last decade there have been multiple attempts to fix this problem including terminal services and VDI but in many respects they only make the problem worse. You have to add additional server hardware, manage even more instances of the operating system and, after all that effort, the local desktop doesn't even get the chance to do the one thing it's good at - which is running Windows programs.

Let’s be honest, if you were starting from scratch you’d think of a better way of doing this.

So let’s start with a blank sheet and think about what that alternative might look like.


The local device would be lightweight, easily managed, simple to licence, fundamentally secure, self-maintaining and provide the base functions of file management, print and web browsing. The system should start quickly, present a security challenge and then simply act as a platform to launch the apps you need to get your work done. In many respects you are describing Google’s Chrome OS, the operating system that runs on a Chromebook.

Alternatively this imaginary device could be running Windows 10 in S mode, which is basically Windows 10 Pro with a simplified, locked-down configuration that also meets some of the criteria listed above.

Unfortunately the one feature that both operating systems lack is the ability to run the type of legacy Windows program that education uses on a day-to-day basis. Chrome OS is a non-starter for this purpose, and in order to run a local copy of SmartNotebook or a specialised STEM program on Windows 10 S you have to upgrade to the Windows 10 Pro edition, which takes us back to where we started.

On the face of it the problem seems unsolvable but there may be a solution on the horizon.

Purchase a Chromebook today and it can run any Android app from the Google Play Store. The Android Skype app knows nothing about Chrome OS and believes it’s running on a fully featured Android stack (v6.0 Marshmallow). In fact it’s a clever trick that makes use of technology commonly described as containers.

Containers allow the underlying OS to present itself in different ways to the processes it’s hosting. The idea is similar to machine virtualization, but in this case it’s not the hardware that’s virtualized but the operating system kernel, and for this reason it’s very lightweight and carries little additional overhead. This is how a Chromebook with only 4GB of memory can appear to run two operating systems at the same time. Another important feature of a container is that it ensures the isolation of the running processes, which means it’s very secure.
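
You can see the kernel trick in miniature with nothing more than Python. This is a toy illustration of namespace isolation, not how Chrome OS actually packages Android: it needs Linux, root privileges and Python 3.12 or later (for os.unshare).

    import os
    import socket

    pid = os.fork()
    if pid == 0:
        # Child: detach into a private UTS namespace. From here on, hostname
        # changes are visible only to this process - the kernel is presenting
        # itself differently to each process, which is all a container is.
        os.unshare(os.CLONE_NEWUTS)
        socket.sethostname("android-container")
        print("inside :", socket.gethostname())   # android-container
        os._exit(0)

    os.waitpid(pid, 0)
    print("outside:", socket.gethostname())       # the real hostname, untouched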

So if containers can represent an Android run-time environment, what else can they do?

Recent articles suggest that Chromebooks will soon have the ability to present Linux as a container, which means that schools could safely access a range of open-source software, rich in code development and media editing titles.

Which leads to the final point - could Chromebooks run a Windows application in a container?

The answer appears to be a qualified yes.
Recently, Droplet announced a deployment package that can do just this. The hosted application behaves exactly as it would if it were running on a Windows operating system - because it is.

All the technology runs locally and works without an active internet connection. It’s not an emulation or a graphical offload: the application runs natively and is responsive and fully featured.

But let’s bring the conversation back to reality. If you have a school whose curriculum is based heavily on MS Office and locally installed Windows applications, then your future is best served by the toolset provided by Microsoft.

But for schools moving towards SaaS, the technical direction is often blocked by a dozen or so Windows programs that are central to the curriculum. This is a problem that a container technology like Droplet could easily solve.

Eventually containers could be used to deploy applications across a range of OS types (including Windows) creating a true ‘run anywhere’ solution that doesn’t require a mass of backend server hardware.

There are still a number of problems to overcome, mainly around managing resources on a shared deployment, but the future of local applications lies with containers. It’s a proven technology that underpins most cloud services and it’s about to make a splash in the mainstream market.

Tuesday 27 March 2018

Synchronizing cloud directories - Part 2



This post is the second in a two-part series that examines the user provisioning capabilities of Microsoft Office 365 and Google G Suite in a serverless world.

Part 2 : Azure Active Directory (Microsoft Office 365) into Google.

In the last post we saw how it was possible to configure automatic user provisioning from Google G Suite into Microsoft Azure AD (Office 365) in a situation where Microsoft defers to the Google user directory for SSO.

What are the options if you want to use Microsoft as the master directory and automatically create users in Google G Suite?
This configuration would normally be supported by Google Cloud Directory Sync (GCDS), except that GCDS takes data from a local domain controller, not Azure AD.


The future lies with cloud-based directory services, but without local AD and tools like Google Cloud Directory Sync, how do you keep everything in step? Like Google G Suite, Microsoft Azure AD has a built-in service to help out with this - Azure User Provisioning.

Setting up Azure User Provisioning to Google G Suite.
The configuration settings for auto-provisioning into G Suite can be found in the Azure Portal under Enterprise Apps.

Note: If you are not familiar with the navigation in the Azure UI the easiest way of finding the settings is to simply search for “enterprise apps”.



Clicking on the New Application icon presents you with a comprehensive list of SaaS applications that come with pre-built templates allowing them to integrate with Azure directory services. Again the best way of finding the right option is to simply search for “google apps”.



Choosing this option opens up a blade on the right that allows you to enter the config details. There’s not much of interest here except the ability to change the name of the service, which might seem unnecessary but, as we’ll see later, marks a fundamental difference in how the Google and Azure services operate. For this demonstration I renamed the service Google Apps - XMAAcademy.org.
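
As an aside, the same step can be scripted rather than clicked. The sketch below uses the Microsoft Graph applicationTemplates API to instantiate the gallery app under the custom name; the gallery display name and the token scope (Application.ReadWrite.All) are my assumptions and worth checking against your tenant.

    import requests

    TOKEN = "..."   # Graph token with Application.ReadWrite.All
    headers = {"Authorization": f"Bearer {TOKEN}"}

    # Find the gallery template for the Google Apps / G Suite connector.
    r = requests.get("https://graph.microsoft.com/v1.0/applicationTemplates",
                     params={"$filter": "displayName eq 'Google Apps'"},
                     headers=headers)
    template_id = r.json()["value"][0]["id"]

    # Instantiate it under a custom name - the same rename as in the blade.
    requests.post(
        f"https://graph.microsoft.com/v1.0/applicationTemplates/{template_id}/instantiate",
        headers=headers, json={"displayName": "Google Apps - XMAAcademy.org"})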



After selecting ADD you are placed into a Quick Start wizard that we wish to avoid, so just search for ‘enterprise apps’ to get back to the main menu. Alternatively you can select All Services and find Enterprise Applications in the Security and Identity section. It might be a good idea to take this opportunity to set it as a favourite by highlighting the star. It saves all the searching.



Once back in the main dialog you now have a new application listed, which can be selected for more options.



The main menu allows you to configure single sign-on (SSO) with G Suite but it’s the Provisioning option we're interested in. Like G Suite you can set up provisioning without having SSO in place, but in Azure you can go straight to the option without having to step through a wizard, which is a bonus.



Opening the provisioning dialog gives you the choice of manual or automatic provisioning. Once automatic is selected you get access to all the configuration options.



The process requires a Google account with admin rights, so it’s best to create a user specifically for the role before you get to this point. Once the account details have been entered you can check authentication with the Test Connection button.
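
Creating that dedicated account can itself be done through the Directory API. A sketch, assuming a bearer token with the admin.directory.user scope; the address and password are placeholders, and the password should obviously be something long, random and rotated after setup.

    import requests

    TOKEN = "..."   # Google Admin SDK token (admin.directory.user scope)
    headers = {"Authorization": f"Bearer {TOKEN}"}
    base = "https://admin.googleapis.com/admin/directory/v1/users"

    # Create the dedicated provisioning account...
    requests.post(base, headers=headers, json={
        "primaryEmail": "azure-provisioning@xmaacademy.org",
        "name": {"givenName": "Azure", "familyName": "Provisioning"},
        "password": "change-me-to-a-long-random-secret",
    })

    # ...then grant it admin rights so Azure can create and suspend users.
    requests.post(f"{base}/azure-provisioning@xmaacademy.org/makeAdmin",
                  headers=headers, json={"status": True})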



Note: For whatever reason my setup lost connection status on a number of occasions and I was required to re-enter the account details and re-test the connection. So if your synchronisation stops for any reason, this might be the first thing to check.

The mappings section controls the relationship between the object attributes in Azure and Google. From the dialog below it’s clear that the provisioning service is capable of synchronizing both users and groups with separate controls for each.



In most cases the default mappings do not need to be adjusted, but the dialog has a few interesting features that are worth examining.



The first is that each action can be set independently. For instance, you can allow the process to update records but not create them. I’m not sure why this might be useful but the option is available.



The settings section allows you to turn provisioning on and off as well as restart the process, which forces a resync of all the objects in scope. The scope defines which user and group objects to synchronize to Google. G Suite uses the position of the user account in the OU tree and group membership to determine the provisioning scope, while Azure has a slightly different approach.

The scope can be set to All users or All Assigned Users, as shown in the dialog above. An assigned user is an account that has the Google SaaS app granted to it. Allocation is controlled from the root dialog by adding either groups or user accounts to the app.

Any group that is selected automatically places the group and the group members into scope.



The option All Users and Groups has the potential to place every account in the Azure tenancy in scope without assigning the Google app. At this point a second factor can be used to control the account set.



Back in the attribute mappings section you'll find an option to control the scope of both user and group accounts based on their object attributes (above). These are termed scoping filters. In this way it’s possible to create a rule that selects only user accounts with ‘Google’ in extensionAttribute1, for example.



Scoping filters can be used with both assigned and non-assigned users, and although they don't reference the full set of object attributes they include a comprehensive set of logic functions, including REGEX operators. If multiple scoping clauses are present, they are evaluated using AND logic. Between app assignments, groups and scoping filters, you have a fair amount of control over the provisioning process.
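
The portal doesn't expose the filters as code, so to make the logic concrete here's a small local model of how clauses combine: every clause in a rule must match (AND), and a REGEX-style operator is on the menu. The operator names are illustrative, not the exact strings Azure uses.

    import re

    def in_scope(user: dict, clauses: list) -> bool:
        """Each clause is (attribute, operator, value); all must hold (AND)."""
        for attr, op, value in clauses:
            actual = user.get(attr, "")
            if op == "EQUALS" and actual != value:
                return False
            if op == "REGEX" and not re.fullmatch(value, actual):
                return False
        return True

    rule = [("extensionAttribute1", "EQUALS", "Google"),
            ("userPrincipalName", "REGEX", r".*@xmaacademy\.org")]

    print(in_scope({"extensionAttribute1": "Google",
                    "userPrincipalName": "jsmith@xmaacademy.org"}, rule))  # True
    print(in_scope({"extensionAttribute1": "",
                    "userPrincipalName": "jsmith@xmaacademy.org"}, rule))  # False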

Going live with User Provisioning.
Starting provisioning is as simple as changing the status in the master dialog from OFF to ON. The dialog also gives you the ability to force a resync, as well as a summary section.
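
The same switches exist in the Microsoft Graph synchronization API, so the ON toggle and the resync button can be scripted. A sketch, assuming the servicePrincipal object id of the Google app and a token with Synchronization.ReadWrite.All; this API spent a long time in the beta endpoint, so check which version your tenant supports.

    import requests

    TOKEN = "..."; SP_ID = "..."   # object id of the Google enterprise app
    headers = {"Authorization": f"Bearer {TOKEN}"}
    base = (f"https://graph.microsoft.com/v1.0/servicePrincipals/{SP_ID}"
            "/synchronization/jobs")

    job_id = requests.get(base, headers=headers).json()["value"][0]["id"]

    requests.post(f"{base}/{job_id}/start", headers=headers)    # OFF -> ON
    requests.post(f"{base}/{job_id}/restart", headers=headers,  # force full resync
                  json={"criteria": {"resetScope": "Full"}})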



The full log can be found back in the main app menu under the Audit logs icon.



The audit log lists all events for the preceding seven days with a search option, which is extremely useful when troubleshooting missing G Suite accounts or incomplete group memberships.
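
If clicking through the portal gets tedious, the provisioning events can also be pulled with Graph. A sketch, assuming AuditLog.Read.All rights; the field names come from the provisioningObjectSummary resource and are worth verifying against the current schema.

    import requests

    TOKEN = "..."   # Graph token with AuditLog.Read.All
    r = requests.get("https://graph.microsoft.com/v1.0/auditLogs/provisioning",
                     headers={"Authorization": f"Bearer {TOKEN}"})

    for event in r.json().get("value", []):
        print(event.get("activityDateTime"), event.get("provisioningAction"))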

So what does this look like from the Google G Suite viewpoint?

All user accounts are created in the root of the G Suite organisation. There's currently no way to provision an account directly into a sub-OU in order to apply a specific policy. At the moment I can’t see any additional controls for deprovisioning users. By default, if a user moves out of scope in Azure the Google account is automatically suspended, which is probably the required action anyway.
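
One workaround, sketched below, is a small follow-up job that sweeps newly provisioned accounts out of the root and into an OU of your choosing via the Directory API. The OU path is a placeholder and pagination is omitted for brevity.

    import requests

    TOKEN = "..."   # Google Admin SDK token (admin.directory.user scope)
    headers = {"Authorization": f"Bearer {TOKEN}"}
    base = "https://admin.googleapis.com/admin/directory/v1"

    users = requests.get(f"{base}/users",
                         params={"domain": "xmaacademy.org", "maxResults": 500},
                         headers=headers).json().get("users", [])

    for u in users:
        if u.get("orgUnitPath") == "/":   # still sitting in the root OU
            requests.patch(f"{base}/users/{u['primaryEmail']}", headers=headers,
                           json={"orgUnitPath": "/ProvisionedFromAzure"})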



As you might expect, the Google audit logs show user events being actioned by the G Suite provisioning account from a remote IP.
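
Those same events are available programmatically through the Reports API if you'd rather not trawl the console. A sketch, assuming a token with the admin.reports.audit.readonly scope.

    import requests

    TOKEN = "..."   # Google token with admin.reports.audit.readonly scope
    r = requests.get(
        "https://admin.googleapis.com/admin/reports/v1/activity"
        "/users/all/applications/admin",
        headers={"Authorization": f"Bearer {TOKEN}"})

    for item in r.json().get("items", []):
        actor = item["actor"].get("email")
        when = item["id"]["time"]
        events = [e.get("name") for e in item.get("events", [])]
        print(when, actor, events)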


Deploying and Using Azure Provisioning.
Closer inspection reveals a subtle difference in the way G Suite and Azure provisioning work.

The Azure sync references local state to determine which accounts to provision or suspend. This means that if you assign four accounts in the Azure domain xmaacademy.org into a Google domain that already contains 200 active accounts in the same domain, it will just create those four accounts without suspending the original 200, even though they are out of scope. Azure only manages Azure accounts that have been placed into, or moved out of, scope.

Google works the other way round. It assumes that since you are managing xmaacademy.org, all the Azure accounts in the same domain will be managed. In this respect it makes no distinction between an account that was in scope but was subsequently removed (and therefore should be suspended) and an account that was never in scope. Both account types get suspended.

Unlike GCDS, the Google sync process doesn’t support exclusion rules for the target domain. For new implementations this is not really an issue, but you have to be careful joining two directories that are already populated. It’s probably a good idea to make sure Google accounts already exist and are in scope for all Azure users unless you want to handle a mass suspension when you hit the button for the first time.
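
Before enabling sync against a pre-populated domain it's worth diffing the two directories. A sketch, assuming read tokens for both sides; pagination is omitted, so for more than a page of users you'd need to follow the nextLink/pageToken values.

    import requests

    GRAPH_TOKEN, GOOGLE_TOKEN = "...", "..."

    azure = requests.get("https://graph.microsoft.com/v1.0/users",
                         params={"$select": "userPrincipalName", "$top": 999},
                         headers={"Authorization": f"Bearer {GRAPH_TOKEN}"}
                         ).json()["value"]
    google = requests.get("https://admin.googleapis.com/admin/directory/v1/users",
                          params={"domain": "xmaacademy.org", "maxResults": 500},
                          headers={"Authorization": f"Bearer {GOOGLE_TOKEN}"}
                          ).json().get("users", [])

    azure_ids  = {u["userPrincipalName"].lower() for u in azure}
    google_ids = {u["primaryEmail"].lower() for u in google}

    print("In Google but not Azure (suspension risk):", google_ids - azure_ids)
    print("In Azure but not Google (will be created):", azure_ids - google_ids)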


Controlling Multiple Google Organisations for a Single Azure Tenancy.
Another interesting feature of Azure sync is that you can create more than one instance of the provisioning process.

The Google process is strictly one-to-one, in that one Google organisation can only sync to one Azure tenancy. You could separately manage sub-domains within this tenancy but each Google organisation can only push data into one Azure AD.



In contrast, Azure AD allows you to create multiple instances of the Google provisioning process, each with its own scoping rules and authentication details that could reference different Google organizations. This feature could prove useful for districts and educational trusts that need to maintain multiple Google organizations from a single cloud directory. It’s technically possible to do this with Google Cloud Directory Sync but it’s a risky business and doesn’t really scale.


Acknowledgements
Thanks to Tom Cox at St Illtyd's Catholic High School in Cardiff, Wales for taking the plunge into Azure User Provisioning and helping me work through some of the examples described above.

Other Posts.
Auto-provisioning is normally the partner to SSO between Microsoft Azure and Google G Suite. If you are planning to use Chromebooks as a super-simple platform for Microsoft Office 365 and would like your devices to authenticate against your Azure accounts, the setup is described here.