moving IT to the cloud with services, not servers

Tuesday, 18 September 2018

Managing G Suite & Office365 in a MAT

A cloud-based approach to Single Sign-On and user provisioning.


An IT setup that combines both Microsoft Office365 and Google G Suite is more common than you might expect, especially in UK schools. Managing both platforms is fairly straightforward for a single site but things start to get interesting when you try to incorporate multiple schools, all feeding back into a central organisation - a situation commonly found within regional districts and trusts.

In a series of posts we’ll examine one approach to management based on a recent engagement with a Multi-Academy Trust (MAT) that planned to incorporate Google G Suite across a number of schools currently using Microsoft Office365. The goal was not to replace Microsoft Office365 but to provide each school with open access to both services. The trust was looking to manage both Office365 and G Suite as a single tenancy/organisation, with each school existing as an independent policy unit (sub-organisation).

As a prime objective the trust intended to employ Azure AD as the main directory and authentication source across all the sites. User management would remain in the on-site AD, with each action driving the automatic creation and updating of accounts in both Azure AD and G Suite. The plan also included the adoption of Chromebooks as the preferred student device for both Office365 and G Suite, using Azure AD as the authentication source.

An earlier proposal involved synchronising data from each remote site. In this model each school would run Azure Active Directory (AD) Connect to provision user accounts into Office 365 (Azure AD) while the complementary service, Google Cloud Directory Sync (GCDS), maintained the accounts in G Suite. Each site would also run Active Directory Federation Services (ADFS) to authenticate users against the local AD database.

During the planning stage it was clear that this approach had a number of problems.

  • The rollout involved over 40 locations, some of which were small primary schools with limited space and resources. Some sites required additional investment in on-site hardware as well as software upgrades, which only added to the technical complexity and the ongoing support burden. The timescales for the deployment did not allow for an upgrade program.
  • A Google organisation can only direct SSO towards a single external source. Therefore the plan to have a unified Google organisation talking directly to multiple on-site ADFS sources couldn’t be supported.
  • Google Cloud Directory Sync (GCDS) was never designed to work in a multi-site setup. Without a complex set of exclusion rules there was a real risk that accounts from one school could be suspended by a second copy of GCDS running at another site. In a smaller deployment this might be manageable but the trust required a solution that could be scaled up without hitting a configuration bottleneck. Although running multiple G Suite organisations was one option, this didn’t fit with the trust’s overall strategy.

After a review period it was decided to trial a second approach that provisioned and authenticated users directly from Azure AD using the techniques described in an earlier post. Although this had proved successful for a single school there was no published documentation that described a more complex deployment involving multiple sites and domains. While this presented a significant risk the general approach had a number of benefits for the trust.

In order to support Office365 each school synchronised to the central Office365 tenancy using Azure Active Directory (AD) Connect. Therefore a centralised directory that Google could query was already in place - Azure AD. However some of the larger schools maintained a local ADFS service while others simply synchronised local passwords into Azure AD. It was hoped that by pointing the G Suite organisation at Azure AD as the single target it would act as a broker, authenticating cloud-only and synchronised-password accounts directly, or deferring to on-site ADFS as required.

Since Azure AD acts as an integrated directory for the entire trust it made sense to try and provision accounts into G Suite directly from this source rather than from each local AD. In this way configuration was centrally controlled and did not require on-site installations of GCDS. In fact the whole solution was zero-touch on the remote sites, which enabled a much faster rollout schedule.


So can this approach be made to work in a MAT? Yes it can.

The trust now has G Suite authentication running centrally from the Azure AD database while maintaining day-to-day user administration through local AD at each site. Chromebooks present an Azure AD logon challenge on startup, directed to the appropriate authentication source, either Azure or on-site ADFS. There’s no requirement for GCDS as user accounts are created and suspended in G Suite using the auto-provisioning service provided by MS Azure.

Essentially the solution proved to be a scaled up version of the technique outlined in the earlier post with a few tweaks. Certainly there were some important lessons learnt during the process and we’ll be outlining these in more detail in a future post.

If you’re interested in attempting a similar program in your organisation please drop me a line through the contact form and I’d be happy to link you up with the expertise.

Tuesday, 28 August 2018

The problem of local file shares in a SaaS school.

For schools moving to a SaaS based model, the requirement for local files is a difficult problem to solve.

This is particularly true of established sites that have curriculum requirements and admin workflows that depend heavily on traditional Windows file shares. Copying the data to a cloud repository sounds like a good idea, but when the Year 10 Media class try to open their video projects and 20 minutes later everybody is still looking at a spinning wheel, you’d better expect some negative feedback.

Whether you work in the cloud or on premises the golden rule when working with large data sets is always the same: both the data and the application have to be in the same place.

If the curriculum demands that the application is Adobe Suite or a video editing package hosted on a high end workstation then the data has to be delivered from a local store to ensure an acceptable level of performance. However after installing the file server, the failover host and the multi-tier tape backup system you may find that your bold serverless initiative is now in tatters.

The answer is to have the file shares stored in the cloud but accessed from on-site, which doesn’t sound possible - but it is.

One of the many storage options offered by the Microsoft cloud service is Azure Files. It’s a simple concept that allows you to define an area of storage and then advertise it as a standard Windows file share. You don’t have to worry about servers or redundancy, all that is handled by the fine folks at Microsoft. Users can consume and interact with the share in exactly the same way as if it was hosted locally, which solves the first part of the puzzle but not the second.

In response to this Microsoft have announced the general availability of a new service called Azure File Sync (AFS), which is a caching engine for Azure Files. The technical details are listed here but in simple terms an agent on a local server synchronizes the contents of an Azure Files store to local storage and then keeps the two stores in sync. Files now open with the speed of local storage but are hosted in the cloud.

That’s pretty clever but the advantages don’t stop there. Using another feature termed ‘cloud tiering’ it’s possible to synchronize only the files that are currently active while keeping all the old stuff in the cloud. To any user browsing the share it would appear that all the files are stored locally. Users have access to all the file properties and permissions settings just as before; data is retrieved from Azure Files only when the file is opened. Access the same file a second time and the data is now considered ‘hot’ and will be maintained on the local drive, with changes written back to Azure as a background process.
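The hot/cold behaviour described above is essentially an eviction cache. As a toy model only (Microsoft's real tiering works from volume free-space and last-access policies, not the simple LRU sketch below), the idea looks something like this:

```python
# Toy model of cloud tiering: every file lives in Azure Files; opening a
# cold file recalls it, and recently used files stay on the local drive
# until the cache is full. Illustrative only, not the AFS algorithm.
from collections import OrderedDict

class TieredShare:
    def __init__(self, local_capacity):
        self.local_capacity = local_capacity   # bytes available on the cache drive
        self.local = OrderedDict()             # filename -> size, in LRU order
        self.cloud = {}                        # authoritative copy of everything

    def write(self, name, size):
        self.cloud[name] = size                # the cloud store holds all data
        self._recall(name, size)               # fresh writes are hot

    def open(self, name):
        size = self.cloud[name]
        if name in self.local:
            self.local.move_to_end(name)       # already hot: fast local read
            return "local"
        self._recall(name, size)               # cold: data pulled from Azure
        return "recalled"

    def _recall(self, name, size):
        while sum(self.local.values()) + size > self.local_capacity and self.local:
            self.local.popitem(last=False)     # evict the least recently used file
        self.local[name] = size

share = TieredShare(local_capacity=100)
for n in range(5):
    share.write(f"video{n}.mp4", 40)           # only the newest files fit locally
print(share.open("video4.mp4"))                # -> local
print(share.open("video0.mp4"))                # -> recalled
```

To the user both opens look identical; the only difference is how long the first read takes.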

The Year 10 media class issue is solved. You have the fast response of  a local filestore while still maintaining the advantages of cloud storage.

Let’s run through a few of the other advantages you get with Azure File Sync.

If implemented properly the local server is reduced to the status of a cache appliance. By only holding copies of active files it can host a multi-TB share while only having an 800 GB data drive, and none of the local data requires backing up. Azure File Sync comes with a useful facility that enables fast recovery of the file system onto any suitable hardware. By turning aging file servers into cache appliances their useful life could be significantly extended.

The Azure file store can act as a central repository for a group. By consolidating local file shares into a central store and then synchronizing data out to edge sites you get the benefits of a local share but without the problems of maintaining an on-site data silo. In this model all backups are done using Azure Backup services (no hardware required) and users have the ability to recover data directly using the facility built into the Windows UI.

With all this sweetness there has to be a little sour.

There are a number of technical limitations that still need ironing out. The service has a 4TB limit on a single Azure Files storage group but this is expected to be increased to 100TB in the near future.

Azure File Sync uses a simple conflict-resolution strategy as it doesn’t pretend to be a fully featured collaboration platform. If a file is updated from two server endpoints at the same time the most recent change will be recorded in the original file. The older update will result in a new file with the name of the source server and a conflict number appended to the original file name. It’s up to the user to manually incorporate the changes if required.
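As a rough sketch of that naming scheme (the exact format may differ between AFS versions, so treat this as illustrative rather than definitive):

```python
# Illustrates the conflict naming described above: the losing update keeps
# the original name with the source server appended, plus a running number
# if that name is already taken. Not AFS's exact implementation.
from pathlib import PurePath

def conflict_name(filename, server, existing_names):
    stem, suffix = PurePath(filename).stem, PurePath(filename).suffix
    candidate = f"{stem}-{server}{suffix}"
    n = 1
    while candidate in existing_names:         # avoid clobbering earlier conflicts
        n += 1
        candidate = f"{stem}-{server}-{n}{suffix}"
    return candidate

print(conflict_name("Budget.xlsx", "SCHOOL-FS1", set()))
# -> Budget-SCHOOL-FS1.xlsx
print(conflict_name("Budget.xlsx", "SCHOOL-FS1", {"Budget-SCHOOL-FS1.xlsx"}))
# -> Budget-SCHOOL-FS1-2.xlsx
```

The user then has both versions side by side in the share and can merge them by hand.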

However the biggest hurdle to general adoption within education might be the question: “How much does it cost?”

Subscription charges based on usage apply to the central store and prices vary between each Azure region. At the time of writing a 1TB store using Local Redundancy in Europe West will cost around £46/month (£553 pa). Standard data egress and transaction rates apply to the central store and any additional capacity and transactions used on the share snapshots. There’s also a cost to implement Azure File Sync which amounts to £5.50/month (£66 pa) per server end-point and a charge for any additional Azure backup services you may choose to employ.
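A quick back-of-envelope estimate using the rates above (West Europe, Local Redundancy, prices at the time of writing; egress and transaction charges are left as a rough guess because they depend entirely on usage):

```python
# Rough Azure File Sync running cost in GBP per month, built from the
# figures quoted in this post - check current Azure pricing before use.
def afs_monthly_cost(storage_tb, server_endpoints, egress_estimate=0.0):
    storage = storage_tb * 46.0        # ~GBP 46 per TB per month for the store
    sync = server_endpoints * 5.50     # GBP 5.50 per server endpoint per month
    return storage + sync + egress_estimate

monthly = afs_monthly_cost(storage_tb=1, server_endpoints=1)
print(f"GBP {monthly:.2f}/month, roughly GBP {monthly * 12:.0f} pa")
```

For one school with a single cache server and a 1TB store that comes to about £52/month before egress, transactions, snapshots and any Azure Backup charges, which is how the "upwards of £553 pa" figure below arises.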

So the answer to the question is: for a 1TB store AFS will cost upwards of £553 pa, but the actual cost will depend on a whole bunch of factors which you can’t estimate until you start using it.

Let’s assume the cost is £700 pa. I think that’s a pretty good deal for a fully managed distributed file system but the problem is:  £700 pa compared with what?

How many schools understand the cost of storing data locally? To the casual observer it might even appear free. After all, both solutions still require local hardware and Microsoft’s generous EDU licencing agreements make standing up another file server a cheap option. So the argument could be raised: why put your data in the cloud and then commit to an open-ended charging scheme just to access it?

From a personal point of view I can think of a dozen technical reasons why it would be a good idea to implement AFS but they could all be overridden by fears and concerns over pricing and lock-in. Microsoft’s response to all this is simple: reduce the subsidy that education receives for local licencing until Azure looks like an attractive option. It’s a long-term strategy that will win out in the end.

In the short term it would be useful if education could find a way to place a real cost on local data storage, particularly in the brave new world of GDPR, so schools can properly evaluate SaaS offerings such as Azure File Sync. If not they could be missing out on a good deal.

Sunday, 22 July 2018

Login as A but send email as B

Operating your G Suite inbox on a separate domain to your logon address.

In a simple world a school would create a Google organization using the internet domain employed by the external website and public email and that would be the end of it.

Unfortunately we don’t live in a simple world and the direct relationship between the email address and the user logon is often affected by a change in circumstance.

For instance, the school might be planning to consolidate under a standardised Trust or District logon as part of a rebranding exercise. It’s also possible that the organisation was originally created using a domain that was less than ideal. After all, east-walthamstow-college-of-arts.co.uk is a great descriptor but a tedious logon address.

Whatever the case, most organisations are reluctant to give up an email address that is printed on stationery, external signage or embedded in software. So for a variety of reasons a school may require a different logon address to the one that routes mail.

In this example our fictional college wants to move to ewca.co.uk as the logon identifier for the long suffering students and staff but maintain east-walthamstow-college-of-arts.co.uk as the primary email address on all G Suite accounts. So how can this be achieved?


The first thing to understand is that in order to use ewca.co.uk for any purpose it must be registered within G Suite as a secondary domain. This process is fairly straightforward and is well documented by Google so there’s little point repeating it here.

Once the secondary domain is verified the G Suite administrator has the ability to upgrade each user’s primary address from student@east-walthamstow-college-of-arts.co.uk to student@ewca.co.uk. It’s a simple select-and-save action; Google takes care of all the backend housekeeping, with a few advisory notes that are listed later.

As soon as the account is updated the user can logon as student@ewca.co.uk  while maintaining all the features and data of the original account. As a bonus the old student@east-walthamstow-college-of-arts.co.uk address is pinned to the account as an alias which allows the user to continue to receive mail.

Job done. Well not quite.

By default the primary (logon) address is the one that GMail uses when it sends mail. Therefore although the user can receive mail on student@east-walthamstow-college-of-arts.co.uk they are sending on student@ewca.co.uk, which is not quite what we want.

Fortunately GMail has a way of fixing that.

In the 'Settings' dialog of the user's GMail, navigate to the 'Accounts' tab where you'll find the 'Send mail as' option.


Selecting 'Add another email address' allows the user to add the new alias as a send address, after which it can be upgraded to the new default (below).


Once that’s been done the account will send mail on the student@east-walthamstow-college-of-arts.co.uk address by default. This address will also be used to reply to mail, regardless of the address the original message was sent to.

This seems pretty straightforward but there’s an obvious problem. This is a user driven process, none of the actions can be controlled or managed from the admin console. Even allowing for a well informed student and staff body, publishing a crib-sheet for each user is going to lead to issues.

Mistyping the alias or failure to do anything at all will result in mail moving in unexpected directions. A manual process might be manageable for 100 users but not 1000+.

Faced with a problem like this the solution is always the same…  send for GAM.

GAM is an open source project that exposes the Google APIs as a simple command line interface that you can use to build batch files. It’s an essential component of every G Suite admin’s toolkit.

Fortunately GAM has a command for this, as it has for most functions.

gam user student sendas student@east-walthamstow-college-of-arts.co.uk "Student" replyto student@east-walthamstow-college-of-arts.co.uk default treatasalias true

It should be pretty clear how each of the elements in the command relates to the options in the user dialog shown above. Of course GAM can also help out with the first part of the project, updating the user’s primary logon address using a command similar to the one shown below.

gam update user student username student@ewca.co.uk

Therefore the process is reduced to running two GAM commands on each user account.
  • Change the user’s primary logon address to the new secondary domain.
  • Update the default send address to the email alias created by the first command.

Note: GAM provides methods to draw data from CSV files that will update hundreds of users in a single command.
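For example, a short script can generate the full batch of GAM commands from a user list. The CSV layout here (old_email, new_email, display_name) is hypothetical - adjust the column names to match your own export:

```python
# Emit the two GAM commands from this post for every user in a CSV.
import csv, io

def gam_commands(csv_text):
    lines = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        old, new, name = row["old_email"], row["new_email"], row["display_name"]
        # 1. Move the primary logon address onto the new secondary domain.
        lines.append(f"gam update user {old} username {new}")
        # 2. Make the old address (now an alias) the default send-as address.
        lines.append(f'gam user {new} sendas {old} "{name}" '
                     f"replyto {old} default treatasalias true")
    return lines

sample = ("old_email,new_email,display_name\n"
          "student@east-walthamstow-college-of-arts.co.uk,student@ewca.co.uk,Student\n")
for command in gam_commands(sample):
    print(command)
```

GAM can also iterate a CSV directly, but writing the commands to a plain batch file like this makes it easy to review them before anything is run against live accounts.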

For a large school it’s not recommended to do this in the middle of the day. The process of renaming can take up to 10 minutes to propagate across all services and you must allow at least 24 hours for the directory to catch up. Like all major updates it should be handled using proper change control mechanisms and tested on a subset of users.

A couple of points worth noting.

The GMail user still has the ability to update the send address through the settings dialog. GAM will make the change but it’s not locked down in any way. The more curious user can, and probably will, mess with this at some point.

Lastly, larger organisations may be using utilities such as Google Cloud Directory Sync to maintain G Suite user accounts, which adds an additional degree of complexity. In this case the update to the primary user logon needs to be matched with a change in the synchronization rules, otherwise you run the risk of creating duplicate accounts. Again it’s best to test on a group of test users first.

Happy GAM’ing.

Saturday, 9 June 2018

Is your school’s IT about to fall off a cliff?

This might be a UK phenomenon but about this time each year school IT admins start to give some thought to the summer IT upgrade.

This task is often approached with a degree of fear and trepidation and with good reason. In just under two decades IT has moved from being an interesting novelty to a core function for both administration and teaching. I’m not sure when the decision was made to compel schools to maintain an on-premise datacenter with a support network that wouldn’t shame a medium size business, but that's where we find ourselves today.

Finding the right skill set to support this complex on-premise setup is difficult. The level of technology found in schools is similar to what you would find in the commercial sector and includes server virtualization, shared storage, directory systems, wireless, tape backup and client management. Therefore education competes in the same salary space as businesses that are better placed to offer attractive deals to qualified support staff. A common response is to share resource or enter into a support contract with an external party but it’s often the case that this ongoing cost wasn’t allowed for when the shiny new boxes were first installed.

The other problem is that the “shiny new boxes” are no longer shiny or new. In fact they are now over five years old and a replacement program is on the agenda. Even if they’re still providing a reliable service they are likely to be out of support and warranty. If they go wrong, who are you going to call? Ghostbusters are not going to help you here. Extended support contracts can be an option but the costs are often in the same ballpark as a replacement program.


The problem is often compounded by the fact that the equipment was purchased as part of the same initial investment program and therefore all the hardware has come to the end of support at the same time. Servers are one problem but you may have networked storage, tape backup systems and networking that are all facing the same issue.

Let's face it, your school's IT system is about to fall off a cliff.


There’s no simple answer to this problem. The short term solution might be to find the money to paper over the cracks, but in another five years’ time you’ll have the same problem - nothing has changed.

The sustainable approach is to make use of Software as a Service (SaaS) and start to decommission local server and storage hardware. This is no longer a complex technical problem. There are plenty of toolsets and partners who can help you out and the task is probably no more onerous than replacing core systems.

The major sticking point is the fact that the school will have to change the way it uses IT, and this will impact teaching.

Software that has been part of the curriculum for the last 15 years will have to be reassessed and alternatives found. Familiar desktop systems may have to be upgraded to modern mobile-friendly versions. Workflows and lesson plans based on email attachments, local storage and printed worksheets will need to be re-evaluated in light of a system that encourages, and in some respects requires, collaborative practices.

This is the hard bit, but can it be any harder than emptying the coffers to pay for an on-premise system whose only promise is to deliver the same service as five years ago but with added cost and complexity?

Friday, 25 May 2018

Off Hours Device Profiles - A First Look.

The rumour that Google planned to support Off Hours device profiles for Chromebooks has been around for a while. An earlier post proposed how they might be used to support a Bring Your Own Chromebook (BYOC) policy for schools.

The basic idea was that a student could purchase a Chromebook and then use it in school operating under a security profile, only to revert back to a standard consumer device after 4 PM.  The benefits seemed to be obvious but the details were a bit vague because nine months ago scheduled device profiles didn’t exist - but they do now.

This post gives a brief description of how they work. As with most things Google it’s a simple idea that's been well implemented and the implications for future 1:1 programs could be significant.

The setting is found in the Chromebook device area of the admin console in a new Off Hours policy.



The information required is pretty straightforward, the time zone for the schedule followed by a series of ON - OFF times.

In this example the policy is set to operate between 7:00 AM and 12:00 PM every Friday.

The dialog informs you that once set “some sign-in restrictions won’t apply”. The wording of the policy is a bit vague and the effect is not immediately obvious because, after saving the policy and rebooting, the Chromebook shows no change at all. Even though the policy should apply according to the time frame the user is still limited to organizational accounts only.

The change is only apparent after the organizational user logs on and then signs out, which is a really nice feature. Effectively the Chromebook will only relax the security profile after being authenticated by an organizational account. Therefore if the Chromebook is left on the bus it’s not going to allow Guest Mode after 4 PM just because the policy applies at that time.
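Putting the two conditions together - inside the scheduled window AND after an organizational sign-in - the decision logic can be sketched like this (illustrative only, not Google's implementation; the window mirrors the 7:00 AM to 12:00 PM Friday example above):

```python
# Toy model of the Off Hours gate: the relaxed profile applies only when
# the schedule window is active and an organizational account has already
# signed in on the device.
from datetime import datetime

OFF_HOURS = {4: (7, 12)}   # weekday (Monday=0) -> (start hour, end hour)

def guest_mode_allowed(now, org_user_has_signed_in):
    window = OFF_HOURS.get(now.weekday())
    in_window = window is not None and window[0] <= now.hour < window[1]
    return in_window and org_user_has_signed_in

# Friday 9 AM but no organizational sign-in yet: the device stays locked down.
print(guest_mode_allowed(datetime(2018, 9, 14, 9), False))   # -> False
# Friday 9 AM after an organizational sign-in/sign-out: guest mode opens up.
print(guest_mode_allowed(datetime(2018, 9, 14, 9), True))    # -> True
```

The second condition is what stops the lost-on-the-bus Chromebook from unlocking itself the moment the schedule kicks in.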

However after the organizational user signs in and out, it’s all change.

The organizational account requirement is lifted, guest mode is enabled and the user can log in with a standard consumer account.


Once logged in it's clear that the user session is set by a timer that's controlled by the Off Hours policy. In this case after 1 hour 50 minutes the session will terminate regardless of any internet access. Once the time is up it's goodbye Facebook and back to school for you !

So there you are, Off Hours device profiles. A very simple idea that provides a whole new way of putting Chromebooks into schools and no other client technology can do this.

Game changer is an overused term but in this case I'm not so sure.


Note: tested on a Chromebook running Beta V67.0.3396.57.

Tuesday, 8 May 2018

Why app licencing is a dead dog for Chromebooks.

Managing Android apps on Chromebooks creates a number of challenges for schools, one of which is licencing.

Software licencing is not really a Chromebook, or even an app issue but a general problem that’s been around for as long as IT admins have been installing packages onto local devices.

Over the years there have been numerous attempts to manage the process including dongles (remember DESkey), key servers, USB devices, block and site codes, up to and including a trust relationship with the customer backed up with the threat of a visit from FAST.

While you have a limited number of machines that can be matched with an equally limited number of software titles the situation can be managed without too much stress. The problems start to emerge when you scale up to hundreds of devices, and spiral out of control when you introduce customised application sets and shared device deployments. Unfortunately the majority of Chromebook/Android deployments fall squarely into the last category - many users, on shared devices, all requiring a custom app set that’s linked to the curriculum.

The current model for software deployment onto mobile devices uses the store concept (Google Play Store, Apple App Store and Microsoft Store) and while this is well suited to an individual with a couple of devices, a personal credit card and a desperate need to play Candy Crush, it quickly falls apart when you apply it to large deployments of shared devices.

Apple have gone through a number of iterations to solve the bulk licencing problem while Google tried to adapt the store model with Google Play for Education, only to run into the same predictable set of issues - it was inflexible and it didn’t scale.

To make things worse Google have another problem to solve before they can make Android app licensing workable. The current deployment framework is based on a simple organisational tree, which is ideal for general policy control but entirely unsuitable for paid applications. For this to work you need to deploy apps against user groups and currently that’s not possible.

So where does that leave us?



I think we have to accept that licencing at the point of install is a dead dog for app deployments onto shared Chromebooks (or any device).  We’ve tried it and it doesn't work - give it up.

I don't understand why local licencing is a problem that we are still trying to fix. It’s like an emotional attachment that we can’t quite shake off, the notion that the value of the app lies in the space it consumes on local storage rather than the service it provides. Perhaps it’s something deeply embedded in the IT psyche from installing applications on Microsoft Windows for the last 20 years.

It’s not 1998 anymore. The real value of the local app has migrated to backend services such as cloud based directories, remote storage, analytical dashboards and advanced APIs that expose a whole new range of functions that leave local computing in the Dark Ages. Surely a modern app is just a gateway onto these processes, a convenient way to consume the service through the native UI rather than the core proposition.

So why not make the installation free and charge for the value item - the backend service.

For example;
  • You can install the graphics app but to save the output to cloud storage and to gain access to the teacher dashboard you need to licence the Google account. 
  • You can install the language app for free but integration with Classroom and the feature set that provides student metrics needs a subscription.
  • The science app needs a user licence to enable results to be shared with your project team and to work collaboratively.
All the licencing is handled by a supporting SaaS platform, hooking directly into the cloud directory. Providing a backend service adds an overhead but this shouldn’t be too much of an obstacle for a business that plans to sell tens of thousands of units into education. Doesn’t everyone want to build a ‘platform’ rather than just an app?
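A minimal sketch of that directory lookup - the app installs for anyone, and each paid feature is gated by a per-account check against the vendor's backend. All names and addresses here are hypothetical:

```python
# Hypothetical licence registry: feature -> set of licensed accounts.
# In a real deployment this lookup would be a call to the vendor's SaaS
# platform, keyed against the school's cloud directory.
LICENSED = {
    "teacher-dashboard": {"staff@school.example"},
    "cloud-save":        {"staff@school.example", "student@school.example"},
}

def feature_enabled(user, feature):
    """The install is free; paid features are enabled per account."""
    return user in LICENSED.get(feature, set())

print(feature_enabled("student@school.example", "cloud-save"))         # -> True
print(feature_enabled("student@school.example", "teacher-dashboard"))  # -> False
```

Because the check is against the user account rather than the device, the same logic works unchanged on shared Chromebooks and BYOD hardware.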

What are the advantages?
  • Deploy the app using a simple whitelist. Users can install and uninstall as required. 
  • You don’t need groups and the Google OU model works just fine. A quick check against the directory will confirm user access. If not, you just get the try-before-you-buy option.
  • Licences are based on active user accounts rather than device installs and can be automated against the directory service.
  • Schools gain analytical data from the app instead of a simple desktop process. Surely this is where the true value lies for education.
  • The model works fine in a BYOD rollout while local licencing just creates a heap of issues.

Local licensing for shared devices forces the store model to support a scenario for which it’s completely unsuited. The result makes deployment far more complicated than it needs to be and only succeeds in placing a barrier between the app and the educational consumer. SaaS has been using the “freemium” model for years and it’s been very successful, so why should local apps be any different?

I’m concerned that a lot of time and effort is going to be spent trying to make Play for Education V2 work for schools, only to see it fail for the same reasons as the initial attempt, or simply become irrelevant. Without having any data on this I suspect the most popular Android apps installed on Chromebooks are the productivity tools from both Google and Microsoft Office 365 - which use exactly this model.

With everybody on the information super-highway speeding towards on-demand access and subscription billing I have a feeling that the pay-to-install model might just end up as roadkill.

Friday, 27 April 2018

SSO from Chromebooks to Azure AD.

Following on from a recent post showing how to auto-provision users from Azure to Google G Suite it seems like a good idea to complete the picture by describing Single Sign-On (SSO) from Google to Azure AD. The idea is you can pick up a Chromebook and be presented with a Microsoft dialog rather than the standard Google login challenge.

Like the user provisioning example this procedure does not require local federation (ADFS) but relies on the equivalent cloud based service bundled with the Azure AD Free tier. Under this arrangement users can get SSO access for up to ten apps which is pretty generous as the whole of Google G Suite is considered to be a single app.

For this example we’ll use the xmastaff@xmaacademy.org account that the Azure provisioning service created for us in the earlier blog. It follows that the xmaacademy.org domain must be configured in both Azure and Google and the user account logon name is the same on both systems.

The configuration setting for both auto provisioning and SSO can be found in the Azure Portal under Enterprise Apps.



Clicking on the New Application icon presents you with a comprehensive list of SaaS applications that come with pre-built templates that allow them to integrate with Azure directory services. Again the best way of finding the right option is to simply search for “google apps”.

Choosing this option opens up a blade on the right that allows you to enter some basic config details and name the service. Once you are back at the main menu you can configure SSO, which turns out to be quite straightforward compared with working with ADFS.


From the drop-down menu select SAML-based Sign-on. This creates a whole range of input boxes.

The key fields are Sign on URL and Identifier which can be customized to suit your setup. The data below worked for me.

Sign on URL: 
https://apps.google.com/user/hub

Identifier:         
google.com


From here the default settings should suffice. Make sure that the User Identifier is set to user.userprincipalname.



In the SAML Signing Certificate section check the box Make certificate active and save the config. Download and store the certificate (*.crt) that’s created for you.




All that’s left to do is to collect some information and transfer it to Google; luckily Microsoft makes this pretty easy. Clicking on the section below provides a simple tutorial on how to update Google G Suite.



The key information you need is conveniently listed in the Quick Reference section towards the bottom of the help dialog.



The values will be different for your Azure tenancy but you will need both URLs and the certificate you downloaded earlier before heading off to the Google Admin console.


Configuring G Suite and Chromebooks for SSO.
Single Sign-On is managed under the Security icon within the Set up single sign-on (SSO) section.
One note of warning: currently it's not possible to turn on SSO for a subset of Google users. Continuing down this route will require all your non-admin G Suite users to authenticate with their Azure AD credentials.


In the dialog shown above, paste the URLs into the relevant fields and upload the certificate. The Change password URL is standard for all configs:

https://account.activedirectory.windowsazure.com/changepassword.aspx

If you encounter errors, try saving each entry in turn. In particular the certificate upload seems to work better as a single load and save action.

Optionally, check the Use a domain-specific issuer box. If you enable this feature, Google sends an issuer specific to your domain, google.com/a/your_domain.com, where your_domain.com is replaced with your actual domain name.

If you don't check the box to enable a domain-specific issuer when you set up SSO, Google sends the standard issuer, google.com as recorded in the Identifier field in the Azure section above.
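The issuer behaviour described above amounts to a simple rule, sketched here as a hypothetical function (the function name is illustrative, not a Google API):

```python
from typing import Optional

def saml_issuer(domain: Optional[str] = None) -> str:
    """Return the issuer Google sends: domain-specific when the
    'Use a domain-specific issuer' box is checked, else the standard one."""
    return f"google.com/a/{domain}" if domain else "google.com"

print(saml_issuer())                  # google.com
print(saml_issuer("xmaacademy.org"))  # google.com/a/xmaacademy.org
```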

When you are ready, check the Setup SSO option, but be aware this action will affect every account in your organisation. Unchecking the option turns off SSO without removing the data.

As a simple test, logging into the Chrome browser will now pass the request to Azure for authentication. Once you have that working you can prepare the Chromebooks to accept third-party authentication by setting some additional device and user policies.

In Chrome Management  - User settings search for "SAML". Enable SAML-based SSO and set the logon frequency.



In Chrome Management  - Device settings search for "SAML" again and allow users to go directly to the SAML SSO page.



So long as the user and device are within the scope of the new policies, the Chromebook will now present the Microsoft Azure login page instead of the standard Google dialog, which I must admit looks a little weird when you first see it.



Remember, authentication takes place against the Azure directory - so the user account needs to be in Azure with a matching account in Google to provide the user policies. Typing in a valid Google account without an Azure account won’t pass the test. In the same way trying to gain access using a valid Azure account that does not match an active G Suite account gives a Google error.
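The matching-account requirement above can be modelled as a simple two-step check. This is purely illustrative (not a real API); the account names are taken from the example earlier in the post.

```python
# Azure AD authenticates the user; G Suite must hold a matching active account.
AZURE_USERS = {"xmastaff@xmaacademy.org"}    # authentication source
GOOGLE_USERS = {"xmastaff@xmaacademy.org"}   # supplies the user policies

def chromebook_login(upn: str) -> str:
    if upn not in AZURE_USERS:
        return "azure-auth-failed"   # Azure rejects the credentials
    if upn not in GOOGLE_USERS:
        return "google-error"        # valid Azure login, no active G Suite account
    return "ok"

print(chromebook_login("xmastaff@xmaacademy.org"))  # ok
print(chromebook_login("unknown@xmaacademy.org"))   # azure-auth-failed
```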



Conclusion.
The whole process is pretty simple and allows Chromebooks to authenticate against Azure AD (MS Office 365) without requiring local servers and the complexity of ADFS.

However the big advantage of running federation services in the cloud is fault tolerance. Creating a highly available cluster to support a tier-one component like authentication is not something schools should be doing anymore. Federation is just a cloud service like everything else; you might as well make use of it.

So can you do it the other way round and use the Google SAML federation services to answer for Azure logins? Unfortunately the answer is "not really", and certainly not in a supported manner.

Although Microsoft Azure can quite happily defer to any number of third party identity providers, Google isn’t on the list so there’s little chance of seeing a Windows 10 device sporting a funky Google logon in the short term. Pity!


Thursday, 5 April 2018

How to drop Windows apps on a Chromebook.

Here’s something to consider over a coffee: What does a Microsoft Windows computer actually do?

Since the majority of school computers in the UK run Windows, and a lot of time and money is spent keeping those devices working, you would hope it’s something pretty important.

Perhaps it’s keeping all of your files and data safe?

That’s true, but security and file storage are actually managed by the backroom server, while file management is a standard feature of every mainstream operating system, just like printing and browsing the web. These are important functions but any desktop (or mobile) operating system could fulfil those roles. So what’s the answer?

The fact is, you need a Windows computer to run Windows programs.

All the other stuff like virus protection, security patching, backup and all the fancy user interface features and dialogs just allow the operating system to run Windows programs in a secure and predictable manner.

It’s been a long time since the Windows operating system itself added anything useful to the user experience (Minesweeper?). In fact the main challenge for the school administrator is to lock down the desktop to ensure users have as little contact as possible with the underlying feature set.

I suspect the ideal configuration for a Windows desktop in schools would be a template of shortcuts linked to the main productivity apps with some additional icons for logging off and rebooting.

Even with this minimalist approach the network admin still has to deploy and update each application, making sure that installing one application doesn't break all the others, as well as patching the underlying OS.

Over the last decade there have been multiple attempts to fix this problem including terminal services and VDI but in many respects they only make the problem worse. You have to add additional server hardware, manage even more instances of the operating system and, after all that effort, the local desktop doesn't even get the chance to do the one thing it's good at - which is running Windows programs.

Let’s be honest: if you were starting from scratch you’d think of a better way of doing this.

So let’s start with a clean sheet and think about what that alternative might look like.


The local device would be lightweight, easily managed, simple to licence, fundamentally secure, self-maintaining and provide the base functions of file management, print and web browsing. The system should start quickly, present a security challenge and then simply act as a platform to launch the apps you need to get your work done. In many respects this describes Google’s Chrome OS, the operating system that runs on a Chromebook.

Alternatively this imaginary device could be running Windows 10 in S mode, which is basically Windows 10 Pro with a simplified, locked-down configuration that also meets some of the criteria listed above.

Unfortunately the one feature that both operating systems lack is the ability to run the type of legacy Windows program that education uses on a day to day basis. Chrome OS is a non-starter for this purpose and in order to run a local copy of SmartNotebook or a specialised STEM program on Windows 10 S you have to upgrade to the Windows 10 Pro edition which takes us back to where we started.

On the face of it the problem seems unsolvable but there may be a solution on the horizon.

Purchase a Chromebook today and it can run any Android app from the Google Play Store. The Android Skype app knows nothing about Chrome OS and believes it’s running on a fully featured Android stack (v6.0 Marshmallow). In fact it’s a clever trick that makes use of technology commonly described as containers.

Containers allow the underlying OS to present itself in different ways to the processes it’s hosting. The idea is similar to machine virtualization, but in this case it's not the hardware that is virtualized but the operating system kernel, and for this reason the approach is very lightweight and carries few additional overheads. This is how a Chromebook with only 4GB of memory can appear to run two operating systems at the same time. Another important feature of a container is that it ensures the isolation of the running processes, which means it’s very secure.

So if containers can represent an Android run-time environment what else can they do?

Recent articles suggest that Chromebooks will soon have the ability to present Linux as a container which means that schools could safely access a range of open-source software rich in code development and media editing titles.

Which leads to the final point - could Chromebooks run a Windows application in a container?

The answer appears to be a qualified yes.
Recently Droplet technology announced a deployment package that can do just this. The hosted application behaves exactly as it would if it were running on a Windows operating system - because it is.

All the technology runs locally and works without an active internet connection. It’s not an emulation or a graphical offload; the application runs natively and is responsive and fully featured.

But let’s bring the conversation back to reality. If you have a school whose curriculum is based heavily on MS Office and locally installed Windows applications then your future is best served by the toolset provided by Microsoft.

But for schools moving towards SaaS, the technical direction is often blocked by a dozen or so Windows programs that are central to the curriculum. This is a problem that a container technology like Droplet could easily solve.

Eventually containers could be used to deploy applications across a range of OS types (including Windows) creating a true ‘run anywhere’ solution that doesn’t require a mass of backend server hardware.

There are still a number of problems to overcome, mainly around managing resources on a shared deployment, but the future of local applications lies with containers. It’s a proven technology that underpins most cloud services and it's about to make a splash in the mainstream market.

Tuesday, 27 March 2018

Synchronizing cloud directories - Part 2



This post is the second in a two-part series that examines the user provisioning capabilities of Microsoft Office 365 and Google G Suite in a serverless world.

Part 2 : Azure Active Directory (Microsoft Office 365) into Google.

In the last post we saw how it was possible to configure automatic user provisioning from Google G Suite into Microsoft Azure AD (Office 365) in a situation where Microsoft defers to the Google user directory for SSO.

What are the options if you want to use Microsoft as the master directory and automatically create users in Google G Suite?
This configuration would normally be supported by Google Cloud Directory Sync (GCDS), except that GCDS takes data from a local domain controller, not Azure AD.


The future lies with cloud based directory services but without local AD and tools like Google Cloud Directory Sync how do you keep everything in step?  Like Google G Suite, Microsoft Azure AD has a built-in service to help out with this - Azure User Provisioning.

Setting up Azure User Provisioning to Google G Suite.
The configuration setting for auto provisioning into G Suite can be found in the Azure Portal under Enterprise Apps.

Note: If you are not familiar with the navigation in the Azure UI, the easiest way of finding the settings is to simply search for “enterprise apps”.



Clicking on the New Application icon presents you with a comprehensive list of SaaS applications that come with pre-built templates that allow them to integrate with Azure directory services. Again the best way of finding the right option is to simply search for “google apps”.



Choosing this option opens up a blade on the right that allows you to enter the config details. There’s not much of interest here except the ability to change the name of the service, which might seem unnecessary but, as we’ll see later, marks a fundamental difference in how the Google and Azure services operate. For this demonstration I renamed the service Google Apps - XMAAcademy.org.



After selecting ADD you are placed into a Quick Start wizard that we wish to avoid, so just search for ‘enterprise apps’ to get back to the main menu. Alternatively you can select All Services and find Enterprise Applications in the Security and Identity section. It might be a good idea to take this opportunity to set it as a favourite by highlighting the star. It saves all the searching.



Once back in the main dialog you now have a new application listed which can be selected for more options.



The main menu allows you to configure single sign-on (SSO) with Google G Suite but it’s the Provisioning option we're interested in. Like G Suite you can set up provisioning without having SSO in place, but in Azure you can go straight to the option without having to step through a wizard, which is a bonus.



Opening the provisioning dialog gives you the choice of manual or automatic. Once automatic is selected you get access to all the configuration options.



The process requires a Google account with admin rights, so it’s best to create a user specifically for the role before you get to this point. Once the account details have been entered you can check authentication with the Test Connection button.



Note: For whatever reason my setup lost connection status on a number of occasions and I was required to re-enter the account details and re-test the connection. So if your synchronisation stops for any reason this might be the first thing to check.

The mappings section controls the relationship between the object attributes in Azure and Google. From the dialog below it’s clear that the provisioning service is capable of synchronizing both users and groups with separate controls for each.



In most cases the default mappings do not need to be adjusted but the dialog has a few interesting features that are worth examination.



First, each update action can be set independently. For instance you can allow the process to update records but not create them. I’m not sure why this might be useful but the option is available.



The settings section allows you to turn provisioning on and off, as well as restarting the process, which forces a resync of all the objects in scope. The scope defines which user and group objects to synchronize to Google. G Suite uses the position of the user account in the OU tree and group membership to determine the provisioning scope, while Azure has a slightly different approach.

The scope can be set to All users or All Assigned Users as shown in the dialog above. An assigned user is an account that has the Google SaaS app granted to it. Allocation is controlled from the root dialog by adding either groups or user accounts to the app.

Any group that is selected automatically places the group and the group members into scope.



The option All Users and Groups has the potential of placing every account in the Azure tenancy in scope without assigning the Google app. At this point a second factor can be used to control the account set.



Back in the attribute mappings section you'll find an option to control the scope of both user and group accounts based on the object attributes (above). These are termed scoping filters. In this way it’s possible to create a rule that just specifies user accounts with ‘Google’ in extensionAttribute1, for example.



Scoping filters can be used with both assigned and non-assigned users, and although they don't reference the full set of object attributes they include a comprehensive set of logic functions that includes REGEX operators. If multiple scoping clauses are present, they are evaluated using AND logic. Between app assignments, groups and scoping filters, you have a fair amount of control over the provisioning process.
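The clause evaluation can be sketched as a small function. This is a hypothetical model of the behaviour, not Azure's actual implementation: each clause tests one object attribute against a pattern, and multiple clauses are ANDed together.

```python
import re

def in_scope(user: dict, clauses) -> bool:
    """clauses: iterable of (attribute, regex) pairs; all must match."""
    return all(re.fullmatch(pattern, user.get(attr, "")) is not None
               for attr, pattern in clauses)

# Example from the text: only accounts with 'Google' in extensionAttribute1.
clauses = [("extensionAttribute1", "Google")]
print(in_scope({"extensionAttribute1": "Google"}, clauses))  # True
print(in_scope({"extensionAttribute1": "Office"}, clauses))  # False
```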

Going live with User Provisioning.
Starting provisioning is as simple as changing the status in the master dialog from OFF to ON.  The dialog also gives you the ability to force a resync as well as a summary section.



The full log can be found back in the main app menu under the Audit logs icon.



The audit log lists all events for the preceding seven days with a search option which is extremely useful when troubleshooting missing G Suite accounts or incomplete group memberships.

So what does this look like from the Google G Suite viewpoint?

All user accounts are created in the root of the G Suite organisation. There's currently no way to provision an account directly into a sub-OU in order to apply a specific policy. At the moment I can’t see any additional controls for deprovisioning users. By default, if a user moves out of scope in Azure the Google account is automatically suspended, which is probably the required action anyway.



As you might expect the Google audit logs show user events being actioned by the G Suite provisioning account from a remote IP.


Deploying and Using Azure Provisioning.
Closer inspection reveals a subtle difference in the way G Suite and Azure provisioning work.

The Azure sync references local state to determine which accounts to provision or suspend. This means if you decide to assign four accounts in the Azure domain xmaacademy.org into a Google domain that already contains 200 active accounts in the same domain, it will just create those four accounts - without suspending the original 200 because they are out of scope. Azure only manages Azure accounts that have been placed into or out of scope.

Google works the other way round. It assumes that since you are managing xmaacademy.org, all the Azure accounts in that domain will be managed. In this respect it makes no distinction between an account that was in scope but was subsequently removed (and therefore should be suspended) and an account that was never in scope. Both account types get suspended.

Unlike GCDS, the Google sync process doesn’t support exclusion rules for the target domain. For new implementations this is not really an issue, but you have to be careful when joining two directories that are already populated. It’s probably a good idea to make sure Google accounts already exist and are in scope for all Azure users unless you want to handle a mass suspension when you hit the button for the first time.
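The two sync models can be contrasted with a little set arithmetic. All account names here are illustrative; the point is that Azure acts only on accounts it has placed into or out of scope, while Google treats the whole managed domain as authoritative.

```python
existing = {f"user{i}@xmaacademy.org" for i in range(200)}  # already in Google
assigned = {f"new{i}@xmaacademy.org" for i in range(4)}     # in Azure scope

# Azure -> Google: creates the four assigned accounts, leaves the rest alone.
azure_creates = assigned - existing
azure_suspends = set()

# Google -> Azure: anything in the domain but out of scope gets suspended.
google_suspends = existing - assigned

print(len(azure_creates), len(azure_suspends), len(google_suspends))  # 4 0 200
```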


Controlling Multiple Google Organisations for a Single Azure Tenancy.
Another interesting feature of Azure sync is that you can create more than one instance of the provisioning process.

The Google process is strictly one-to-one in that one Google Organisation can only sync to one Azure tenancy. You could separately manage sub-domains within this tenancy but each Google organisation can only push data into one Azure AD.



In contrast, Azure AD allows you to create multiple instances of the Google provisioning process, each with its own scoping rules and authentication details that could reference different Google organizations. I haven’t tested this but I see no reason why you couldn't connect a single Azure domain to many Google organisations, each controlling a separate custom domain.

This feature could prove useful for districts and educational trusts that need to maintain multiple Google organizations from a single cloud directory. It’s technically possible to do this with Google Cloud Directory Sync but it’s a risky business and doesn’t really scale.


Acknowledgements
Thanks to Tom Cox at St Illtyd's Catholic High School in Cardiff, Wales for taking the plunge into Azure User Provisioning and helping me work through some of the examples described above.

Other Posts.
Auto-provisioning is normally the partner to SSO between Microsoft Azure and Google G Suite. If you are planning to use Chromebooks as a super-simple platform for Microsoft Office 365 and would like your devices to authenticate against your Azure accounts the setup is described here.



Sunday, 18 March 2018

Synchronizing cloud directories - Part 1

This post is the first of a two part series that examines the user provisioning capabilities of Microsoft Office 365 and Google G Suite in a serverless world.

Part 1 : Google into Azure Active Directory (Microsoft Office 365).

In the UK the majority of schools and colleges rely on a local installation of Microsoft Active Directory (AD) for directory services. For schools that choose to implement Google G Suite, which has its own user directory, the choice is either to manually update both user databases or use a tool such as Google Cloud Directory Sync (GCDS) to keep the two account sets in step.

But the future doesn’t lie with on-premise directory services. Microsoft’s intention is to move towards a cloud based resource, namely Azure Active Directory (AAD), the user database that underpins the Office 365 productivity suite.

So without local AD and GCDS how do you keep two cloud based directories in sync?

This post describes how you can configure Google G Suite to synchronize user account data with Azure Active Directory (MS Office 365) without GCDS.  In a follow up post we’ll look at how you can drive data in the opposite direction, Azure Active Directory into Google G Suite.

Requirements.
Both techniques assume that the two cloud databases are set up for Single Sign-On (SSO). In this example the Google database is the master: a user logs onto Office 365 and the request is passed to Google for verification. For this to occur, an account for the Google user must exist in Azure Active Directory, which is why Single Sign-On and automatic user provisioning are closely linked processes.

However, both Microsoft and Google allow you to deploy user provisioning without having SSO in place so it can be used in situations where users manage their own passwords. Also while the procedures for setting up SSO are generally well documented, the user provisioning aspect remains a well kept secret so it’s worth a closer look.

In this example we have an Office 365 tenancy and Google G Suite organization that both host the xmaacademy.org domain.  The objective will be to create a user called  bill.gates@xmaacademy.org in Google G Suite and automatically provision the same account in Office 365. All subsequent updates and changes to the Google account will also be replicated.

The Office 365 auto-provisioning feature can be found under the SAML Apps icon in the G Suite admin console. Clicking on the yellow + button allows you to select from a number of SaaS platforms. The majority are business related and currently only a few support user provisioning, as indicated by the tick symbol. Fortunately Microsoft Office 365 is one of these.




Clicking on Microsoft Office 365 launches a wizard which collects information relating to SSO. Unless you actually intend to set up SSO none of this is relevant to user provisioning, but you have to complete the wizard to get to the relevant section as shown below.


As this dialog explains, nothing is actioned regarding SSO until you actually upload the IDP data to the SaaS provider, but you need to complete the wizard to expose user provisioning as an additional function. Selecting the SETUP NOW button drops you into a configuration dialog with the option to SET UP USER PROVISIONING.




The first step is to authorize the action with Office 365, and for this you’ll need an Azure AD account with admin rights for the tenancy. This is a one-off operation, with the option to update the account information at a later stage through a general config dialog.



The next set of dialogs configures the attribute mappings, provisioning scopes and the de-provisioning actions in a similar manner to Google Cloud Directory Sync. We'll look at these one by one.

Attribute Mapping.
This section controls how the user values in Google relate to data fields in Azure AD.
To make things easier, three of the mandatory mappings are filled with default values. The exception is onPremisesImmutableId, which is blank.

Unfortunately Google doesn’t really give you any clue as to what this value should be. For this example I used Basic Information > Username which allowed me to save the dialog and operate successfully when I tested the system.
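The resulting mapping can be pictured as a simple table from Azure AD attributes to Google fields. This is a hypothetical sketch of the dialog: onPremisesImmutableId is a real AAD attribute, but the other entries and the Google-side labels are illustrative stand-ins for the drop-down choices.

```python
# Azure AD attribute -> Google field chosen in the mapping dialog.
MAPPINGS = {
    "userPrincipalName": "Basic Information > Username",
    "displayName": "Basic Information > Name",
    "mail": "Basic Information > Primary Email",
    # The one mapping left blank by default, filled as described above:
    "onPremisesImmutableId": "Basic Information > Username",
}

print(MAPPINGS["onPremisesImmutableId"])  # Basic Information > Username
```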


This dialog actually has a filter which can be expanded by clicking on the Show All option. The additional options are also set with expected values, with the exception of department and job title which are blank by default.


Provisioning Scope
This simple dialog allows you to enter one or more Google group names.



If you define a group (or a number of groups), only the members of those groups will be subject to the provisioning rules for this SaaS app. This is an important feature because it overcomes one of the major limitations of the Google organizational structure - namely that a user can only be a member of one sub-organisation.

As a result you cannot control curriculum based SaaS apps through sub-organisations alone because a deployment of the type below is impossible.

Sara       History sub-organisation - controls deployment of History SaaS app.
Philip      History sub-organisation - controls deployment of History SaaS app.

Philip     French sub-organisation - controls deployment of French SaaS app.
Jill          French sub-organisation - controls deployment of French SaaS app.

Jill          Maths sub-organisation - controls deployment of Maths SaaS app.
Sara       Maths sub-organisation - controls deployment of Maths SaaS app.


Sara needs the History app and the Maths app but can't be a member of both sub-organisations.

However, you can allocate the SaaS app at the highest level in the sub-org tree and control access using groups. Because a user can be a member of many groups, the problem is solved.

To gain access to the SaaS app the user must be a member of a sub-organisation that has the apps turned on AND be a member of a provisioning scope group, if one is defined.
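That access rule is easy to express directly. A minimal sketch, with all names illustrative:

```python
from typing import Optional

def can_use_app(app_on_in_sub_org: bool, user_groups: set,
                scope_group: Optional[str] = None) -> bool:
    """Access needs the app ON for the user's sub-org AND, when a
    provisioning-scope group is defined, membership of that group."""
    if not app_on_in_sub_org:
        return False
    return scope_group is None or scope_group in user_groups

# Sara: the app is turned on high in the tree, access gated by groups.
sara_groups = {"history-app", "maths-app"}      # membership of many groups is fine
print(can_use_app(True, sara_groups, "history-app"))  # True
print(can_use_app(True, sara_groups, "french-app"))   # False
```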


Deprovisioning Scope
The first action in this dialog controls what happens if the app is turned OFF for the user. This occurs when the user is moved into a sub-org that has the SaaS app turned off, or the user account is removed from the provisioning group.



The default action is to suspend the user account in Office 365 within 24 hours with the option to delete the account after a fixed period of time. This period can be set to within 24 hours, after 1 day, after 7 days, or after 30 days. In testing the suspend action always occurs within 60 mins and was sometimes almost immediate.

Separate rules control the action on user suspension and deletion, but with the same set of options. A suspend action in G Suite translates into a blocked login setting in Office 365. A hard-deleted account can only be restored without data loss using the Office 365 Admin Center up to 30 days after deletion.


Going live with User Provisioning.
Once all the dialogs have been completed you are passed back to the main page, which gives you the option to reselect any of the dialogs and update the data.



The last step is to turn provisioning ON, but as the dialog above informs you, you can’t do that while the SaaS app is turned OFF for all users - that wouldn’t make a lot of sense.

Some careful thought needs to go into turning on user provisioning. Allocating Office 365 to the root of your Google organization without a group filter has the capability to create a large number of user accounts in Office 365, some of which may not be needed. To provide a finer level of control you should allocate the Office 365 SaaS app to the highest point in the org tree and then control access through a Google Group.

Therefore the first task is to create a Google group to hold a number of test user accounts (in this case bill.gates@xmaacademy.org) and apply this group to the Provisioning Scope dialog shown in the previous section.  This is your failsafe mechanism.

After the provisioning group is created and populated you can turn on the Office 365 SaaS app for an appropriate level on your org tree using the standard dialog shown below.





Once completed you have the ability to ACTIVATE PROVISIONING as shown below. You can turn provisioning ON and OFF at any time without deallocating the Office 365 app. However you need to be very careful not to deallocate the app while user provisioning is left on as this takes all your users out of scope and starts the deprovisioning process on Office 365.



Selecting the ACTIVATE PROVISIONING button shows the warning below, allowing you to back out before starting the process with the ACTIVATE button.



The top level dialog now shows the option to DEACTIVATE PROVISIONING and the summary section will display user data after a short delay (refresh the panel).


Although not immediately obvious, the summary data value is actually a link to a filtered view of the admin report log, which is a nice feature.



By examining the filter list on the Admin Audit report log you can see the extensive list of events related to auto-provisioning. Clearly this is the first stop for troubleshooting any issues.



So what does this look like from the Office 365 viewpoint?

You have a choice of Microsoft admin portals to view the user data but the account will look like this in the Azure portal



or like this in the Office 365 portal




The newly created Microsoft account is unlicensed and therefore will not have an Exchange mailbox, InTune management or any other resource. Unlike G Suite for Business there’s no default licence policy; the complexity and range of Microsoft licensing would make that a little difficult. Microsoft’s long-term strategy is to assign licensing through group membership but this is still in a beta phase. Also, as you may have noticed, there’s no facility to sync group membership from Google to Office 365, so manual licensing and group allocation is going to be the way forward, at least for now.


User Updates.
Updating Google user accounts works as expected. Changes to first name, family name and the logon address all  pass through to Azure AD and are recorded in the audit log.



Removing a user from the Google provisioning group blocks the login in Office 365 while the reverse action restores the logon rights.

Copying user metadata was a bit more hit and miss. The Telephone field was replicated into the Azure AD Mobile field, but not Department, Manager or Title, for some reason.

One negative is that there doesn't seem to be any way to gracefully force a sync. It seems to work on a schedule of about an hour but sometimes goes through much quicker; it’s hard to perceive a pattern. Also there’s no simulation mode, which is such a useful feature in GCDS.


Other Considerations.
If you’re provisioning into an empty Office 365 tenancy there's little opportunity to mess up, but if you have existing AAD accounts you have to be more careful. For instance, suppose you have the following accounts set up in Azure AD as well as Google G Suite:

student1@xmaacademy.org
student2@xmaacademy.org

and then test auto-provision using a new Google account

student3@xmaacademy.org

which is the only member of the Office 365 group and is in a sub-OU for which Office 365 is turned on, the effect might not be what you expect. The student3@xmaacademy.org account will be created, but the same sync will suspend student1@xmaacademy.org and student2@xmaacademy.org because, as far as the sync process is concerned, they are not in scope for the SaaS application. The only exceptions to this rule are admin accounts in Azure AD, which are never suspended. Make sure all your existing AAD accounts are in scope before turning on provisioning, otherwise logon rights will be removed.
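The worked example can be reproduced as set arithmetic. The student accounts come from the text; the admin account name is illustrative, standing in for the admin exemption described above.

```python
azure_accounts = {"student1@xmaacademy.org",
                  "student2@xmaacademy.org",
                  "admin@xmaacademy.org"}
in_scope = {"student3@xmaacademy.org"}   # the only provisioned Google user
admins = {"admin@xmaacademy.org"}        # admin accounts are never suspended

created = in_scope - azure_accounts
suspended = (azure_accounts - in_scope) - admins

print(sorted(created))    # ['student3@xmaacademy.org']
print(sorted(suspended))  # ['student1@xmaacademy.org', 'student2@xmaacademy.org']
```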


Use Cases.
The most obvious role for automatic user provisioning is to create user accounts in Office 365 when deferring authentication to Google for single sign-on. Unfortunately Microsoft doesn't support that configuration.

Although Azure Active Directory can be configured to work with just about every identity provider in the market, Google isn’t on the list.

You might think this is a bit strange, since Google lists Microsoft Office 365 as an SSO partner in the SAML apps section, but there’s a catch. If you read the small print in the Google help pages there’s a phrase that gives the game away.

Step five states “Set up a federation of your On-Premises Active Directory and Azure Active Directory.” Cough, cough. Did I really just read that?

So to allow Google to authenticate AAD accounts you must have a federation between AAD and a local Active Directory, and since that AD is probably hosted on a dusty old domain controller sitting in a server room, the whole cloud-based vision goes out the window. After all, if you have an on-premises AD you might as well use GCDS and Microsoft's Azure AD Connect to keep everything in step.

So why doesn’t Microsoft allow its Azure accounts to be authenticated by Google as a fully supported partner? There could be a technical reason, but the simple fact is that the platform that owns the directory also owns the user, and both Google and Microsoft want to be the cloud directory of choice for the future.

So has this whole exercise been a waste of time? Well, not quite.

Even without SSO, user provisioning into Azure from Google can play a useful role.

Consider a small school that plans to standardize on Google G Suite but also has a requirement to operate a suite of Windows 10 laptops in a secure way. It has little intention (and even less cash) to operate and maintain a highly available, fault-tolerant on-premises data centre just to manage sixty laptops. It’s a school after all, not a merchant bank.

In this situation, adopting Microsoft’s new cloud-based model, you could enroll devices into AAD and manage them with Intune. That's fine, but when a user logs on they still need an Azure user account to reference, and this is where automatic user provisioning from Google fits the bill.

Users would need to maintain their own passwords, but user administration could be handled through the G Suite console. Simple, cheap and, even more important, serverless.



In the next post we’ll take a look at running automatic user provisioning in the other direction, from Microsoft Azure to Google G Suite.

Part 2 : Azure Active Directory (Microsoft Office 365) into Google.