Mobile Device Management System for a Telecom Provider

Background

A multinational conglomerate owns a telecommunications provider and device manufacturer subsidiary that operates in markets throughout the world. The consumer market for mobile devices is crowded, and the need to meet customer expectations for features and functions is ever growing and accelerating. Hardware innovation also demands the ability to deliver new features and functions over the air (OTA), keeping up with security patches, feature updates, and marketing campaigns and promotions.

Telecommunications Industry Impact

Telecommunication providers need to provision digital goods and services through an increasing number of devices. Devices are no longer necessarily centrally owned and distributed by the telecommunication provider. The consumerization of telecommunications in general has led to interoperability requirements, based on industry protocols and standards, that call for more sophisticated device management capabilities.

Core Imperative

To register a customer on the network and maintain a consistently high quality of service through updates, the telecommunication provider needed to create a new generation of mobile device management (MDM) system. The MDM system would be built on top of a set of microservices so that it could be continuously evolved and leveraged both internally, among the telecommunication provider's business units, and externally, with the provider's supply-chain providers and their go-to-market retail partners.

Customer’s Impact

The telecommunication provider's ability to provision and manage mobile devices numbering in the billions, across both current and previous-generation models, is constrained not only by the company's resources but also by the fact that the ongoing service aspect of such devices is not its core business focus. By partnering with and empowering its partners through an MDM system that provides centralized control but distributed operation, the provider can ensure a high level of service and scale with end-user demand.

Legacy Challenge

The telecommunication provider's existing MDM capabilities still service the previous-generation devices that remain in use by its earliest customers. The existing system has a rich set of data, including user profiles, device profiles, and network settings customized for each class of user and each class of device. This data, and the operations around it, would still need to be utilized by the new MDM capabilities and made available to the provider's partners, who would take over the provisioning and service-maintenance aspects of a rapidly growing business.

Domain Driven Development

LunchBadger and the telecommunication provider utilized domain-driven development to simplify and accelerate the development of the MDM system. The MDM system consisted of a new set of microservices that utilized legacy data and entities within the application domain. Each entity was developed as a simple model-based function. The functions would have methods to emulate the different actions and behaviors of each entity as "models". The application domain for the MDM system is described in more detail below.

 

Mobile devices can be reconfigured over the air after they have been shipped out of the factory. The system may use a cloud-based server to store configuration objects (binaries) periodically published by device vendors. In this domain, the entities of interest could be devices (and their characteristics), vendor profiles, configuration objects, and so on. (We will get into the details of such a system in the following sections.)

 

It may be worth recalling that all the microservices in an application need not use the same data store or persistence mechanism. Likewise, the entities in our domain may be mapped to different databases, depending on the data structures and data access requirements involved. Domain-driven development does not imply a monolithic data layer.

Device Provisioning System

The following is a walkthrough of the provider's MDM system built using LunchBadger. Actual references, including entity and property names, have been changed for confidentiality.

 

The MDM system will be referred to as the "Device Provisioning System". It will enable device vendors to apply configuration settings to mobile devices months after they have been shipped out of the factory. For example, a vendor could apply over-the-air configuration settings to turn a mobile device into a point-of-sale terminal, a survey-response collector, or a digital signage display.

 

The central component of this system will be a Device Manager that exposes APIs for:

 

  •         Vendors to upload configuration settings (probably self-installable binaries) for one or more models of mobile devices. Apart from the device vendors, there may also be third-party ecosystem players, called Partners, who offer such device reconfiguration capabilities.
  •         Device users to register / enroll with the Device Manager so that available configuration settings can be retrieved and installed on demand.

Considering the problem domain, we can identify at least three entities (a brief code sketch follows the list below):

 

  •         PartnerDevice: This associates a configuration setting with the Partner that provides the configuration capability. The entity should include a model name, that is, the device model to which the configuration setting can be applied.
  •         UserAgent: This will represent the mobile device itself, which can be configured over the air. Therefore, information like make, model, manufacturer, IMEI number, and probably the current location of the device (latitude and longitude) may be captured in this entity.
  •         ConfigObject: This is the actual binary object that can be downloaded and probably auto-installed on a mobile device, based on user preferences.
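
To make the domain concrete, here is a minimal sketch of the three entities as TypeScript types. The property names are illustrative assumptions based on the descriptions above, not the provider's actual schema.

    // Hypothetical shapes for the three domain entities; property names are illustrative only.
    interface PartnerDevice {
      partner: string;    // the Partner offering the configuration capability
      model: string;      // device model the configuration setting applies to
    }

    interface UserAgent {
      make: string;
      model: string;
      manufacturer: string;
      imei: string;
      location?: string;  // current device location, e.g. "latitude,longitude"
    }

    interface ConfigObject {
      model: string;      // target device model
      config: string;     // the downloadable, possibly auto-installable binary (e.g. base64-encoded)
    }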

Figure: DEVICE PROVISIONING SYSTEM

We will build this application on top of the Express Serverless Platform.

Step-by-step Implementation

Let’s first set up the connector for storing some of the data in our problem domain.

 

Quite often, the individual microservices that constitute an application use different data persistence mechanisms or data stores. To drive home this capability, we will store our data as follows.

The PartnerDevice entity primarily maps a device model to a configuration object. But wait: isn't the configuration object more like semi-structured or unstructured data, or even a binary string?

 

Well, it could be stored in an RDBMS, but to make things more interesting, we will store the configuration objects in MongoDB. In a full-fledged application, the configuration objects could be JSON or BSON documents.

 

So, the PartnerDevice entity will consist of a device model mapped to a ‘reference’ string. There will be a corresponding configuration object mapped to this ‘reference’ string in MongoDB.

 

The 'reference' string will act like a foreign key, albeit one that spans an RDBMS and a NoSQL store, since we have conveniently decided to store the actual configuration objects in MongoDB.
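
As a rough illustration of this cross-store linkage (all field values below are made up), a PartnerDevice row in MySQL and its counterpart document in MongoDB might look like this:

    // Hypothetical PartnerDevice row (MySQL) and ConfigObject document (MongoDB).
    // The shared 'reference' value plays the role of a foreign key across the two stores.
    const partnerDeviceRow = {
      model: "AcmePhone-X1",          // device model (illustrative)
      reference: "cfg-pos-2024-001"   // points at a configuration object in MongoDB
    };

    const configObjectDoc = {
      reference: "cfg-pos-2024-001",  // same key as above
      model: "AcmePhone-X1",
      config: "<base64-encoded binary payload>"
    };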

 

Device Model

Starting with the PartnerDevice entity, let us set up a MySQL connector. Once we have launched the Canvas, we can drop a MySQL connector onto the 'Backend' quadrant. We will need to supply the connection parameters (in our case, the database password happens to be 'deviceadmin', and the other parameters are visible in the screenshot below).
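
For orientation, the connection parameters amount to something like the following sketch. The host, port, user, and database name here are placeholders; only the password comes from the walkthrough above.

    // Hypothetical MySQL connection parameters for the Backend connector.
    const mysqlConnector = {
      host: "mysql.example.internal",  // placeholder host
      port: 3306,
      database: "mdm",                 // placeholder database name
      user: "deviceadmin",             // placeholder user name
      password: "deviceadmin"          // the password mentioned above
    };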

 

Now we can drop a Model into the 'Private' quadrant. We want to represent a PartnerDevice entity, and we have already created a table simply called 'Device' in MySQL. We need to map the fields of this table to fields in the Model on our Canvas and assign the correct data types. The data related to this Model will be exposed at a certain context path. By default the context name is the same as the Model name, but we are free to choose a more user-friendly context name like 'partnerdevice'.


Tip: It is good practice to expand the Model Details and fill in a few more specifics, such as the plural form of the entity and whether a data attribute is indexed or required.
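
Conceptually, the Model configuration captures roughly the following information. This is only a sketch of the idea; the actual fields on the Canvas may differ, and the property names are assumptions.

    // Rough sketch of what the 'Device' Model configuration captures.
    const deviceModel = {
      name: "Device",
      plural: "Devices",
      contextPath: "partnerdevice",   // the user-friendly context name chosen above
      properties: {
        model:     { type: "string", required: true, index: true },
        reference: { type: "string", required: true }
      }
    };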

For more details feel free to browse through the platform documentation.


Now comes the crucial step: we need to visually connect the Model to the Back-end.


The above steps create a logical data Model and a data store for it, but we can't yet access the data using RESTful APIs. To do so, we first need to create a Gateway; let's name it 'DeviceGateway'. For our 'Device' Model, we need to create a pipeline named 'DevicePipeline' with request-processing elements. For now, let's just add a 'proxy'.


We should see a status message like ‘Device Gateway successfully deployed’.


Now one more crucial connecting step: let’s connect the ‘Device’ Model to the ‘DevicePipeline’.

 

Since this is the first time we are connecting the 'Device' Model to a Pipeline in our API Gateway, we will be prompted to create a publicly accessible Endpoint tied to this pipeline. Let's name it 'DeviceApiEndpoint'. There, we can define the URL patterns that we want to be accessible through this endpoint. In this case, we want to expose URLs like '/partnerdevice/*'.

 

Once we complete the above steps, a large set of REST URLs is automatically exposed, based on the underlying data model. To inspect these URLs, click on the 'Settings' button at the top right of the Canvas, and then click on the 'explorer' URL.


You can expand the list to check out the REST URLs corresponding to our ‘Device’ Model. Note that the URLs all refer to our chosen context path: ‘/partnerdevice’.
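
The generated routes follow standard CRUD conventions. A representative subset, and a quick way to exercise one of them from code, might look like the sketch below; the gateway host is a placeholder, and the exact route set depends on the platform's conventions.

    // Representative auto-generated routes for the 'Device' Model (illustrative):
    //   GET    /partnerdevice         - list devices (supports a ?filter=... query)
    //   POST   /partnerdevice         - create one or more devices
    //   GET    /partnerdevice/{id}    - fetch a device by id
    //   PUT    /partnerdevice/{id}    - replace a device by id
    //   DELETE /partnerdevice/{id}    - delete a device by id
    async function listDevices(): Promise<void> {
      const res = await fetch("http://device-gateway.example.com/partnerdevice");
      console.log(await res.json());  // expect an array of Device records
    }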


The Device Model only creates ‘references’ to configuration objects, but we’ll create the configuration objects themselves in MongoDB in a bit.

UserAgent Model

Before that, let's define the 'UserAgent' Model representing mobile devices belonging to users. We'll use the same MySQL back-end. For simplicity, location will consist of latitude and longitude concatenated into a single string. Let the context path for this Model be 'agent'.
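
Since location is stored as a single concatenated string, a small helper can keep the encoding consistent. The comma-separated format below is a hypothetical convention; the actual format used was not specified.

    // Hypothetical helpers for the concatenated "latitude,longitude" location string.
    function encodeLocation(lat: number, lon: number): string {
      return `${lat},${lon}`;
    }

    function decodeLocation(location: string): { lat: number; lon: number } {
      const [lat, lon] = location.split(",").map(Number);
      return { lat, lon };
    }

    // Example: encodeLocation(47.6062, -122.3321) === "47.6062,-122.3321"  (Seattle)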


Correspondingly, we will create a new pipeline called ‘AgentPipeline’ exclusively for the UserAgent Model, with just a proxy element within. This will result in one more Endpoint which we will name ‘UserAgentEndpoint’. It will expose URLs like ‘/useragent/*’.

 

In this example, we will create a separate pipeline in our API Gateway for each Model-based microservice. That way, each microservice can be independently configured with the set of pipeline elements (for authentication, authorization, etc.) that is most appropriate. This is the case in many applications, but note that it is also possible to map all of our Models to the same Pipeline if they can share the same pipeline configuration.


The auto-generated REST URLs can be inspected as follows:


ConfigObj Model

Finally, we want to create a Model for ConfigObject entities and store them in MongoDB. Our platform supports a MongoDB connector off the shelf as well. We'll drop it into the 'Backend' quadrant and then configure the database connection. Please note that in a production deployment, access to MongoDB should be protected by a username and password.

 

The fields in our 'ConfigObj' Model include 'model' (that is, a device model) and a configuration string (a binary string here, although in practice it will be more complex, which is why it makes sense to store it in a NoSQL database).


Next we create the ‘DevConfigPipeline’ within our Gateway and the endpoint named ‘ConfigObjApiEndpoint’ exposing URLs like ‘/devconfig/*’.


There will be a bunch of auto-generated URLs, as follows:


Functional Use Cases

We just developed the model-based microservices above. The microservices expose model data through REST APIs. But together, they should be able to address the use cases of the overall software system for device provisioning. Let’s consider the key use cases.

 

Use Case 1: If you are a vendor or a partner, you would like to create new configuration objects for your supported device models. This will be a two-step process:

Step 1 – Create a new entry for each supported mobile device, using the 'Device' Model, which corresponds to an HTTP POST request:

 

Tip: Expand the /partnerdevice POST REST API in the API explorer, and fill in a JSON array consisting of device models and corresponding references. Click on the 'Try It Yourself' button to invoke the API; this acts like a test environment. Alternatively, you could use curl (or code, as sketched below) to invoke the same request.
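
Here is a minimal sketch of that request using fetch; the gateway host and the record values are placeholders.

    // Hypothetical POST creating PartnerDevice entries via the gateway.
    async function registerPartnerDevices(): Promise<void> {
      const res = await fetch("http://device-gateway.example.com/partnerdevice", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify([
          { model: "AcmePhone-X1", reference: "cfg-pos-2024-001" },
          { model: "AcmeTab-T5",   reference: "cfg-signage-2024-007" }
        ])
      });
      console.log(res.status, await res.json());  // created records are echoed back
    }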


You could also inspect the HTTP Status and Response Headers:


Step 2 – Then, store the corresponding Configuration binaries using the ‘ConfigObj’ Model:
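
The corresponding call against the '/devconfig' endpoint could look like the sketch below; the payload encoding (base64) and field names are assumptions.

    // Hypothetical POST storing a configuration object in MongoDB via the gateway.
    async function storeConfigObject(): Promise<void> {
      const res = await fetch("http://device-gateway.example.com/devconfig", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          reference: "cfg-pos-2024-001",      // matches the PartnerDevice entry above
          model: "AcmePhone-X1",
          config: "<base64-encoded binary payload>"
        })
      });
      console.log(res.status);
    }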


Just in case you are interested in verifying that the Data Model is actually connected to a Data Source, you could run queries against MySQL and MongoDB as follows:
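
If you prefer to check from code rather than from a database console, a sketch along these lines would work; the connection details, table name, and collection name are all assumptions.

    // Hypothetical verification queries against the two back-ends.
    import mysql from "mysql2/promise";
    import { MongoClient } from "mongodb";

    async function verifyBackends(): Promise<void> {
      // MySQL: the table backing the 'Device' Model (table name is an assumption).
      const conn = await mysql.createConnection({
        host: "mysql.example.internal", user: "deviceadmin",
        password: "deviceadmin", database: "mdm"
      });
      const [rows] = await conn.query("SELECT model, reference FROM Device");
      console.log(rows);
      await conn.end();

      // MongoDB: the collection backing the 'ConfigObj' Model (names are assumptions).
      const client = await MongoClient.connect("mongodb://mongo.example.internal:27017");
      const docs = await client.db("mdm").collection("ConfigObj").find().toArray();
      console.log(docs);
      await client.close();
    }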

 

Figure: MySQL Query

 

Figure: MongoDB Query

 

Use Case 2: If you are a device user, you would like to register / enroll your device with the Device Provisioning System, so that new configuration settings can be pushed over the air onto your device. This is again a POST request that is already exposed through the 'UserAgent' microservice.
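
A minimal enrollment call might look like the following sketch; the '/useragent' path is taken from the endpoint pattern above, and all device details are made-up examples.

    // Hypothetical device enrollment through the UserAgent microservice.
    async function enrollDevice(): Promise<void> {
      const res = await fetch("http://device-gateway.example.com/useragent", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          make: "AcmePhone",
          model: "AcmePhone-X1",
          manufacturer: "Acme Devices Inc.",
          imei: "356938035643809",       // example IMEI, not a real device
          location: "47.6062,-122.3321"  // Seattle, as "latitude,longitude"
        })
      });
      console.log(res.status, await res.json());
    }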


Again, we can verify that the data is consistent with the back-end data source.


Use Case 3: The Device Provisioning System needs APIs to query the available configuration settings for a given model. This is again a two-step process:

 

Step 1 – Find a 'reference' string from the Device Model. For this purpose, a filter (similar to a WHERE clause in SQL) may be used.


Step 2 – Find the configuration object (bytes) from the ConfigObject Model:
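
Putting the two steps together in code, the lookup might look like the sketch below. The filter syntax assumes LoopBack-style query parameters, and the host and field names are placeholders.

    // Hypothetical two-step lookup: device model -> 'reference' -> configuration bytes.
    async function findConfigForModel(model: string): Promise<string | undefined> {
      const base = "http://device-gateway.example.com";

      // Step 1: filter PartnerDevice entries by device model.
      const deviceFilter = encodeURIComponent(JSON.stringify({ where: { model } }));
      const devices = await (await fetch(`${base}/partnerdevice?filter=${deviceFilter}`)).json();
      if (devices.length === 0) return undefined;

      // Step 2: fetch the configuration object carrying the matching reference.
      const cfgFilter = encodeURIComponent(JSON.stringify({ where: { reference: devices[0].reference } }));
      const configs = await (await fetch(`${base}/devconfig?filter=${cfgFilter}`)).json();
      return configs[0]?.config;  // the configuration payload, if any
    }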


Use Case 4: Let's assume the location of a user device (or UserAgent) is of interest to this provisioning system. We may need a way to update the location of a user device from, say, Seattle to San Francisco. Here's the corresponding POST request.
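
As a rough sketch, assuming a LoopBack-style bulk-update route that matches records via a where clause (the route, host, and field names are assumptions):

    // Hypothetical location update for a device matched by IMEI.
    async function moveDevice(imei: string): Promise<void> {
      const where = encodeURIComponent(JSON.stringify({ imei }));
      const res = await fetch(
        `http://device-gateway.example.com/useragent/update?where=${where}`,
        {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ location: "37.7749,-122.4194" })  // San Francisco
        }
      );
      console.log(await res.json());  // e.g. { count: 1 } - number of records updated
    }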


Update requests return a response containing the count of the data records updated:


Summary

In this case study we explained how a fairly complex real-life application can be built on top of the Express Serverless Platform (ESP) offered by LunchBadger in a matter of minutes.

We were able to:

 

  •         Use off-the-shelf connectors to database back-ends (MySQL and MongoDB) from a palette of connectors available on the Canvas, demonstrating how the platform supports polyglot persistence.
  •         Create Model Functions, or simply Models, for each entity in our problem domain, with each data Model exposed as a microservice.
  •         Create an instance of an API Gateway (Express Gateway) and expose REST URLs for accessing the Models, with a corresponding HTTP request-response pipeline in the Gateway for each Model.
  •         Deploy the application onto a Kubernetes infrastructure and expose externally accessible URLs for accessing the application.

Our design partner was able to realize a 5X time savings by utilizing ESP to build out their MDM use case.

Next Steps

 

There are many more nitty-gritty details one would like to take care of when building enterprise applications of this nature. For example:

 

  •         Use API keys or OAuth for secure access to APIs
  •         Configure rate limits for each exposed API
  •         Add more functionality to the data models, probably through user-defined functions that manipulate the data

The platform offered by LunchBadger addresses these concerns and does a lot more. Watch this space for more interesting discussions on the Express Serverless Platform!