By Bruce Bonsall, CISSP – BT, Senior Security Strategist
Over the past 25 to 30 years, information systems have become increasingly distributed.
In the early days of computers, paper tape and card readers were used to input instruction sets and data into monolithic ‘big iron’ mainframe computers for processing. The resulting information output typically came in the form of printed reports. The entire computer network was contained inside a locked data centre; you passed the card deck in through a window and sometime later they passed the report out. The network perimeter was very well defined and made of concrete and brick.
Like spreading vines, networks connected computer terminals to the central mainframe computer. As the cost of computers dropped, consumers could afford to own them, and departments within companies found it easy to obtain their own servers. Local Area Networks (LANs) bloomed like flowers in the spring, and data was forever freed from the confines of the data centre. The mainframe was pronounced dead, albeit a tad prematurely.
LANs, WANs, client/server architectures, dial-up modems and then the internet connected millions of computers to one another, making it possible to access and move data around the world at the speed of light. When it comes to data, the cattle are out of the barn.
Today, people use all sorts of smartphones, tablets and personal computers to access data and interact with information systems. These devices, often referred to as ‘endpoints’, are frequently owned by the individual rather than controlled by the corporate entity, and they might be secure or they might not be. Because personally owned devices are under the control of their individual owners, it is those owners who must be relied upon to implement proper security on the devices.
Ever since people began using their personally owned devices to connect to company networks, companies have relied on those employees to secure their computers, while also taking steps to verify that those computers really are secure. In addition to providing remote users with education and antivirus software, companies, agencies and institutions have used gateways to check the security health of connecting devices. This Network Access Control (NAC) approach attempts to ensure that every connecting device is properly configured according to company policy before the connection is allowed, as sketched below. The sheer diversity of consumer-owned devices makes this very difficult, if not impossible, to achieve.
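To make the NAC idea concrete, here is a minimal sketch of a posture check in Python. The DeviceReport structure, the policy fields and the thresholds are all hypothetical, invented purely for illustration; real NAC products gather this information through agents or 802.1X/RADIUS exchanges rather than a simple function call.

```python
# Toy sketch of a NAC-style posture check. All names and policy values
# here are hypothetical examples, not taken from any real NAC product.
from dataclasses import dataclass

@dataclass
class DeviceReport:
    os_version: str
    antivirus_running: bool
    av_signatures_age_days: int
    disk_encrypted: bool

# Example corporate policy: the minimums a device must meet to connect.
POLICY = {
    "min_os_version": "10.0",
    "max_av_signature_age_days": 7,
    "require_disk_encryption": True,
}

def _ver(v: str) -> tuple:
    """Parse '10.3' into (10, 3) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def posture_check(report: DeviceReport) -> bool:
    """Return True only if the connecting device satisfies every rule."""
    if _ver(report.os_version) < _ver(POLICY["min_os_version"]):
        return False
    if not report.antivirus_running:
        return False
    if report.av_signatures_age_days > POLICY["max_av_signature_age_days"]:
        return False
    if POLICY["require_disk_encryption"] and not report.disk_encrypted:
        return False
    return True

if __name__ == "__main__":
    device = DeviceReport("10.3", True, 2, True)
    print("grant access" if posture_check(device) else "quarantine")
```

Even this toy version hints at the problem the article describes: every new device type adds attributes the policy never anticipated, so the check is only as good as what the device can be made to report.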
When the endpoint can’t be controlled directly, or at least effectively audited for proper security configuration, it quite simply can’t be trusted.
When a company, agency or institution has no control over the endpoint device, allowing it to connect directly to confidential or sensitive information is risky and inadvisable. Rather than continue down the impossible path of trying to control millions of endpoints, those charged with protecting information assets have all but given up on the endpoints and turned their attention up the network chain, closer to the data centre.
The growing trend is to push data back into the data centre and bring users to the data with remote access tools and virtual desktops. To continue the cattle metaphor, the cows stay safely in the barn, and when you want to look at them, you use a remote viewing capability, much as a webcam gives you visibility into far-off locations. Virtualisation provides that capability and the means to facilitate controlled access to information.
With a few exceptions, such as the sandboxing technique that places a corporate-, agency- or institution-managed application on the endpoint to contain the corporate data, security teams are leveraging approaches that keep data off the endpoints. In such cases, reputable organisations control access via web-application portals, or use a virtual desktop that acts as a go-between, allowing you to see the data without moving it to the untrusted device. In this way, access to valuable information is enabled without putting the data at undue risk.
Many companies pair the virtual desktop approach with computers that have limited capabilities and little or no local storage. The idea behind this ‘thin client’ approach is to allow access to information while limiting the ability to transfer it to a vulnerable, uncontrolled endpoint.
The thinner (and dumber) a client endpoint is, the less likely it is to expose corporate data. The thinnest of clients can be used when connecting to a virtual desktop or a web application. Thousands of virtual desktops can run on a single server, and the people who use those desktops can have very thin (that is, dumb) clients, much as mainframe computers once supported thousands of dumb terminals or PCs running 3270 emulation software. A toy sketch of this division of labour follows.
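The sketch below illustrates the thin-client principle in Python. The class and method names are invented for illustration; real virtual desktop protocols stream compressed screen regions and input events, but the essential point is the same: the document lives server-side, and only rendered output ever reaches the endpoint.

```python
# Toy sketch of the thin-client idea: the virtual desktop holds and
# processes the data in the data centre; the client exchanges only
# keystrokes and rendered screen frames, so the document itself never
# reaches the endpoint. All names here are hypothetical.

class VirtualDesktop:
    """Runs in the data centre; one server can host many of these."""
    def __init__(self, document: str):
        self._document = document  # sensitive data stays here

    def handle_input(self, keystroke: str) -> bytes:
        # Apply the input server-side, then return only a rendered
        # frame (stand-in bytes here), never the underlying document.
        self._document += keystroke
        return self._render()

    def _render(self) -> bytes:
        return f"[screen showing {len(self._document)} chars]".encode()

class ThinClient:
    """Runs on the endpoint; has no copy of the data, only a display."""
    def __init__(self, desktop: VirtualDesktop):
        self._desktop = desktop

    def type(self, key: str) -> None:
        frame = self._desktop.handle_input(key)
        print(frame.decode())  # display the frame; nothing is stored

session = VirtualDesktop("quarterly results...")
client = ThinClient(session)
client.type("!")   # the endpoint only ever sees rendered frames
```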
Virtual desktops are in use, and will be used increasingly, to facilitate controlled access to information. When data can’t be kept penned up in the data centre, such as when it’s travelling around the world on portable devices, it can be kept in a virtual crate. Sandboxing techniques use encryption to create a container (the crate) that remains under some control of the corporate entity: a password policy can be set and enforced, and the entire container can be wiped clean when necessary. The sketch below illustrates the idea.
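Here is a minimal sketch of the container idea using the Python `cryptography` package. Commercial mobile sandboxing and MDM products work at the operating-system level rather than as a simple class, and the names (Sandbox, put, get, wipe) and the 12-character password rule are assumptions made up for this example.

```python
# Toy encrypted container: data is stored only as ciphertext, the key
# is derived from the user's password, and wipe() destroys everything.
# Class, method names and the password rule are illustrative only.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

MIN_PASSWORD_LEN = 12  # example of an enforced corporate password policy

class Sandbox:
    def __init__(self, password: str):
        if len(password) < MIN_PASSWORD_LEN:       # enforce the policy
            raise ValueError("password violates corporate policy")
        self._salt = os.urandom(16)
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=self._salt, iterations=600_000)
        key = base64.urlsafe_b64encode(kdf.derive(password.encode()))
        self._fernet = Fernet(key)
        self._blob = None                          # ciphertext lives here

    def put(self, data: bytes) -> None:
        self._blob = self._fernet.encrypt(data)    # store ciphertext only

    def get(self) -> bytes:
        return self._fernet.decrypt(self._blob)

    def wipe(self) -> None:
        # A remote-wipe command simply discards ciphertext and key
        # material; without the key, any copies are unreadable.
        self._blob = None
        self._fernet = None

box = Sandbox("correct-horse-battery")
box.put(b"confidential corporate data")
print(box.get())
box.wipe()  # the container is now empty and the key is gone
```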
To summarise, virtualisation and sandboxing are two popular and effective ways to reduce the exposure of information to the risks that accompany mobile users and their powerful information-processing devices.
To learn more about concerns over mobility and keeping data safe, read part three in this series on BYOD, virtualisation and mobility, coming soon.