Last Updated on July 30, 2018
More and more organizations are moving data and applications to the public cloud, often because it enables them to “do more with less” while ramping up and innovating more quickly and efficiently. A big selling point for public cloud is how fast you can potentially transition to it.
But that transition from your own servers to the cloud also needs to be secure—or you could suddenly be the victim of a data breach. For example, despite numerous high-profile breaches directly attributed to improper access configuration, a high percentage of public cloud servers remain misconfigured.
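To make "improper access configuration" concrete: even a simple automated check over your cloud storage settings can catch the most common exposure. The sketch below is a minimal, hypothetical example; the config schema is illustrative and not any specific CSP's real format.

```python
import json

# Hypothetical bucket configurations (illustrative schema, not a real CSP API).
SAMPLE_CONFIGS = """
[
  {"bucket": "public-website-assets", "acl": "public-read", "encryption": true},
  {"bucket": "customer-records",      "acl": "public-read", "encryption": false},
  {"bucket": "internal-backups",      "acl": "private",     "encryption": true}
]
"""

def find_misconfigured(configs):
    """Return (bucket, issues) pairs for publicly readable or unencrypted buckets."""
    findings = []
    for cfg in configs:
        issues = []
        if cfg["acl"] != "private":
            issues.append("public ACL")
        if not cfg["encryption"]:
            issues.append("no encryption at rest")
        if issues:
            findings.append((cfg["bucket"], issues))
    return findings

for bucket, issues in find_misconfigured(json.loads(SAMPLE_CONFIGS)):
    print(f"{bucket}: {', '.join(issues)}")
```

In practice you would pull the real configuration via your CSP's inventory or config-audit service and run checks like this continuously, not once.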
Cloud Service Providers (CSPs) are heavily invested in securing their cloud platforms. However, the service consumer remains responsible for securing its environment and activities within the cloud. Widespread and ongoing failure on the part of consumers to establish and manage appropriate controls has led Gartner to predict that, “Through 2022, at least 95% of cloud security failures will be the customer’s fault.”
The challenge for businesses moving applications to the public cloud is to ensure you have controls in place that parallel your on-premise security posture. You still need to cover all the same bases around identity and access management, data encryption, monitoring, compliance, and so on.
But there can be significant differences between on-premise and cloud application architectures that impact security. Cloud-first application servers are inherently transient, for instance; they “exist” only when the application is running. It is even possible to configure “serverless” applications that have no backend server component at all. In scenarios like these, traditional server monitoring doesn’t apply.
Monitoring your systems and applications in the cloud largely requires you to use CSP-specific monitoring data around the use of cloud services. While these data feeds can help you detect unauthorized access or alert you to unexpected behavior or utilization, you need to be prepared to ingest and process them.
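As a simple illustration of processing such a feed, the sketch below scans audit-log-style events (a simplified, hypothetical schema loosely modeled on CloudTrail-type records) and surfaces users with repeated access-denied attempts, a common early signal of probing or misconfigured permissions.

```python
import json
from collections import Counter

# Hypothetical audit events (illustrative schema, not a real CSP log format).
SAMPLE_EVENTS = """
[
  {"user": "alice",   "action": "s3:GetObject",     "error": null},
  {"user": "mallory", "action": "iam:CreateUser",   "error": "AccessDenied"},
  {"user": "mallory", "action": "s3:ListBuckets",   "error": "AccessDenied"},
  {"user": "bob",     "action": "ec2:RunInstances", "error": null},
  {"user": "mallory", "action": "kms:Decrypt",      "error": "AccessDenied"}
]
"""

def denied_by_user(events, threshold=2):
    """Return users whose access-denied count meets or exceeds the threshold."""
    denials = Counter(e["user"] for e in events if e["error"] == "AccessDenied")
    return {user: n for user, n in denials.items() if n >= threshold}

print(denied_by_user(json.loads(SAMPLE_EVENTS)))
```

A real pipeline would stream events from the CSP's audit service into a SIEM and alert on rules like this rather than batch-scanning a file.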
This brings up a further cloud challenge: While the overall goal of moving to the public cloud is often to increase efficiency and agility, the actual result can be greater stress and complexity for your IT staff as they embrace this new operating model… while also handling everything they were doing before.
Another concern is that IT is likely to have less control over public cloud-based resources than in-house resources. CSPs make it very easy for business units to provision new services on-demand, very possibly without adequate oversight. This can lead to the proliferation of shadow IT as well as security problems. How can IT secure resources it doesn’t even know about?
Organizations frequently jump to the public cloud without sufficient due diligence, especially regarding which security controls the CSP handles, which the customer must provide, and how to provide them. This directly increases cybersecurity risk.
It’s certainly possible to operate as securely in the public cloud as you do behind your own firewall, or even more so. But doing so takes planning and know-how.
To discuss options or solve specific InfoSec problems around moving data and applications to the public cloud, contact Pivot Point Security.
For more information:
- Is Your Public Cloud Raining Sensitive Data?
- Tweaking Your TPRM Strategy to Improve Cloud Security
- 5 Top Information Security Accreditations for SaaS Providers