Month: July 2019

IDG Contributor Network: Is the cloud lulling us into security complacency?

Source: CSO Magazine

The recent Capital One breach has certainly made lots of headlines in less than a day since the story broke. And sadly, it has already thrust the $700M settlement reached over the largest-ever data breach – the Equifax one – onto the sidelines, just days after news of that settlement emerged.

But going back to Capital One, there are certainly lots of lessons to be learned. I want to focus on where Capital One’s data centers were and what that means for the rest of the planet from a security perspective. Capital One has been one of the most vocal AWS customers. They have appeared at numerous AWS events and touted how they have completely shuttered all their data centers and run exclusively on Amazon. And to be fair, they have also shared their best practices and use of AWS services.

And then this happens.

So, the question is: if one of the savviest AWS customers can suffer such a large and embarrassing data breach, shouldn’t every AWS (and non-AWS) customer be concerned – and be taking proactive steps to address what cloud security means and what it does not mean?

Put another way, is reliance on the cloud lulling us into security complacency?

1. From rack and stack to spin up on a whim

In the days when every mid-to-large enterprise ran one or more dedicated data centers, setting up a new server or rack involved wiring, power, and cooling as well as extensive network and security reconfiguration – and it could literally take weeks. That time allowed teams to ask the basic and not-so-basic security questions. Today, compute, storage, serverless…everything is on-demand, and cloud bursting and data hoarding are inexpensive and quick. Everything has been accelerated many times over. So unless security is in the process blueprint, or the cloud provider offers it as a default setting (e.g., AWS now encrypts its S3 buckets by default, after many an incident of unsecured data stores was reported), it can easily get lost in the noise.
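One way to keep security in the process blueprint when everything spins up on a whim is a continuous, automated audit of what has been provisioned. The Python sketch below flags data stores created without encryption; the inventory format and resource names are invented for illustration and are not any cloud provider’s real API response.

```python
# Sketch: flag cloud data stores that were spun up without server-side
# encryption. The inventory dicts below are a made-up format for
# illustration, not a real provider API response.

def find_unencrypted(resources):
    """Return names of object stores missing server-side encryption."""
    return [r["name"] for r in resources
            if r.get("type") == "object-store" and not r.get("encrypted", False)]

# Hypothetical inventory, e.g. produced by a cloud-discovery tool.
inventory = [
    {"name": "analytics-dump", "type": "object-store", "encrypted": False},
    {"name": "billing-archive", "type": "object-store", "encrypted": True},
    {"name": "web-frontend", "type": "vm"},
]

print(find_unencrypted(inventory))  # → ['analytics-dump']
```

Run on a schedule, a check like this surfaces the unsecured data stores that otherwise get lost in the noise.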

2. The shared responsibility model is descriptive enough – but it can easily fade into distant memory

When AWS came up with the Shared Responsibility Model, it got great kudos for explaining clearly what AWS is responsible for – “security of the cloud” – and what the customer is responsible for – “security in the cloud.” But at the pace at which AWS releases features, it is easy to get caught up with the catchy names – Greengrass, Lambda, Control Tower – and delve into them without remembering the “of the cloud” versus “in the cloud” responsibility distinction. And that oversight can prove very costly.

3. No two clouds are the same, and doing multi-cloud requires extra effort and care

While multi-cloud has a bevy of advantages – better pricing, redundancy, bleeding-edge feature rollout, etc. – it also puts a burden on the teams using it: the investment to keep up with the latest and greatest, and then to learn how to use it. From a security perspective, though, the challenge is far greater. Why? Because while the shared responsibility model should inherently apply to all clouds – AWS, Azure, Google, etc. – the implementation and the risk attribution can be very different.

For instance, could an infrastructure administrator gone rogue steal a virtual machine? Or if a data store is encrypted, does the end customer alone hold the master key, or does the cloud provider hold it as well? The answers can be – and usually are – very different from cloud to cloud. And so the shared responsibility model is also cloud-specific.

These are three challenges that make the cloud journey not such an obvious one when seen through the lens of security and privacy. So what does a business do? Slow down or stop cloud adoption? The answer is obvious: no, that ship has sailed.

Instead, ask yourself these questions periodically:

  1. Have I recently identified all the sanctioned and unsanctioned cloud workloads across all major public clouds in my enterprise (there are tools that do this kind of discovery)?
  2. Have I reminded myself and my team of the “shared responsibility model,” and asked, for every cloud workload, what “in the cloud” security means? The answer could be very different for a cloud-connected IoT sensor than for a serverless compute engine.
  3. And finally, have I developed experts within the organization, or engaged a trusted third party, to provide ongoing education on the multi-cloud differences – from a security and privacy angle – for the features we use? Costly and time-consuming? Yes. Critical? Absolutely.

Over the coming weeks and months, we will learn more about the Capital One breach. But to borrow a marketing tagline from them, ask yourself this question constantly: “What’s in your cloud?”

This article is published as part of the IDG Contributor Network.

IDG Contributor Network: New to autonomous security

Source: CSO Magazine

Autonomy is just another word for automating decisions, and we can make cybersecurity more autonomous. This has been proven by in-depth scientific work in top-tier research venues, by a 2016 public demonstration from DARPA (the Defense Advanced Research Projects Agency), and by new industry tools.

All of these have shown that we can replace manual human effort in cybersecurity with autonomous technology – or at least make humans more productive. However, it is important to note that the primary focus of research is to show something “can” be done, not what “should” be done.

What are the parts of a fully cyber-autonomous system? What can you add today to your toolbox to make your cybersecurity program more autonomous? Read on to learn the 4 key components of a cyber-autonomous system, what’s been shown, and what you can do today.

The challenge

In 2014 the Defense Advanced Research Projects Agency – DARPA – issued a challenge: could researchers demonstrate that fully autonomous cyber defense is possible? It dubbed this the “Cyber Grand Challenge” (CGC).

DARPA is no neophyte: it funded and led the development of the original internet, and its previous grand challenges, such as the autonomous-car challenge, have shaped the technology we find at Tesla, Uber, and ArgoAI. DARPA wanted to do the same for cybersecurity.

Sixty million dollars later, the results are in: yes, fully autonomous cyber defense is possible – at least in theory and in DARPA’s defined environment. Participants demonstrated autonomous application security by showing how systems can find vulnerabilities and self-heal from them. (In later posts we will talk about network security.)

DARPA’s purpose for this challenge was not to show an application or system is secure. They found this to be incorrect thinking; cybersecurity is not a binary state of being “secure” or “insecure”. Rather, they posited that security is about moving faster than your adversaries. We must autonomously find new vulnerabilities, fix them, and decide how to move faster than our ever-changing threat landscape.

Just as they had with the autonomous vehicle challenge, the DARPA CGC gave us a glimpse into the future. Each successful participant in the challenge utilized 4 key components in their solution, which suggest the criteria organizations should consider as they aim to add autonomous cyber technology within their toolbox.

The 4 components of an autonomous AppSec program

Autonomous security systems make decisions that were previously left to humans. Decisions such as “is this code vulnerable?” and “should I field this patch?” are questions every security and IT professional must answer on a near-daily basis.

The fully autonomous systems fielded at the DARPA CGC had four main components operating in an autonomous decision loop:

  1. Automatically find new vulnerabilities.
  2. Harden or rewrite applications automatically to prevent them from being exploited.
  3. Measure the business impact of fielding protection measures.
  4. Field any defense that meets the business-impact criteria and helps beat opponents.

The autonomous decision loop cycled continuously through each component.
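As a sketch, the four-component loop can be written as a Python skeleton. Every function body here is a stand-in of my own invention, used only to make the control flow concrete; a real system would plug in a vulnerability hunter, a patch generator, and an evaluation harness.

```python
# Skeleton of the four-component autonomous decision loop. All component
# bodies are placeholder stand-ins (assumptions), not real implementations.

def hunt(app):                            # 1. find unaddressed vulnerabilities
    return [v for v in app["known_bugs"] if not v["patched"]]

def protect(vuln):                        # 2. produce a candidate defense
    return {"fixes": vuln["id"], "overhead": vuln["fix_overhead"]}

def evaluate(patch, max_overhead=0.05):   # 3. measure business impact
    return patch["overhead"] <= max_overhead

def act(app, patch):                      # 4. field the defense
    for v in app["known_bugs"]:
        if v["id"] == patch["fixes"]:
            v["patched"] = True

# Toy application state with two hypothetical bugs and fix costs.
app = {"known_bugs": [
    {"id": "CVE-A", "patched": False, "fix_overhead": 0.02},
    {"id": "CVE-B", "patched": False, "fix_overhead": 0.30},
]}

for vuln in hunt(app):
    patch = protect(vuln)
    if evaluate(patch):
        act(app, patch)

# Only the low-overhead fix is fielded autonomously.
print([v["id"] for v in app["known_bugs"] if v["patched"]])  # → ['CVE-A']
```

The point of the skeleton is the loop shape: decisions a human would make (is this real? is the fix cheap enough?) are encoded as explicit policy checks.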


Hunt

The goal of the hunt component is to find new vulnerabilities before adversaries do. There are many technologies in AppSec today. Which ones are important?

First, a general principle: you want technologies that have low or no false positives. A false positive is when the hunt component flags a vulnerability in code that is actually safe. False positives can be deadly to autonomous AppSec programs: when systems have high false-positive rates, they can neither decide what is a real problem nor accurately justify a fix that costs time or performance.

I group today’s existing technologies into four categories:

  1. Static analysis. Static analysis – often called SAST, for static application security testing – is like a grammar checker, but for source code: it looks at the code and tries to flag all possible problems. Sounds great, right? The challenge is that static analysis has high false-positive rates. While a valuable technique, those high false-positive rates disqualify SAST as a viable option for an autonomous process and system.
  2. Software Component Analysis (SCA). SCA looks for copies of known vulnerable code – for example, checking whether you are running an outdated copy of a crypto library. SCA tools are typically accurate and thus good candidates in the enterprise. However, SCA did not play a large role in DARPA’s CGC. The reason: the CGC applications were all custom-written from the ground up and therefore did not use existing components.
  3. Automated known attack patterns. Tools like Metasploit automate the launch of known attacks. They are key tools in penetration testing and can be highly accurate. However, their shortcoming is that they only check for known vulnerabilities.
  4. Behavior analysis. Behavior analysis includes techniques such as dynamic analysis, fuzzing, and symbolic execution. Like SAST, these technologies find new vulnerabilities; unlike SAST, they do not try to flag every possible vulnerability in a single pass. Because behavior testing generates a test case to prove each vulnerability can be triggered, every vulnerability reported is actionable. Interestingly, behavior analysis was a key component of every autonomous system in the CGC.

Certainly, adversaries will try known attacks first. Thus, my recommendation is to use SCA if you don’t already, and to test known attack patterns as well.

However, if you want to find new vulnerabilities before adversaries do, you need to add behavior testing. In a nutshell, behavior testing attempts to guess new inputs that trigger new code paths – ones where a latent, undiscovered vulnerability may lie. The output of behavior testing is a test suite: inputs that trigger the vulnerabilities found, which double as a regression suite.
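A toy version of this guess-inputs-and-watch loop fits in a few lines of Python. The planted bug and the tiny input alphabet below are contrivances to keep the example fast; real fuzzers such as AFL or libFuzzer are coverage-guided and vastly more capable.

```python
# Toy illustration of behavior testing (fuzzing): generate random inputs
# and record every one that crashes the target. Each recorded crasher is
# itself an actionable, replayable test case.
import random

def target(data: bytes):
    # Deliberately planted bug: any input beginning with b"FUZ" crashes.
    if data[:3] == b"FUZ":
        raise RuntimeError("latent vulnerability triggered")
    return len(data)

def fuzz(iterations=20000, seed=0):
    rng = random.Random(seed)
    alphabet = b"FUZXY"           # tiny alphabet so the toy finds the bug quickly
    crashers = []
    for _ in range(iterations):
        data = bytes(rng.choice(alphabet)
                     for _ in range(rng.randrange(0, 6)))
        try:
            target(data)
        except RuntimeError:
            crashers.append(data)  # a concrete input proving the bug triggers
    return crashers

crashers = fuzz()
print(len(crashers), "crashing inputs found")
```

Note the zero-false-positive property: the fuzzer reports nothing unless it holds a concrete input that demonstrably triggers the bug.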

Behavior testing works. New products are entering the market, enabling organizations to automatically perform behavior analysis on their products. For example, Google’s OSS-Fuzz project has used fuzz testing to find 16,000 actionable bugs in Chrome — all autonomously and with zero false-positives. Microsoft has also used fuzzing for their Office Suite to weed out bugs previously missed by their static analysis tools.


Protect

Autonomous protection means changing an application to better defend against identified vulnerabilities or classes of vulnerabilities. There are two classes of techniques for autonomous AppSec protection:

  1. Hardening the runtime of an application so it is resistant to attackers – regardless of the vulnerabilities within it. Examples include industry RASP (runtime application self-protection) products.
  2. Patching vulnerabilities by intelligently and automatically rewriting the source code. Automatically patching compiled executable programs (think compiled C/C++) was a major innovation in the CGC. There are no industry products that do this today.

Although auto-hardening and protection technology is new, I believe it is worth evaluating. RASP technology promises to harden applications in a production network. Sadly, auto-patching is still very theoretical and not ready for prime time (yet).


Evaluate

The unsung heroes in security are the people who determine whether a patch can be fielded without hurting the business. A 2019 survey showed that 52% of DevOps participants found updating dependencies – like those flagged by SCA – “painful.” I believe this pain can be alleviated with automatic evaluations.

Evaluations should measure risk of updating dependencies by answering the following:

  1. Did the fix prevent the security bug from being exploited?
  2. What was the performance overhead? In the CGC, winners used a 5% cutoff – meaning that if a protection added more than 5% overhead, they did not autonomously field the patch.
  3. Was there any functionality lost with the defense?
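The three criteria above translate directly into a gating function. The sketch below is my own assumption about how such a check might be wired up, using the CGC-style 5% cutoff; in practice, the exploit result, timing numbers, and functional-test verdict would come from the test suite a behavior-testing tool generates.

```python
# Sketch of an automatic patch evaluation against the three criteria above.
# The measurement inputs are assumed to come from a behavior-testing suite;
# here they are plain numbers for illustration.

def should_field(exploit_blocked, baseline_ms, patched_ms,
                 functional_tests_passed, max_overhead=0.05):
    """Decide autonomously whether a candidate patch can be fielded."""
    if not exploit_blocked:                 # 1. fix must stop the exploit
        return False
    overhead = (patched_ms - baseline_ms) / baseline_ms
    if overhead > max_overhead:             # 2. overhead within the 5% budget
        return False
    return functional_tests_passed          # 3. no functionality lost

print(should_field(True, 100.0, 103.0, True))   # 3% overhead → True, field it
print(should_field(True, 100.0, 112.0, True))   # 12% overhead → False, reject
```

Encoding the criteria this way is what turns a human sign-off meeting into a step the decision loop can run on every candidate patch.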

Though there is currently no off-the-shelf solution for automatic patch evaluation, users can leverage the test suites generated by behavior-testing tools to evaluate the criteria above.


Act

The goal of autonomous security is to win, not to prove security. Proving security is an academic exercise; winning is about acting faster than your adversary. After all, real-life security is a multi-party game in which you and your adversary are both taking and responding to actions.

The most intimidating step is creating an infrastructure for automatic action. Naturally, people want a human in the loop; I believe humans, though, can be the slowest link. With sound practices for automatic evaluation, organizations can use scientific data to make high-quality decisions. As a starting point, I recommend rolling out standards that allow your team to autonomously field a protection measure. For example, a fix that has less than 5% overhead and does not impact functionality is reasonable to field autonomously in many enterprise environments.

The right mindset is critical when adopting autonomous AppSec.

  • Continuously hunt for new vulnerabilities while the application is fielded. Don’t wait until you prove the app is secure before fielding.
  • Use AppSec techniques that have zero false-positives. Don’t be afraid of solutions that may have false negatives. Newer techniques in behavior analysis, like fuzzing, are excellent fits.
  • Investigate tools for automatically hardening or protecting your applications, such as RASP.
  • Use data to make decisions, not subjective human judgement. You do not need new machine learning or AI tools. You just need to automate the policies and procedures that are right for your organization.
  • Cybersecurity is about moving through the find, protect, evaluate, act loop faster than your adversary. It is not a binary state of “we’re secure” or “we’re vulnerable.”


From LinkedIn scraping to Office 365 attacks: How attackers identify your organization's weakest links

Source: CSO Magazine

Attackers use a variety of techniques to infiltrate corporate networks, but one tried-and-true way is to find out who works for a company and then target phishing attacks at those employees.

Famed hacker Kevin Mitnick reportedly used a paperback who’s-who of Washington business owners to gather information on local businesses, but these days we all have access to a much better database that exposes far more information: LinkedIn. The social network is often the starting place for determining who makes a good target in an organization, as well as a source of usernames and email addresses.

From LinkedIn scraping to Office 365 attacks

As noted in the OSINT Framework, there are several tools attackers use to scrape information from LinkedIn. Scraping tools such as LinkedInt, ScrapeIn, and InSpy allow an attacker to enumerate email addresses for a given domain.

Once the attacker has the email addresses of targeted users, there are a number of techniques attackers can use to infiltrate a network. 

One tool, office365userenum, specifically targets Office 365: it works through a list of possible usernames and observes the response to each. Given that many usernames are derived from email addresses, a would-be attacker can first harvest addresses from social sources and then use them to test for valid user accounts. Each valid username found goes onto a list of users who can then be targeted with further attacks. Under the hood, the tool sends a request to the ActiveSync service, which responds with status codes the attacker can use to determine whether the username exists.
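To see why scraped names are so useful, consider how they convert into candidate logins. The Python sketch below generates guesses from common corporate naming conventions; the pattern list is illustrative only (real scraping tools ship many more), and the name and domain are made up. Knowing these patterns helps defenders audit their own exposure.

```python
# Illustration of the enumeration step: turning a scraped name into
# candidate email addresses/usernames using common corporate conventions.
# The pattern list is an assumption for illustration.

def candidate_usernames(first: str, last: str, domain: str):
    """Generate plausible corporate email addresses for one person."""
    f, l = first.lower(), last.lower()
    patterns = [f"{f}.{l}", f"{f}{l}", f"{f[0]}{l}", f"{f}_{l}", f"{f[0]}.{l}"]
    return [f"{p}@{domain}" for p in patterns]

print(candidate_usernames("Jane", "Doe", "example.com"))
# → ['jane.doe@example.com', 'janedoe@example.com', 'jdoe@example.com',
#    'jane_doe@example.com', 'j.doe@example.com']
```

Fed into a validity check such as the ActiveSync probing described above, a handful of patterns per employee quickly yields a working target list – which is exactly why public staff directories are such valuable reconnaissance.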

How MIT's Fiat Cryptography might make the web more secure

Source: CSO Magazine

One of the most common uses of public-key cryptography is securing data on the move. The process used to produce the code that scrambles that data as it travels over the internet has been labor intensive. That’s changed, however, with a new system developed by MIT researchers for creating that code.

Called Fiat Cryptography, the system automatically generates—and simultaneously verifies—optimized cryptographic algorithms for all hardware platforms, a process previously done by hand.

In a paper presented in May at the IEEE Symposium on Security and Privacy, the researchers laid out the nuts and bolts of their system so anyone can implement it. And the process is already being used by Google to secure communication by its Chrome web browser. “We’ve showed that people don’t have to write this low level cryptographic arithmetic code,” explains Adam Chlipala, the associate professor of computer science who led the research team at MIT’s Computer Science and Artificial Intelligence Laboratory that developed the Fiat Cryptography system.

“We can have one library that can produce all the different special kinds of code that previously had been handcrafted by experts,” he continues. “This can lower the cost of development and dramatically increase your assurance of correct and secure code.”

When testing their system, the researchers found its code could match the performance of the best handwritten code, only the system’s code could be generated much faster. “Automation is an important step forward,” says Rolf von Roessing, a partner and CEO at Forfa Consulting, a data security consultancy in Zug, Switzerland, and vice chair of the board of ISACA, a trade organization for information security professionals. “The results are much more reliable and less error-prone than before,” he adds.
