CSO Magazine

Articles from CSO Magazine

3 ways to improve PC security

Source: CSO Magazine

Companies continue to struggle with ensuring their PCs are protected from malware attacks, data breaches and miscellaneous “bad actor” attacks (e.g., ransomware, identity theft, data exfiltration). A new revelation of a security breach seems to come every day.

While there are literally hundreds of add-on security solutions in the marketplace, it’s difficult to know if the PC end-point devices themselves are being maximally protected. Should enterprises expect their PC vendors to secure them? The answer is, yes, and enterprise-class device makers are doing a lot, which is not always apparent to organizations purchasing devices.

Data breaches are very costly. In a recent survey we (J.Gold Associates, LLC) conducted of SMB companies in 16 countries, the average cost of a data breach was $103,000. While this is small by large-enterprise standards (where costs can easily run into the millions), it represents a major impact on companies with fewer than 100 employees. In many instances, companies hit with such a cost never recover and eventually go out of business.

All organizations, whether small or large, must focus on preventing security threats. And if you think you’re not affected, our research determined that 70 percent of companies indicated they’ve had a security breach. The research also showed a major correlation between a PC’s age and how likely it is to have a breach. A 2-3-year-old machine had twice as many data breaches or malware attacks as a 1-2-year-old machine, and a 5-year-old PC had six times as many security incidents as a 1-2-year-old machine.

Clearly, older is not better. But while age is a major determining factor, since older machines lack many of the security improvements newer machines incorporate, it’s not enough to simply acquire new machines without also understanding what they do to protect you. There are three areas where organizations must focus if they are to protect their PCs, and ultimately the entire organization, from security breaches:

  1. Hardware
  2. Software (including the OS and apps)
  3. Services

Below is a brief overview of some things to look for.

Security step 1: Hardware — keep away from consumer-grade products

In the hardware space, companies should look not only at what’s available from the CPU vendors powering the machine to enhance security (e.g., Intel’s SGX technology for vaulting critical system-level code, or vPro for enterprises with enhanced capabilities), but also at specific additions from the PC vendor (e.g., HP’s multi-component Sure suite of security products, including SureStart for BIOS protection and Sure Recovery for OS-level protection, or Dell’s SureBoot, which protects against BIOS attacks and manipulation).

Further, making sure the PC includes a hardware “vaulting” system that safeguards against tampering with critical identity and boot components beyond the BIOS (e.g., Dell SureID, HP SureStart) is required. However, most of the security protections listed above are reserved for enterprise-class devices (e.g., Dell Latitude, HP EliteBooks), so organizations purchasing consumer-level products will not be able to avail themselves of these increased protections.

My advice is to always deploy enterprise-class PCs. They will cost a little more, but will offer much better security, not to mention their likely higher reliability and longer life.

Security step 2: Software — turn to machine learning and third-party help

At the software/application level, several components need to be evaluated and employed. One of the main infiltration points for security incidents is browser attacks. To this end, HP implemented its SureClick technology, which essentially creates a virtual machine for each instance of a browser so that no malicious code can be transferred to the core machine systems. But equally critical is the need to monitor the operations of the PC itself, especially when dealing with files and/or email attachments.

A machine learning (ML) approach, such as Dell’s Endpoint Security Suite (through a partnership with Cylance) or HP’s Sure Sense, monitors operations and looks for anomalous behavior. The system learns continuously, not only from the machine it’s installed on but also by analyzing data from millions of other machines in the cloud, and once it detects a problem it can shut down any malicious activity.

There are, of course, many additional components from third parties that can be applied to prevent attacks. Traditional antivirus suites (e.g., Symantec, McAfee) can improve overall security protection and fill a void that ML systems may not cover (e.g., older-style signature-based systems are better suited to detecting and eliminating older-style viruses that are still prevalent today). PC vendor systems, while good, are only one component of the overall corporate security strategy and architecture necessary to fully protect the organization.

Security step 3: Services — ideal for businesses with limited security expertise

Finally, the PC vendors also offer continuous security monitoring and prevention services (e.g., Dell Data Guardian services and a new relationship with CrowdStrike, and HP Desktop as a Service). These services are most useful for companies that do not have an extensive security organization, and/or as a component of an increasingly attractive PC lifecycle leasing and management plan.

Security services are growing in popularity, especially as threats expand and many companies no longer want to devote the substantial resources necessary to manage endpoints. Your organization should evaluate these cost-effective services, especially if you have limited security expertise in house.

Bottom line: Enterprises large and small must focus on protecting their most important asset – their data. Breaches are costly and can cause major disruptions in employee productivity and customer loyalty. It’s a smart investment to acquire PCs that offer the maximum protections from enterprise-class vendors.

The additional cost will be more than outweighed by the elimination of potential data breaches and malware attacks, offering a significant return on investment. And those companies not confident in their own security resources would do well to employ the professional monitoring and management services offered by the vendors.

IDG Contributor Network: Lessons learned through 15 years of SDL at work

Source: CSO Magazine

Do a quick search on secure development and you’ll find pages and pages of advice and best practices. You could relatively quickly create a long checklist of best practices and how-tos covering everything from how to create a threat model to the dos and don’ts of avoiding cross-site scripting mistakes. Newer articles and papers might focus in on applying secure development to mobile applications or making it work in a DevOps environment as the way we have built code has changed over the years.

And yet, despite all we know about creating secure code, many organizations still struggle with making secure development work in practice.

In fact, it’s been a little more than 15 years since Microsoft mandated its Security Development Lifecycle (SDL) in summer 2004. This was two years after the now-famous Trustworthy Computing memo went out and Microsoft began its unprecedented investment of resources into secure software development. But 2004 is a milestone that retains special significance for me, one that holds a lesson often lost in the pages of how-tos and best practices, a lesson that has been the key to the SDL’s longevity in a rapidly evolving technology climate: it’s all about the developers. We discovered quickly that focusing on the practices without considering how software development is actually done, or what developers need to be successful, was an effort destined to fail.

Rather, we had to start with an understanding of the development process and then define how security would work within that framework. SDL was created to support the developer in the creation of more secure software resulting in more secure products and more satisfied end-users.

We found that this approach worked. After 15 years, and despite the massive changes in both how technology is used and how software is created, SDL still provides the foundation for the software security programs at many of the world’s largest and most influential software organizations and there are no signs of its use or adoption slowing down.  Of course, there have been modifications and changes to the original parameters set years ago at Microsoft.  But this is the beauty of SDL: it was designed to evolve.

Nothing remains the same – 15 years of evolution and change

The three most significant changes to SDL over the last decade and half can be summarized as follows: 

  • Diversity
  • Speed
  • Supply chain

Originally, the SDL as we created it at Microsoft targeted development in C, C++, and C#. Today, organizations may use 10-15 languages, resulting in more work and complexity for SDL teams (but not for developers – they just need to write secure code in whatever language they’re using). So, the diversity of languages has grown over time.

Speed is another area of significant change for the SDL. The SDL adapted well to Agile and DevOps, but not without some initial pushback from developers. It was this feedback that helped improve the functionality of the SDL by focusing on integrating secure development tools, automating verification wherever possible, and determining that some requirements could be met post-release. Getting a secure product out the door in a timely way remains one of the main benefits the SDL brings to the development process.

Supply chain is the third area experiencing significant change. It is probably where the most change has happened, as the use of third-party code (especially open-source software) is the rule today. Third-party code creates another layer in the SDL process. When using it, someone in the organization had better be responsible for it: have an inventory, have a standard for acceptance and use, and have a system for detecting security problems in code created outside the organization. And most importantly, be prepared to respond when needed.
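As a concrete illustration of the inventory-and-acceptance idea above, a minimal check might compare the third-party components a build pulls in against an in-house approved list. This is only a sketch under stated assumptions: the `APPROVED` table, component names and versions are hypothetical, and a real program would draw its inventory from a software bill of materials and cross-check a vulnerability feed as well.

```python
# Hypothetical in-house "standard for acceptance": approved components
# and the versions that have passed review. Real data would come from an
# SBOM and a vulnerability feed, not a hard-coded table.
APPROVED = {
    "openssl": {"3.0.13", "3.0.14"},
    "zlib": {"1.3.1"},
}

def audit_inventory(inventory):
    """Flag components that are missing from the approved list or pinned
    to an unapproved version."""
    findings = []
    for name, version in inventory.items():
        if name not in APPROVED:
            findings.append((name, version, "not on approved list"))
        elif version not in APPROVED[name]:
            findings.append((name, version, "unapproved version"))
    return findings
```

Run against each build’s dependency list, the findings would feed the detection-and-response process the author describes.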

The security is done when the product is done: Lessons learned from 15 years of SDL

Below are five lessons learned from implementing and managing SDL environments since its inception:

1. Developers want to do the right thing

They have pride in their company, products and work and know that security matters. Implemented correctly, and with their needs as its focus, SDL helps them achieve their goals.

2. The security must be built into the software development process, not merely tested after the product is complete

The SDL culture of “the security is done when the product is done” is the only way to ship secure code and helps avoid conflicts between business goals and security.

3. SDL is a product group activity advised by the SDL team

Product teams must buy into the credibility of the SDL team and importance of security for the benefits of SDL to take root and flourish in an organization. Culture matters.

4. Continuous improvement is central to SDL’s success

New classes of vulnerabilities are nature’s way of telling you to update your tools and/or processes.

5. Training is good, but tools are better because people forget – code doesn’t

The SDL was created to work in tandem with security tools. Focusing on the developers doesn’t mean keeping things manual – rather organizations should constantly consider how technology and automation can support their developers in achieving their security goals.

For those who have been working on software security for a while now – none of this is earth shattering. We know that at the end of the day, it is all about empowering the developers. But perhaps it is because many of us are engineers or technologists at heart, we sometimes lose focus and get caught up in the practices and tools and start to treat SDL as a checklist rather than a process. So every once in a while, it is important to take a step back and remember where we came from and what we’ve learned along the way, and make sure we always remember that it is all about the developers.

Check out SAFECode’s recent Security Champions paper to learn more about the “people” side of software security.


Senator Warner seeks “grand alliance” to protect against surveillance threat from China’s tech dominance

Source: CSO Magazine

When it comes to technology policy, Senator Mark Warner (D-VA), Vice Chairman of the Senate Intelligence Committee, is clearly concerned about the power China holds, particularly when it comes to trusting China’s leading tech suppliers and the prospect of a China-dominated build-out of global 5G networks. “My beef is with the presidency, the Communist party. It is not with the Chinese people. I have no interest in trying to go back to some cold war bifurcated world, us against China,” the former telecom entrepreneur said during a panel discussion at the Cybersecurity and Infrastructure Security Agency’s (CISA) second annual Cybersecurity Summit this week.

“I would argue that the Chinese people don’t want this regime as well. Look at what is happening in the streets of Hong Kong,” he said. “The kind of surveillance state that China is using in terms of their tech companies would make George Orwell’s 1984 look simple.”

The cybersecurity and surveillance threat that China poses dominates Warner’s thinking about the country along with the kinds of industrial policies the government has adopted to capture telecom and technology markets around the globe. Controversial tech giant Huawei in particular concerns Warner given the fear that Huawei builds backdoors into its gear to spy on rivals.

“You know when people say, well, show me the [spyware] backdoor in the Huawei equipment today, that’s not the issue. There are challenges with the Huawei equipment today,” Warner said on the same day the Chinese company argued in the US District Court in the Eastern District of Texas that a law barring it from doing business with US government agencies is unconstitutional.

5G networks a channel for downstream malware

“But the notion of a 5G network, which is a less central switch and [has] more software at the edge means that as you put in a 5G network… you’re going to have so many daily updates, software updates. Even if the equipment is safe today, you cannot prevent the fact that the Chinese government and communist party can say to Huawei six months from now, ‘make sure you send downstream malware.’”

Aside from the cybersecurity threat, Warner worries about the geopolitical balance of power in the technology arena, where the US once held, if not complete control, then strong political sway over how global networks were constructed. “We have never had a threat of this type,” he argued. The Soviet Union, even at the height of the cold war, never had a technology advantage. “And if they did have technology they didn’t try to market to the rest of the world.”

In the cyber domain, 5G, quantum technology and other cutting-edge technology arenas, “China’s goal is to be the technology leader, to set the rules, protocols and standards. And they have taken this kind of technology development and they are trying to export it,” said Warner. “Most of the technological innovation that’s been created in the last six to seven decades was either made in America or in the rest of the West writ large. Even if it was created outside of America, the United States, by being the largest nation, usually set the standards and protocols, the rules of engagement and the rest of the world kind of went along with that basic concept.”

Concern that Trump will “give up” on Huawei

Later, in Q and A with reporters, Warner had some surprising kudos for how Donald Trump has treated Huawei and China in general. “I give Trump credit for taking on [China] in a different way,” he said during the panel discussion.  He’s worried, however, that Trump’s political objectives might override the economic and counterintelligence realities regarding Huawei if and when he ever resolves his trade war with China.

Warner fears that Trump might “give up on Huawei because he wants to sell $10 billion worth of soybeans and that will completely undermine the last year and a half of what the American intelligence community, the defense community and diplomatic community has done in trying to tell our allies” about the surveillance threat they believe Huawei poses.

CISA helping US telcos move away from Huawei

In the meantime, two recent actions by the administration have left cash-strapped small and rural phone companies scrambling to “rip and replace” Huawei equipment. A White House executive order has banned Chinese telecom companies, including Huawei, from selling equipment to US companies. The Department of Commerce has added Huawei to an “entity list” which restricts how US companies can engage in commerce with certain foreign entities.

CISA, for its part, has been active in trying to help the small telcos transition away from Huawei gear in their networks. “The rural carriers perhaps were a bit caught off guard and in some cases are pretty lousy with some Chinese componentry,” CISA’s Director Christopher Krebs said during the panel discussion. One chief objective is to increase the awareness of the small phone companies so they understand the risk of that componentry, to increase the baseline awareness that “if you’re in any of the targeted strategic sectors that China’s interested in, then you are a target,” Krebs said.

Warner has spearheaded legislation, the United States 5G Leadership Act of 2019, that would give the rural telcos up to $700 million to rip and replace the Huawei technology in their networks.  Later during Q and A with reporters, Warner indicated that the $700 million might not be enough to achieve the goal of getting small carriers to replace their Chinese gear. “Is that enough? You know, I don’t think necessarily.”

Warner optimistic on privacy legislation

China is not the only technology policy priority weighing on Warner. Social media giant Facebook’s ability to protect users’ data privacy is also one of his top concerns. Talking about a dinner he organized the night before with Facebook’s CEO Mark Zuckerberg at Facebook’s request, Warner said “I think we need to put some guardrails around social media, things like interoperability and data portability, things like trying to recognize that we ought to know what data is being collected on us, what its value is, questions around identity authentication.”

Although Warner said he didn’t think it was constructive to go into specifics about what Zuckerberg discussed during the dinner, the Facebook CEO did say “the right things, that they would welcome an appropriate regulation” that was more than just “lip service.” During Q and A with reporters afterwards he added, “I came away from the dinner more optimistic that we can get to legislative solutions. There was a lot more openness than I’ve heard before.”

What is OAuth? How the open authorization framework works

Source: CSO Magazine

Since the beginning of distributed personal computer networks, one of the toughest computer security nuts to crack has been providing a seamless, single sign-on (SSO) experience across multiple computers, each of which requires an unrelated logon account to access its services and content. Although still not fully realized across the entire internet, myriad completely unrelated websites can now be accessed using a single sign-on. You can use your password, phone, digital certificate, biometric identity, two-factor authentication (2FA) or multi-factor authentication (MFA) SSO solution to log onto one place and not have to enter another access credential all day to access a bunch of others. We have OAuth to thank for much of it.

OAuth definition

OAuth is an open-standard authorization protocol or framework that describes how unrelated servers and services can safely allow authenticated access to their assets without actually sharing the initial, related, single logon credential. In authentication parlance, this is known as secure, third-party, user-agent, delegated authorization.

OAuth history

Created and strongly supported from the start by Twitter, Google and other companies, OAuth was released as an open standard in 2010 as RFC 5849 and quickly became widely adopted. Over the next two years it underwent substantial revision, and version 2.0 of OAuth was released in 2012 as RFC 6749. Even though version 2.0 was widely criticized for multiple reasons covered below, it gained even more popularity. Today, you can add Amazon, Facebook, Instagram, LinkedIn, Microsoft, Netflix, PayPal and a list of other internet who’s whos as adopters.

OAuth examples

The simplest example of OAuth is when you go to log onto a website and it offers one or more opportunities to log on using another website’s/service’s logon. You then click on the button linked to the other website, the other website authenticates you, and the website you were originally connecting to logs you on itself afterward using permission gained from the second website.

Another common OAuth scenario could be a user sending cloud-stored files to another user via email, when the cloud storage and email systems are otherwise unrelated other than supporting the OAuth framework (e.g., Google Gmail and Microsoft OneDrive). When the end-user attaches the files to their email and browses to select the files to attach, OAuth could be used behind the scenes to allow the email system to seamlessly authenticate and browse to the protected files without requiring a second logon to the file storage system. Another example, one given in the OAuth 2.0 RFC, is an end-user using a third-party printing service to print picture files stored on an unrelated web server.

In all cases, two or more services are being used for one transaction by the end-user, and every end-user would appreciate not being asked to log in a second time for what they feel is a single transaction. For OAuth to work, the end-user’s client software (e.g., a browser), the services involved and authentication provider must support the right version of OAuth (1.0 versus 2.0).

OAuth explained

When trying to understand OAuth, it can be helpful to remember that OAuth scenarios almost always represent two unrelated sites or services trying to accomplish something on behalf of users or their software. All three have to work together involving multiple approvals for the completed transaction to get authorized.

It is also helpful to remember that OAuth is about authorization in particular and not directly about authentication. Authentication is the process of a user/subject proving its ownership of a presented identity, by providing a password or some other uniquely owned or presented factor. Authorization is the process of letting a subject access resources after a successful authentication, oftentimes somewhere else. Many people think that OAuth stands for open authentication, but it’s more helpful to understand it by thinking about it as open AUTHorization.

An early implementer describes OAuth as similar to a car’s valet key, which can be used to allow a valet to temporarily drive and park a car, but it doesn’t allow the holder full, unlimited access like a regular key. Instead the car can only be driven a few miles, can’t access the trunk or locked glove box, and can have many other limitations. OAuth essentially allows the user, via an authentication provider that they have previously successfully authenticated with, to give another website/service a limited access authentication token for authorization to additional resources.

Additionally, OAuth 2.0 is a framework, not a protocol (like version 1.0). It would be like all the car manufacturers agreeing on how valets would automatically request, receive and use valet keys, and how those valet keys would generally look. What the valet keys could do as compared to the full function keys would be up to each car manufacturer. Just like in real life, valets and car owners don’t need to care about how it all works. They just want it all to work seamlessly as possible when they hand off the key.

How OAuth works

Let’s assume a user has already signed into one website or service (OAuth only works using HTTPS). The user then initiates a feature/transaction that needs to access another unrelated site or service. The following happens (greatly simplified):

  1. The first website connects to the second website on behalf of the user, using OAuth, providing the user’s verified identity.
  2. The second site generates a one-time token and a one-time secret unique to the transaction and parties involved.
  3. The first site gives this token and secret to the initiating user’s client software.
  4. The client’s software presents the request token and secret to their authorization provider (which may or may not be the second site).
  5. If not already authenticated to the authorization provider, the client may be asked to authenticate. After authentication, the client is asked to approve the authorization transaction to the second website.
  6. The user approves (or their software silently approves) a particular transaction type at the first website.
  7. The user is given an approved access token (notice it’s no longer a request token).
  8. The user gives the approved access token to the first website.
  9. The first website gives the access token to the second website as proof of authentication on behalf of the user.
  10. The second website lets the first website access their site on behalf of the user.
  11. The user sees a successfully completed transaction occurring.
OAuth is not the first authentication/authorization system to work this way on behalf of the end-user. In fact, many authentication systems, notably Kerberos, work similarly. What is special about OAuth is its ability to work across the web and its wide adoption. It succeeded with adoption rates where previous attempts failed (for various reasons).

Although not as simple as it could be, web coders seem to readily understand the involved transactions. Making a website OAuth-compatible can be done in a few hours to a day (much faster if you’ve done it before). For a little bit of extra effort, authenticated website access can be extended to literally hundreds of millions of additional users. There’s no need for a website to build its own authentication system that can scale to gigantic proportions.
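The simplified flow above can be sketched in code. The fragment below only builds the two key requests of an OAuth 2.0 authorization-code exchange: the redirect to the authorization provider and the later token request. The endpoint URL, client credentials and scope name are hypothetical placeholders, and a real client would send the token request over HTTPS and validate the returned `state` value.

```python
from urllib.parse import urlencode

# Hypothetical authorization endpoint; real values come from the provider.
AUTHORIZE_URL = "https://auth.example.com/authorize"

def build_authorization_url(client_id, redirect_uri, scope, state):
    """Early steps (simplified): redirect the user's browser to the
    authorization provider with a request for limited access."""
    params = {
        "response_type": "code",   # OAuth 2.0 authorization-code grant
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,            # anti-CSRF value, checked on return
    }
    return AUTHORIZE_URL + "?" + urlencode(params)

def build_token_request(code, client_id, client_secret, redirect_uri):
    """Later steps (simplified): exchange the one-time approval code for
    an access token (POSTed to the provider's token endpoint over TLS)."""
    return {
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        "client_id": client_id,
        "client_secret": client_secret,
    }
```

The one-time code is useless on its own; only the registered client, presenting its secret to the token endpoint, can turn it into an access token, which is what keeps the limited “valet key” model intact.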

OAuth vs. OpenID

There are a couple of other security technologies that you might hear about in the same context as OAuth, and one of them is OpenID. At a base level, the distinction between the two is simple to grasp. Remember when we said up above that the auth in OAuth stood for authorization, not authentication? Well, OpenID is about authentication: as a commenter on Stack Overflow pithily put it: “OpenID is for humans logging into machines, OAuth is for machines logging into machines on behalf of humans.”

OpenID began life in 2005 as a means for logging into the then-popular LiveJournal blogging site but quickly spread to other sites. The idea, in the early days of Web 2.0, was that rather than having multiple logins for multiple websites, OpenID would serve as a single sign-in, vouching for the identities of users. But in practice OpenID was difficult to implement on the developer side and never became that appealing to users, especially as there was competition in that space. By 2011, OpenID had become an also-ran, and Wired declared that “The main reason no one uses OpenID is because Facebook Connect does the same thing and does it better. Everyone knows what Facebook is and it’s much easier to understand that Facebook is handling your identity than some vague, unrecognized thing called OpenID.” (Facebook Connect turned out to not be a world-beater either, but at least people knew what Facebook was.)

That’s not quite the end of the story, though. In 2014, OpenID Connect was released, which reinvented OpenID as an authentication layer for OAuth. In this space, OpenID has found a niche, and the two technologies now complement each other in many implementations.

OAuth vs. SAML

The Security Assertion Markup Language, or SAML, is another technology you’ll hear talked about in the same breath as OAuth. Strictly speaking, the name SAML refers to an XML variant language, but the term can also cover various protocol messages and profiles that make up part of the open SAML standard. SAML describes a framework that allows one computer to perform both authentication and authorization on behalf of one or more other computers, unlike OAuth, which requires an additional layer like OpenID Connect to perform authentication. SAML can provide single sign-on functionality on its own.

SAML is older than OAuth, and indeed one of the driving factors behind OAuth’s creation was that XML protocols like SAML began falling out of vogue; OAuth uses the lighter-weight JSON for encoding data, and thus has better support for mobile devices. In practice, SAML is more often used for enterprise applications — Salesforce uses it for single sign-on, for instance — whereas OAuth is more often in use on the open internet.


OAuth 1.0 vs. OAuth 2.0

There are no perfect universal internet-wide authentication standards. OAuth is particularly maligned because of the drastic changes between versions 1.0 and 2.0. In many ways, OAuth2 is less secure, more complex and less prescriptive than version 1.0. Version 2.0 creators focused on making OAuth more interoperable and flexible between sites and devices. They also introduced the concept of token expiration, which did not exist in version 1.0. Regardless of the intent, many of the original founders and supporters threw up their hands and did not support version 2.0.

The changes are so significant that version 2.0 is not compatible with version 1.0, and even different implementations of version 2.0 may not work seamlessly with each other. However, nothing prevents a website from supporting both 1.0 and 2.0, although the 2.0 creators released it with the intent of all websites completely replacing version 1.0.

One of the biggest criticisms of OAuth 2.0 is that the standard intentionally does not directly define or support encryption, signature, client verification or channel binding (tying a particular session or transaction to a particular client and server). Instead, OAuth expects implementers to use an outside protection protocol like Transport Layer Security (TLS), to provide those features.

Is OAuth safe?

TLS can provide all those protections, but it’s up to the implementers, on all sides, to require it. Coders and users should ensure that OAuth is running inside of TLS protection. Developers can enforce TLS use in code, and users should verify that TLS is in place whenever they are being asked to input authentication credentials, just as they should anytime they enter credentials.
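As an illustration of the developer side, a minimal check can refuse to start an OAuth flow against any endpoint that is not served over HTTPS. This is a sketch; `require_tls` is a hypothetical helper name, not part of any OAuth library:

```python
from urllib.parse import urlparse

def require_tls(url: str) -> str:
    """Reject any OAuth endpoint or redirect URI not served over TLS."""
    if urlparse(url).scheme.lower() != "https":
        raise ValueError(f"OAuth endpoint must use https, got: {url}")
    return url

# Validate endpoints before starting an authorization flow.
require_tls("https://auth.example.com/oauth/authorize")   # accepted
try:
    require_tls("http://auth.example.com/oauth/token")    # plaintext: rejected
except ValueError as err:
    print("refused:", err)
```

Mature OAuth libraries typically enforce this already; the point is that nothing in the OAuth 2.0 standard itself does, so the check has to live somewhere in the implementation.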

Because of the lack of inherent security binding, it’s possible for a rogue website to phish a user’s legitimate credentials during the step where the user authenticates to the authorization provider. For example, a user of a first service chooses a feature that triggers an OAuth transaction with a second service. The first website can fake the second website, where user authentication often takes place, collect the user’s authentication credentials, and behave as if the OAuth transaction had completed successfully.

This is not just a theoretical threat. In the second quarter of 2017, a million Google accounts were successfully phished. The defense is for users to ensure they are entering their credentials in the legitimate second website’s domain if prompted for credentials, and to avoid dubious first websites. There is no perfectly safe, universally accepted SSO that works on all websites, but with OAuth, we’re getting closer.


BrandPost: The CISO must be included in any SD-WAN discussion

Source: CSO Magazine On:

Read On

Extending advanced services to the WAN Edge of the network can have a serious impact on a security architecture and strategy. News cycles are filled with stories about critical network breaches that began by taking advantage of some neglected element of the network, whether by exploiting a vulnerable IoT device or by hijacking some wireless access point at a remote retail location.

Those stories are almost always the result of an organization failing to have a single, consistent security strategy that can shine a light into every corner of the network. Which is exactly why organizations cannot wait until the analysis and selection of an SD-WAN solution has been completed before asking the security team how they should go about adding protections to this new solution.

When CISOs are engaged in the selection of a Secure SD-WAN solution, they not only enable their organization to build a robust WAN edge, but they can also ensure that those connections don’t become the weak link in the security chain.

Security needs to be part of the SD-WAN strategy

What’s needed is a Secure SD-WAN solution that deeply integrates network connectivity functions with advanced security so that they function as a single, integrated system. The CISO and security team are uniquely qualified to not only provide critical analysis of the security capabilities inherent in any solutions under consideration, but also weigh in on the compatibility with security deployed across the rest of the network. When done properly, an SD-WAN solution should enable security teams to extend existing security strategies to the WAN Edge through the SD-WAN solution, rather than trying to wedge a new security solution into an existing security framework.

For example, a Secure SD-WAN solution, especially one that includes direct internet access, needs to ensure that all connections are automatically secured. This requires a next-generation firewall (NGFW), not as a separate appliance, but as a fully integrated component, so that networking and security functionality work seamlessly together.

Likewise, web applications not only need to be identified and given appropriate connectivity status, such as QoS or weighted queueing, but things like cloud access security brokers (CASB) need to be included to provide in-cloud application assessments and to ensure authorized access to SaaS connections. This helps maintain the integrity of web applications and related data while also preventing the introduction of shadow IT.

And rather than requiring the security team to bolt on security after the fact, a true Secure SD-WAN solution should include a full range of security tools right out of the box that can ensure ultimate WAN edge security. This should start with an NGFW-based appliance that includes full SD-WAN functionality along with all necessary security functions – including IPS, anti-virus/anti-malware, and web filtering, as well as seamless integration with cloud-based services such as web application firewalls, sandboxing, and CASB – as part of a single, fully integrated solution.

In addition, and perhaps most importantly, all of these elements – both the advanced networking functionality and the defense-in-depth security – need to be manageable through a single management portal. This gives administrators a single-system view of the entire WAN for monitoring and troubleshooting, combined with granular controls that automatically tie WAN connectivity to security functions.

Secure SD-WAN needs to be part of the end-to-end Security Fabric strategy

One of the biggest challenges that security teams face in today’s rapidly expanding IT infrastructure is keeping track of all of the new edges being created by IT teams. It can become impossible to keep pace with digital transformation demands if security teams are constantly forced to try and apply security solutions after the fact. IoT, mobile users, IT/OT integration, hybrid multi-cloud, and the WAN edge are all being introduced in some way or another across most organizations.

When new network elements are created in an ad hoc manner, such as adding SD-WAN, and the central security team is not included in the architectural discussions from day zero, organizations end up with a hodgepodge of often mismatched security solutions that came with the chosen solution by default. As a result, this new service or solution may not be able to share and correlate essential threat intelligence, enable identical policy enforcement, or even provide consistent functionality with the rest of the security infrastructure. Far too often, by the time the security team is engaged, IT has already introduced critical security gaps into the network that can be expensive and time-consuming to overcome.

This challenge is precisely what a fabric-based architectural strategy was designed to address. With a master strategy in place, each security component is selected based on its ability to provide consistent functionality and enforcement, regardless of form factor (hardware, VM, or cloud), wherever it is deployed. They also need to run on the broadest array of public and private cloud environments possible to give the organization maximum flexibility for building and deploying whatever combination of networked environments is needed. This also ensures that interoperability is fast and easy to establish regardless of how and where organizations decide to expand their networks.

To help with this process, fabric connectors need to ensure that policies and protocols are translated seamlessly and accurately as they move between platforms. This allows each element to interoperate seamlessly to ensure critical threat collection and correlation. A threat detected in one place should be automatically shared across the entire distributed network to trigger a coordinated response.

Just as importantly, these solutions must be designed to function natively in whatever place they are deployed to maximize the use of local APIs and controls. And each of these components needs to also have been optimized to provide maximum performance so that security never interferes with business functions. This can only happen effectively if the CISO and security team are part of the discussion from the onset.

Make Sure You are Part of the SD-WAN Selection Process

From a security standpoint, extending digital transformation efforts to the WAN should be no different than adding new capacity or resources to any other part of the network. SD-WAN connections need to be a natural and seamless extension of the larger security strategy, and with as little overhead and cost as possible. And to make that happen, the CISO needs to be part of the broader IT planning and strategy process.

To achieve this, IT teams may need to be educated – and reeducated – on the need to strictly follow the corporate security fabric strategy. This includes adding the CISO to early strategy meetings where new networking ecosystems are being considered, and engaging with the security team from the very first planning sessions. When done properly, the organization will not only save money and manpower upfront, but perhaps save itself from serious damage later due to flaws inherent in an after-the-fact security implementation.

Fortinet’s Secure SD-WAN solution includes best-of-breed next-generation firewall (NGFW) security, SD-WAN, advanced routing, and WAN optimization capabilities, delivering a security-driven networking WAN edge transformation in a unified offering.


10 signs you're being socially engineered


Together, phishing and social engineering are by far the number one root-cause attack vector, and they have been around nearly since computers themselves were invented.

In the early 1980s, before the internet was the internet, I came across a text file named “HowtoGetAFreeHSTModem.” Back in the day, screaming-fast U.S. Robotics HST 9600-baud (!!) modems were highly coveted. I quickly opened the text file. It read, “Steal One!!”. “What a jerk,” I thought. Then I hit the escape key to close the text file.

The plaintext file contained invisible ANSI control codes that remapped my keyboard so that the next key I hit formatted my hard drive. Since then I’ve learned two things: One, if hackers can use text files to attack you, any digital content can be used. Two, anyone can be tricked by appropriately placed and messaged social engineering.

With that said, here are 10 signs of social engineering:

1. Asking for logon information

Secrets of latest Smominru botnet variant revealed in new attack


The latest iteration of Smominru, a cryptomining botnet with worming capabilities, has compromised over 4,900 enterprise networks worldwide in August. The majority of the affected machines were small servers and were running Windows Server 2008 or Windows 7.

Smominru is a botnet that dates back to 2017 and its variants have also been known under other names, including Hexmen and Mykings. It is known for the large number of payloads that it delivers, including credential theft scripts, backdoors, Trojans and a cryptocurrency miner.

The latest variant of Smominru, which was documented by researchers from Carbon Black in August, uses several methods of propagation, including the EternalBlue exploit that has been used in the past by ransomware worms like NotPetya and WannaCry and which has been known and patched since 2017. The botnet also uses brute-force and credential stuffing attacks on various protocols including MS-SQL, RDP and Telnet to gain access to new machines.

Recently, researchers from security firm Guardicore gained access to one of Smominru’s core command-and-control servers that stored victim details and credentials. This allowed them to gather information about the compromised machines and networks and assess the botnet’s impact.

The data revealed that Smominru infected around 90,000 machines from more than 4,900 networks worldwide, at an infection rate of 4,700 machines per day. Many of the networks had dozens of compromised machines.

The countries with the largest number of infected computers were China, Taiwan, Russia, Brazil and the US. The Smominru attacks do not target specific organizations or industries, but US victims included higher-education institutions, medical firms and even cybersecurity companies, according to Guardicore.

Over half of the infected machines (55%) were running Windows Server 2008 and around a third were running Windows 7 (30%). This is interesting because these versions of Windows are still supported by Microsoft and receive security updates.

With the EternalBlue exploit, the expectation would be that machines running older and end-of-life versions of Windows would be more affected. However, it’s unclear how many systems were compromised through EternalBlue and how many were infected because of weak credentials.

Attack aided by unpatched systems

“Unpatched systems allow the campaign to infect countless machines worldwide and propagate inside internal networks,” the Guardicore researchers said in a report released Wednesday. “Thus, it is crucial that operating systems be aligned with the currently available software updates. However, patching is never as simple as stated. Therefore, it is of high importance to apply additional security measures in the data center or the organization. Network microsegmentation, detection of possibly malicious internet traffic, as well as limiting internet-exposed servers are all critical to maintaining a strong security posture.”

The poor security posture of many networks is also reflected by the fact that one in four victims were reinfected by Smominru. This means many organizations attempted to clean the infections but failed to properly close all attack vectors and address the root cause.

Most of the compromised machines had one to four CPU cores, falling in the small server category. However, over 200 of them had over eight cores and one machine had 32 CPU cores.

“Unfortunately, this demonstrates that while many companies spend money on expensive hardware, they are not taking basic security measures, such as patching their running operating system,” the researchers said.

A serious infection with multiple payloads

Because of the botnet’s worming capabilities, any machine infected with Smominru can be a serious threat to a corporate network, and it’s not just about cryptomining. This threat deploys a large number of payloads and creates many backdoors on infected systems to maintain persistence, including new administrative users, scheduled tasks, Windows Management Instrumentation (WMI) objects, start-up services and a master boot record (MBR) rootkit.

According to Guardicore’s analysis, Smominru downloads and executes almost 20 distinct scripts and binary payloads. The company has published a detailed list of indicators of compromise, which includes file hashes, server IP addresses, usernames, registry keys and more, as well as a PowerShell script to detect infected machines.
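As a sketch of how published file-hash indicators can be consumed, the snippet below hashes every file under a directory and flags matches. The indicator set here is a placeholder (the value shown is simply the SHA-256 of an empty file); in practice you would load the hashes from Guardicore’s published list:

```python
import hashlib
from pathlib import Path

# Placeholder indicator set -- substitute the SHA-256 file hashes from
# the vendor's published indicators of compromise. The value below is
# just the digest of an empty file, used for illustration.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def scan_directory(root: str) -> list:
    """Return paths of files whose SHA-256 digest matches an indicator."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD_SHA256:
                hits.append(path)
    return hits
```

A real deployment would also check the other indicator types (IP addresses, usernames, registry keys), but hash matching is the simplest place to start.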

Misconfigured WS-Discovery in devices enable massive DDoS amplification


Hundreds of thousands of devices can be abused to amplify distributed denial-of-service (DDoS) attacks because they are misconfigured to listen and respond to WS-Discovery protocol requests over the internet. Web Services Dynamic Discovery (WS-Discovery or WSD) is a UDP-based communications protocol used to automatically discover web-based services inside networks. It’s been used by printers, cameras and other types of devices for over a decade, including by various Windows features starting with Windows Vista.

Most automated service discovery and configuration protocols, including UPnP (Universal Plug and Play), SSDP (Simple Service Discovery Protocol), Simple Network Management Protocol (SNMP) and WSD, were designed for use on local networks. However, many devices come with insecure implementations that expose these protocols to the internet, allowing attackers to abuse them in DDoS reflection and amplification attacks.

What is DDoS reflection?

Unlike TCP, UDP does not perform any IP source validation, which makes most UDP-based protocols vulnerable to IP spoofing by default. In turn, this allows attackers to hide the source of DDoS traffic by “reflecting” it through machines that respond over such protocols.

The way DDoS reflection works is this: From machines under their control, attackers send queries to other servers over a UDP-based protocol and set the source IP address inside the packets to be the IP address of their intended victim. This causes the queried servers to send their responses to the victim instead of back to the attackers’ machines.

DDoS reflection is particularly powerful when the generated responses are larger than the original requests, because it allows attackers to amplify their available bandwidth. For example, an attacker with control over ten machines can send requests to 100 devices with a vulnerable UDP-based service exposed to the internet. In turn, those devices send large responses to the victim due to IP spoofing, so the victim receives a larger number of malicious packets from 100 neutral machines instead of the ten the attacker controls.
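The mechanism can be sketched in a few lines. The addresses and the reflector behavior below are purely illustrative; the key point is that a naive UDP service replies to whatever source address the incoming packet claims:

```python
def reflector(request: bytes, claimed_source: str):
    """A naive UDP service: no source validation, larger reply than request."""
    response = request * 10              # responses exceed requests in size
    return claimed_source, response      # reply goes where the packet claimed

attacker_ip, victim_ip = "198.51.100.1", "203.0.113.5"

# The attacker spoofs the victim's address as the packet's source,
# so the (amplified) response is delivered to the victim instead.
dest, payload = reflector(b"probe", claimed_source=victim_ip)
print(len(b"probe"), "byte request ->", len(payload), "bytes sent to", dest)
```

The attacker spends 5 bytes of uplink and the victim absorbs 50, which is the amplification effect the paragraph above describes.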

WSD is a serious threat

In a new report published today, researchers from Akamai warn that attackers have already started abusing WSD as a DDoS amplification technique and are ramping up their attacks. In one case, an Akamai customer from the gaming industry was hit with a WSD flood that peaked at 35 Gbps.

“Additional research into WSD protocol implementations on devices across the Internet raised grave concerns, since the SIRT [Security Intelligence Response Team] was able to achieve amplification rates of up to 15,300% of the original byte size,” the Akamai researchers said in their report. “This places WSD in fourth place on the DDoS attacks leaderboard for highest reflected amplification factor.”

Akamai’s SIRT studied the WSD protocol as well as various implementations found in devices and discovered ways for attackers to significantly reduce their initial request payloads to trigger responses with huge amplification factors. For example, a standard WSD probe is 783 bytes, but Akamai’s researchers managed to reduce it to 170 bytes and still trigger a valid WSD response of 3,445 bytes.

They didn’t stop there. It turns out that it’s more profitable for attackers to send malformed payloads that would trigger WSD errors. These error responses are not as large as valid probe responses, but there are methods to enlarge them and the requests that trigger them are significantly smaller than valid probes — 29 and even 18 bytes for some vulnerable implementations found in around 2,151 devices from a certain manufacturer.

While the pool of devices that can be abused with the 18-byte attack is quite small, the pool of devices exposed to the internet that respond to the 29-byte payloads is much bigger. In such a scenario, an attacker with a 100-Mbps connection would be able to send 420,000 requests per second with the 29-byte payload, triggering 2,599-byte responses and generating an 8.73-Gbps attack at an 8,900% amplification rate. “Get 10 nodes, and this can turn into an 87Gbps attack,” the Akamai researchers warned.
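The arithmetic behind these figures can be reproduced directly. This back-of-the-envelope sketch ignores UDP/IP header overhead, which is why it slightly overshoots the roughly 420,000 requests per second and 8.73 Gbps quoted above:

```python
def reflection_stats(request_bytes: int, response_bytes: int, attacker_mbps: float):
    """Return (amplification %, reflected Gbps) for one reflector class."""
    amplification_pct = response_bytes / request_bytes * 100
    # How many requests the attacker's uplink can emit per second
    # (payload bytes only -- real packets carry extra header bytes).
    requests_per_sec = attacker_mbps * 1_000_000 / 8 / request_bytes
    attack_gbps = requests_per_sec * response_bytes * 8 / 1e9
    return amplification_pct, attack_gbps

# The 29-byte malformed probe on a 100-Mbps uplink, per the report:
pct, gbps = reflection_stats(29, 2599, 100)
print(f"~{pct:,.0f}% amplification, ~{gbps:.2f} Gbps reflected")
```

Multiplying the result by ten controlled nodes gives the roughly 87-Gbps figure Akamai warns about.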

Even with valid probes and lower amplification factors, the WSD technique still poses a serious threat, since Akamai identified 802,115 devices on the internet that respond back to WSD probes with a 193% median amplification factor. Many of the devices are CCTV cameras and digital video recorders.

Mitigation for the WSD technique

Organizations can block UDP source port 3702 in their gateway devices and firewalls to prevent unsolicited WSD traffic from reaching their servers. However, the traffic can still congest the bandwidth available on their router. So, complete mitigation requires enforcing access control lists (ACLs) to block traffic from known devices with WSD exposed. DDoS mitigation providers are likely to maintain such lists, just like they do for devices with vulnerable DNS, NTP, SNMP, UPnP and other services that can be abused for DDoS reflection and amplification.
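A quick way to audit your own address space for exposure is to check whether a host answers anything at all on the WSD port. This is a sketch: the probe body below is a stand-in, not a spec-complete SOAP-over-UDP message, and `wsd_responds` is an illustrative helper name:

```python
import socket

WSD_PORT = 3702
PROBE = b'<?xml version="1.0"?><Probe/>'   # stand-in, not a full WSD probe

def wsd_responds(host: str, timeout: float = 2.0) -> bool:
    """True if host answers a datagram on UDP/3702 and so needs firewalling."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        try:
            sock.sendto(PROBE, (host, WSD_PORT))
            sock.recvfrom(4096)
            return True
        except OSError:   # timeout or ICMP port-unreachable: no WSD service
            return False
```

Any host for which this returns True from an external vantage point is a candidate reflector and should be placed behind the port-3702 block described above.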

“WSD suffers from the same problem we’ve seen time and time again,” the Akamai researchers said. “WSD was designed and intended to be a LAN-scoped technology. It was never meant to live on the internet. As manufacturers pushed out hardware with this service (improperly) implemented, and users deployed this hardware across the Internet, they’ve inadvertently introduced a new DDoS reflection vector that has already begun to see abuse.”

“The only thing we can do now is wait for devices that are meant to have a 10- to 15-year life to die out and hope that they are replaced with more secured versions,” they said.

How to detect and halt credential theft via Windows WDigest


Once attackers get into a system, they often want to elevate privileges or do credential harvesting. One way they do this is by finding a WDigest legacy authentication protocol left forgotten and open on servers. On Windows Server prior to Server 2012 R2, WDigest credential caching is enabled by default. When it is enabled, Lsass.exe retains a copy of the user’s plaintext password in memory, where it can be at risk of theft. Microsoft recommends disabling WDigest authentication unless it is needed.

Setting the UseLogonCredential value to 0 tells WDigest not to store credentials in memory. This value does not exist by default on a Server 2008 R2 system. To add it, open the Registry Editor, navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest, right-click, select New > DWORD (32-bit) Value, and name it UseLogonCredential.

Adding this registry key with a REG_DWORD value of 0 clears plaintext passwords from memory.
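For scripted deployments, the equivalent one-line `reg.exe` command can be generated as below. The key path is the standard WDigest provider location from Microsoft’s guidance; `wdigest_disable_command` is just an illustrative helper name:

```python
# Standard WDigest provider key (per Microsoft's KB2871997-era guidance).
WDIGEST_KEY = r"HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest"

def wdigest_disable_command() -> str:
    """Build the reg.exe command that sets UseLogonCredential to 0."""
    return (f'reg add "{WDIGEST_KEY}" /v UseLogonCredential '
            '/t REG_DWORD /d 0 /f')

print(wdigest_disable_command())
```

Running the emitted command in an elevated prompt (or pushing the same value via Group Policy preferences) applies the setting fleet-wide; a reboot or credential re-entry is needed before cached plaintext passwords disappear.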