Jen Easterly recently made a very important speech at Carnegie Mellon University. Jen is the Director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA). She is well-liked and well-known in the technology industry and has worked hard to address many of the difficult issues facing the United States concerning cybersecurity. In her comments, she tackles the thorny issue of the quality of the software being produced, both proprietary and open source. The creation of increased liability for software manufacturers may be a game-changer. I've pasted her entire speech below because it is that good and worth reading in full:
Unsafe at Any CPU Speed: The Designed-in Dangers of Technology and What We Can Do About It
Good morning. Thank you to President Jahanian for that warm
introduction and to everyone for joining me today on this Monday morning. It’s
wonderful to start the week off with this incredible community.
I can’t think of a more fitting location for this discussion
than Pittsburgh, a city built on innovation, imagination, and technological
transformation; and Carnegie Mellon University, one of the world’s most
renowned educational institutions, home to one of our nation’s top
undergraduate computer science programs and top engineering programs, but also,
to so much more. Let me share a few of my own favorites:
- The first smile in an email was created by Research Professor Scott Fahlman, which launched the emoticon craze
- CAPTCHAs—or Completely Automated Public Turing tests to tell Computers and Humans Apart (how many of you knew what that stood for?)—were developed here by Professor Luis von Ahn and his colleagues and are used to help prevent cybercrime
- Wireless research conducted at CMU laid the foundation for the now-ubiquitous Wi-Fi
- CMU is home to the nation’s first robotics lab; and
of course, home to the Software Engineering Institute, the first Federal
Lab dedicated to software engineering. SEI established the first Computer
Emergency Response Team, or CERT, in response to the Morris worm; that team became the model for CERTs around the globe and, of course, was a key partner in the creation of US-CERT in 2003, the precursor to CISA's Cybersecurity Division.
But the partnership between CMU and CISA goes well beyond technical capability to what I consider the most important aspect of technology: people. The CISA team is full of amazing CMU alumni like Karen Miller, who leads our vulnerability evaluation work, and Dr. Jono Spring, who is on the front lines of our vulnerability management work; both are here with me today.
Finally, I wanted to come here because CISA and CMU share a
common set of values—collaboration, innovation, inclusion, empathy, impact, and
service. And of course, a shared passion for our work.
So, now that you know why I am here, I want to start with a
story.
At 2:39 pm on a chilly but sunny Saturday, just six miles off
the coast of South Carolina, an F-22 fighter jet from Langley Air Force Base
fired a Sidewinder air-to-air missile to take down a balloon—the size of three
school buses—that had drifted across the United States. The deliberate action
came after a tense public standoff with Beijing and intense media scrutiny
about the Chinese “spy balloon.”
The response and surrounding attention to the issue reinforced for me a major challenge we face in the field of
cybersecurity—raising national attention to issues much less visible but in
many ways far more dangerous. Our country is subject to cyber intrusions every
day from the Chinese government, but these intrusions rarely make it into
national news. Yet these intrusions can do real damage to our nation—leading to
theft of our intellectual property and personal information; and even more
nefariously: establishing a foothold for disrupting or destroying the cyber and
physical infrastructure that Americans rely upon every hour of every day—for
our power, our water, our transportation, our communication, our healthcare,
and so much more. China’s massive and sophisticated hacking program is larger
than that of every other major nation – combined. This is hacking
on an enormous scale, but unlike the spy balloon, which was identified and
dealt with, these threats more often than not go unidentified and undeterred.
And while a focus on adversary nations—like China and
Russia—and on cybercriminals is important, I would submit to you that these
cyber-intrusions are a symptom, rather than a cause, of the vulnerability we
face as a nation. The cause, simply put, is unsafe technology products.
And because the damage caused by these unsafe products is distributed and
spread over time, the impact is much more difficult to measure. But like the
balloon, it’s there.
It’s a school district shut down; a patient forced to divert to another hospital; another patient forced to cancel a surgery; a family defrauded of their savings; a gas pipeline shut down; a 160-year-old college forced to close its doors because of a ransomware attack.
And that’s just the tip of the iceberg, as many—if not
most—attacks go unreported. As a result, it’s enormously difficult to
understand the collective toll these attacks are taking on our nation or to
fully measure their impact in a tangible way.
The risk introduced to all of us by unsafe technology is
frankly much more dangerous and pervasive than the spy balloon, yet we’ve
somehow allowed ourselves to accept it. As we’ve integrated technology into
nearly every facet of our lives, we’ve unwittingly come to accept as normal
that such technology is dangerous-by-design:
We’ve normalized the fact that technology products are
released to market with dozens, hundreds, or thousands of defects, when such
poor construction would be unacceptable in any other critical field.
We’ve normalized the fact that the cybersecurity burden is
placed disproportionately on the shoulders of consumers and small
organizations, who are often least aware of the threat and least capable of
protecting themselves.
We’ve normalized the fact that security is relegated to the
“IT people” in smaller organizations or to a Chief Information Security Officer
in enterprises, but few have the resources, influence, or accountability to
incentivize adoption of products in which safety is appropriately prioritized
against cost, speed to market, and features.
And we’ve normalized the fact that most intrusions and cyber
threats are never reported to the government or shared with potentially
targeted organizations, allowing our adversaries to re-use the same techniques
to compromise countless other organizations, often using the same
infrastructure.
This pattern of ignoring increasingly severe problems is an
example of the “normalization of deviance,” a theory advanced by sociologist Diane
Vaughan in her book about the ill-fated decision to launch the space shuttle
Challenger in 1986. Vaughan describes an environment in which “people
become so accustomed to a deviant behavior that they don't consider it as
deviant, despite the fact that they far exceed their own rules for elementary
safety.”
When it comes to unsafe technology, we have collectively
become accustomed to a deviance from what we would all think would be proper
behavior of technology manufacturers, namely, to create safe products. Dr.
Richard Cook, a software engineer and system safety researcher, popularized the complementary idea of an “accident boundary”—that is, the point of maximum risk that organizations can tolerate, beyond which you have an “accident,” like an
intrusion. Organizations try to move their operations away from the accident
boundary. In cybersecurity, we might see them conduct employee awareness
training for phishing, deploy multi-factor authentication, or buy expensive
security tools. But what if the very design of technology products caused our
operations to always be right up against the accident boundary through no fault
of our own? What if no reasonable amount of money, or employee training could
fix that, and an accident was inevitable because of the design of the product?
It’s as if we’ve normalized the deviant behavior of operating at the bleeding
edge of the accident boundary. This is the current state of the technology industry—and we need to make a fundamental shift if we want to do better. And we must do better.
So, the question is: How? What if we changed how we think about cyber-attacks and where to focus our attention? What if we thought more about not just a superficial “root cause,” but the multiple contributing factors to a breach? Fortunately, history proves to us that we can—and indeed must—change the way we collectively value safety over other market incentives like cost, features, and speed to market.
For the first half of the 20th century, conventional wisdom held that car accidents were solely the fault of bad drivers. This is very similar to the way we often blame a company today that has a security breach because it did not patch a known vulnerability. But what about the manufacturer that produced the technology that required so many patches in the first place? We seem to be misplacing the responsibility for security and compounding it with a lack of accountability.
Today, we can be confident that any car we drive has been manufactured with an array of standard safety features—seatbelts, airbags, anti-lock brakes, and so on. And that’s because we know they work—quite simply, these features prevent bad things from happening. They save lives. Indeed, cars today are designed to be as safe as possible—for example, to absorb kinetic energy by crumpling and thus raise the occupants’ chances of survival. Cars undergo rigorous testing and crashworthiness analysis to validate these design elements. No one would think of purchasing a car today that did not have seatbelts or airbags included as standard features, nor would anyone accept paying extra to have these basic safety features installed.
Unfortunately, the same cannot be said for the technology
that underpins our very way of life. We find ourselves blaming the user for
unsafe technology. In place of building in effective security from the start,
technology manufacturers are using us, the users, as their crash test
dummies—and we’re feeling the effects of those crashes every day with
real-world consequences. This situation is not sustainable. We need a new
model.
A model in which we can place implicit trust in the safety
and integrity of the technology products that we use every hour of every day,
technology which underpins our most critical functions and services.
A model in which responsibility for technology safety is
shared based upon an organization’s ability to bear the burden and where
problems are fixed at the earliest possible stage—that is, when the technology
is designed rather than when it is being used.
A model that emphasizes collaboration as a prerequisite to
self-preservation and a recognition that a cyber threat to one organization is
a safety threat to all organizations.
In sum, we need a model of sustainable cybersecurity,
one where incentives are realigned to favor long-term investments in the safety
and resilience of our technology ecosystem, and where responsibility for
defending that ecosystem is rebalanced to favor those most capable and best
positioned to do so.
What would such a model look like?
It would begin with technology products that put the safety
of customers first. It would rebalance security risk from organizations—like
small businesses—least able to bear it and onto organizations—like
major technology manufacturers—much more suited to managing cyber risks.
To help crystallize this model, at CISA, we’re working to lay
out a set of core principles for technology manufacturers to build product
safety into their processes to design, implement, configure, ship, and maintain
their products. Let me highlight three of them here:
First, the burden of safety should never fall solely
upon the customer. Technology manufacturers must take ownership of the security
outcomes for their customers.
Second, technology manufacturers should embrace
radical transparency to disclose and ultimately help us better understand the
scope of our consumer safety challenges, as well as a commitment to
accountability for the products they bring to market.
Third, the leaders of technology manufacturers should
explicitly focus on building safe products, publishing a roadmap that lays out
the company's plan for how products will be developed and updated to be both
secure-by-design and secure-by-default.
So, what would this look like in practice?
Well, consumer safety must be front and center in all phases
of the technology product lifecycle—with security designed in from the
beginning—and strong safety features, like seatbelts and airbags—enabled right out of the box, without added costs. Security-by-design includes actions like transitioning to memory-safe languages, maintaining a transparent vulnerability disclosure policy, and following secure coding practices. Attributes of strong security-by-default will evolve over time, but in today’s risk environment, sellers of software must include in their basic pricing the types of features that secure a user’s identity, gather and log evidence of potential intrusions, and control access to sensitive information, rather than offering them as added, more expensive options.
In short, strong security should be a standard feature of
virtually every technology product, and especially those that support the
critical infrastructure that Americans rely on daily. Technology must be
purposefully developed, built, and tested to significantly reduce the number of
exploitable flaws before they are introduced into the market for broad use.
Achieving this outcome will require a significant shift in how technology is
produced, including the code used to develop software, but ultimately, such a
transition to secure-by-default and secure-by-design products will help both
organizations and technology providers: it will mean less time fixing problems,
more time focusing on innovation and growth, and importantly, it will make life
much harder for our adversaries.
In this new model, the government has an important role to
play in both incentivizing these outcomes and operationalizing these
principals. Regulation—which played a significant role in improving the safety
of automobiles—is one tool, but—importantly—it’s not a panacea.
One of the most effective tools the government has at its
disposal to drive better security outcomes is its purchasing power. The
Biden Administration has already taken important steps toward this goal in
establishing software security requirements for federal contractors and
undertaking an effort to adopt security labels for connected consumer devices
like baby monitors and webcams. It will continue to pursue this goal through
the implementation of the initiatives called for in the President’s May 2021
cybersecurity executive order, such as developing federal acquisition
regulations around cybersecurity.
The government can also play a role in shifting liability
onto those entities that fail to live up to the duty of care they owe their
customers. Returning to the automotive analogy: the liability for defective
auto parts now generally rests with the producer that introduced the defect
even if an error by the driver caused the defect to manifest. This was
reflected in class action litigation against the Takata Corporation, where the
company’s defective airbags tragically caused over 30 deaths, often after minor
collisions. Consumers and businesses alike expect that products purchased from
a reputable provider will work the way they are supposed to and not introduce
inordinate risk. To this end, government can work to advance legislation that prevents technology manufacturers from disclaiming liability by contract, establishes higher standards of care for software in specific critical infrastructure entities, and drives the development of a safe harbor framework to shield from liability those companies that securely develop and maintain their software products and services. While it will not be possible to prevent all
software vulnerabilities, the fact that we’ve accepted a monthly “Patch
Tuesday” as normal is further evidence of our willingness to operate
dangerously at the accident boundary.
In addition, the government can play a useful signaling role
in acknowledging the good work that technology manufacturers are doing today
because they recognize that owning the security outcomes of their customers is
the right thing to do to ensure the safety of those customers.
Encouragingly, an increasing number are taking important steps in the right direction, from adopting secure programming practices to enabling strong security measures by default for their customers. I’ll highlight a few.
With respect to secure programming, it’s been a relatively well-kept secret for many years, but around two-thirds of known software vulnerabilities fall into a class of weakness referred to as “memory safety” vulnerabilities: bugs that arise from how computer memory is accessed. Certain programming languages—most notably, C and C++—lack the mechanisms to prevent coders from introducing these vulnerabilities into their software. Switching to memory safe programming languages—like Rust, Go, Python, and Java—eliminates this entire class of vulnerability. Java, of course, was invented by CMU alumnus James Gosling. As one example, Google recently
announced that “Android 13 is the first Android release where a majority of new
code added to the release is in a memory safe language” – specifically Rust –
and that “there have been zero memory safety vulnerabilities discovered in
Android’s Rust code.” That’s a remarkable result.
And it’s not just Google. Mozilla, which created Rust, has a
project to integrate Rust into Firefox. Amazon Web Services has also begun
building critical services in Rust—noting not just security benefits but also
time and cost savings.
The nonprofit Internet Security Research Group is another
good example. Work done under their Prossimo project led to support for using
Rust in the Linux kernel, an important milestone given that the Linux kernel is
at the heart of today’s internet. If the Internet Security Research Group can
have such success on a limited budget, think about what big corporations can
do.
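To make that point concrete, here is a minimal, illustrative Rust sketch (not from the speech) of the classic memory-safety bug this whole discussion turns on: an out-of-bounds write. In C or C++, the same logic could silently overwrite adjacent memory and become an exploitable vulnerability; in a memory safe language, the bad access is caught and the program stops instead.

```rust
// A minimal sketch (not from the speech) of the kind of memory-safety bug
// described above, and how a memory safe language stops it.
//
// In C or C++, writing past the end of a buffer silently corrupts whatever
// happens to sit next to it in memory: the classic buffer overflow. In Rust,
// the same mistake is caught, because every slice access is bounds-checked.

fn copy_into(buffer: &mut [u8], input: &[u8]) {
    // Deliberately "forgetting" to check lengths, as unsafe C code often does.
    for (i, byte) in input.iter().enumerate() {
        buffer[i] = *byte; // Bounds-checked: panics if i >= buffer.len(),
                           // rather than overwriting adjacent memory.
    }
}

fn main() {
    let mut buffer = [0u8; 8];
    let attacker_controlled = b"AAAAAAAAAAAAAAAA"; // 16 bytes aimed at an 8-byte buffer

    copy_into(&mut buffer, attacker_controlled);
    // Never reached: the out-of-bounds write above halts the program
    // instead of corrupting memory and opening the door to exploitation.
    println!("{:?}", buffer);
}
```

That guarantee, enforced by the compiler and runtime rather than by programmer discipline, is what the Android, Firefox, AWS, and Linux kernel efforts described above are relying on.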
Now consider some examples of security defaults: Apple says
that 95% of iCloud users enable MFA. Metrics for other services are hard to
come by, but Twitter reports that fewer than 3% of its users use any form of
MFA. Microsoft reports that only about a quarter of its enterprise customers
use MFA and that only about one third of their administrator accounts use MFA.
While the Twitter and Microsoft stats are disappointing, the companies are doing a service by releasing their MFA adoption data publicly.
Apple’s impressive MFA numbers aren’t due to random chance.
By making MFA the default for user accounts, Apple is taking ownership of the security outcomes of its users. By providing radical transparency around MFA
adoption, these organizations are helping shine a light on the necessity of
security by default. More should follow their lead—in fact, every organization
should demand transparency regarding the practices and controls adopted by
technology providers and then demand adoption of such practices as basic
criteria for acceptability before procurement or use. Manufacturers must be
transparent about their processes and about the quality and safety of their products. They must run
transparent vulnerability disclosure policies, giving legal protection to
security researchers who report vulnerabilities, letting those researchers talk
publicly about their findings, and taking care to address root causes of those
vulnerabilities.
Here at CMU, the Software Engineering Institute has done some
great work on this, including by publishing the CERT Guide to Coordinated
Vulnerability Disclosure. Other community efforts like disclose.io have done a
good job laying out template language for vulnerability disclosure policies
which companies can adopt.
Dropbox is one strong example of mandating transparency from
vendors. In 2019, they overhauled their vendor contracts to include security
requirements, holding vendors to the same level of security that Dropbox holds
itself to. This includes actions like requiring vendors and their employees to
use MFA, allowing Dropbox to perform security testing of the vendors’ systems,
and requiring vendors to publish vulnerability disclosure policies with legal
safe harbor. They even open-sourced their contract requirements so that other
organizations could adopt and modify them. I encourage other organizations to
follow Dropbox’s example and start demanding transparency from their vendors.
At CISA, we’ve been working through ways that we can support radical
transparency in technology products. For example, we’re focused on
advancing the use of Software Bill of Materials, or “SBOMs,” the idea that
software should come with an inventory of open-source components and other code
dependencies. Effective use of an SBOM can help an organization understand
whether a given vulnerability affects software being used in their assets and
provide greater confidence in a manufacturer’s software development
practices. We must applaud and encourage any and all progress, while
also recognizing the need to do more. Because as we introduce more unsafe
technology to our lives, we increase our risk and our exposure
exponentially—and this threat environment will only get more complex.
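To illustrate the SBOM idea in practice, here is a small, hypothetical Rust sketch (the component names, versions, and advisory are all invented, and the inventory is simplified compared with real formats like SPDX or CycloneDX) of the question an SBOM lets an organization answer when a new vulnerability advisory arrives: is the affected component anywhere in the software we run?

```rust
// A hypothetical sketch of how an SBOM gets used. Real SBOMs are exchanged
// in formats such as SPDX or CycloneDX; here the inventory is reduced to
// (name, version) pairs purely for illustration.

#[derive(Debug)]
struct Component {
    name: &'static str,
    version: &'static str,
}

fn main() {
    // The SBOM shipped with a (made-up) product: its open-source
    // components and other code dependencies.
    let sbom = [
        Component { name: "libexample-compress", version: "1.4.2" },
        Component { name: "libexample-tls", version: "3.0.1" },
        Component { name: "libexample-parser", version: "0.9.7" },
    ];

    // A new advisory lands naming a vulnerable component and version.
    // (Both are invented for this sketch.)
    let advisory = ("libexample-parser", "0.9.7");

    // The question an SBOM lets an organization answer quickly:
    // does this vulnerability affect software running on our assets?
    let affected: Vec<&Component> = sbom
        .iter()
        .filter(|c| c.name == advisory.0 && c.version == advisory.1)
        .collect();

    if affected.is_empty() {
        println!("Advisory does not affect this product's dependencies.");
    } else {
        println!("Affected components: {:?}", affected);
    }
}
```

In real deployments, tooling does this matching against full SBOM documents and vulnerability databases, with more nuance around version ranges, but the core lookup is the same.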
While we play our role from a government perspective, and technology companies
increasingly embrace their role in putting consumer safety first, universities
have an important role to play in achieving safe technology products. Indeed,
one of the main reasons I wanted to come to CMU is because of the strength of
your computer science and software engineering programs—because this is where
the next generation of software engineers and innovators are learning their
craft. For the professors here this morning, you are responsible for the
education of some of our nation’s brightest young minds and for the knowledge
they bring into the working world. If that world is going to be one where the
technology products that we all rely on are safe, it must be a world where our
new graduates show up to work with fluency in, and a bias towards, memory safe
programming languages. A world where incentives, tools, and training are
readily available to help organizations migrate key libraries to memory safe
languages. Imagine that by 2030, memory safety vulnerabilities are almost non-existent: attackers are unable to find and exploit them, dramatically raising the cost of an attack and stopping all the terrible things I talked about earlier. How did we get there? I think a
major part of the answer to that question is that “we figured out how to make
memory safe languages ubiquitous within universities nationally, and globally.”
I know that sounds like a lofty goal but let’s talk about some possible steps
to get there. I’ll highlight four key areas for your consideration.
First, could you move university coursework to memory safe
languages?
- As an industry, we need to start containing, and
eventually, rolling back the prevalence of C/C++ in key systems and
putting a real emphasis on safety.
- How can we tackle this challenge? What if we start a
formal program – with material funding, incentives for professors, goals,
an executive sponsor, and metrics – to migrate course materials to use
memory safe languages? This includes ensuring that C and C++, when taught,
are treated as dangerous, regardless of how pervasive they are in existing
codebases.
- In that vein, I’d like to give kudos here to CMU for
offering CS 112—an introductory programming course taught in Python and taken
by many students across the university. Introducing students to the
benefits of programming in a memory safe language is a key step forward.
Second, could you weave security through all computer
software coursework?
- There’s often a knowledge, skills, and experience gap
between new hires and what is needed at their first jobs. Some of the
larger companies have security training for new hires to ensure they
understand how to code safely, always with an intelligent adversary in
mind. Meanwhile, just one out of the top twenty undergraduate programs in
computer science requires a security course as a graduation requirement.
Which one? UC San Diego. As it stands, at most schools, a student can earn
a computer science degree without learning the fundamentals of safety and
security. I urge every university to make taking a security course a
graduation requirement for all computer science students. Better still,
don’t just make security a separate class, but make it part of every
class.
- I’d like to recognize CMU for being a leader here,
integrating security into its core classes. Freshmen taking CS 122, for
instance, learn about memory safety bugs like buffer overflows. I’d love
to see how we can help standardize this kind of education into curricula
across the country.
- Civil, mechanical, and electrical engineers all take
a substantial course load around thinking critically about safety: from
understanding tolerances and safety margins to rigorously analyzing
failures, safety is a critical part of engineering education. Skills for
reliably and securely engineering computer software are critical parts of
national security. We must work together to instill these skills into the
engineers who will manufacture our future technology.
- CMU also deserves credit for its focus on software as
an engineering discipline. CMU researchers have made significant
contributions to advancing the state of the art in software engineering
and programming language design. I challenge you to think about how to go
further in making that work accessible to all students and integrating it
deeper into the standard computer science curriculum.
Third, how can you help the open-source community?
- Are there opportunities to migrate CMU sponsored
open-source projects to memory safe languages? To require all published
research code to be written in memory safe languages? To build research
opportunities and hands-on classroom learning around enhancing the safety
of key open-source projects? The open-source commons is a key foundation
of our software ecosystem and universities are well suited to invest in making
sure that foundation is up to code.
And finally, could you find a way to help all developers
and all business leaders make the switch?
- Can we create better tooling for migrating to memory
safe code from legacy code bases? Are there ways to make formal
verification of software safety easy to deploy at scale? These questions
have drawn research attention for decades, but they are only growing in
importance as software is further embedded into the very foundations of
our society. More tactically, you can help produce clear technical
guidance—in partnership with CISA—on how developers can radically improve
the quality and safety of their code.
- You can also partner with your colleagues here in the
business school on management guidance to help business leaders understand
what it takes to reinforce a culture of embracing safety and security as a
matter of product quality.
These are big challenges, but ones that deserve our full
attention. Steps taken today at this university and universities around the
country can help spur an industry-wide change towards memory safe languages and
add more engineering rigor to software development, which in turn will help
protect all technology users. It’s critical that students have a strong bias to
build safety into every system, which will pay dividends in the long run.
Finally, to all the students in the room.
Given the catastrophic costs of cyber-attacks affecting
American businesses, governments, and citizens, we need future leaders like you
to find ways to turbocharge the transformation to memory safe systems, and more
broadly to systems that we know to be secure by design.
There are many ways you can help solve these challenges.
Maybe you go work at a tech company—or even start your own—and write memory
safe code, focused on advancing the principles of security we discussed today.
Remember that you should take pride in the safety of the code you write—think of
it as “part of your brand” as an excellent software engineer. And maybe
you take what you learn today and help educate your fellow students on security
and encourage your peers to write memory safe code.
Or maybe you decide to come work with us at CISA. We have
several CMU alums who did just that. We need talented individuals like you all
to help build our team as we continue to increase our capabilities, and most
importantly, to help us forge a new approach around technology product safety.
My team is here today, and I’d encourage you to stop by their table to talk
with them if you want to learn more about working at CISA. One common value we
share with you all is that we all put our heart into our work.
As we started with a story, I want to end with one, though
this is less a story than a tale—a cautionary one at that.
Imagine a world where none of the things we talked about
today come to pass, where the burden of security continues to be placed on
consumers, where technology manufacturers continue to create unsafe products or
upsell security as a costly add-on feature, where universities continue to
teach unsafe coding practices, where the services we rely on every day remain
vulnerable. This is a world that our adversaries are watching carefully and hoping
never changes.
Because this is a world where another unprovoked invasion of
a peaceful country by another much more powerful adversary—an adversary that
has watched and learned from the endless missteps of Russia in its criminal war
against Ukraine—might very well be coupled with the explosion of multiple U.S.
gas pipelines; the mass pollution of our water systems; the hijacking of our
telecommunications systems; the crippling of our transportation nodes—all
designed to incite chaos and panic across our country and deter our ability to
marshal military might and citizen will.
Such a scenario of attacks against our critical
infrastructure in the event of a Chinese invasion of Taiwan is unfortunately
not terribly far-fetched, but it is one we can prevent, if we come together,
collectively as a nation, across our businesses and across our universities, to
put our heart into the hard work of achieving safe, secure, and resilient
infrastructure for the American people.
Thank you again for the opportunity to speak with you today;
I look forward to continuing the conversation with Professor Mayer and hearing
your thoughts.