Thursday, 23 March 2023

Where AI Works Well and Where It Doesn’t in Predicting Standard Essentiality for Patents

Artificial Intelligence (AI) is providing enormous productivity gains and increased value in many applications. The introduction of the chatbot ChatGPT has taken the interpretation of text to a much higher level. ChatGPT can understand complex instructions and provide sophisticated responses, such as essays good enough to pass university exams. “Digital twin” AI predictions of an aircraft in flight, based on physics equations and mathematical models, can be continuously recalibrated with accurate measurements of position, altitude, velocity, acceleration, temperature and airframe strain.

But AI is no panacea and is not yet sufficiently well developed to be precise or dependable everywhere. For example, much better AI training data is required to reliably estimate patent essentiality to standards such as 4G and 5G, where AI is being advocated by various experts and has already been adopted by one patent pool. AI training data needs to include many accurate determinations, including of patents found essential and patents found not essential. There is no such data set.
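To see why that matters, here is a minimal sketch in Python of the kind of supervised classifier these proposals contemplate. Everything in it is hypothetical: the claim texts, the labels, and the very existence of a reliable corpus of expert determinations.

```python
# Hypothetical sketch of an essentiality classifier. The claim texts and
# labels below are invented placeholders; as noted above, no large corpus
# of accurate expert determinations actually exists.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each training example pairs a patent claim with an expert determination:
# 1 = found essential to the standard, 0 = found not essential.
claims = [
    "A method of allocating uplink resources in a wireless network...",
    "A decorative housing for a handheld electronic device...",
    # ...many thousands of accurately labeled examples would be needed...
]
labels = [1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(claims, labels)

# Score an unseen claim. Without a large, balanced set of reliable labels,
# this probability is not meaningful.
print(model.predict_proba(["A method of transmitting acknowledgement feedback..."]))
```

Even this toy setup makes the gap visible: with only a handful of labels, and no assurance that the labels themselves are correct, the predicted probabilities are noise dressed up as precision.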

There is also a lot of room for improvement in AI inferencing. Essentiality determination is subjective. Even competent human experts doing a thorough job often disagree in their determinations on the same patents. Technical and legal interpretations of language may differ, as can the meaning of words in different contexts, or over the years as definitions and usage change.

My full article on this topic was published as a guest contribution to IP Watchdog.

Monday, 20 March 2023

California to Have Insulin Manufactured and Sold at Lower Price

California Governor Gavin Newsom is following through on promises to attack the high cost of drugs.  The press release states:

Governor Gavin Newsom, as part of his tour of the State of California, announced that CalRx has secured a contract with a manufacturer (CIVICA) to make $30 insulin available to all who need it. The Governor also announced today that California will seek to manufacture its own Naloxone.

Today’s announcement makes good on Governor Newsom’s promise on his first day in office, to bring down the price of prescription drugs for Californians and increase accountability and transparency in health care. Californians can learn more about CalRx on the newly launched website.

WHAT GOVERNOR NEWSOM SAID: “People should not be forced to go into debt to get life saving prescriptions. Through CalRx, Californians will have access to some of the most inexpensive insulin available, helping them save thousands each year. But we’re not stopping there – California will seek to make our own Naloxone as part of our plan to fight the fentanyl crisis.”

WHY THIS MATTERS: Today’s announcement will bring down the price of insulin by about 90%, saving cash-paying patients between $2,000 and $4,000 annually. With CalRx, and unlike private companies, we’re getting at the underlying cost – the price is the price, and CalRx will prevent the egregious cost-shifting that happens in traditional pharmaceutical price games. It’ll cost us $30 to manufacture and distribute, and that’s how much the consumer can buy it for. You don’t need a voucher or coupon to access this price, and it’s available to everybody regardless of insurance plan. This is a crucial step in not just cutting the cost for the consumer, but cutting costs across the board in order to bring cheaper prescription drugs to all Californians.

“To address the affordability crisis in California, we have to address the high cost of prescription drugs,” said Dr. Mark Ghaly, Secretary of the California Health and Human Services Agency. “The CalRx Biosimilar Insulin Initiative will benefit Californians who are today paying too much for a medication that we know is life saving and life altering.”

KEY DETAILS

  • A 10mL vial will be made available for no more than $30 (normally $300)
  • A box of 5 pre-filled 3mL pens will be made available for no more than $55 (normally more than $500)
  • No new prescription will be needed. Californians will be able to ask for the CalRx generic at their local pharmacy or via mail order pharmacies. Pharmacies must agree to order/stock the product.
  • CalRx plans to make biosimilar insulins available for: Glargine, Aspart, and Lispro (expected to be interchangeable with Lantus, Humalog, and Novolog respectively)

WHAT COMES NEXT

  • As part of the State’s Master Plan to Tackle the Fentanyl Crisis, California is exploring potential next products to bring to market, like Naloxone, to aid in the State’s effort to combat fentanyl overdoses.
  • CIVICA is working with the California Health and Human Services Agency to identify a California-based manufacturing facility.

The CalRx website states, in part:

CalRx represents a groundbreaking solution for addressing drug affordability. Originally announced in January 2019 in Governor Newsom’s first Executive Order and later signed into law in the California Affordable Drug Manufacturing Act of 2020 (Pan, SB 852, Chapter 207, Statutes of 2020), CalRx empowers the State of California to develop, produce, and distribute generic drugs and sell them at low cost.

The State will target prescription drugs where the pharmaceutical market has failed to lower drug costs, even when a generic or biosimilar medication is available. 

The first drug manufactured will be insulin. Once approved by the FDA, Californians can ask their doctor or pharmacist for a CalRx biosimilar insulin.

The CalRx Biosimilar Insulin Initiative will lay the groundwork for future drug projects.

Bringing CalRx products into the drug market will create more competition, which will help shift the industry from obscure, rebate-based pricing towards low, transparent pricing.

  • CalRx will use transparent pricing – set as low as possible – based on the development, production, and distribution costs.
  • CalRx will develop target drugs in collaboration with the State’s public programs.
  • CalRx will be available for doctors to prescribe and will be available through a variety of outlets, such as a local pharmacy or retail outlet.
  • CalRx is not a coupon program. As mandated by law, CalRx will only use federally mandated rebates or discounts, not other ones.

Friday, 10 March 2023

Silicon Valley Bank Fails

Silicon Valley Bank in California has taken a turn for the worse.  Not good.  The Federal Deposit Insurance Corporation (FDIC) is taking over the bank.  The FDIC press release states:

WASHINGTON – Silicon Valley Bank, Santa Clara, California, was closed today by the California Department of Financial Protection and Innovation, which appointed the Federal Deposit Insurance Corporation (FDIC) as receiver. To protect insured depositors, the FDIC created the Deposit Insurance National Bank of Santa Clara (DINB). At the time of closing, the FDIC as receiver immediately transferred to the DINB all insured deposits of Silicon Valley Bank.

All insured depositors will have full access to their insured deposits no later than Monday morning, March 13, 2023. The FDIC will pay uninsured depositors an advance dividend within the next week. Uninsured depositors will receive a receivership certificate for the remaining amount of their uninsured funds. As the FDIC sells the assets of Silicon Valley Bank, future dividend payments may be made to uninsured depositors.

Silicon Valley Bank had 17 branches in California and Massachusetts. The main office and all branches of Silicon Valley Bank will reopen on Monday, March 13, 2023. The DINB will maintain Silicon Valley Bank’s normal business hours. Banking activities will resume no later than Monday, March 13, including on-line banking and other services. Silicon Valley Bank’s official checks will continue to clear. Under the Federal Deposit Insurance Act, the FDIC may create a DINB to ensure that customers have continued access to their insured funds.

As of December 31, 2022, Silicon Valley Bank had approximately $209.0 billion in total assets and about $175.4 billion in total deposits. At the time of closing, the amount of deposits in excess of the insurance limits was undetermined. The amount of uninsured deposits will be determined once the FDIC obtains additional information from the bank and customers.

Customers with accounts in excess of $250,000 should contact the FDIC toll-free at 1-866-799-0959.

The FDIC as receiver will retain all the assets from Silicon Valley Bank for later disposition. Loan customers should continue to make their payments as usual.

Silicon Valley Bank is the first FDIC-insured institution to fail this year. The last FDIC-insured institution to close was Almena State Bank, Almena, Kansas, on October 23, 2020.

Tuesday, 28 February 2023

Liability for Cybersecurity Issues with Software Code – Director Easterly

Jen Easterly recently gave a very important speech at Carnegie Mellon University.  Jen is the Director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA).  She is well known and well liked in the technology industry and has worked hard to address many of the difficult issues facing the United States concerning cybersecurity.  In her comments, she tackles the thorny issue of the quality of software code that is produced—both proprietary and open-source code.  The creation of increased liability for software manufacturers may be a game-changer.  I’ve pasted her entire speech below because it is that good and worth reading in full.  Here are her comments:

Unsafe at Any CPU Speed:

The Designed-in Dangers of Technology and What We Can Do About It

Good morning. Thank you to President Jahanian for that warm introduction and to everyone for joining me today on this Monday morning. It’s wonderful to start the week off with this incredible community.

I can’t think of a more fitting location for this discussion than Pittsburgh, a city built on innovation, imagination, and technological transformation; and Carnegie Mellon University, one of the world’s most renowned educational institutions, home to one of our nation’s top undergraduate computer science programs and top engineering programs, but also, to so much more. Let me share a few of my own favorites:

  • The first smiley in an email was created by Research Professor Scott Fahlman, launching the emoticon craze
  • CAPTCHAs—or completely automated public Turing tests to tell computers and humans apart— (how many of you knew what that stood for?) were developed here by Professor Luis von Ahn and his colleagues, used to help prevent cybercrime
  • Wireless research conducted at CMU laid the foundation for now ubiquitous wi-fi
  • CMU is home to the nation’s first robotics lab; and of course, home to the Software Engineering Institute, the first Federal Lab dedicated to software engineering. SEI established the first Computer Emergency Response Team, or CERT, in response to the Morris worm—a team that became the model for CERTs around the globe and, of course, was a key partner in the creation of US-CERT in 2003, the precursor to CISA’s Cybersecurity Division.

But the partnership between CMU and CISA goes well beyond technical capability – to what I consider the most important aspect of technology – People. The CISA team is full of amazing CMU alumni like Karen Miller, who leads our vulnerability evaluation work, and Dr. Jono Spring, who is on the front lines of our vulnerability management work – both are here with me today.

Finally, I wanted to come here because CISA and CMU share a common set of values—collaboration, innovation, inclusion, empathy, impact, and service. And of course, a shared passion for our work.

So, now that you know why I am here, I want to start with a story.

At 2:39 pm on a chilly but sunny Saturday, just six miles off the coast of South Carolina, an F-22 fighter jet from Langley Air Force Base fired a Sidewinder air-to-air missile to take down a balloon—the size of three school buses—that had drifted across the United States. The deliberate action came after a tense public standoff with Beijing and intense media scrutiny about the Chinese “spy balloon.”

The response and surrounding attention to the issue reinforced for me a major challenge we face in the field of cybersecurity—raising national attention to issues much less visible but in many ways far more dangerous. Our country is subject to cyber intrusions every day from the Chinese government, but these intrusions rarely make it into national news. Yet these intrusions can do real damage to our nation—leading to theft of our intellectual property and personal information; and even more nefariously: establishing a foothold for disrupting or destroying the cyber and physical infrastructure that Americans rely upon every hour of every day—for our power, our water, our transportation, our communication, our healthcare, and so much more. China’s massive and sophisticated hacking program is larger than that of every other major nation – combined. This is hacking on an enormous scale, but unlike the spy balloon, which was identified and dealt with, these threats more often than not go unidentified and undeterred.

And while a focus on adversary nations—like China and Russia—and on cybercriminals is important, I would submit to you that these cyber-intrusions are a symptom, rather than a cause, of the vulnerability we face as a nation. The cause, simply put, is unsafe technology products. And because the damage caused by these unsafe products is distributed and spread over time, the impact is much more difficult to measure. But like the balloon, it’s there.

It’s a school district shut down; one patient forced to divert to another hospital; a separate patient forced to cancel a surgery; a family defrauded of their savings; a gas pipeline shut down; a 160-year-old college forced to close its doors because of a ransomware attack.

And that’s just the tip of the iceberg, as many—if not most—attacks go unreported. As a result, it’s enormously difficult to understand the collective toll these attacks are taking on our nation or to fully measure their impact in a tangible way.

The risk introduced to all of us by unsafe technology is frankly much more dangerous and pervasive than the spy balloon, yet we’ve somehow allowed ourselves to accept it. As we’ve integrated technology into nearly every facet of our lives, we’ve unwittingly come to accept as normal that such technology is dangerous-by-design:

We’ve normalized the fact that technology products are released to market with dozens, hundreds, or thousands of defects, when such poor construction would be unacceptable in any other critical field. 

We’ve normalized the fact that the cybersecurity burden is placed disproportionately on the shoulders of consumers and small organizations, who are often least aware of the threat and least capable of protecting themselves. 

We’ve normalized the fact that security is relegated to the “IT people” in smaller organizations or to a Chief Information Security Officer in enterprises, but few have the resources, influence, or accountability to incentivize adoption of products in which safety is appropriately prioritized against cost, speed to market, and features. 

And we’ve normalized the fact that most intrusions and cyber threats are never reported to the government or shared with potentially targeted organizations, allowing our adversaries to re-use the same techniques to compromise countless other organizations, often using the same infrastructure.

This pattern of ignoring increasingly severe problems is an example of the “normalization of deviance,” a theory advanced by sociologist Diane Vaughan in her book about the ill-fated decision to launch the space shuttle Challenger in 1986.  Vaughan describes an environment in which “people become so accustomed to a deviant behavior that they don't consider it as deviant, despite the fact that they far exceed their own rules for elementary safety.”

When it comes to unsafe technology, we have collectively become accustomed to a deviance from what we would all think would be proper behavior of technology manufacturers, namely, to create safe products.

Dr. Richard Cook, a software engineer and system safety researcher, popularized the complementary idea of an “accident boundary”—that is, the point of maximum risk that organizations can tolerate, beyond which you have an “accident,” like an intrusion. Organizations try to move their operations away from the accident boundary. In cybersecurity, we might see them conduct employee awareness training for phishing, deploy multi-factor authentication, or buy expensive security tools. But what if the very design of technology products caused our operations to always be right up against the accident boundary through no fault of our own? What if no reasonable amount of money or employee training could fix that, and an accident was inevitable because of the design of the product? It’s as if we’ve normalized the deviant behavior of operating at the bleeding edge of the accident boundary. This is the current state of the technology industry—and we need to make a fundamental shift if we want to do better. And we must do better.

So, the question is: How? What if we changed how we think about cyber-attacks and where to focus our attention? What if we thought more about not just a superficial “root cause,” but the multiple contributing factors to a breach?

Fortunately, history proves to us that we can—and indeed must—change the way we collectively value safety over other market incentives like cost, features, and speed to market. For the first half of the 20th century, conventional wisdom held that car accidents were solely the fault of bad drivers. This is very similar to the way we often blame a company today that has a security breach because it did not patch a known vulnerability. But what about the manufacturer that produced the technology that required so many patches in the first place? We seem to be misplacing the responsibility for security and compounding it with a lack of accountability.

Today, we can be confident that any car we drive has been manufactured with an array of standard safety features—seatbelts, airbags, anti-lock brakes, and so on. And that’s because we know they work—quite simply, these features prevent bad things from happening. They save lives. Indeed, cars today are designed to be as safe as possible—for example, to absorb kinetic energy by crumpling and thus raise the occupants’ chances of survival. Cars undergo rigorous testing and crashworthiness analysis to validate these design elements. No one would think of purchasing a car today that did not have seatbelts or airbags included as a standard feature, nor would anyone accept paying extra to have these basic safety elements installed.

Unfortunately, the same cannot be said for the technology that underpins our very way of life. We find ourselves blaming the user for unsafe technology. In place of building in effective security from the start, technology manufacturers are using us, the users, as their crash test dummies—and we’re feeling the effects of those crashes every day with real-world consequences. This situation is not sustainable. We need a new model. 

A model in which we can place implicit trust in the safety and integrity of the technology products that we use every hour of every day, technology which underpins our most critical functions and services. 

A model in which responsibility for technology safety is shared based upon an organization’s ability to bear the burden and where problems are fixed at the earliest possible stage—that is, when the technology is designed rather than when it is being used. 

A model that emphasizes collaboration as a prerequisite to self-preservation and a recognition that a cyber threat to one organization is a safety threat to all organizations. 

In sum, we need a model of sustainable cybersecurity, one where incentives are realigned to favor long-term investments in the safety and resilience of our technology ecosystem, and where responsibility for defending that ecosystem is rebalanced to favor those most capable and best positioned to do so.

What would such a model look like?

It would begin with technology products that put the safety of customers first. It would rebalance security risk from organizations—like small businesses—least able to bear it onto organizations—like major technology manufacturers—much more suited to managing cyber risks.

To help crystallize this model, at CISA, we’re working to lay out a set of core principles for technology manufacturers to build product safety into their processes to design, implement, configure, ship, and maintain their products. Let me highlight three of them here:

First, the burden of safety should never fall solely upon the customer. Technology manufacturers must take ownership of the security outcomes for their customers.

Second, technology manufacturers should embrace radical transparency to disclose and ultimately help us better understand the scope of our consumer safety challenges, as well as a commitment to accountability for the products they bring to market. 

Third, the leaders of technology manufacturers should explicitly focus on building safe products, publishing a roadmap that lays out the company's plan for how products will be developed and updated to be both secure-by-design and secure-by-default.

So, what would this look like in practice?

Well, consumer safety must be front and center in all phases of the technology product lifecycle—with security designed in from the beginning—and strong safety features, like seatbelts and airbags— enabled right out of the box, without added costs. Security-by-design includes actions like transitioning to memory-safe languages, having a transparent vulnerability disclosure policy, and secure coding practices. Attributes of strong security-by-default will evolve over time, but in today’s risk environment sellers of software must include in their basic pricing the types of features that secure a user’s identity, gather and log evidence of potential intrusions, and control access to sensitive information, rather than as an added, more expensive option.

In short, strong security should be a standard feature of virtually every technology product, and especially those that support the critical infrastructure that Americans rely on daily. Technology must be purposefully developed, built, and tested to significantly reduce the number of exploitable flaws before they are introduced into the market for broad use. Achieving this outcome will require a significant shift in how technology is produced, including the code used to develop software, but ultimately, such a transition to secure-by-default and secure-by-design products will help both organizations and technology providers: it will mean less time fixing problems, more time focusing on innovation and growth, and importantly, it will make life much harder for our adversaries.

In this new model, the government has an important role to play in both incentivizing these outcomes and operationalizing these principles. Regulation—which played a significant role in improving the safety of automobiles—is one tool, but—importantly—it’s not a panacea.

One of the most effective tools the government has at its disposal to drive better security outcomes is through its purchasing power. The Biden Administration has already taken important steps toward this goal in establishing software security requirements for federal contractors and undertaking an effort to adopt security labels for connected consumer devices like baby monitors and webcams. It will continue to pursue this goal through the implementation of the initiatives called for in the President’s May 2021 cybersecurity executive order, such as developing federal acquisition regulations around cybersecurity.

The government can also play a role in shifting liability onto those entities that fail to live up to the duty of care they owe their customers. Returning to the automotive analogy: the liability for defective auto parts now generally rests with the producer that introduced the defect even if an error by the driver caused the defect to manifest. This was reflected in class action litigation against the Takata Corporation, where the company’s defective airbags tragically caused over 30 deaths after often minor collisions. Consumers and businesses alike expect that products purchased from a reputable provider will work the way they are supposed to and not introduce inordinate risk. To this end, government can work to advance legislation to prevent technology manufacturers from disclaiming liability by contract, establishing higher standards of care for software in specific critical infrastructure entities, and driving the development of a safe harbor framework to shield from liability companies that securely develop and maintain their software products and services. While it will not be possible to prevent all software vulnerabilities, the fact that we’ve accepted a monthly “Patch Tuesday” as normal is further evidence of our willingness to operate dangerously at the accident boundary.

In addition, the government can play a useful signaling role in acknowledging the good work that technology manufacturers are doing today because they recognize that owning the security outcomes of their customers is the right thing to do to ensure the safety of those customers.

Encouragingly, an increasing number are taking important steps in the right direction—from adopting secure programming practices to enabling strong security measures by default for their customers. I’ll highlight a few.

With respect to secure programming, it’s been a relatively well-kept secret for many years, but around two-thirds of known software vulnerabilities are a class of weakness referred to as “memory safety” vulnerabilities—bugs that arise from how computer memory is accessed. Certain programming languages—most notably, C and C++—lack the mechanisms to prevent coders from introducing these vulnerabilities into their software. By switching to memory safe programming languages—like Rust, Go, Python, and Java—developers can eliminate these vulnerabilities. Java, of course, was invented by CMU alumnus James Gosling. As one example, Google recently announced that “Android 13 is the first Android release where a majority of new code added to the release is in a memory safe language” – specifically Rust – and that “there have been zero memory safety vulnerabilities discovered in Android’s Rust code.” That’s a remarkable result.

And it’s not just Google. Mozilla, who created Rust, has a project to integrate Rust into Firefox. Amazon Web Services has also begun building critical services in Rust—noting not just security benefits but also time and cost savings.

The nonprofit Internet Security Research Group is another good example. Work done under their Prossimo project led to support for using Rust in the Linux kernel, an important milestone given that the Linux kernel is at the heart of today’s internet. If the Internet Security Research Group can have such success on a limited budget, think about what big corporations can do.
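To make the memory safety point concrete, here is a minimal illustration of my own (not from the speech) in Python, one of the memory safe languages Director Easterly lists. A memory safe runtime checks every index, so an out-of-bounds write is caught and reported rather than silently corrupting whatever sits in adjacent memory, the classic failure mode of C and C++.

```python
# A memory-safe language checks every access against the buffer's
# actual bounds at runtime.
buffer = [0] * 8  # an 8-element buffer

def write_byte(index: int, value: int) -> None:
    buffer[index] = value  # bounds-checked on every access

write_byte(3, 255)  # in range: succeeds

try:
    write_byte(12, 255)  # out of range: equivalent C code could silently corrupt memory
except IndexError as exc:
    print(f"out-of-bounds write rejected: {exc}")  # Python raises an error instead
```

Rust, Go, and Java enforce the same guarantee in their own ways, which is what makes the migration efforts described above so consequential.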

Now consider some examples of security defaults: Apple says that 95% of iCloud users enable MFA. Metrics for other services are hard to come by, but Twitter reports that fewer than 3% of its users use any form of MFA. Microsoft reports that only about a quarter of its enterprise customers use MFA and that only about one third of their administrator accounts use MFA. While the Twitter and Microsoft stats are disappointing, the companies are doing a service by publicly releasing data on MFA adoption.

Apple’s impressive MFA numbers aren’t due to random chance. By making MFA the default for user accounts, Apple is taking ownership for the security outcomes of their users. By providing radical transparency around MFA adoption, these organizations are helping shine a light on the necessity of security by default. More should follow their lead—in fact, every organization should demand transparency regarding the practices and controls adopted by technology providers and then demand adoption of such practices as basic criteria for acceptability before procurement or use. Manufacturers must be transparent about their processes and their quality and safety. They must run transparent vulnerability disclosure policies, giving legal protection to security researchers who report vulnerabilities, letting those researchers talk publicly about their findings, and taking care to address root causes of those vulnerabilities.

Here at CMU, the Software Engineering Institute has done some great work on this, including by publishing the CERT Guide to Coordinated Vulnerability Disclosure. Other community efforts like disclose.io have done a good job laying out template language for vulnerability disclosure policies which companies can adopt. 

Dropbox is one strong example of mandating transparency from vendors. In 2019, they overhauled their vendor contracts to include security requirements, holding vendors to the same level of security that Dropbox holds itself to. This includes actions like requiring vendors and their employees to use MFA, allowing Dropbox to perform security testing of the vendors’ systems, and requiring vendors to publish vulnerability disclosure policies with legal safe harbor. They even open-sourced their contract requirements so that other organizations could adopt and modify them. I encourage other organizations to follow Dropbox’s example and start demanding transparency from their vendors.

At CISA, we’ve been working through ways that we can support radical transparency in technology products. For example, we’re focused on advancing the use of Software Bill of Materials, or “SBOMs,” the idea that software should come with an inventory of open-source components and other code dependencies. Effective use of an SBOM can help an organization understand whether a given vulnerability affects software being used in their assets and provide greater confidence in a manufacturer’s software development practices.

We must applaud and encourage any and all progress, while also recognizing the need to do more. Because as we introduce more unsafe technology into our lives, we increase our risk and our exposure exponentially—and this threat environment will only get more complex.

While we play our role from a government perspective, and technology companies increasingly embrace their role in putting consumer safety first, universities have an important role to play in achieving safe technology products. Indeed, one of the main reasons I wanted to come to CMU is because of the strength of your computer science and software engineering programs—because this is where the next generation of software engineers and innovators are learning their craft.

For the professors here this morning, you are responsible for the education of some of our nation’s brightest young minds and for the knowledge they bring into the working world. If that world is going to be one where the technology products that we all rely on are safe, it must be a world where our new graduates show up to work with fluency in, and a bias towards, memory safe programming languages. A world where incentives, tools, and training are readily available to help organizations migrate key libraries to memory safe languages.

Imagine that by 2030, memory safety vulnerabilities are almost non-existent: attackers are unable to find and use them, dramatically raising the cost of an attack and stopping all the terrible things I talked about earlier. How did we get there? I think a major part of the answer to that question is that “we figured out how to make memory safe languages ubiquitous within universities, nationally and globally.” I know that sounds like a lofty goal, but let’s talk about some possible steps to get there. I’ll highlight four key areas for your consideration.

First, could you move university coursework to memory safe languages?

  • As an industry, we need to start containing, and eventually rolling back, the prevalence of C/C++ in key systems and putting a real emphasis on safety.
  • How can we tackle this challenge? What if we start a formal program – with material funding, incentives for professors, goals, an executive sponsor, and metrics – to migrate course materials to use memory safe languages? This includes ensuring that C and C++, when taught, are treated as dangerous, regardless of how pervasive they are in existing codebases.
  • In that vein, I’d like to give kudos here to CMU for offering CS 112—an introductory programming course taught in Python and taken by many students across the university. Introducing students to the benefits of programming in a memory safe language is a key step forward.

Second, could you weave security through all computer software coursework?

  • There’s often a knowledge, skills, and experience gap between new hires and what is needed at their first jobs. Some of the larger companies have security training for new hires to ensure they understand how to code safely, always with an intelligent adversary in mind. Meanwhile, just one out of the top twenty undergraduate programs in computer science requires a security course as a graduation requirement. Which one? UC San Diego. As it stands, at most schools, a student can earn a computer science degree without learning the fundamentals of safety and security. I urge every university to make taking a security course a graduation requirement for all computer science students. Better still, don’t just make security a separate class, but make it part of every class.
  • I’d like to recognize CMU for being a leader here, integrating security for its core classes. Freshmen taking CS 122, for instance, learn about memory safety bugs like buffer overflows. I’d love to see how we can help standardize this kind of education into curricula across the country.
  • Civil, mechanical, and electrical engineers all take a substantial course load around thinking critically about safety: from understanding tolerances and safety margins to rigorously analyzing failures, safety is a critical part of engineering education. Skills for reliably and securely engineering computer software are critical parts of national security. We must work together to instill these skills into the engineers who will manufacture our future technology.
  • CMU also deserves credit for its focus on software as an engineering discipline. CMU researchers have made significant contributions to advancing the state of the art in software engineering and programming language design. I challenge you to think about how to go further in making that work accessible to all students and integrating it deeper into the standard computer science curriculum.

Third, how can you help the open-source community?

  • Are there opportunities to migrate CMU sponsored open-source projects to memory safe languages? To require all published research code to be written in memory safe languages? To build research opportunities and hands-on classroom learning around enhancing the safety of key open-source projects? The open-source commons is a key foundation of our software ecosystem and universities are well suited to invest in making sure that foundation is up to code.

And finally, could you find a way to help all developers and all business leaders make the switch?

  • Can we create better tooling for migrating to memory safe code from legacy code bases? Are there ways to make formal verification of software safety easy to deploy at scale? These questions have drawn research attention for decades, but they are only growing in importance as software is further embedded into the very foundations of our society. More tactically, you can help produce clear technical guidance—in partnership with CISA—on how developers can radically improve the quality and safety of their code.
  • You can also partner with your colleagues here in the business school on management guidance to help business leaders understand what it takes to reinforce a culture of embracing safety and security as a matter of product quality.

These are big challenges, but ones that deserve our full attention. Steps taken today at this university and universities around the country can help spur an industry-wide change towards memory safe languages and add more engineering rigor to software development, which in turn will help protect all technology users. It’s critical that students have a strong bias to build safety into every system, which will pay dividends in the long run.

Finally, to all the students in the room.

Given the catastrophic costs of cyber-attacks affecting American businesses, governments, and citizens, we need future leaders like you to find ways to turbocharge the transformation to memory safe systems, and more broadly to systems that we know to be secure by design.

There are many ways you can help solve these challenges. Maybe you go work at a tech company—or even start your own—and write memory safe code, focused on advancing the principles of security we discussed today. Remember that you should take pride in the safety of the code you write—think of it as “part of your brand” as an excellent software engineer.  And maybe you take what you learn today and help educate your fellow students on security and encourage your peers to write memory safe code.

Or maybe you decide to come work with us at CISA. We have several CMU alums who did just that. We need talented individuals like you all to help build our team as we continue to increase our capabilities, and most importantly, to help us forge a new approach around technology product safety. My team is here today, and I’d encourage you to stop by their table to talk with them if you want to learn more about working at CISA. One common value we share with you all is that we all put our heart into our work.

As we started with a story, I want to end with one, though this is less a story than a tale—a cautionary one at that.

Imagine a world where none of the things we talked about today come to pass, where the burden of security continues to be placed on consumers, where technology manufacturers continue to create unsafe products or upsell security as a costly add-on feature, where universities continue to teach unsafe coding practices, where the services we rely on every day remain vulnerable. This is a world that our adversaries are watching carefully and hoping never changes.

Because this is a world where another unprovoked invasion of a peaceful country by another much more powerful adversary—an adversary that has watched and learned from the endless missteps of Russia in its criminal war against Ukraine—might very well be coupled with the explosion of multiple U.S. gas pipelines; the mass pollution of our water systems; the hijacking of our telecommunications systems; the crippling of our transportation nodes—all designed to incite chaos and panic across our country and deter our ability to marshal military might and citizen will.

Such a scenario of attacks against our critical infrastructure in the event of a Chinese invasion of Taiwan is unfortunately not terribly far-fetched, but it is one we can prevent, if we come together, collectively as a nation, across our businesses and across our universities, to put our heart into the hard work of achieving safe, secure, and resilient infrastructure for the American people. 

Thank you again for the opportunity to speak with you today; I look forward to continuing the conversation with Professor Mayer and hearing your thoughts.

Wednesday, 22 February 2023

FTC New Office of Technology

The U.S. Federal Trade Commission has announced the creation of an Office of Technology to support its work. The press release states:

The Federal Trade Commission today launched a new Office of Technology that will strengthen the FTC’s ability to keep pace with technological challenges in the digital marketplace by supporting the agency’s law enforcement and policy work. 

“For more than a century, the FTC has worked to keep pace with new markets and ever-changing technologies by building internal expertise,” said Chair Lina M. Khan. “Our office of technology is a natural next step in ensuring we have the in-house skills needed to fully grasp evolving technologies and market trends as we continue to tackle unlawful business practices and protect Americans.”

The Office of Technology will have dedicated staff and resources, and will be headed by Chief Technology Officer Stephanie T. Nguyen.  

“I’m honored to lead the FTC’s Office of Technology at this vital time to strengthen the agency’s technical expertise and meet the quickly evolving challenges of the digital economy,” said Nguyen. “I look forward to continuing to work with the agency’s talented staff and building our team of technologists.”  

The Office of Technology will boost the FTC’s expertise to help the agency achieve its mission of protecting consumers and promoting competition. Specifically, the new office will: 

  • Strengthen and support law enforcement investigations and actions: The office will support FTC investigations into business practices and the technologies underlying them. This includes helping to develop appropriate investigative techniques, assisting in the review and analysis of data and documents received in investigations, and aiding in the creation of effective remedies. 
  • Advise and engage with staff and the Commission on policy and research initiatives: The office will work with FTC staff and the Commission to provide technological expertise on non-enforcement actions including 6(b) studies, reports, requests for information, policy statements, congressional briefings, and other initiatives.  
  • Highlight market trends and emerging technologies that impact the FTC’s work: The office will engage with the public and external stakeholders through workshops, research conferences, and consultations and highlight key trends and best practices. 

The creation of the Office of Technology builds on the FTC’s efforts over the years to expand its in-house technological expertise, and it brings the agency in line with other leading antitrust and consumer protection enforcers around the world. 

The Commission voted 4-0 to approve the creation of the Office of Technology. 

Call for Papers: Intangible Corporate IP Rights Assets: Finance and Governance for Sustainability Transitions

Professor Janice Denoncourt has sent us this call for papers for the LAWS MDPI journal.  Details are below. 

Call for submissions:  Special Edition of LAWS MDPI - sister journal of Sustainability MDPI 

Associate Professor Dr Janice Denoncourt will Guest Edit a special issue of LAWS MDPI (ISSN 2075-471X), an open access sister journal of Sustainability MDPI, together with members of Nottingham Law School’s Intellectual Property Research Group.  LAWS MDPI is tracked for impact and indexed in SCOPUS.

The Special Edition is entitled: Intangible Corporate Intellectual Property Rights Assets: Finance and Governance for Sustainability Transitions 

The Special Law Journal Issue is now live and open for submissions; see the link below.  The deadline for manuscripts is 31 August 2023.

It is commonly understood that corporate ownership of intangible intellectual property rights (patents, trademarks, copyright, trade secrets and designs) plays a pervasive role in the modern business environment.  Intellectual property rights (IPRs) are fundamental yet intangible resources for explaining various aspects of economic value creation, and are critical for fostering resilience, growth and corporate longevity. However, varying degrees of knowledge and understanding of IPRs inhibit management and corporate communication between firms, their shareholders and other stakeholders in both the traditional financial accounts and narrative corporate reports.

In terms of corporate governance research reference points, a responsible business considers society, the economy and the environment whilst promoting the success of the firm.  Responsible businesses acknowledge shareholder primacy but are also driven by many other important, non-financial information (NFI) considerations such as the strategic and ethical use of their powerful IPR portfolios.  Sustainability and NFI are increasingly central to the decision-making of most business leaders, investors, regulators, politicians, and consumers. Further, monopolistic IPRs have the potential to provide a significant long-term competitive advantage. Indeed, capital market participants progressively prioritize disclosure and transparency in this type of NFI. However, there is a surprising lack of material quantitative and qualitative information made available by companies, and even less about how corporate IPRs connect to sustainable transitions and responsible business.

This Special Issue will examine intangible asset management, corporate reporting, materiality and transparency with respect to the disclosure of non-financial narrative information (NFI) in annual reports and other grey literature.  The NFI will focus primarily on corporate intangible IPR assets (patents, trademarks, copyright, trade secrets and designs) with the aim of developing a better understanding of how they support sustainable transition contexts.

From corporate annual reports to corporate websites, the dissemination of knowledge regarding corporate intangible assets is affected by various types of regulation, or the lack thereof. This may be legislation in the form of the Companies Act 2006 UK as amended (and its equivalents in other jurisdictions), intellectual property laws or accounting rules.  There is an evolving debate regarding the scope of corporate disclosure and non-financial information norms, i.e., what should mandatorily or voluntarily be made public.  Thus, corporate intangible asset reporting presents a complex area with little research-based evidence to guide company directors, preparers, accountants, auditors, lawyers and IP professionals.

This Special Issue brings together academic lawyers, business scholars and relevant IP and other professionals who share a common interest in the role of intangible IPRs in corporate governance and reporting. We are open to authors representing various methodological approaches, from traditional legal analysis to interdisciplinary and multidisciplinary approaches and methodologies.  We welcome authors presenting different national and regional perspectives.  Researchers are encouraged to submit articles that showcase evidence regarding how best to regulate corporate IPR asset disclosures and materiality, with a view to ensuring corporate accountability for statements and claims made and avoiding greenwashing. We look forward to receiving your contributions.

Associate Professor Dr. Janice Denoncourt, Nottingham Law School, UK
Dr. Onyeka Nwoha, Nottingham Law School, UK
Guest Editors

Michelle Okyere, PhD Candidate, Nottingham Law School, UK
Guest Editor Assistant

Wednesday, 25 January 2023

US DOJ Sues Google for Anticompetitive Conduct in Advertising Practices

The U.S. Department of Justice has brought a competition suit against Google concerning its internet advertising practices.  The DOJ press release states:

Today, the Justice Department, along with the Attorneys General of California, Colorado, Connecticut, New Jersey, New York, Rhode Island, Tennessee, and Virginia, filed a civil antitrust suit against Google for monopolizing multiple digital advertising technology products in violation of Sections 1 and 2 of the Sherman Act.

Filed in the U.S. District Court for the Eastern District of Virginia, the complaint alleges that Google monopolizes key digital advertising technologies, collectively referred to as the “ad tech stack,” that website publishers depend on to sell ads and that advertisers rely on to buy ads and reach potential customers. Website publishers use ad tech tools to generate advertising revenue that supports the creation and maintenance of a vibrant open web, providing the public with unprecedented access to ideas, artistic expression, information, goods, and services. Through this monopolization lawsuit, the Justice Department and state Attorneys General seek to restore competition in these important markets and obtain equitable and monetary relief on behalf of the American public.

As alleged in the complaint, over the past 15 years, Google has engaged in a course of anticompetitive and exclusionary conduct that consisted of neutralizing or eliminating ad tech competitors through acquisitions; wielding its dominance across digital advertising markets to force more publishers and advertisers to use its products; and thwarting the ability to use competing products. In doing so, Google cemented its dominance in tools relied on by website publishers and online advertisers, as well as the digital advertising exchange that runs ad auctions.

“Today’s complaint alleges that Google has used anticompetitive, exclusionary, and unlawful conduct to eliminate or severely diminish any threat to its dominance over digital advertising technologies,” said Attorney General Merrick B. Garland. “No matter the industry and no matter the company, the Justice Department will vigorously enforce our antitrust laws to protect consumers, safeguard competition, and ensure economic fairness and opportunity for all.”

“The complaint filed today alleges a pervasive and systemic pattern of misconduct through which Google sought to consolidate market power and stave off free-market competition,” said Deputy Attorney General Lisa O. Monaco. “In pursuit of outsized profits, Google has caused great harm to online publishers and advertisers and American consumers. This lawsuit marks an important milestone in the Department’s efforts to hold big technology companies accountable for violations of the antitrust laws.”

“The Department’s landmark action against Google underscores our commitment to fighting the abuse of market power,” said Associate Attorney General Vanita Gupta. “We allege that Google has captured publishers’ revenue for its own profits and punished publishers who sought out alternatives. Those actions have weakened the free and open internet and increased advertising costs for businesses and for the United States government, including for our military.”

“Today’s lawsuit seeks to hold Google to account for its longstanding monopolies in digital advertising technologies that content creators use to sell ads and advertisers use to buy ads on the open internet,” said Assistant Attorney General Jonathan Kanter of the Justice Department’s Antitrust Division. “Our complaint sets forth detailed allegations explaining how Google engaged in 15 years of sustained conduct that had — and continues to have — the effect of driving out rivals, diminishing competition, inflating advertising costs, reducing revenues for news publishers and content creators, snuffing out innovation, and harming the exchange of information and ideas in the public sphere.”

Google now controls the digital tool that nearly every major website publisher uses to sell ads on their websites (publisher ad server); it controls the dominant advertiser tool that helps millions of large and small advertisers buy ad inventory (advertiser ad network); and it controls the largest advertising exchange (ad exchange), a technology that runs real-time auctions to match buyers and sellers of online advertising.

[Image removed]

Google’s anticompetitive conduct has included:

  • Acquiring Competitors: Engaging in a pattern of acquisitions to obtain control over key digital advertising tools used by website publishers to sell advertising space;
  • Forcing Adoption of Google’s Tools: Locking in website publishers to its newly-acquired tools by restricting its unique, must-have advertiser demand to its ad exchange, and in turn, conditioning effective real-time access to its ad exchange on the use of its publisher ad server;
  • Distorting Auction Competition: Limiting real-time bidding on publisher inventory to its ad exchange, and impeding rival ad exchanges’ ability to compete on the same terms as Google’s ad exchange; and
  • Auction Manipulation: Manipulating auction mechanics across several of its products to insulate Google from competition, deprive rivals of scale, and halt the rise of rival technologies.

As a result of its illegal monopoly, and by its own estimates, Google pockets on average more than 30% of the advertising dollars that flow through its digital advertising technology products; for some transactions and for certain publishers and advertisers, it takes far more. Google’s anticompetitive conduct has suppressed alternative technologies, hindering their adoption by publishers, advertisers, and rivals.

The Sherman Act embodies America’s enduring commitment to the competitive process and economic liberty. For over a century, the Department has enforced the antitrust laws against unlawful monopolists to unfetter markets and restore competition. To redress Google’s anticompetitive conduct, the Department seeks both equitable relief on behalf of the American public as well as treble damages for losses sustained by federal government agencies that overpaid for web display advertising. This enforcement action marks the first monopolization case in approximately half a century in which the Department has sought damages for a civil antitrust violation.

In 2020, the Justice Department filed a civil antitrust suit against Google for monopolizing search and search advertising, which are different markets from the digital advertising technology markets at issue in the lawsuit filed today. The Google search litigation is scheduled for trial in September 2023.

Google is a limited liability company organized and existing under the laws of the State of Delaware, with a headquarters in Mountain View, California. Google’s global network business generated approximately $31.7 billion in revenues in 2021. Google is owned by Alphabet Inc., a publicly traded company incorporated and existing under the laws of the State of Delaware and headquartered in Mountain View, California.


Friday, 6 January 2023

US FTC to Ban Noncompete Agreements?

The U.S. Federal Trade Commission has proposed a rule which would essentially bar noncompete agreements.  The FTC’s press release states:

The Federal Trade Commission proposed a new rule that would ban employers from imposing noncompetes on their workers, a widespread and often exploitative practice that suppresses wages, hampers innovation, and blocks entrepreneurs from starting new businesses. By stopping this practice, the agency estimates that the new proposed rule could increase wages by nearly $300 billion per year and expand career opportunities for about 30 million Americans.

The FTC is seeking public comment on the proposed rule, which is based on a preliminary finding that noncompetes constitute an unfair method of competition and therefore violate Section 5 of the Federal Trade Commission Act.

“The freedom to change jobs is core to economic liberty and to a competitive, thriving economy,” said Chair Lina M. Khan. “Noncompetes block workers from freely switching jobs, depriving them of higher wages and better working conditions, and depriving businesses of a talent pool that they need to build and expand. By ending this practice, the FTC’s proposed rule would promote greater dynamism, innovation, and healthy competition.”

Companies use noncompetes for workers across industries and job levels, from hairstylists and warehouse workers to doctors and business executives. In many cases, employers use their outsized bargaining power to coerce workers into signing these contracts. Noncompetes harm competition in U.S. labor markets by blocking workers from pursuing better opportunities and by preventing employers from hiring the best available talent.

“Research shows that employers’ use of noncompetes to restrict workers’ mobility significantly suppresses workers’ wages—even for those not subject to noncompetes, or subject to noncompetes that are unenforceable under state law,” said Elizabeth Wilkins, Director of the Office of Policy Planning. “The proposed rule would ensure that employers can’t exploit their outsized bargaining power to limit workers’ opportunities and stifle competition.”

The evidence shows that noncompete clauses also hinder innovation and business dynamism in multiple ways—from preventing would-be entrepreneurs from forming competing businesses, to inhibiting workers from bringing innovative ideas to new companies. This ultimately harms consumers; in markets with fewer new entrants and greater concentration, consumers can face higher prices—as seen in the health care sector.

To address these problems, the FTC’s proposed rule would generally prohibit employers from using noncompete clauses. Specifically, the FTC’s new rule would make it illegal for an employer to:

  • enter into or attempt to enter into a noncompete with a worker;
  • maintain a noncompete with a worker; or
  • represent to a worker, under certain circumstances, that the worker is subject to a noncompete.

The proposed rule would apply to independent contractors and anyone who works for an employer, whether paid or unpaid. It would also require employers to rescind existing noncompetes and actively inform workers that they are no longer in effect.

The proposed rule would generally not apply to other types of employment restrictions, like non-disclosure agreements. However, other types of employment restrictions could be subject to the rule if they are so broad in scope that they function as noncompetes.

This NPRM aligns with the FTC’s recent statement to reinvigorate Section 5 of the FTC Act, which bans unfair methods of competition. The FTC recently used its Section 5 authority to ban companies from imposing onerous noncompetes on their workers. In one complaint, the FTC took action against a Michigan-based security guard company and its key executives for using coercive noncompetes on low-wage employees. The Commission also ordered two of the largest U.S. glass container manufacturers to stop imposing noncompetes on their workers because they obstruct competition and impede new companies from hiring the talent needed to enter the market. This NPRM and recent enforcement actions make progress on the agency’s broader initiative to use all of its tools and authorities to promote fair competition in labor markets.

The Commission voted 3-1 to publish the Notice of Proposed Rulemaking, which is the first step in the FTC’s rulemaking process. Chair Khan, Commissioner Rebecca Kelly Slaughter and Commissioner Alvaro Bedoya issued a statement. Commissioner Slaughter, joined by Commissioner Bedoya, issued an additional statement. Commissioner Christine S. Wilson voted no and also issued a statement.

The NPRM invites the public to submit comments on the proposed rule. The FTC will review the comments and may make changes, in a final rule, based on the comments and on the FTC’s further analysis of this issue. Comments will be due 60 days after the Federal Register publishes the proposed rule. The public comment period will be open soon.

The proposed rule states [I’ve modified this post to include the entire rule]:

910.1 Definitions

(a) Business entity means a partnership, corporation, association, limited liability company, or other legal entity, or a division or subsidiary thereof.

(b) Non-compete clause.

(1) Non-compete clause means a contractual term between an employer and a worker that prevents the worker from seeking or accepting employment with a person, or operating a business, after the conclusion of the worker’s employment with the employer.

(2) Functional test for whether a contractual term is a non-compete clause. The term non-compete clause includes a contractual term that is a de facto non-compete clause because it has the effect of prohibiting the worker from seeking or accepting employment with a person or operating a business after the conclusion of the worker’s employment with the employer. For example, the following types of contractual terms, among others, may be de facto non-compete clauses:

i. A non-disclosure agreement between an employer and a worker that is written so broadly that it effectively precludes the worker from working in the same field after the conclusion of the worker’s employment with the employer.

ii. A contractual term between an employer and a worker that requires the worker to pay the employer or a third-party entity for training costs if the worker’s employment terminates within a specified time period, where the required payment is not reasonably related to the costs the employer incurred for training the worker.

(c) Employer means a person, as defined in 15 U.S.C. 57b-1(a)(6), that hires or contracts with a worker to work for the person.

(d) Employment means work for an employer, as the term employer is defined in paragraph (c) of this section.

(e) Substantial owner, substantial member, and substantial partner mean an owner, member, or partner holding at least a 25 percent ownership interest in a business entity.

(f) Worker means a natural person who works, whether paid or unpaid, for an employer. The term includes, without limitation, an employee, individual classified as an independent contractor, extern, intern, volunteer, apprentice, or sole proprietor who provides a service to a client or customer. The term worker does not include a franchisee in the context of a franchisee-franchisor relationship; however, the term worker includes a natural person who works for the franchisee or franchisor. Non-compete clauses between franchisors and franchisees would remain subject to Federal antitrust law as well as all other applicable law.

910.2 Unfair Methods of Competition

(a) Unfair methods of competition. It is an unfair method of competition for an employer to enter into or attempt to enter into a non-compete clause with a worker; maintain with a worker a non-compete clause; or represent to a worker that the worker is subject to a non-compete clause where the employer has no good faith basis to believe that the worker is subject to an enforceable non-compete clause.

(b) Existing non-compete clauses.

(1) Rescission requirement. To comply with paragraph (a) of this section, which states that it is an unfair method of competition for an employer to maintain with a worker a non-compete clause, an employer that entered into a non-compete clause with a worker prior to the compliance date must rescind the non-compete clause no later than the compliance date.

(2) Notice requirement.

(A) An employer that rescinds a non-compete clause pursuant to paragraph (b)(1) of this section must provide notice to the worker that the worker’s non-compete clause is no longer in effect and may not be enforced against the worker. The employer must provide the notice to the worker in an individualized communication. The employer must provide the notice on paper or in a digital format such as, for example, an email or text message. The employer must provide the notice to the worker within 45 days of rescinding the non-compete clause.

(B) The employer must provide the notice to a worker who currently works for the employer. The employer must also provide the notice to a worker who formerly worked for the employer, provided that the employer has the worker’s contact information readily available.

(C) The following model language constitutes notice to the worker that the worker’s non-compete clause is no longer in effect and may not be enforced against the worker, for purposes of paragraph (b)(2)(A) of this section. An employer may also use different language, provided that the notice communicates to the worker that the worker’s non-compete clause is no longer in effect and may not be enforced against the worker.

A new rule enforced by the Federal Trade Commission makes it unlawful for us to maintain a non-compete clause in your employment contract. As of [DATE 180 DAYS AFTER DATE OF PUBLICATION OF THE FINAL RULE], the non-compete clause in your contract is no longer in effect. This means that once you stop working for [EMPLOYER NAME]:

  • You may seek or accept a job with any company or any person—even if they compete with [EMPLOYER NAME].
  • You may run your own business—even if it competes with [EMPLOYER NAME].
  • You may compete with [EMPLOYER NAME] at any time following your employment with [EMPLOYER NAME].

The FTC’s new rule does not affect any other terms of your employment contract. For more information about the rule, visit https://www.ftc.gov/legal-library/browse/federal-register-notices/non-compete-clause-rulemaking.

(3) Safe harbor. An employer complies with the rescission requirement in paragraph (b)(1) of this section where it provides notice to a worker pursuant to paragraph (b)(2) of this section.

910.3 Exception

The requirements of this Part 910 shall not apply to a non-compete clause that is entered into by a person who is selling a business entity or otherwise disposing of all of the person’s ownership interest in the business entity, or by a person who is selling all or substantially all of a business entity’s operating assets, when the person restricted by the non-compete clause is a substantial owner of, or substantial member or substantial partner in, the business entity at the time the person enters into the non-compete clause. Non-compete clauses covered by this exception would remain subject to Federal antitrust law as well as all other applicable law.

910.4 Relation to State Laws

This Part 910 shall supersede any State statute, regulation, order, or interpretation to the extent that such statute, regulation, order, or interpretation is inconsistent with this Part 910. A State statute, regulation, order, or interpretation is not inconsistent with the provisions of this Part 910 if the protection such statute, regulation, order, or interpretation affords any worker is greater than the protection provided under this Part 910.

The proposed rule itself is interesting because of its breadth.  It draws no distinction based on the reasonableness of the restriction, such as its duration, geographic scope, or the worker’s role (for example, an executive or researcher).  Nor does it distinguish between types of businesses, such as research-intensive industries.  It also appears to leave open a number of questions concerning the protection of trade secrets and other valuable know-how.  In some ways the rule is a double-edged sword: a company may lose employees, but it may also gain them.  It may, however, favor companies with the resources to lure away competitors’ employees.  The question of competition between countries and the protection of trade secrets is a fascinating one as well.  Notably, the rule appears to cover even noncompetes supported by additional consideration, such as payment for the agreement not to compete.