Monday, 26 September 2011
A panel of experts will comment briefly on the paper (which is based on Galit’s PhD thesis) before it’s thrown open to the floor for general discussion. Mr Justice Arnold (Patents Court, England and Wales), Professor Jo Gibson (Intellectual Property Institute and Queen Mary Intellectual Property Research Institute) and Chris Stothers (IBIL and Arnold & Porter) will be there and it is hoped that the Intellectual Property Institute's Economics Unit will also be represented.
Refreshments will be provided and registration is free. If you'd like to attend, please email Jeremy at the IPKat here and tell him. He will acknowledge your email when he can.
Saturday, 24 September 2011
The challenge of creating durable brands, especially those with traction outside of one's home territory, is not unique to Chinese companies. But the sheer size and potential international reach of Chinese companies makes their branding potential a matter of particular interest. It is against this backdrop that I found some intriguing insights in an article that recently appeared in the September 3rd issue of The Economist ("Privatisation in China: Capitalism Confined") here. The focus of the article, based on a study by Professors Jie Gan, Yan Guo and Chenggang Xu, is a typology of privatisation of Chinese companies. The first category contains massive infrastructure and utility providers (such as banks, transport, energy and telecoms). In effect, these companies remain largely within the purview of government ministries. Branding appears to be a minor or non-existent consideration.
Of more interest are two other categories: (i) joint ventures between a private partner (usually a foreign entity) and a firm backed by the Chinese government; and (ii) companies that are largely in private ownership, but over which the government still exercises various forms of influence. At the risk of generalization, it appears that the second category of company is more attuned to branding matters than the first. Even with that distinction, certain types of industries appear more likely to be concerned with branding issues than others, for instance, the automobile industry. Let's expand those thoughts.
Joint ventures--As has been often described (and sometimes decried), in the joint venture arrangement, the private, usually Western partner, seeks to gain access to the Chinese market in exchange for sharing its know-how with its Chinese partner. Criticism of this arrangement has centred on the charge that either by premeditated design or by later developments, the foreign partner is pushed aside or even squeezed out.
With respect to branding, most attention has been drawn to the car industry. As attributed to Michael Dunn, a car-industry consultant, the Chinese government has pushed foreign companies "to form 'indigenous brand' joint ventures with intellectual-property and export rights." However, the article goes on to observe that "the efforts of the Chinese joint-venture partners to develop their own brands have yet to produce much success, despite their access to Western technology, vast resources and political pull."
The reason seems to be that, although the Chinese partner is interested in the economic well-being of the company, there is an absence of the long-term commitment that is required to build a brand. In particular, the Chinese representative is more likely to be tied to the government (indeed, that may well be the reason that he was chosen) and therefore it is also likely that he will return to a politically-related position. Under such circumstances, the chances that a joint-venture arrangement will successfully develop a strong brand appear weak.
Largely Private Company--Here the Chinese government appears to have less, or no, direct involvement (indirect involvement and financial incentives are a different matter, but perhaps not so different from the situation with Western car companies). Again focusing on the automobile industry, it is here that Chinese car companies have been most successful in brand development, as the BYD, Chery and Geely brands attest. Further afield, the same situation is said to apply to ZTE and Huawei in the telecoms industry, Lenovo, the PC maker, and TCL, an electronics manufacturer. The common denominator has been ascribed to the different type of Chinese management in such companies--"[t]he bosses are not political appointees but charismatic businessmen in pursuit of commercial goals."
There is a potential darker side to these developments. The article goes on to describe other types of "largely private" companies, most of which are in industries characterized as "strategic", such as energy, medical equipment and drugs. Here, industrial policy is more blatant, with protection against foreign challengers, liberal R&D support and subsidized government purchases. The jury is still out on whether such companies will be able to develop their brands overseas successfully, once they venture out of their supportive local environment.
In this context, it would be instructive to learn whether any research has compared the trajectory of these companies with that of the Japanese and Korean companies that created world-famous brands spanning the globe. More generally, it will be interesting to track the success of Chinese brands as a function of the degree to which such companies are more, or less, privatised.
There's a little article over on the New York Times about potential buyers renewing their interest in Yahoo. The company's investments in the Chinese e-commerce group Alibaba and its 35% stake in Yahoo Japan are often seen as potentially valuable assets. Indeed, an investment group has already begun a USD 1.6 billion tender offer for shares in Alibaba (see here) which would value the company at USD 32 billion and Yahoo's stake at around USD 13 billion.

Nobody has yet focussed on the IP rights in Yahoo. Thomson Innovation today records 3,051 individual patent families and 657 granted US patents, as well as a huge number of patent applications currently in process. The range of patent rights is fairly wide and a brief review shows that it covers many aspects of Internet technology. This author has not yet reviewed the portfolio in any detail but, given its volume, it would be surprising if there were not at least a few golden nuggets in the bag.

The recent Google/Motorola Mobility and Nortel deals showed the value of patents in the telecommunications sector. Much of their value has been due to the development of standards using patented technology. This has been encouraged by the telecommunications standards bodies, which accept that stakeholders in the standards development process want to receive rewards based on licensing of their patents. On the other hand, the Internet community has been much more reluctant to adopt standards based on patented technology that requires payment of licence fees. There's still nothing to stop a company from patenting its technology, but the W3C wants to see royalty-free licences, as its patent policy clearly states. This means that such patents may have a lower value than otherwise (as there is no mechanism to obtain royalties).
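The figures quoted above can be sanity-checked with some back-of-envelope arithmetic; note that the stake percentage in this sketch is an inference from the article's numbers, not a figure the article states:

```python
# Back-of-envelope check of the valuations implied by the Alibaba tender offer.
# Figures are in USD billions; the stake percentage is inferred, not sourced.
alibaba_valuation = 32.0   # company value implied by the USD 1.6 bn tender
yahoo_stake_value = 13.0   # approximate value of Yahoo's stake at that price

implied_stake = yahoo_stake_value / alibaba_valuation
print(f"Implied Yahoo stake in Alibaba: {implied_stake:.1%}")  # ~40.6%
```

On these numbers, Yahoo's stake works out at roughly 40% of Alibaba, which is consistent with the widely reported size of its holding at the time.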
Tuesday, 20 September 2011
In this, the seventh in his series of posts for IP Finance, Keith Mallinson (WiseHarbor) reviews the recent history of software patent protection and the challenges made against it, concluding that the patent system is there to encourage investment in innovation by helping enable inventors to make a return on their risky investments, and arguing that there is no evidence that patent systems are stifling innovation where inventions are implemented in software.
You can follow Keith on Twitter @WiseHarbor.
Software Patents – a Convenient Misnomer for those who Seek to Expropriate IP
It makes no sense to disqualify innovative technologies from patentability or limit the rights and remedies associated with those patents on the basis they can be implemented in software on general purpose processors rather than only on dedicated hardware. The “software patent” debate is largely a battle of ideology and business models between those who develop patented technologies that can be implemented in software and implementers who would rather not pay for the privilege of using others’ IP. I focus exclusively on technologies in this article because a large and rising proportion of manufactured products are increasingly software defined. Patentability for “business methods”, such as financial trading algorithms, while also contentious, is an entirely different matter.
Generosity at others’ expense
Google has made itself popular through the promise of free software with its Android smartphone operating system (OS) and its WebM project, with the VP8 coder-decoder (codec) for video and the Vorbis codec for audio. This promise is free as in beer (i.e., something for no payment) rather than merely free as in speech (i.e., being allowed to say what you like). The proposition obviously seems very appealing to many implementers, including software developers and device manufacturers, who like the idea of getting something for nothing.
However, this proposition is tricky because many software programs infringe the unencumbered rights of IP owners who justifiably do not wish to give away the fruits of their labour for nothing. In Free and Open Source Software (FOSS) the “free” refers to the freedom to copy and re-use the software, rather than to the price of the software. A fundamental requirement with Open-Source Software (OSS) is that “licenses shall not require a royalty or other fee”.
Whereas these licences generally require licensees to contribute their patented and copyrightable works royalty free, that is far from sufficient to ensure (F)OSS implementations will actually be completely free of charge to licensees. FOSS licenses are private contractual orderings that have no impact on the obligations of those IP holders outside any given contract’s reach. Many IP owners decide not to sign away their rights in (F)OSS licenses and others may be oblivious for a long time that specific (F)OSS software programs are infringing their rights. Despite efforts to prevent (F)OSS programs infringing un-liberated IP (that is, IP held by third parties outside the reach of the FOSS license), it is impossible to ensure this will not occur – particularly with respect to patents.
(F)OSS licensees may be found by courts of law or agencies such as the U.S. International Trade Commission (ITC) to be wilfully or otherwise infringing IP, with resulting legal costs, financial damages awards and even injunctions or exclusion orders preventing them from selling their products. Some of these licensees might not have expected this, due to misleading statements from (F)OSS proponents and given that patent infringement was typically not a problem with the packaged software, sold under license from the likes of Microsoft, that has prevailed for decades on PCs and elsewhere. Indemnities – derived from cross-licensing among various IP owners and commonly provided to licensees of proprietary software – are rarely available or as extensive with (F)OSS. In fact, attempts by either IP owners or FOSS distributors to enter into license agreements with third party IP holders have often been deemed antithetical to the FOSS movement (or even in conflict with the terms of FOSS licenses) and so they have, until recent months, been the exception rather than the rule.
Until very recently, Google appears to have provided little or nothing more than rhetorical support for its beleaguered Android licensees, who are signing patent licenses with, or being sued for infringement by, proprietary software providers such as Apple with its iOS, Microsoft with Windows Phone and others. On the receiving end of the onslaught are HTC, Samsung and others implementing this open source OS. Perhaps Google will assist in various counter-suits following its recent purchase of 1,000 patents from IBM and its acquisition of Motorola Mobility with its trove of 17,000 patents.
Free riders infringe
Tensions are running high between IP owners and those who shun paying patent fees for anything implemented in software including standards-based technologies. Already 12 patent owners have joined discussions to create a pool to collect royalties from those that implement the VP8 video codec standard. VP8 is based on technology developed by the Google acquisition On2 for its WebM project. This is purported to be completely royalty free (5th September 2011):
“Some video codecs require content distributors and manufacturers to pay patent royalties to use the intellectual property within the codec. WebM and the codecs it supports (VP8 video and Vorbis audio) require no royalty payments of any kind. You can do whatever you want with the WebM code without owing money to anybody.”

Whereas there is no reason to prevent VP8 being developed free of any copyright or patent fees to any of its developers who agree to such terms, the codec is most likely infringing the patents of these 12 and many others. Non-assert provisions in VP8 licensing anticipate that Google has essential patents -- and licensees might too. Different, independently developed programs will likely not infringe software copyrights, where code is not copied, but all codecs implementing a given standard will infringe the same set of patents that are essential to that standard. Software developers, by definition, cannot design around essential patents when implementing a standard. Similar (or “competing”) standards may well have common technologies among them which are covered by the same patents. This is particularly the case with codec algorithms, which represent cumulative technological developments made over many years, involving many players and substantial costs. Different codec standards-setting organisations (SSOs) can try to design around patents in formulating their standards. While this is possible to some extent, it is difficult, and impossible to eliminate all infringements while also seeking to achieve high-performance functionality exploiting the latest technologies. In some cases, SSOs might not even be aware of some patents their standards are infringing.
MPEG LA licenses the H.264 video codec extensively. More than one thousand licensees have agreed royalty terms compensating 28 different licensors through a patent pool. These fees are due even where the software program implementing the codec is subject to royalty-free copyright licensing, as is the case with x264 – “a free software library and application for encoding video streams into the H.264/MPEG-4 AVC format”, released under the terms of the GNU GPL [a royalty-free licensing agreement].
With other codecs reading on hundreds of patents, and significant similarities among codecs, it is also most likely that VP8 infringes some of the patents that are also infringed by other video standards, including H.264. The question is simply how many patents, and which of them, are infringed.
Changing the rules
Meanwhile, the patentability of technologies and algorithms implemented in software is being significantly challenged, with lobbying of policy makers around the world.
Those who argue against “software patents”, including some absurd and unsubstantiated claims, seek to invalidate issued and pending patents associated with, for example, smartphone features and video codec standards. Others have suggested that the perceived problems with “software patents” could be remedied by requiring that those patents be licensed on a royalty free basis in certain contexts (i.e., in standards). The fact that many standards-essential and other technologies implemented in software infringe numerous different patents, rather than typically just a few patents in a drug or simple mechanical device, is no justification to deny any patent rights at all. A combination of bilateral (i.e., cross licensing) and multilateral arrangements (i.e., with patent pools) can be used to negotiate rates and collect payments efficiently. The average aggregate royalty for video codecs on a DVD player is just a few dollars and aggregate standards-essential patent licensing on mobile phones rarely costs more than 10% of the wholesale product price. Moreover, the unsubstantiated claim that FOSS developers are prohibited by the terms of FOSS licenses from paying these royalties has been debunked and shown to be little more than an attempt by certain implementers to gain business model advantage.
Processors and software in everything
The products and services we all use every day are increasingly software defined and computer-intensive, as microprocessors are included in many different manufactured items. Software predominantly implements the innovative algorithms for a wide variety of technological functions, from touch screen scrolling and bar code reading to turn-by-turn navigation. Just a few of numerous and varied examples also include anti-lock brakes, eco-friendly air conditioners, medical equipment, programmable lathes and toys.
The existence of microprocessors and computers over the last 30 years has fostered a marketplace for downstream development of computer programs performing a wide variety of functions with relatively low barriers to entry. For example, there are thousands and thousands of smartphone application developers. Many of these set themselves up with just a computer and a few software tools in their sitting rooms or dormitories.
Computer technologies with general purpose processors are increasingly substituting for application-specific designs. In some cases, state-of-the-art general processors make it possible to implement technologies (e.g., radio interference reduction, video compression or touch screen gesture recognition) significantly in software, in comparison to the more hardware-specific implementations, such as Application-Specific Integrated Circuits (ASICs), that were once required. Mobile communications protocols including GSM, HSPA and LTE can now be implemented in Software Defined Radios (SDRs). SDRs are already commonplace in network equipment and increasingly in terminal devices such as phones and dongles. Similarly, whereas older codec implementations were significantly in hardware with dedicated signal processors and hardware accelerators, it is now possible to implement these on general processors, with customised hardware and accelerators being used mostly for high-end devices.
Substituting software for hardware implementations of a given radio or codec technology is a design decision driven by considerations on feature performance, power consumption, heat dissipation, semiconductor die size, time-to-market and fixed versus variable manufacturing cost structure.
The speed, ease and low cost of coding in software, rather than having to design and fabricate dedicated hardware, do not negate the innovative steps, substantial costs and risks entailed in developing new ideas and technologies, regardless of their means of implementation. For example, development of anti-lock brakes requires lab work and drive testing under various conditions, and medical instrumentation techniques (e.g., measurement of oxygen saturation in the blood) require lab work and extensive clinical trials. Algorithms are first conceived, then modified and refined to improve performance, reliability and safety on the basis of this work. Software just happens to be an efficient and effective way to implement them.
What is patentable?
So-called “software patents” do not actually depict software per se: instead they describe algorithms and processes that can be performed by a programmed computer. It is such computer-implemented techniques, not the software itself, that can be eligible for patent protection.
In Information and Communications Technology (ICT), it is the underlying useful, novel and non-obvious techniques, which can be implemented in hardware or software to perform real-world functionality (such as radio communication, audio noise reduction, video encoding and touch screen operation, to name just a few), that are potentially patentable. To be patent-eligible in the U.S., generally, a claimed method must involve a machine or a transformation of an article; that is, it must describe a series of steps that use physical means to produce a result or effect in the physical world. All the above examples and many other technical processes do just that, whether they are, or could be, implemented in hardware or software.
In 2002, the European Commission proposed a Directive on the patentability of computer-implemented inventions, but the European Parliament rejected the final draft with the result that national laws were not harmonised. The European Patent Office, which generally adapts its regulations to new EU law, has no reason or incentive to modify its practice of granting patents on certain computer-implemented inventions, according to its interpretation of the European Patent Convention and its implementing regulations.
Copyrights protect software owners from having their programs duplicated, but this does not prevent reverse-engineering of software-implemented innovations. Similarly, it is increasingly possible to implement previously hardware-based functions such as radio modems and video codecs on more general processors such as SDRs and with software-based rather than hardware-based graphics accelerators. It would be nonsensical to disqualify patented innovations from protection, simply because independent advances in processor and software technology make the former implementable on general purpose processors as well as dedicated hardware.
Openness and patents in standards
Whereas some assert that open standards should be royalty free, the International Telecommunication Union defines open standards, among other factors, as follows:
"Open Standards" are standards made available to the general public and are developed (or approved) and maintained via a collaborative and consensus driven process. "Open Standards" facilitate interoperability and data exchange among different products or services and are intended for widespread adoption.
Intellectual property rights (IPRs) – IPRs essential to implement the standard to be licensed to all applicants on a worldwide, non-discriminatory basis, either (1) for free and under other reasonable terms and conditions or (2) on reasonable terms and conditions (which may include monetary compensation). Negotiations are left to the parties concerned and are performed outside the SDO [standards-development organisation].
There are numerous open standards. However, IP policies differ widely among standards-setting organisations (SSOs). A relatively small number of SSOs have IPR policies that require participants to license essential patent claims on a royalty-free basis, but this can only bind those who elect to join those organisations and so standards implementers can be exposed to IP infringement claims by non-members. Most SSOs including those for mobile communications, video and audio codecs accept that patent owners can license their IP on a (Fair), Reasonable and Non-Discriminatory basis, including a royalty.
For example, H.264 is open in the sense that the specifications are freely available from a copyright perspective. One can distribute an implementation of H.264 freely as long as one abides by certain terms. However, implementers of the H.264 standard are required to pay patent royalties.
Software is no exception
There is no good reason to abandon the widespread practice of allowing patents on technologies that are implemented in software. The patent system is there to encourage investment in innovation by helping enable inventors to make a return on their risky investments. There is no evidence that patent systems are stifling innovation where inventions are implemented in software. On the contrary, innovation continues apace in ICT, as illustrated by the rapid development and extensive adoption of smartphones and video encoding technologies, to name just two from among numerous examples, as I have explained in my previous articles with IP Finance.
Monday, 19 September 2011
Perhaps it was appropriate that, shortly after buying a Kindle last week, I settled down into a transatlantic flight home, with the 12 September edition of the Wall Street Journal in hand. And there it was, staring me in the face on page 1 of the Marketplace section, an article entitled "e-Book Prices Prop Up Print Siblings." Now that I have a vested interest in the e-reader platform, the question of how e-book pricing differs from print books has become a matter of intense interest. The facts and figures as set out in the article make for interesting reading.
First, let's make a comparison between a hypothetical print book retailing at $26.00 and an e-book offering retailing at $12.99. Taking the print book first, from the $26.00 price one subtracts $13.00 for the retailer, $3.90 in royalty payments to the author and $3.25 for shipping and other handling, leaving a gross amount (don't forget returns) per unit sold of $5.85. By comparison, from the $12.99 retail price, one subtracts $3.90 for the retailer, $2.27 in royalty payments to the author, and $0.90 for digital rights management, warehousing and production/distribution, leaving an amount per unit sold of $5.92 (returns are not a likely problem here).
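As a rough sketch, those two per-unit revenue waterfalls can be tabulated and checked; the dollar figures are the ones quoted above, while the variable names are merely illustrative:

```python
# Per-unit revenue waterfalls for the hypothetical titles quoted above.
# Dollar figures come from the article; labels are illustrative only.
print_book = {
    "retail_price": 26.00,
    "retailer_share": 13.00,
    "author_royalty": 3.90,
    "shipping_handling": 3.25,
}
e_book = {
    "retail_price": 12.99,
    "retailer_share": 3.90,   # the 30% agency commission
    "author_royalty": 2.27,
    "drm_and_distribution": 0.90,
}

def publisher_gross(waterfall):
    """Retail price minus every cost line = publisher's gross per unit."""
    costs = sum(v for k, v in waterfall.items() if k != "retail_price")
    return round(waterfall["retail_price"] - costs, 2)

print(publisher_gross(print_book))  # 5.85
print(publisher_gross(e_book))      # 5.92
```

The near-identical gross per unit ($5.85 versus $5.92) is the striking point: the e-book earns the publisher about the same despite a retail price roughly half that of the print edition.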
These figures show that, to the extent that the e-book publisher can increase the retail price, the greater will be its ultimate margin. Wait a minute, however! Wasn't the whole idea of the e-reader to offer the ubiquitous price of $9.99 per book? Raising the price from $9.99 seems antithetical to that pricing nirvana. What's the story here?
To appreciate these figures better, consider the major change that has taken place in the e-book industry. The starting point is described as "the wild days" of using the most popular titles as a loss leader (i.e., $9.99 or less), which days "are mostly over." In its place is an elevated e-reader price scheme anchored in what the article described as "agency pricing." As adopted by six major publishers and championed by Apple, "[p]ublishers worried about the deeply discounted $9.99 digital best-sellers promoted by Amazon.com Inc. agreed to set the consumer price of their digital titles. Under this model, retailers act as the agent for each sale and take 30%, returning 70% to the publisher."
The article then goes on to state that "[t]he major significance of agency pricing was that it made it impossible for a retailer to discount the price without the approval of the publisher." For discounting, read Amazon and its widespread $9.99 per book pricing policy, described as a means for building market share, even if "it actually lost money on the sale of many of the book industry's most popular titles."
Personally, I am a bit disheartened, because I had dreamed of using the Kindle platform to purchase book after book at that magic price of $9.99. Those dreams have been shattered. Standing back from my disappointment, however, this apparently steady increase in the price of e-books is a fascinating example of a nascent industry seeking to find a workable pricing model.
On the one hand, we have the comment from an unidentified publishing executive that "[i]f e-book prices land at 99 cents in the future we're not going to be in good shape." Certainly, the e-book platform carries with it the potential for downward pricing of books.
On the other hand, when is the difference in cost between the e-book and print versions of a book small enough to induce me to factor in the non-quantifiable tactile benefits of embracing a print version, the better to dog-ear, highlight and ultimately to place on the top row of my bookshelf? The problem is that, when I find out the answer to that question, the print version alternative may no longer be available. If so, then what will be the pressures on e-book pricing that will prevent an ever-increasing sticker price?
Don't get me wrong. As a published author, I am the last person to begrudge my publisher's bottom line. That said, as a reading consumer, I want to enjoy the benefits of the e-book platform at a reasonable (whatever that means) cost. Finding that balance remains an elusive goal. Something tells me that this so-called "agency pricing" model will not be the last word on the topic.
Friday, 16 September 2011
Press releases suggest that EVO has licensed its IP to the joint venture company while GKN has contributed financing, engineering and commercial resources. The new company aims to capture a share of the rapidly growing market for hybrid and electric vehicle systems and, in the words of EVO CEO David Latimer, “will be pivotal in establishing EVO as a key player in the fast-growing global market for electric drive components”.
It will be interesting to see how much value EVO realises through this joint venture manufacturing business model. Company documentation suggests that EVO could simply sell its share in the joint venture company to GKN at some point in the future: EVO has already sold 25% of its own shares to GKN as part of the deal, with GKN indicating that the total value of its investment at closing will be £5 million in cash.
Tuesday, 6 September 2011
The number of companies dealing with IP rights and patents is constantly increasing, and they all follow a specific business strategy in order to yield a profit. Given the importance patents have now assumed in the strategy of all IT firms, as well as the ever-growing media attention to the subject, it is becoming increasingly difficult to understand all the strategies employed by such companies and to recognize the most successful ones, i.e. the ones "achieving higher returns on patents by extracting direct profits or providing defensive leverage."
This thorny issue is, however, the delight of many IP strategists, and Bruce Berman of NY-based consulting firm Brody Berman Associates is one of them. In a post entitled "Innovative IP Models Generate Cash, Provide Alternatives" on his weblog IP Closeup (formerly known as IP Insider), Bruce has been developing a graphic model explaining the current patent monetization landscape, in collaboration with the IP Investment Group at Coller Capital, a London-based private equity firm and one of the leading independent patent holders.
Look out for Bruce's upcoming column, "The Intangible Investor", in the September edition of IAM magazine, where his model will be explained in detail, so as to demonstrate that many operating companies can monetize their patents as successfully as non-practising entities: "Defraying costs associated with R&D, prosecution, PTO filings and litigation through a rights sale, purchase or partnership can provide valuable efficiencies and increase ROI, without necessarily increasing the risk of litigation or having to sue customers or vendors."
...and for those who prefer fairytales, here is something for you too...
Monday, 5 September 2011
You may have noticed my silence over the past several weeks. The press of a publishing project and a mad dash to meet its submission deadline have taken their toll on my time, but it is great to finally get back to sharing my thoughts.
My question is a simple one: how uniform is the phenomenon of file-sharing and other forms of unauthorized copying of music, television and movie contents? On the one hand, "world-is-flat" interconnectivity certainly provides a common platform for such conduct. In principle, it should not matter where one is located; provided that one has access to the platform, the means for engaging in unauthorized downloading or other forms of copying of contents are available at little cost and without significant effort.
Yet it appears that the leveling effect enabled by access to a common platform belies the fact that the nature of unauthorized downloading significantly varies from country to country. How much so was chronicled in a recent article that appeared in the August 20 issue of The Economist ("Spot the pirates: Illegal downloading and media investment") here. According to the article, "piracy has not exactly swept the world. It is endemic in some countries but a niche activity in others. In some places the tide is flowing; in others it appears to be ebbing."
What are the main characteristics of this national variance? First, unauthorized conduct is more common in the countries of the developing world. The most notable countries are China, Nigeria and Russia, in each of which it is claimed that nearly all such contents are unauthorized. Second, even within the "rich" countries, there is significant variation. Within Europe, for example, piracy is much more prevalent in the Mediterranean countries than in Northern Europe. Napster notwithstanding, America may be the least piratical country of them all.
How do we explain this variation? The article points to the following principal factors:
1. Cost--For example, on a GDP-adjusted basis, a DVD copy of the blockbuster movie "The Dark Knight" costs the equivalent of $75 in Russia and a whopping $663 in India. When one sets relative purchasing power against the desire to view the contents, and against the availability of a much cheaper bootlegged DVD copy, the decision to opt for the latter is, in a way, compelling.
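To see the kind of GDP-adjusted comparison the article is making, one can scale a nominal local price by the ratio of per-capita incomes. The sketch below is purely illustrative: the GDP figures and the $15 DVD price are assumptions chosen for the example, not data from the article.

```python
# Rough sketch: express a local DVD price in "equivalent US dollars" by
# scaling it for the income gap between the US and the local market.
# All input figures below are illustrative assumptions, not the article's data.

def gdp_adjusted_price(nominal_price_usd, gdp_per_capita_usd,
                       us_gdp_per_capita_usd=47_000):
    """Scale a nominal price by relative GDP per capita."""
    return nominal_price_usd * us_gdp_per_capita_usd / gdp_per_capita_usd

# Hypothetical: a $15 DVD in a country with $9,400 GDP per capita
# "feels like" a $75 purchase would to a US buyer.
print(gdp_adjusted_price(15, 9_400))  # 75.0
```

The lower the local income relative to the US, the more punishing the same sticker price becomes, which is why a cheap bootleg looks so compelling in poorer markets.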
2. Legal differences--Enforcement against unauthorized downloading varies widely across countries. For example, it is reported that in Germany it is relatively easy to have a fine imposed for illegal downloading, while in Spain it is nearly impossible to do so.
3. Culture--In a word, "[i]n some countries, copying is regarded as theft; in others, it is not." While the article does not go on to specify country examples, I recall that in the 1990s the argument was often made that China had a long cultural tradition that did not condemn copying and even encouraged it. Maybe yes, maybe no. And while the notion of a cultural norm for or against copying is a bit amorphous, this factor may be the most "powerful" of all.
Unlike journalists and academics, media companies appear to be little interested in the "why"s; what occupies their attention are the perceived consequences of a national proclivity for or against unauthorized copying. On that basis, Spain is being spurned by the big labels (and DVD sales are collapsing) while Germany continues to be a focus of investment.
Perhaps of particular interest is the situation in South Korea. After a precipitous decline in the middle of the last decade, recorded-music sales have actually increased in each of the last three years in South Korea. Draconian anti-piracy laws are pointed to as one major reason for this, but one should be wary about assigning too much weight to that factor. First, no clear evidence was offered that the increase in music sales was positively correlated with heightened enforcement activities. Second, other countries with firm anti-piracy laws seem to be doing less well than South Korea on the piracy front.
The article's focus on the South Korean experience suggests that the situation there is the result of a unique set of circumstances that cannot be replicated elsewhere. To borrow the analogy of a bull-market rally in the midst of a long-term bear market, the jury is still out as to whether, at the end of the day, unauthorized copying and downloading will become the norm, perceived cultural mores to the contrary notwithstanding.
Under this view, perhaps the national variation described in the article is better attributed to the differing pace at which inter-generational attitudes about copying are changing, rather than to broad national differences. If so, the industry could find itself waking up one day to a situation where there are no viable markets left. Stated otherwise, for the media content industry, a lot rides on just how durable these national differences regarding unauthorized copying really are.
Friday, 2 September 2011
In this, the sixth in his series of posts for IP Finance, Keith Mallinson (WiseHarbor) defends the non-interventionist approach which the European Union takes towards horizontal cooperation agreements against the criticisms levelled by the European Committee for Interoperable Systems (ECIS), which supports the mandatory disclosure of the most restrictive licensing terms for patented IP in the purportedly different “software” sector.
Artificial Distinction between Software and Telecoms for Essential IP Disclosure
In December 2010 the European Commission approved guidelines on the applicability of Article 101 of the Treaty on the Functioning of the European Union (TFEU) for horizontal co-operation agreements. These guidelines lay out a comprehensive approach for conformity of standardisation agreements with Article 101 TFEU, creating a “safe harbour” while affording standard-setting organisations significant autonomy in setting policies for disclosure of IP and its licensing terms. They also provide guidance on policies for ex-ante disclosure of IP and most restrictive licensing terms.
Not everybody is happy with this non-interventionist approach. While to some extent accepting it for telecoms, some continue to campaign for mandatory disclosure of the most restrictive licensing terms for patented IP in the purportedly different “software” sector. The European Committee for Interoperable Systems (ECIS) circulated a dissenting opinion paper about this at a workshop recently hosted by the European Commission in Brussels. The event was convened to discuss practical experiences with standards setting organisation IPR policies that voluntarily permit or require the ex ante disclosure of licensing terms.
Whereas I refer to “software” versus telecoms to facilitate my analysis in this blog post, I refute ECIS’s distinction.
Illogical software exception
By attempting to bifurcate the debate on various issues with IP in standards, ECIS seeks to create a more IP-hostile regime with onerous disclosure requirements for “software” standards as opposed to telecoms standards. Political bargaining for its various demands might more easily prevail over logic, facts or the law in the “software” sector; but that does not make its position defensible.
On the contrary, the ECIS paper makes many sweeping and false generalisations without either evidence in support of its assertions or any credible explanations why these are relevant to whether or not the Commission should promote the ex ante disclosure of IP and its licensing terms in connection with “software standards”.
“…in the software industry, in contrast to the telecoms sector, there are generally fewer players holding IPRs relevant to the standard; product lifecycles are short (two years or less); innovation often proceeds through incremental development of standards-compliant products; many if not all hard IPRs – i.e., patents – can be developed around since alternative equally functional solutions generally exist; and IPR holders are amply rewarded if their technology is adopted for the standard through their lead time to market with a compliant implementation and the strong network effects that are often prevalent in the software sector.”
Telecoms network equipment and devices are now as software intensive as any computers. General purpose computers are pervasively interacting with and have become part of modern networks. Whereas batch-processing mainframes in the 1970s and PCs in the 1980s were largely offline devices, computers have increasingly become connected locally and beyond. Computers have accessed the Internet with corporate networks at work and with dial-up at home on a widespread basis since the 1990s. With the convergence between computing—including software—and telecoms networks, John Gage, Chief Researcher of Sun Microsystems for many years, famously coined the phrase “the network is the computer” and this became his company’s motto.
Consumer broadband connections for browsing, email, downloads, uploads and streaming have been the norm since the millennium. Broadband will also soon predominate on mobile phones. Today's smartphones, including multi-core processors in many cases, have the computing power and software capabilities of PCs and games consoles such as the Xbox 360, launched only six years ago. With increasingly telecoms-centric "software", ECIS's distinctions between "software" and telecoms are absurd. Telecoms is an increasing part of "software" standards and the programs that implement them. For example, most new features and functionality in the HTML5 standard for web browsing are to enable richer, higher-performance and more efficient usage over communications links. Mobile communications is particularly demanding in these respects.
HTML5 is particularly telecom-intensive, with WebSockets (delivering real-time data and two-way communications), File Reading, File Writing and Systems Operations (facilitating interaction between the web and files/systems stored on a device) and Video Playing (including online streamed video) among other features. HTML5 implementations are being tightly coupled across all the hardware and software layers in smartphones and tablets. A major competitive objective for System on a Chip (SoC) suppliers such as Qualcomm, NVIDIA and ST-Ericsson is to ensure that video running in the browser with HTML5 (on various high-level operating systems) will be as fast and efficient (i.e., with respect to bandwidth used, network signalling and power consumption) as a "native application" performing the same function. Elsewhere, even ECIS recognises that most digital products include telecoms.
According to a message "About ECIS" from its chairman on his organisation's web site: "Today, we all need to exchange information and remain in touch with others on a permanent basis. This demand for permanently networked communications capability among most digital products and services has made full interoperability an essential market condition for open, dynamic competition."
Telecoms devices include powerful general-purpose processors and more specialised processors for communications functions, as well as processors for video and graphics. Each of these requires extensive software. Technological advances including Software Defined Radios (SDRs) enable communications functions to be performed on more general-purpose processors. Different radio communication protocols, including GSM, HSPA, CDMA2000 and LTE, can be implemented in software while also using common radio frequency transceivers, amplifiers and antennas in some cases. With the development of standards such as Advanced Telecommunications Computing Architecture, telecoms network equipment hardware is becoming increasingly standardised and general purpose, with software increasingly the basis upon which innovative functionality is implemented.
Plentiful IP holders in software
Audio and video encoding and decoding are software intensive but are not telecoms functions per se; and yet there are many innovators who own patented technology used in standards that implement these functions. Audio and video "codecs" are still primarily used offline, in consumer electronic products such as DVDs, MP3 players and camcorders. PC and smartphone usage can be offline or online. Patent pool administrator MPEG LA lists 29 companies as licensors for patents included in the MPEG-4 Visual Patent Portfolio License. Although the pool's coverage of essential IP for this video standard is generally regarded to be quite extensive, research indicates 71 companies have essential claims on MPEG-4. In contrast, according to a 2009 study by Fairfield Resources International, only 57 companies had declared ownership of IP essential to 3GPP cellular telecoms standards; and computing hardware is no more patent-intensive.
Hardware cycles quickly
Telecoms product lifecycles are as short as for “software” in many cases. Samsung, HTC, Motorola and others introduce many new mobile phones, as illustrated in my previous IP Finance posting. Old models are typically superseded or retired within a year or so even when successful. The HTC “4G” EVO is scheduled for “End of Life” just 15 months after its launch on the Sprint network. Motorola’s Droid smartphone models have not even lasted a year. Once operator distribution is withdrawn in the US, a product is effectively finished. New product improvements include more than just a few cosmetic tweaks. New phones have faster radio modems implementing more recent standards, more powerful applications processors and better graphics as well as later software releases at every level in the design architecture from SoC microcode upwards.
Similarly, the communications and computing hardware platforms upon which devices are based are also evolving rapidly with significant updates every year or two. Moore’s law still prevails with the processing power implemented by SoC suppliers in their processors doubling every two years. For example, Qualcomm’s Snapdragon and NVIDIA’s Tegra design wins for chips in new devices show that hardware platforms are advancing rapidly with major architectural changes such as introduction of multi-core processors.
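The Moore's-law trend cited above can be stated as a simple compounding formula. The sketch below is a stylised model of "doubling every two years", not measured SoC benchmark data:

```python
# Stylised Moore's-law model: relative processing power after a given
# number of years, assuming power doubles every two years.
# Illustrative only -- real SoC generations do not track this exactly.

def relative_performance(years_elapsed, doubling_period_years=2):
    """Performance multiple relative to today's baseline of 1.0."""
    return 2 ** (years_elapsed / doubling_period_years)

print(relative_performance(4))   # 4.0  -> two doublings over four years
print(relative_performance(10))  # 32.0 -> five doublings over a decade
```

Even at this idealised pace, a two-year-old hardware platform is already a full generation behind, which is why SoC design wins churn so quickly.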
In contrast, software platforms can be difficult to “evolve” quickly, efficiently and significantly enough to keep up with competition. For example, Nokia is abandoning Symbian, which was the market leading smartphone operating system (OS) until 2Q 2011, in favour of Microsoft’s Windows Phone OS. Research in Motion acquired QNX to replace the aging and cumbersome OS it uses on its BlackBerries.
Innovation’s long leaps and rapid steps
Telecoms standards have proceeded concurrently with a few big leaps and numerous small steps. In mobile communications, for example, there have been three or four major generations of standards. In Europe, a multiplicity of "1G" analogue radio access standards was replaced in 1992 by an entirely new digital "2G" standard, GSM, based on a time division multiple access protocol.
One decade later, a code division multiple access protocol called WCDMA was launched for "3G" services. The latest new standard to be deployed commercially, in the last year or two, is called LTE and is based on an orthogonal frequency division multiple access protocol. Along the way, however, there have been many and frequent incremental improvements. These embody hundreds of specifications, with a new standards release from 3GPP almost every year. Putting it simply, GSM was enhanced to include GPRS technology and then EDGE, and WCDMA was enhanced to include HSDPA, HSUPA and then HSPA+. Along the way, improvements have included all manner of features and functionality to increase radio spectrum efficiency and lower battery consumption, encrypt data flows, present caller-ID, deliver text and multimedia messages, and connect to the Internet.
The latest major technological shift in mobile communications with introduction of LTE has moved quickly. In 2004, NTT DoCoMo of Japan proposed LTE as an international standard to succeed 3G technologies WCDMA and HSPA. In December 2008, the Rel-8 specification for LTE was frozen for new features, meaning only essential clarifications and corrections were permitted. Within one year, in December 2009, the world's first publicly available LTE services were launched by TeliaSonera in Stockholm and Oslo. There are now 24 commercial LTE networks in service and 166 firm commercial LTE network deployments in progress or planned in 62 countries, according to a July 2011 report from The Global mobile Suppliers Association (GSA).
According to the World Wide Web Consortium (W3C), with work having started in 2003, the HTML5 standard will not be completed until 2022. Even though many parts of HTML5 will likely be widely used long before HTML5 is officially complete, this example illustrates that complex technology standards in "software" can take as long as, or longer to complete than, those in telecoms. HTML4, DOM2 HTML and XHTML1 (the three specifications that HTML5 is intended to update and replace) were also a long time coming. Work on HTML4 started in early 1997. The specification itself took about two and a half years, and was published in late 1999. Work on XHTML started while this was happening, in 1998, and it was officially completed in 2000.
Standards-essential IP cannot be designed around within the context of a given standard because it is, by definition, necessary to implement that standard. This is the case for standards in telecoms or any other sector. Alternative approaches are, however, employed among competing standards such as in “3G”, as illustrated by HSPA, CDMA2000 1x EV-DO, LTE, TD-SCDMA, WiMAX (802.16) and defunct Flash OFDM (802.20).
In many cases, there is less scope to avoid particular technologies, given the high level of participation in "software" standards such as HTML and MPEG-4. As I will illustrate in an upcoming IP Finance posting, alternatives such as VP8 for video quite likely infringe many of the same patents as competing standards do. I will explore the conflict between "royalty free" open source software business models and the patented IP which such programs infringe. The promise of free software is often not fulfilled with open source, as the current wave of litigation against Android implementers illustrates.
Ex-ante IP disclosure requirements cannot ensure that standards bodies are exhaustively "identifying the need to design-arounds and avoiding patent ambushes", because those requirements cannot bind parties outside the relevant standards body.
IP owners choose to monetise their investments in IP in a variety of ways, including licensing for fees and implementing the IP in products for sale. Whereas vertically-integrated manufacturers deserve to take advantage downstream when their IP is adopted, there is no reason that innovators who prefer to focus on upstream licensing should have their business model foreclosed. Public policies in the US and Europe do not favour one business model over others, and "software" should be no exception.
As the term implies, “network effects” are as prevalent with telecoms network technologies as they are in the “software” sector.
Only some IP holders show a willingness to license software copyrights and patents on a royalty-free basis. Patented IP in standards is mostly cross-licensed for little or no net payment, or licensed for a fee on a (Fair) Reasonable And Non-Discriminatory basis. Extensively implemented examples of non-telecoms standards IP licensing are found in consumer electronics, such as the audio and video codecs including AVC, DVB-T, DVD-1, DVD-2, MPEG-2 and MPEG-4. These standards probably represent the most prevalent examples of (F)RAND licensing.
Audio buffs and the masses since the 1970s have benefited from innovative noise reduction for audio cassettes and in cinemas. More recent inventions include surround sound. Dolby Labs has specialised in developing these technologies. The company does not manufacture products. It licenses its IP to numerous consumer electronics manufacturers.
I also take issue with several other statements made in ECIS’s paper. Royalty free licensing is not always preferred by licensees: in some cases would-be licensees would rather pay royalties than agree to other proposed contract terms. As explained in my previous IP Finance posting, the ex-ante IP value is not just the incremental value for the licensee over the next best alternative. I also disagree that “[a]llowing injunctions to be given before the FRAND royalty rate or the validity of the patent is determined [by a court] would entirely alter the negotiating positions”. On the contrary, if the potential threat of injunctions is removed, infringers would be incentivised to take unreasonable positions and wait for the courts to make these determinations rather than to negotiate a FRAND licence.
ECIS also proposes another untested theory on IP disclosure called “ex-ante plus”. This article is already quite long enough, so I will leave my analysis on the fallacies and downfall of this approach for a future blog posting on IP Finance.
Final word on ex-ante disclosure
Ex-ante disclosure requirements do not necessarily increase transparency or reduce uncertainties on actual rates. On the contrary, as shown in my IP Finance postings on aggregate royalty rates and on fixing IP prices with royalty caps, disclosed “rack rates” are very misleading because they do not reflect cross-licensing and other realities in negotiating down from prices asked. As illustrated in an example presented by patent pool administrator SISVEL, adding up everybody’s disclosed prices can result in the nonsense of a theoretical aggregate royalty rate twice the average price of a licensed mobile phone, whereas actual aggregate rates are 9% or less in 3G. With asking prices differing so much from actual outcomes, disclosure can do more harm than good. ECIS seems to recognise these shortcomings in the context of telecoms while sidestepping the issue with “software”.
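The SISVEL point above boils down to simple arithmetic: naively summing every licensor's disclosed "rack rate" produces an absurd theoretical royalty stack, while the netted-out aggregate after cross-licensing is an order of magnitude lower. The numbers below are hypothetical, chosen only to reproduce the shape of that comparison (a stack equal to twice the phone's price versus an actual aggregate of about 9%):

```python
# Illustrative sketch with hypothetical figures: summing disclosed
# "rack rates" versus the actual aggregate royalty paid after
# cross-licensing and negotiated discounts.

# Assume 40 essential-IP holders each disclose an asking rate of 5%
# of the handset price (hypothetical inputs, not SISVEL's data).
disclosed_rates = [0.05] * 40

naive_aggregate = sum(disclosed_rates)   # theoretical royalty stack
actual_aggregate = 0.09                  # actual 3G aggregate: ~9% or less

print(f"Naive sum of disclosed rates: {naive_aggregate:.0%}")   # 200% of price
print(f"Reported actual aggregate:    {actual_aggregate:.0%}")  # 9%
```

The gap between the theoretical 200% stack and the roughly 9% outcome is exactly why disclosed asking prices are such a misleading guide to real licensing costs.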