Thursday, 31 May 2012
We thank our reader for these links and for contributing to the discussion on this important subject.
Wednesday, 30 May 2012
Those of you who follow me in the blogosphere know that one of my continuing interests is how thought leaders in the entrepreneurship space view the role of IP. I assume that IP practitioners are not involved in IP for its own sake (as intellectually engaging as we may find it), but rather for the services that IP may provide in enabling innovation and creativity. If that is so, the question is whether the practice of IP carries with it a certain dysfunctional tunnel vision, the far end of which yields only a narrow view that fails to take into account the broader context.
One way that I constantly try to measure the potential dysfunctionality of my professional vision is to read or listen to thought leaders in the world of entrepreneurship. While I recognize that not every entrepreneurial idea need involve the kind of innovation that requires IP protection (particularly patents and trade secrets), it is still likely that most successful entrepreneurs are acting in an area that one can expect to attract IP protection. To test this belief, I have become an avid listener of the weekly (during school term) podcasts offered by the DFJ Entrepreneurial Thought Leaders Seminar under the aegis of the Stanford Technology Ventures Program.
Since this lecture series and the related entrepreneurship program are lodged within the world-famous Stanford Engineering program, and its cachet enables it to attract world-class figures in innovation and entrepreneurship, it has always seemed to me that the narratives set out by these lecturers should be a useful barometer for measuring how they view the role of IP. Accordingly, I was particularly eager to hear the remarks of Dr. Kristina Johnson, one of America's most distinguished engineering personalities here.
Dr Johnson has a breathtaking resume, from Stanford Ph.D. to a university professorship, to scores of registered patents, to Deanship of the Engineering School at Duke University, to Provost at Johns Hopkins University, to a stint as an Under Secretary in the Department of Energy in the Obama administration, to her current involvement in an energy-related start-up. Given her broad engagement in just about every possible aspect of innovative activity, Dr Johnson seems to be an ideal observer about where IP stands in this world.
Listening to her comments, I was struck once again by the pronounced sense of disjunction between her view of IP and that of the professional IP community. In commenting on her own professional trajectory, Dr Johnson mentioned patents only once, and that was in the context of an anecdotal description of an early idea of hers (that ultimately was not patented). While she mentioned in passing her numerous registered patents, we were given no insight into how these inventions were exploited, if at all. While the Bayh-Dole Act here is briefly referred to, we are given no indication whether its enactment affected her inventive activity and/or whether it resulted in increased commercial exploitation of her inventions.
Dr. Johnson's overall narrative was a compelling one, but IP played only a minimal role in the tale. It was the Q&A that brought out her further thoughts about IP. The first question was asked by a member of the audience who identified himself as retired from a well-known industrial (tech?) company of another era. The question was simple: how did she see the role of IP? It would seem that the question derived from the same sense that I had in listening to her comments--something seemed to be missing in her narrative.
Dr Johnson's response was, more or less, to acknowledge trade secrets and patents (in that order, as I recall), with the comment that patents serve mainly to provide an early-stage "barrier to entry." No other potential benefits of patents were mentioned. One could claim that had she been given additional time or a heads-up on this question, she might have answered differently. That is possible, but for the record, her response was that the principal purpose of patents was as a barrier to entry. She added words to the effect that, in her view, patents are often overemphasized, where it is really the overall know-how of the company and the ability to execute that ultimately determine the company's success. Stated otherwise, patents offer certain tactical benefits but little more.
There is one more aspect to my tale. Stanford is the site of the National Center for Engineering Pathways to Innovation (Epicenter) here, a program funded by the National Science Foundation, whose mission is to infuse entrepreneurship and innovation skills into undergraduate engineering programs. During the podcast, Dr Johnson was mentioned as an advisor to the Epicenter. Keeping in mind that the Epicenter has as its goal the inculcation of innovation and entrepreneurship skills in engineering students, here is the question: How does the Epicenter view the role of IP in carrying out its mission? Does Dr Johnson reflect the consensus, or are there other views about the role of IP? If so, what are those views? Given the centrality of Stanford and Silicon Valley to engineering education as well as entrepreneurial activity, the answer to this question will have a material effect on the role of IP among the actors in that world.
"The transfer of copyright and related rights, industrial property rights, franchise rights or know-how is considered a provision of services and is subject to value added tax at the standard rate of 21%".It always seems strange to this blogger that an outright transfer of an intellectual property right should be regarded as the provision of a service.
Wednesday, 23 May 2012
I will admit it: I am a devotee of the Pareto principle. The basic idea, that in numerous phenomena a "vital few" will account for the lion's share of the results, seems to resonate with my own anecdotal and analytical experience. Often referred to as the "80/20 rule" (as in "80% of one's sales come from 20% of one's customers"), it is a principle I find myself noticing even when I am not really looking for it.
The eponymous principle derives from an observation made by Vilfredo Pareto that 80% of the land in Italy (and other countries) was owned by 20% of the population. However, the name "Pareto principle" was in fact coined not by Pareto himself but by a fascinating U.S. engineer named Joseph Juran, whose story is itself worth retelling. Suffice it to say that Juran's contributions to management, based on his adaptation of Pareto's findings, are an often under-appreciated milestone in the field here.
I would like to use the notion of the Pareto principle to consider a question that was asked on a recent "60 Second Tech" podcast produced by Scientific American magazine: How many people download only free smartphone apps, or spend very little (a dollar or so) on them? Citing a study by ABI Research here, the podcast reported that 70% of all users of smartphone apps download only free or virtually free apps. This, more or less, suggests a Pareto-like division, whereby only 30% of smartphone users ever pay for apps that are not free or virtually so.
This result resonates with a finding that I remember hearing about Twitter, whereby only 20%-30% of Twitter users account for the lion's share of Tweets. Once again, Pareto seems to be popping up all over. But the ABI Research-inspired podcast reported additional data regarding the app-buying habits of smartphone users. Having regard to the release that accompanied the report, it turns out that only 3% of app users account for nearly 20% of all expenditures for apps, where the average outlay by users who have at least once paid for an app (including this small number of hard-core users) is approximately $14.00 per month. Pushing even further on these results, it turns out that the median monthly outlay for app users is only approximately $7.50, approximately 50% less than the average monthly expenditures of $14.00.
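The gap between the $14.00 mean and the $7.50 median is exactly what a heavy-tailed, Pareto-like distribution predicts. A minimal sketch with made-up numbers (illustrative only, not the ABI data) shows how a small hard-core minority pulls the mean well above the median:

```python
import statistics

# Hypothetical monthly app spend for 100 paying users: most spend little,
# a few spend moderately, and a tiny "hard-core" group spends heavily.
spend = [5.0] * 80 + [20.0] * 17 + [62.0] * 3

mean = statistics.mean(spend)
median = statistics.median(spend)
top3_share = sum(sorted(spend)[-3:]) / sum(spend)

print(f"mean = ${mean:.2f}, median = ${median:.2f}")
print(f"top 3% of payers account for {top3_share:.0%} of all spending")
```

With these invented figures the mean sits well above the median, and the top 3% of users account for roughly a fifth of all expenditure, mirroring the shape of the reported data.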
Consistent with a Pareto-like view of the world, there is certainly nothing Gaussian about the discrepancy between the mean and median monthly outlay. In a word, app developers who hope to cash in on their creative efforts are relying on a very thin layer of app users. Moreover, there appears to be a clear distinction in the types of apps that attract "high-roller" users. Most of these are utility apps, most frequently for business purposes. A second category of apps that attract paying customers are "iOS games monetized through strings of in-app purchases." At the other end, apps regarding sports and the like can expect little or no commercial traction.
The ABI Research report suggested two ways that app developers can improve their odds of enjoying at least a semblance of commercial success:
1. Try to make sure that your app either supports or is supported by a web component.
2. Try to find a way to convince your customers that the app merits a long-term (by app standards) engagement. One way to do so is the time-honoured practice of first giving the app away for free, in the hope that you will develop a small yet highly devoted band of followers who will be willing to lay out monthly sums to ensure their continued access to the app and its updates or upgrades.

I have to admit--this is all very depressing. My son is nearing the completion of a Computer Science degree and I wonder what I would advise him, should he announce one day that he has decided to work on developing smartphone apps. If not quite "blood, sweat and tears", should I counsel him on the dismal likelihood of success? Or should I simply wish him the best on his journey and assure him that he will always have a roof over his head, if all else fails?
More generally, I wonder whether the business model described above is sustainable in the longer run: the promise of substantial revenues for the very few, combined with the more broadly based challenge of developing an app that will be used by others, irrespective of whether it generates any revenue at all.
More on the Pareto principle here.
Thursday, 17 May 2012
OUP: a great opening for the right candidate
"a global leader in the provision of multimedia English language teaching and learning materials. Operating in over 100 countries, it reaches millions of teachers and students each year ..."
A "broad understanding of copyright law, including experience of publishing agreements" is among the items listed in the job specification -- but there's nothing to say that a legal qualification is required, so this might be a perfect position for a gifted and enthusiastic amateur.
For full details just click here. Closing date for applications is 3 June 2012.
Wednesday, 16 May 2012
The IP Finance weblog welcomes the latest guest post from Keith Mallinson (WiseHarbor) on a subject which he has really made his own: the subtle interplay of sometimes competing and sometimes congruent private and public interest in the shaping of the dynamics of the market for the licensing of technology in the information, communication and telecom sector. This post touches on a topic that has been the subject of all-too-little discussion in IP circles. IP rights help to establish the existence of winners in the marketplace -- but who gets to decide who those winners should be?
The Folly of Picking Winners in ICT
Government attempts to favour and promote certain business models, companies and technologies are justifiably criticised. The UK Cabinet Office’s proposed policy to mandate the use of only pre-selected, royalty-free standards in public ICT procurement is similarly flawed. This will limit choice by foreclosing many popular open standards, numerous products which adhere to them and companies who depend on upstream licensing revenues. The Open Standards Board responsible for implementing this policy will face significant governance challenges in ensuring impartiality in standards selections. In contrast, free-market processes allowing competition among a much wider array of open standards and software licensing maximise customer choice across many different government departments, foster innovation, reduce lifecycle costs and enable obsolete or poorly performing standards to be superseded.
UK government ICT spending has run at approximately £18 billion annually in recent years. The direct and likely indirect consequences of such a large purchaser on the ICT marketplace, such as explicitly or implicitly obliging citizens, as well as government suppliers of other goods and services, to adopt the same standards, would be significant under this policy.
Dirigisme versus facilitation
Governments have a history of making bad decisions in championing particular companies, technologies and business models. For example, the Inmos semiconductor company received £211 million from the UK government in the 1970s and 1980s with its strategy to produce commodity D-RAMs and develop its “transputer”, but the company foundered, did not become profitable after many years and was sold to SGS-Thomson in 1989. The UK is effectively nonexistent in semiconductor manufacturing today. The UK’s “fabless” semiconductor companies such as ARM, Picochip (acquired by Mindspeed Technologies in 2012) and Icera (acquired by NVIDIA in 2011) rely on partners including foreign “foundries” to fabricate their designs.
France championed its Minitel videotext online service in the 1980s by withdrawing phone books and spending billions giving away the terminals to citizens. The associated technological standards and equipment manufacturers made minimal headway with Minitel technologies abroad and were eclipsed by the advance of the Internet in the 1990s. Minitel provided consumers with their first means of online access. However, views on long-term benefits to French consumers are mixed. Resistance to replacing the entrenched home-grown standard caused France to be a laggard in Internet adoption.
In contrast, supporting entire industry sectors where a nation has strategic strength is more justifiable and attracts widespread support from various commentators. For example, clustering of complementary and competitive companies can be beneficial. In these circumstances, market forces spur competitive behaviour, including some Schumpeterian “creative destruction”, which helps eliminate the sclerosis and risks that come with monoculture. For example, Silicon Valley in California provides a fertile technical and commercial environment in which various business models and many ICT companies, standards and products have flourished while others have failed.
Better for less
A key stated objective of the proposed Cabinet Office policy is to level the “playing field” for open source and proprietary software. It is, therefore, perverse that standards based on Fair Reasonable and Non-Discriminatory (FRAND) licensing and requiring patent fees should be the principal target for elimination under this policy. The policy will automatically also exclude many proprietary offerings that are based on those standards and which cannot practically be adapted to other, royalty-free, standards. In many cases, such standards are widely implemented by many suppliers and are used by the vast majority of business customers and consumers.
The Cabinet Office seeks to mandate specific royalty-free standards to achieve various objectives including cost reduction and avoiding vendor lock-in, as well as making ICT solutions fully interoperable. However, a report entitled Better for Less, published in 2010 by Liam Maxwell, now Deputy Government CIO and the proposed policy’s champion, identifies that most UK government ICT spending is with systems integration companies including HP/EDS, Fujitsu Services, Capgemini and IBM. The Government's over-reliance on large contractors for its IT needs, combined with a lack of in-house skills, is also a "recipe for rip-offs" according to a report by the Public Administration Select Committee (PASC) in July 2011. These suppliers are typically deeply embedded with long-term contracts that government finds difficult to unravel.
Software represents only a relatively small playing field in comparison to others in ICT spending. According to Forrester Research figures, market segments where open source software competes or combines with proprietary software products represent just 12.4% of $2.5 trillion total global business and government ICT expenditures including operating system software (1.0%), non-custom-built applications (6.7%) and middleware (4.7%). In comparison, IT services (11.6%) and outsourcing (9.8%) combined represent 21.5% of spending. Computer equipment represents 13.9%. The $2.5 trillion total appears to exclude very significant costs for internal staffing.
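For a sense of scale, the Forrester percentages quoted above can be converted into rough dollar figures; a quick sketch (shares taken from the text, rounding mine):

```python
# Forrester shares of the ~$2.5 trillion global business/government ICT
# spend cited in the text, converted to approximate dollar amounts.
TOTAL = 2.5e12  # USD

shares = {
    "operating system software": 0.010,
    "non-custom-built applications": 0.067,
    "middleware": 0.047,
    "IT services": 0.116,
    "outsourcing": 0.098,
    "computer equipment": 0.139,
}

# Segments where open source competes or combines with proprietary software.
software = sum(shares[k] for k in
               ("operating system software",
                "non-custom-built applications",
                "middleware"))

print(f"contestable software: {software:.1%} = ${software * TOTAL / 1e9:.0f}bn")
for name, s in shares.items():
    print(f"{name}: {s:.1%} = ${s * TOTAL / 1e9:.0f}bn")
```

The contestable software segments sum to the 12.4% figure, roughly $310 billion, dwarfed by services, outsourcing and hardware.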
Software licensing costs are included even in modestly-priced PCs. The PASC report also indicated it was “ridiculous that some departments spend an average of £3,500 on a desktop PC”. A 2011 Cabinet Office press release stated it would “end poor value contracts such as those where Government departments and agencies paid between £350 and £2,000 for the same laptop”. The response to a government procurement freedom of information request on this matter by fullfact.org shows that while these prices actually represent totally different PC specifications, the proprietary operating system and office document software is identical in each case, with differences relating to microprocessors, displays, wireless modems and functionality such as fingerprint recognition accounting for the very large pricing disparity.
Uncertain scope, invalid distinctions
The proposed policy states that standards selection will be limited to software interoperability, data and document formats. The scope of these terms is unclear, and in the next few years it will become even more difficult meaningfully to separate standardisation in these domains from others. The consultation’s terms of reference make the invalid assumption that software is distinct from hardware and that telecommunication is distinct from computing. Evidence weighs against these assumptions, given increasing technological convergence and other changes in ICT. Smartphones and tablets are becoming the dominant computing platforms in our personal lives and at work, just as PCs overtook mainframe computers and revolutionised ICT usage from the 1980s. Communications is intrinsic to these new mobile devices and is increasingly integrated with most desktop PCs, including web- and cloud-based usage where demarcations between software, hardware and service are submerged.
Video is becoming most prevalent. According to long-standing Cisco CEO John Chambers, in a recent Bloomberg Business Week article, “Every device, five years from now, will be video. That’s how you’ll communicate with your kids, with work.” Switching video standard is nothing like the peripheral task of simply replacing or adapting the mains plug on a TV set. Interoperability standards for video compression and encoding are highly complex algorithms that are deeply and extensively embedded in the workings of core hardware and software. Around one third of Internet traffic is streaming video, and mobile video traffic already exceeds 50%. Virtually all of that conforms to FRAND-based standards requiring patent licensing, with AVC/H.264 (MPEG 4 Part 10) enjoying the most widespread adoption.
The customer is always right
Standards requirements change with technological innovations and shifting user needs. It is very difficult for any centralised government administration to anticipate or react to the dynamics of ICT supply and demand. Competition among standards is highly beneficial. Market forces precipitate occasional revolutionary changes, with new standards displacing old ones (e.g. HTML substituting for videotext standards such as that used by Minitel), as well as continuous, incremental improvements to existing standards (e.g. HTML5 replacing previous versions of HTML). Changes in user preference and demand can be difficult to predict. For example, within a few years of the introduction of Apple’s iOS-based iPhone in 2007 and Google’s Android in 2008, former smartphone market leaders Nokia and RIM, each with its own operating system software, were completely up-ended. The highly innovative capabilities of the new software platforms and devices have succeeded because they are very different from, and much better than, what they replaced.
Different government departments have diverse needs. Whereas interoperability among UK government departments is important, so is optimising interoperability and access by end users, commercial partners and international organisations. Defence requirements can preclude the most widespread propagation of interoperability and encryption standards. Maximising functionality, security and interoperability for patient records among health authorities will be compromised by imposing standards that are chosen to accommodate requirements in education.
From a user’s perspective, functionality and interoperability with other users trumps supply-side considerations including the number of prospective ICT suppliers and lowest price.
Upstream savings, downstream costs
While seeking to eliminate licensing fees, open source software and royalty-free standards do not ensure lower overall costs. On the contrary, there is significant evidence that open source is no cheaper than proprietary solutions, including total ICT lifecycle costs with project implementation and support. In many cases, total costs may also be lower with technical efficiencies and large economies of scale that arise from the implementation of popular royalty-charging standards. It is practically impossible to create some high-performance ICT standards without infringing any patents for which royalties might be demanded.
Patent fees on popular FRAND-based standards are typically modest. Patent pool administrator MPEG LA licenses 2,339 patents it deems essential to H.264 from 29 licensors to 1,112 licensees for a maximum per unit rate of $0.20. This covers the vast majority of patents declared as essential to the standard. With around 6 billion mobile phones in service worldwide, aggregate royalties are low enough for GSM phones to be sold at price points down to less than $20. However, these fees significantly enable technology companies with upstream business models. They also allow vertically-integrated players to recoup some of their development costs from companies with downstream business models who make products but do not invest in developing the standards-based technologies. Eliminating the possibility of royalties merely forecloses upstream business models in favour of the downstream businesses, such as those that dominate government ICT spending, including hardware manufacturing, systems integration, technical support and outsourcing.
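A back-of-envelope calculation, using only the figures quoted above, illustrates why these fees are described as modest:

```python
# Illustrative upper-bound arithmetic from the figures in the text:
# MPEG LA's maximum per-unit H.264 rate versus a low-end handset price.
H264_CAP = 0.20        # maximum per-unit royalty rate, USD
PHONE_PRICE = 20.00    # low-end GSM handset price point, USD
HANDSETS = 6e9         # mobile phones in service worldwide

share_of_price = H264_CAP / PHONE_PRICE   # royalty cap as a share of price
aggregate_cap = HANDSETS * H264_CAP       # if every handset paid the full cap

print(f"H.264 royalty cap = {share_of_price:.0%} of a $20 handset")
print(f"aggregate upper bound = ${aggregate_cap / 1e9:.1f}bn")
```

Even on these worst-case assumptions the H.264 cap is 1% of a $20 handset, which is consistent with the claim that such fees do not preclude very low price points.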
Open and competitive ICT markets allow the widest range of business models and licensing practices, including royalty free standards and open source software. There are many examples of open source software running on FRAND-based standards requiring royalty fees. For example, there are various proprietary and open source software codec implementations available for the H.264 video standard. It would be nonsense to bar this standard in favour of another standard that has only tiny adoption (the most fundamental barrier to interoperability among users), inferior or unproven performance including technical compliance and interoperability among implementations. And, in the case of video, for example, it would most likely infringe some of the very same patents used by the successful standard it would be replacing. So there is a significant possibility that patent fees would be required despite wanting to wish them away. Developing a high-quality video codec standard is a formidable task drawing upon lots of intellectual property. Designing around the best technologies to avoid royalty bearing technologies will result in inferior standards and implementations.
There is generally no conflict between open source licensing and paying patent royalties to third parties. In certain cases where there is a conflict, it is of the licensors' own making. The most stringent open source licences, such as GNU GPLv3 (in which “patents cannot be used to render the program non-free”), are seldom used because of such conflicts. In cases where the licence prohibits patent fees, the only legal solution is for such software to be written so that it does not infringe any IP that has not also been specifically declared royalty free by its owner.
Governance with selector selection
The Open Standards Board responsible for implementing the policy will face significant governance challenges in ensuring impartiality in its members and the standards selection processes they oversee. It will be difficult to recruit board members who have the required competence in ICT standards, and who as individuals, employees, or academics, are completely free of any interests in the outcome of any standards selections. Members will be affected by their other interests in specific companies, standards groups and business models.
International harmonisation and liberalisation
The European Commission’s approved guidelines on the applicability of Article 101 of the Treaty on the Functioning of the European Union (TFEU) for horizontal co-operation agreements recognise the importance and value of standardization agreements.
“Standards which establish technical interoperability and compatibility often encourage competition on the merits between technologies from different companies and help prevent lock-in to one particular supplier.”

These guidelines lay out a comprehensive approach for conformity of standardisation agreements with Article 101 TFEU, creating a “safe harbour” while affording standard-setting organisations significant autonomy in setting policies for disclosure of IP and its licensing terms. FRAND licensing, with and without payment of royalties, is explicitly recognised. Licensing policies of many international ICT standards-setting organisations including IEEE, ETSI, ITU-T, CEN/CENELEC are consistent with these guidelines and the charging of patent fees on their standards. It would be a travesty to exclude their standards from government usage in the UK, even if this was only on the basis of attempting to do so for what the Cabinet Office delineates as software interoperability, data and document formats.
Tuesday, 1 May 2012
"... finds that intellectual property (IP)-intensive industries support at least 40 million jobs and contribute more than $5 trillion dollars to, or 34.8 percent of, U.S. gross domestic product (GDP).
While IP is used in virtually every segment of the U.S. economy, the report identifies the 75 industries that use patent, copyright, or trademark protections most extensively. These “IP-intensive industries” are the source – directly or indirectly – of 40 million jobs. That’s more than a quarter of all the jobs in this country. Some of the most IP-intensive industries include: Computer and peripheral equipment, audio and video equipment manufacturing, newspaper and book publishers, Pharmaceutical and medicines, Semiconductor and other electronic components, and the Medical equipment space".

I always have some reservations about such exercises, since there are always going to be problems of measurement. In the first place, if IP is construed in wide enough terms, there's scarcely a business of any size that isn't supported by it. Lists of customers and suppliers, licensed software, business and trading names are pretty well ubiquitous. There's also the question of causation: how many of the jobs in question are specifically related to the existence of IP, and how many to the provision of a service or the supply of goods that would have generated employment even if it had been generic and devoid of IP protection?
You can access the full (62-page) report here.
Thanks are due to Chris Torrero for the link.