Swamped by Switching Costs

Most publishers in scientific and academic publishing are small -- making less than $25 million in revenues annually. Because many of these are also non-profits, they have complex organizational forms that impose significant overheads, both in direct costs and in extended decision times.

When organizations switched or adopted technology vendors only every few years, it was possible to support these organizational overheads -- board members' travel, meetings and minutes, editorial retreats, and so forth. Every year, with new board members joining, new editors being trained, and new leadership rotating in at some level, a certain level of switching cost had to be absorbed for the organization to function.

The move into the Digital Age has created another major overhead for these small organizations as vendor adoption and transitions now occur frequently. This introduces another set of switching costs that can be more difficult to support because they are often unknown at the outset, which usually translates into them being larger than expected.

The traditional switching costs aren't going away. In fact, they are increasing slightly and dovetailing with the newer switching costs, as boards and editors and leadership become more virtual and want to use technology more and more, and as some take on a more international aspect. Boards now want agendas for their tablets, Skype connections, and so forth. The administrative staffs have become small-scale technology contractors themselves.

At the organizational level, the demands for technology upgrades, adoption, and migration seem endless. New standards, customer expectations, buyer expectations, intriguing offerings, and new vendor practices are everywhere. A new DTD standard can mean a year of work and thousands of dollars in new expenses, only to see the new standard supplanted just as the dust is settling.

For larger organizations, the challenges are daunting enough, but for smaller organizations, mostly non-profits, the constant switching costs -- requiring consultants, staff time, IT resources, and management of multiple vendors -- are not being met with much in the way of new revenues. As a result, these organizations are treading water in seas that grow more turbulent with each passing year.

Such organizations are increasingly faced with some hard choices -- pull back from the forefront of technology and truly bet on content; make some aggressive bets on technology in hopes that one hard push might tame the problem; or partner with a larger entity to take on the harder work of managing the technology, sales, and operational challenges of the Digital Age.

More non-profits are choosing this latter path, which may be wise. There is no winding back the clock. Also, the risks of this path are slight. It may feel shameful to give up some operational tasks, but the reality is that these operational tasks are usually not part of the mission. They are embedded now, but that is a recent trend. Printers used to provide composition, printing, and mailing services under one roof, leaving editing as the source of independence. Is the digital equivalent really that difficult to contemplate?

Playing current trends forward, it seems inevitable that there will be more small and non-profit publishers seeking shelter from the endless switching costs involved with digital publishing. This is pragmatic, especially as technologies do tend to find more equilibrium over time. The pendulum may swing back to some new balance in the future, as technology becomes more of a commodity and online commerce becomes more normalized. Until then, heavy switching costs are not going away soon.

Economic Fantasies

Recently, a PhD economist who has since become a librarian produced an essay that probably won no fans in either professional tribe. In the essay, entitled "Economic thought about 'gold' open access," the author (Jeffrey MacKie-Mason) ignores 15-20 years of evidence we have about how Gold OA works, whom it benefits, and why this occurs.

This is clear from his opening claim:

Will gold OA further strengthen the monopoly scholarly publishing firms? No. In fact, it is likely the most realistic path towards reducing or eliminating their market power.

Evidence from the UK and elsewhere shows quite the opposite, with the large commercial publishers benefiting the most in terms of the APC prices they can command and the market share they can claim. In addition, the pricing caps and overheads that Gold OA turmoil has placed on the institutional market have forced a number of mid-tier non-profit publishers into the waiting arms of the larger commercial publishers, creating an indirect benefit for these large firms. Gold OA has improved the economic health of large commercial publishers, both through direct market benefits and via further consolidation of the mid-tier market under their auspices.

Industries do not become more diffuse with time and maturity -- they consolidate and become less diffuse. There is no reason to think that Gold OA, which is really just an approach to pre-payment not much different from the page charges of old, can work differently. More players in a market only make the market less efficient -- through duplication of efforts, transactional friction, and inefficient purchasing and sales practices.

MacKie-Mason also repeats disproven claims, such as the idea that authors impose price competition on publishers. Authors are spending someone else's money in most cases -- a funder's or an institution's -- and feel very little direct consequence whether the price of an APC is US$3,500 or US$5,500. If APC price were a major author concern, eLife would have swept up all the papers in the Gold OA space almost immediately after debuting its no-cost publishing model, and PLOS' 11% APC price increase would not have been feasible.

But authors have non-monetary economic concerns that trump cost in most cases. The fact that MacKie-Mason misses the non-monetary economics of scholarly and scientific authorship probably explains many of his mistaken arguments.

The rest of the essay is equally divorced from reality, with assertions like:

  • Will research-production-intensive institutions be made worse off? No.
  • Will gold OA hurt under-resourced institutions (such as those in the “global south”)? No.
  • Will flipping to gold OA take too long and cost too much? [Long rambling answer, but his major point is "No."]

Evidence shows that research-intensive organizations bear the brunt of Gold OA costs, and are not fans of the notion of subsidizing others; that under-resourced institutions can't afford many Gold OA payment levels, with many currently relying on waivers, which are not assured in a more Gold OA-intensive future; and that the "flipping" to Gold OA has already taken 15 years, with the model still representing a low percentage of the overall market by nearly any measure.

The real problems are clear -- university administrators are cutting the share of university budgets allocated to libraries; more papers are being published than ever, leading to more journals and larger existing journals that need to cover more costs; and digital publishing has not proven to be simpler or cheaper but more complex and more expensive than print. Realistic solutions to these problems are paramount.

Economic fantasies that ignore 15-20 years of evidence don't help move us forward.

How Much Disruption Can We Take?

As a culture, we've spent the last 20 years romanticizing the notion of "disruption" without adequately assessing the potential consequences. In the romanticized version, disruption is viewed as a positive Darwinian trend, a necessary clearing of the underbrush, a winnowing of the field as sluggish incumbents are swept aside by more nimble insurgents.

But what if "disruption" is occurring on a larger scale, involving major portions of the economy almost simultaneously? What if basic economic assumptions that guide governments and nations are themselves disrupted? What might be the human cost of such disruption? What institutions would need to change? How will those changes be adjudicated and implemented? Can the changes come fast enough?

Recently, we've seen some basic tenets of our economy turn upside-down -- housing was supposed to be a safe investment, yet it blew up in our faces; low interest rates are supposed to propel business investment and job growth, yet they haven't had this effect in many large economies -- Japan being the most noteworthy, Europe not being far behind, and the US exhibiting some of the same characteristics; low oil prices are supposed to lead to more consumer spending and a bullish stock market, yet recently the opposite has happened; high corporate profits are supposed to lead to more jobs and higher pay for employees, yet wages are stagnant and job growth, while incremental and steady in the US, is not at the levels you'd expect historically, and certainly not indicative of profits being spent with an eye toward growing them further.

There are clearly some underlying reasons for these strange trends, including lax regulatory oversight, weak executive and legislative leadership at the national level, tolerance of tax havens and other loopholes incentivizing the hoarding of wealth, and short-term corporate reward systems and attendant thinking.

But there may be another factor, as well.

A recent thought piece by Albert Wenger, entitled "Limits of Capitalism," posits that this upside-down world may actually be partially explained by the disruption of the digital economy as it comes home to roost. In fact, the lax regulatory approaches, bewildered national responses, and short-term cash-hoarding thinking of corporate leaders may be partially explained by an emerging economy that is poorly understood and viewed as highly volatile and risky.

And that is -- no matter how revolutionary your mindset -- not good news. Why? Because we haven't created alternatives, and because the digital economy is moving so fast that it might take over before we can learn to manage it.

Wenger's essay is a mixed bag, but raises some interesting aspects of the digital economy worth considering.

He begins by analyzing the lack of scarcity for digital goods, which makes them difficult to price. Because marginal costs for digital goods approach zero, pricing them requires projecting demand against fixed or sunk costs. It also means dealing with piracy and non-payers, who can share in the benefits of a digital economy without paying for those benefits. This makes pricing more perilous, as pricing beyond a certain point invites non-paying abuse. But the lack of scarcity is the core problem, and there is no clear solution that capitalism, as an economic philosophy, naturally offers. This is an unsolved problem.
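To make the pricing difficulty concrete, here is a minimal back-of-the-envelope sketch (with hypothetical numbers, not Wenger's): with marginal cost c near zero, a break-even price has to be set against fixed costs F and a guess about paying demand Q.

\[
p \cdot Q \;\ge\; F + c \cdot Q, \qquad c \approx 0 \;\Rightarrow\; p \;\gtrsim\; \frac{F}{Q}
\]

If F is $100,000 and expected paying demand is 20,000 copies, the break-even price is about $5; if piracy or non-payment halves paying demand, the break-even price doubles, even though each additional copy costs essentially nothing to serve.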

This leads directly to another unsolved, central problem of the digital economy: incredible uncertainty. Rare events are potentially more damaging in a networked world (just think of the rapid meltdown of 2007-08, capped when a major money market fund "broke the buck," or consider the "dark pool" trading venues that create advantages measured in milliseconds, with cases of algorithms destroying large tranches of wealth in the blink of an eye). Rather than having days for news to traverse human interactions, many things happen so quickly and, in some cases, automatically that it's hard to fathom the risk and reward that capitalism assumes can be evaluated by rational human actors working on human time scales. Black boxes can quickly breed black swans.

Then there is the rapid rate at which new knowledge or findings can change the game, adding to uncertainty and creating a vacuum for future planning. As Wenger writes, "There is no price right now for an immortality treatment. Or for quantum computing at scale. We do not have enough knowledge to do either. How much attention should humanity devote to these? There are no prices to guide that allocation."

While these are fanciful examples, a simpler example illustrates the point -- the likely emergence in the next 5-10 years of self-driving cars. Automakers, insurance companies, city planners, tax authorities, and technology providers will all be reeling from the widespread availability of self-driving cars -- parking space needs will shrink, insurance pools will collapse, tax bases will crater, and technology companies will move to dominate transportation in ways we've only tasted slightly so far. And it will probably happen quickly, as senior citizens, commuters, and teenagers immediately benefit from a rapid adoption of safer, more reliable, less demanding, and cheaper automotive technology.

What will a ride in an autonomous vehicle cost? More or less than Uber? (There is a compelling case to be made that Uber is unsustainable and full of hidden costs for its drivers, costs that autonomous companies will not be fooled into accepting, like depreciation.) Where will they park during big events? Will some people pay to have a priority in the fleet? Will some cars be dedicated to taking Aunt Mildred to the pharmacy no matter what?

The rapidity of a shift in automotive technology and its attendant economy leads to one of the most concerning aspects of the digital economy -- the emergence of power laws governing this economy. As Wenger writes:

. . . with digital technologies we are seeing a shift to power laws for many more situations. For instance, on Youtube the most watched video has been watched billions of times compared to the vast majority of videos which has been watched just a few times. Or in ecommerce, Amazon is an order of magnitude larger than most other retailers. The same goes for apps in the appstore. The leading apps have hundreds of millions (and some even billions) of users. But the vast majority of apps has just a few users. Digital technologies are driving these power laws because of network effects combined with zero marginal cost.
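For readers who want the term unpacked: a power law says the chance of an item reaching size x falls off as a power of x rather than exponentially, which is why a few items can be enormous while the vast majority stay tiny. A rough statement (my gloss, not Wenger's):

\[
P(X \ge x) \;\propto\; x^{-\alpha}, \qquad \alpha > 0,
\]

so the smaller the exponent α, the more extreme the concentration at the top -- the most-watched video, the largest retailer, the leading app.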

Google is positioning itself to dominate the self-driving market. Is there any doubt that they and perhaps one other firm (Tesla, possibly) will manage to do so? In the digital economy, there's a powerful trend toward power laws and duopolies.

Perhaps at this point it's worth introducing a problem Wenger illustrates nicely. When farming was a major economic driver, agrarianism was the focus of government policy and practice, and the farm lobby was powerful across nation-states. This endured even during the emergence of large capital markets, until the shift was clear. Then, we moved into capitalism, where capital -- its flows and preservation -- became the focus of governmental policies and practices. Agrarianism hung on for a time, and still has a powerful effect in some states that remain agrarian, but in general our government is now one focused on building and retaining capital.

As the technology economy emerges, there is likely going to be a continued reliance on capital and capitalism, with its assumptions and basic tenets used to guide economic policy and shape sources of power. It will quickly become outmoded, however, and is already showing signs of irrelevance. Amazon, Google, and Facebook seem to operate in a new economic zone -- digitalism, let's call it -- which scales differently and disrupts capitalism with dispatch. But we will be left wondering how to manage this economy -- politically, socially, economically. And that's disruption that will be both deep and wide.

Pipes, Faucets, and Net Neutrality

Net Neutrality is interesting to contemplate in light of initiatives like Facebook Instant Articles and Google AMP. The animating spirit of Net Neutrality was that everybody using the Internet should have equal access to its bandwidth and reach, so that companies like Verizon and Comcast couldn't throttle access in order to extort money from content providers for faster speeds.

Yet, here we are, with two dominant internet companies positioning themselves relative to content providers in about the same way Verizon and Comcast were planning to do. There are no objections this time, no mass movements, no protests. In fact, it seems the changes are barely registering, even as they echo those attempted by equally large companies just a few years ago. Only recently have some taken notice, such as the head of Drupal and the head of WordPress.

Perhaps the main difference is between hardware providers and software providers. After all, Facebook and Google aren't looking to change the way the major pipes provide information. They're just regulating the faucets. It doesn't seem like a system-wide change, until you realize that Facebook and Google are the Delta and Kohler of the internet -- they make a lot of the faucets we use every day.

They've built a faster internet, and content companies are being asked to cater to these giants in order to use it.

It's not as if the software from Google and Facebook and others hasn't been shaping the internet in important ways for years. Companies have been scrambling to keep up with Google's search engine algorithms to remain discoverable, which in another context might be interpreted as a hidden tax on content providers imposed by Google's dominance. Facebook had been less intrusive until Instant Articles, but apparently saw the opportunity its dominance created for what it is -- a chance to create a walled garden of content for itself.

The software (faucet) providers have their commercial interests at heart, and are not providing faster load times and more reliable delivery out of the goodness of their hearts. There are plenty of provisos about what can and cannot be displayed in content and ads, making Facebook yet again a player in potential censorship of ideas and speech, a perilous position for a communications platform.

Aside from this, there is the precedent this sets. While we may have addressed the problem of internet service providers wanting to be paid by both content providers and consumers, we now have equally dominant businesses -- Facebook and Google, which as advertising businesses are paid because they aggregate consumers for advertisers -- positioning themselves to make a two-tier internet as well. The playbook may have some different vocabulary and tactics, but the goal is the same -- drive growth by creating a faster tier of internet service for those willing to pay for it.

Should we be concerned? It's not as if Google always lives up to its "don't be evil" mantra, having apparently colluded with Apple to stifle fair hiring in Silicon Valley a few years ago. Facebook's controversies have merited a long, well-interlinked Wikipedia page called, "Criticism of Facebook." Memorable controversies include its manipulation of experiences for a study, a nasty anti-Google campaign, and aggressive tax-avoidance strategies. Both companies have major lobbying campaigns in DC, and both are as sophisticated as any multi-billion-dollar firm.

Dominance is dominance, whether it's achieved through hardware or software, pipes or faucets. It seems that Facebook Instant Articles and Google AMP are two software providers' attempts to create a two-tier internet, which will make it harder for new entrants to succeed, help the two companies become even more dominant over content providers and their speech, and ultimately threaten the Open Web.

Or maybe Silicon Valley companies get a pass when it comes to breaking Net Neutrality with their business practices and software. After all, they have promised "don't be evil."

Contemplating the "Pay Wall" Metaphor

The rhetoric of access in academic publishing is full of metaphors (open, toll, barrier), including the notion of "access" itself, which is actually a metaphor for acquisition. "Open access" is stated less metaphorically as "free acquisition." After all, acquisition of content is not the same as content access. Literacy -- functional, domain-specific, and contextual (a physician from 1920 would be lost in a specialist medical journal article today) -- is a key component of access. An engineer acquiring a geology paper may find the content quite inaccessible on an intellectual level. "Access" is a metaphor for "acquisition" in this case, and not a metaphor for "comprehension" or "understanding." Yet often the metaphor of "access" is loosened to feel inclusive of comprehension or understanding, as this serves rhetorical purposes. For instance, "public access" is used as a metaphor for making research content free for non-specialists to acquire.

One metaphor that has persisted for years is that of the "pay wall," normally written as "paywall" but separated here for clarity. The "pay wall" is a metaphor worth contemplating, as exaggeration of the impassability of payment systems via this metaphor has led to all sorts of strange behaviors, accusations, and social complaints.

The "pay wall" metaphor is almost unique to online content purveyance and normal commercial transactions. There are many other transaction points put in front of acquiring something which we don't refer to as a "pay wall":

  • a toll booth on a state road
  • a parking meter
  • a cover charge at a bar
  • a ticket to a movie
  • a shopping cart in an online store
  • a service charge for a concert ticket
  • a subscription requirement for Netflix or other streaming services
  • a postage stamp

None of these is considered a "pay wall." Examples of payment points after you seem to have acquired what you came for -- a checkout line at a supermarket, a bill at a restaurant -- aren't considered pay walls, either. In fact, like the others, avoiding these is usually viewed dimly, and some carry fines or threat of arrest.

Why these are considered normal transactions in a demand-driven economy with fiat currency, while online content transactions are branded "pay walls," remains elusive. The purveyor has taken the risk of putting together something of potential value, and then asks others to pay for the privilege of using these materials, services, or spaces. There was a decade during which the mantra "information wants to be free" was prevalent, and only some correction to embrace the entire quote -- which also holds that information wants to be expensive -- drove this from the field of play.

Yet, when it comes to online content -- mostly newspapers, magazines, and academic journals -- we see the metaphor of a "wall" used to describe the transaction request. The metaphor is extended to include a "gate" at some point, and sometimes a "meter" if counting is added to how the "gate" works within the "wall." But the "wall" aspect is so ingrained that even metered transaction systems are referred to as "soft pay walls." I imagine an inflatable wall, which makes for a very poor metaphor.

As others have pointed out, Gold OA itself does not do away with the notion of a transaction barrier, but merely moves this barrier to the author at the point of acceptance, rather than the reader at the point of content acquisition. Nevertheless, this transaction point is not itself referred to as a "wall" of any sort. It is given the more technocratic and obscure label of "APC." There is no metaphor at all, so the term is neutered rhetorically. It can't inflame anger or a sense of injustice as it is currently positioned.

The metaphor also fails often since most online content in newspapers, magazines, and journals is acquired through licenses or subscriptions. The "pay wall" is usually invoked when someone encounters content for which they or their employer have not established a license or subscription.

Metaphors work well when they inform, poorly when they mislead. The best metaphors allow you to anticipate how things might work based on what you know about the associations created by the metaphor. For example, calling someone "boiling mad" is a way to let others know that there is a simmering rage that might spill over if the person is pushed. You can anticipate, and plan accordingly. The same for "wooden faced," which lets you anticipate that someone is perhaps hiding their true feelings, and may require more consideration or care during interactions, as they don't show their reactions easily.

Calling transaction points for content "pay walls" isn't a helpful metaphor, as it suggests only a few alternatives -- bashing through the transaction point, scaling over the transaction point, or turning away from the transaction point. It suggests there is actually no clear passage through the transaction point, which may be why the metaphor has gained popularity among those who feel it is illicit for content companies to ask for payments. In academic publishing, this has been a particularly fraught area, as the notions of taxpayer-funding and moral rights make the metaphor of a "pay wall" inherently inflammatory to some.

Of course, a subscription or license remains a viable and often inexpensive alternative, but the emotional payoff of using the "pay wall" metaphor remains tempting to those seeking to make a point.

Publishers have not done an especially elegant job of constructing their transaction points. One mistake has been to put the per-article (or pay-per-view) charge front-and-center in many cases, rather than cheaper and attractively packaged subscription options. These clumsy transaction points come off as tone-deaf at best, arrogant at worst, and certainly can provoke a negative emotional response. It's as if, instead of a simple turnstile, there's a person standing in your way, scowling, arms folded. The path should be clearer.

New approaches to content transaction points hold some promise. Metered paywalls (again, the metaphor persists) have many more elegant options than traditional binary transaction points, including softer messaging as the meter increments, better interaction models, and more sophisticated modals and interstitials that can be contextual and smart.
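To make the contrast with a binary wall concrete, here is a minimal sketch of what a metered approach might look like; the thresholds, messages, and function name are all hypothetical assumptions, not any publisher's actual implementation.

```python
# Hypothetical sketch of a metered "pay path" rather than a binary pay wall.
# Thresholds, messages, and names are illustrative only.

FREE_ARTICLES_PER_MONTH = 5

def meter_response(articles_already_read: int) -> dict:
    """Decide what to show for the next article request as the monthly meter fills."""
    remaining = FREE_ARTICLES_PER_MONTH - articles_already_read
    if remaining > 2:
        # Early in the month: no friction at all.
        return {"show_article": True, "message": None}
    if remaining > 1:
        return {"show_article": True,
                "message": f"You have {remaining} free articles left this month."}
    if remaining == 1:
        return {"show_article": True,
                "message": "This is your last free article this month -- "
                           "subscribe to keep reading."}
    # Meter exhausted: present options instead of a blank wall.
    return {"show_article": False,
            "message": "You've reached this month's free limit. "
                       "Subscribe, buy this article, or sign in through your institution."}

if __name__ == "__main__":
    for already_read in (0, 3, 4, 5):
        print(already_read, meter_response(already_read))
```

The point of the sketch is the shape of the interaction: friction appears gradually, with context and options, rather than as a blank refusal.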

This may be the path forward -- breaking down the "pay wall" into a "pay path." There is plenty of evidence that if you provide people with a reasonable and pleasant way to pay for what they want, it works. The challenge for academic publishers is to adopt these practices quickly enough so that the metaphor changes sooner rather than later. We don't have time for more walls to be built.

Incentives and Micro-attribution

You have to hand it to academics -- they have strangely sophisticated imaginations, which tend to produce overly complicated dreams of how systems and people can or will work.

One of the newest I've heard is the idea of "micro-attribution" within papers. The idea is that if an individual contributed a table, analysis, or paragraph, that contribution could be tagged in such a way that individual contributions are identifiable piecemeal and attributed accordingly.

There are some obvious problems with this, however.

  1. There's no incentive to adopt it, because everybody loses. Right now, if you're listed as an author on a paper, you get full credit (with the first author getting 110%, but that's another matter). Micro-attribution makes everyone involved with a paper a fractional contributor, meaning everyone involved will have contributed less than 100%. In other words, everyone loses some proportion of perceived or possible contribution. On an incentive level, why would academics embrace this?
  2. It can't differentiate quantity from quality. We've all experienced this -- someone, with a suggestion, critique, or edit, quickly makes new insights possible, heads off disaster, or suggests powerful new avenues of inquiry. This may be a single word, a symbol ("!" or "?") scrawled in the margins of a review, or a calculation done right. Yet this person may contribute nothing more, and nothing really identifiable in the final paper. How do you count their contribution in a micro-attribution environment?
  3. It would be expensive to implement. Tagging articles and identifying who contributed what throughout submission and review and revision processes would be expensive, costing time, creating confusion, and requiring systems and tagging experts. In a publishing environment where any added cost is heavily scrutinized, there seems little chance of these expenses being widely adopted or embraced.
  4. It's unclear where it ends. As a former copyeditor and substantive editor, I know there are plenty of occasions when a copyeditor or line editor makes suggestions or clarifications or recalculations that can cause major revisions to a paper. Layout and graphical artists often contribute materials or improvements, as well. Statisticians catch errors or make authors beef up their analyses. Are all of these people now "micro-authors" or "micro-contributors"? What about reviewers who have a similar effect? Are they now contributors? (Even if they're anonymous?) What about a reader whose suggestion later causes a correction?

In short, micro-attribution seems to have upside-down incentives and payoffs. Nobody gets more from doing more work, and everyone sacrifices something. Where is the benefit of this again?

Puncturing the Legend of Sci-Hub

"The Legend of Sci-Hub" sounds like a mediocre video game for the Atari console, but is in fact an evolving story in scholarly and academic publishing. In some circles, Sci-Hub has seemingly gained the status of a cause or movement. According to this very recently formulated legend, Sci-Hub has the power to redistribute wealth, right social injustices, and cripple illicit industries built on the backs of hardworking taxpayers and academics.

Like most legends, it's based on comfortable falsehoods -- in this case, at least a half dozen.

Started a few years ago by a Kazakhstani neuroscientist, Sci-Hub recently leaped into the spotlight thanks to a lawsuit from Elsevier, a PR-savvy response from its founder, and lurking resentment of publishers in some circles. Major media coverage has veered from the naive and insipid to the professional and thoughtful. Industry responses have been all over the map, with the most notable aspect being how quiet the publishers themselves have been.

In the noisy part of the realm, there are a few key pieces of misinformation that need to be corrected:

Falsehood #1 -- Sci-Hub provides a superior user interface and user experience. If you've actually tried it, you know this is not true. The technology stack is unreliable, its use of CAPTCHAs is incredibly annoying, and the UI is cobbled together from Google Scholar and a few other available systems. Use it, and you'll likely encounter errors, recursive CAPTCHA loops, or broken links. It is not a superior system or experience.

Falsehood #2 -- Sci-Hub provides a better way to find articles. Apparently the people who believe this have never tried entering a DOI into Google or Google Scholar. If you do that, you are brought immediately to the article you're looking for -- no CAPTCHA required, no clumsy interface, no broken technology. Many publisher sites also support this use case. As for keyword or citation or author searches, I've found nothing superior to what you find in standard search engines. In fact, Sci-Hub strikes me as weak in this regard. Sci-Hub is mostly PDFs, so its searchability is limited to what PDFs afford.

Falsehood #3 -- Sci-Hub's mission is charitable, like Robin Hood's. First, most publishers have already adopted the role of Robin Hood by granting free access to low-income countries, participating in HINARI, PatientINFORM, and similar initiatives. Sci-Hub is indiscriminate in its approach. It is not "robbing from the rich to give to the poor." It is "robbing from the academic publishing economy in order to destroy it." That's not charitable; that's anarchist.

Falsehood #4 -- Sci-Hub is not stealing, but doing righteous work. Sci-Hub is stealing -- from institutions through phishing scams and other means; from non-profit publishers who can't afford the decreases in usage Sci-Hub portends; and from publishers run by and for scientists. There is nothing righteous here. There is no "higher good," because its approach defeats itself -- if Sci-Hub does manage to destroy the scholarly publishing economy, it offers no alternative but collapse. Sci-Hub is simply destructive, not constructive. It has no future.

Falsehood #5 -- Publishers deserve this, because they are exploiting academia. This is perhaps the most fundamental lie. The fact is that publishers are an extension of academia, and a necessary one. Publication is necessary for researchers -- publishing research findings makes them theirs (rivalrous) and gives the researchers priority -- so an independent system of publication priority and placement is necessary, no matter what we call the entities doing it; such a system is also necessary for research to move beyond commercial, proprietary, and power boundaries. Today, the organizations that do the work of making research public in an organized manner and maintaining much of the scientific record are what we call "publishers." Tomorrow, we might call them "selectors." Or "arbiters." No matter the label, academic centers and research facilities need the role to exist somewhere. Some of them do it themselves through independent presses. Overall, at a cost of about 0.6% of the overall research endeavor, publishers aren't exactly taking an exploitative share, and the field is open for those who wish to provide the services more cheaply. But Sci-Hub again offers no vision of the future, only the short con of a thief.

Falsehood #6 -- Sci-Hub itself does not exploit taxpayers or academics. If you believe that publishers exploit taxpayers and academics (which are both questionable beliefs, at best), then you have to accept that Sci-Hub also exploits taxpayers and academics. How? It also takes advantage of the so-called "unpaid labor" of academics and "taxpayer-funded research." However, it does this in order to undermine organizations -- non-profit publishers, commercial publishers, and academic libraries -- that hire people who pay taxes, many of whom are academics themselves. This has the potential to reduce revenues available to local and national governments slightly, increasing budgetary controls, which often fall on research budgets. Russia, where Sci-Hub's founder resides, spends less than 10% of what the US spends on scientific research each year, and is itself much more secretive about its research agenda and results than most Western societies, with increasing totalitarian attitudes adding to concerns. If publishers are hurt by Sci-Hub and have to lay off staff or close offices, a possible net effect is lower tax revenues, which could lead to slightly less research funding from the major Western governments supporting discovery or clinical research. The alternative is more taxes paid by non-publishers. Sci-Hub's piracy not only exploits taxpayers and academics, but it undermines taxpaying and academic career options (publishing, editing) while hurting government funding in the economies that support the lion's share of worldwide scientific research. It can be interpreted to some degree as another echo of Cold War political tensions, but is also, again, anarchist, not charitable or sustainable.

We continue to see emotions winning the day when issues of access arise. Empirical evidence of Sci-Hub's inadequate user interface and search facilities, economic implications, destructive purpose, and unnecessary role should not be ignored. Otherwise, we're back in a world of myth and legend.

Open or Closed?

We are seeing some interesting tribulations around "open" approaches to information sharing and knowledge exchange -- from the FBI/iPhone machinations to the naming of Boaty McBoatface, one aspect of the Digital Age is that authority's boundaries are clearly changing.

We may not have a clear sign about how the Apple/FBI case would have resolved, but there are strong indications that the US government would have prevailed. Since the phone was owned by the San Bernardino civic authorities, who wanted it opened, there was nobody but Apple standing in the way. It seemed like something the court would have made quick work of. After all, this was a case of terrorism in which 14 people died and many were injured. Apple's standing was based on a hypothetical it wrapped around its clear pecuniary interest in maintaining the reputation of security for its iPhones.

In an interesting interview on NPR's "Fresh Air" with Fred Kaplan, author of "Dark Territory: The Secret History of Cyber War," the Apple/FBI open/close standoff was split in an interesting way -- one which was a lose-lose for both Apple and the FBI. The split? Someone was able to find a technical solution to the iPhone's security features, allowing the FBI in. This was clearly a loss for Apple, as its vaunted iPhone security is now known to be vulnerable. It was also a loss for the FBI, which wanted a legal ruling expanding its ability to search phones of all types. Neither side got what it wanted -- Apple did not get to keep its image as a technology business with impenetrable security features, and the FBI did not get to set a legal precedent that would have moved its investigations a mile ahead into the modern age.

Here, it's interesting to note, we have authority arguing that Apple should open a phone, and a technology company arguing to keep the phone closed. That's a bit of a flip from what we might expect. After all, information wants to be free, as the (selectively quoted) mantra goes, and technology companies are often cited in business circles as epitomizing openness -- open offices, open source, and open door policies. Monetary incentives have a way of calling open's bluff.

We are often ourselves conflicted about where the boundary of open and closed should be drawn, and where authority now resides. Perhaps this is because what is ours and what we can control are both less apparent and less accessible. Computers have become appliances -- you can no longer simply take them apart and swap components as easily as you used to, especially tablets and smartphones. Legal documents and terms of service are agreed to without being read, just with the seemingly innocuous click of "OK" or "Accept."

Because of the new opacity of computing, technology companies like Apple and Facebook and Google seem to have the means to redraw the lines between open and closed in ways we barely appreciate -- Is your location being tracked by all three via your smartphone right now? Did you open that window to your location? How do you close it?

Yet, in a fragmented media space in which we're conditioned to feel entitled to information of all sorts, it's almost offensive to have someone say we can't access information. So the large tech companies have a psychological advantage as well as economic and technological advantages -- their tools are seen as best when left closed, while we want the information that makes most of them interesting to be open. Meanwhile, journalism shrinks and pales, becomes an echo chamber, and seems more limited than ever. "Open" has "closed" many newspapers and media outlets.

Perhaps we need to move beyond these simple words, and take a hard look at the complex technological, economic, and civic realities of our times. We have new media barons. We have new centers of civic power. In this recent case, Apple's interaction with the FBI was a sign that not only can some companies be too big to fail -- some companies can be too big for their britches.

Making Something From Nothing

If you're the type of movie buff who watches "the making of . . ." bonus materials, you're probably also the type of person who is fascinated generally with how things are made. This healthy fascination has become more fashionable over the past few years, with Maker Faires and crafts events growing in popularity.

There has always been a justifiable interest in what it takes to make something from nothing.

Thinking about this can start with the realization that most things we take for granted and even treasure -- wine, basketball, movies, talk shows, music, granola, socks, publications -- do not occur in nature. Nearly every "something" we use in daily life once did not exist at all. Even the bowl holding the fruit or the bag wrapped around the vegetables had to be made somewhere. How did these things come into being? Where and how were they made? How did they get here?

Almost our entire economy is predicated on this transformative practice of making something from nothing, yet how this happens is something we don't talk about nearly enough. The Industrial Revolution separated craft from consumption by concentrating craftspeople and automating major craft functions. Factories concealed the act of creation from the outside world, and specialization and automatic processes compartmentalized the act of creation from many of the people involved in the actual work.

International trade and the globalization of the economy have hidden the creative process even better. We don't see where the iPhone is made, or the flatscreen TV, or the laptop. They arrive in boxes on trucks from faraway lands. The only connection to the source is a tracking number.

The need to make something from nothing is shared by every organization -- nothing is finished, nothing is set, something is always in need of being created. Because the creative process is so well-hidden in the modern world, however, it's often hard for leaders to realize what it's possible to make from the raw materials they have at hand.

Let's assume there is a leader convinced that an organization needs transformation, needs to make something from nothing. What then?

If you read books and articles from experts in entrepreneurship and creativity, optimism and persistence are two traits often cited as critical to success in making something from nothing. Watch those "making of . . ." segments, and you see this. Film directors spending days pursuing the perfect light, composers spending weeks seeking the perfect chord and sound structures, or business people spending years refining their market and pricing plans.

The hit musical "Hamilton" didn't appear on Broadway all at once. It started off-off-Broadway, was revised and refined until it made it to off-Broadway, refined some more (staging, music, lyrics, pacing), and finally allowed to open on Broadway. By this time, it had been performed dozens of times in various stages of refinement. Underlying such painstaking processes is a certain optimism and resistance to setbacks that can look like patience but really is hope -- hope that a breakthrough is just around the corner, that potential sensed can become potential realized.

Remembering that a primary function of any organization is transformative -- not disruptive, not passive -- is vital to success. And having the patience and optimism and persistence to do the painstaking work of transforming nothing into something . . . having all that hope . . . well, that is still amazing.

 

Governance -- Alone On the Sidelines

We're in the midst of change at every level of our civilization -- from how and what we use and consume, to how we purchase things, to the infrastructure we rely upon, to our culture, and even nature itself.* The only constant is change. Except for governance.

Streaming music, digital publications, smartphones, streaming movies and television, online banking -- these and other changes represent how fashion -- the ways we get things done -- has changed.

Online, cellular, https, encryption, passwords, rooftop solar, GPS -- these and other changes represent how our infrastructure has changed.

Amazon, Netflix, TurboTax, Spotify, PayPal, Square -- these and other changes represent how commerce has changed.

Gay marriage, legalized marijuana, income inequality, terrorism -- these and other changes represent how our culture has changed.

Global warming, Zika virus, MRSA, pythons in the Everglades -- these and other changes represent how nature has changed.

But finding examples of how governance has changed is more difficult, especially when it comes to academic and non-profit governance. Based on many observations both at the national level and across academia and society publishing, it seems governance is sitting out this 20-year period in which everything else is changing and adapting.

At the societal level, governance has withdrawn, especially in the US at the federal and state levels. From government shutdowns to budgetary stalemates to funding cuts for important long-term spending initiatives (infrastructure, research and development), the crisis in government/governance is palpable. The current election cycle in the US is another troubling indication.

But the US is not alone, as austerity politics, threats by the UK to depart the EU, lax security in Belgium, scandals in Greece, a crumbling Autobahn in Germany, and other abdications of responsible governance exist in many places you'd expect to do this better.

At the level of universities and non-profits, governance bodies seem disconnected or lost. There are many reasons for this, including the fact that many of those involved in governance are highly insulated from the effects of their decisions, the honorary/ceremonial nature of governance and its service, and the lack of any sense of urgency or importance around the role.

It was only when I was recently giving a talk about change that this hit me squarely. Of all the layers of civilization in this model,* the only one I could not pin to direct and active change was governance. It's like governance is sitting out this decade or two.

The consequences of this are readily seen -- inadequate funding of people and projects; large stores of retained earnings without expenditure plans; self-protective and self-perpetuating organizations rather than transformative, responsive entities; and downstream negative effects on professionals and citizens, with inordinate negative effects on the youngest members of both groups.

There are efforts to wake governance up -- the campaign of Bernie Sanders, the push for the $15 minimum wage, efforts to control global warming, and initiatives to increase civil liberties and reduce civic dangers. But governance and government, both of which seem to have their eyes and ears covered, remain ineffectual and out of touch.

Until governance wakes up and actively engages with a rapidly changing world and civilization, we can expect more problems and more inadequacies. It is the layer of a changing world that is stuck in place, and one that needs a real push to get going again.

* This model is drawn from Stewart Brand's "The Clock of the Long Now" and its model of the moderating forces affecting civilizations.

The Pricing Challenge

Recently, a post resurfaced on the Scholarly Kitchen revisiting trends in library expenditures and journal prices. The findings show that prices have increased by only about 9% on a per-journal basis, while expenditures have tripled owing to the rapid increase in outputs -- which have essentially tripled themselves. These findings became clear once pricing data for digital licenses were used rather than traditional print journal prices.

While this is perhaps cause for praise, as publishers have kept their journal prices under control in the digital age, it is also cause for concern. The consumer price index (CPI) increased by nearly 68% over the same period as the pricing study, a rough measure suggesting publishers haven't been able to raise their prices at a level that matches general inflation in the overall economy. Instead, they've lowered their margins while making up for the pricing weakness with volume and efficiencies, including a lot of outsourcing and offshoring.
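A quick back-of-the-envelope calculation using the figures above shows the size of the gap in real terms:

\[
\frac{1 + 0.09}{1 + 0.68} \;=\; \frac{1.09}{1.68} \;\approx\; 0.65,
\]

that is, a per-journal price that rose 9% in nominal terms while the CPI rose 68% fell roughly 35% in inflation-adjusted terms over the period.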

Volume and efficiencies can only carry you so far. There are only so many scientists and studies, and there are only so many services you can negotiate down or eliminate. Already, some are complaining that publishers aren't providing enough service as new standards and expectations hit researchers.

Yet, nobody is putting much new money into the research economy, and certainly not through the libraries. Tuition increases continue to outpace declines in academic libraries' share of university budgets, but this also cannot go on forever. The tuition burden is sure to reach a breaking point, at which increases will stop if they haven't slowed dramatically first. When this happens, library budgets, unless their downward trend in share of university spending is reversed, will shrink even more.

One gambit that many seem to regard as beyond the realm of possibility is simply to introduce significant price increases. This has happened in the past, and while typically attended by public shaming and controversy, the pricing seems to largely stick. Commercial publishers and the large non-profits have been the most savvy in this regard, leaving the smaller non-profits -- which are rightly worried about their ability to navigate forward from here -- out in the cold for the most part. These more cautious organizations tend to benchmark against what libraries and similar organizations signal. Perhaps it's time for them to grit their teeth and take the pricing plunge. After all, on the basis of quality and desirability, their products often seem underpriced to begin with.

Whether pricing is incremental, daring, or even discounted, risk is the name of the game. Are you pushing too far? Leaving money on the table? Devaluing your reputation? Pricing is never easy, but there's a fair amount of experience to suggest that moving prices northward has fewer downsides -- if you have the stomach for it -- than other alternatives.

Of Rocks, Rivers, and Poor Illinois

There are two basic ways people think about money -- it's either a rock or a river.

Those who think money is a bunch of rocks tend to want to store up the rocks and won't tolerate a rock debt or deficit. After all, that means someone else has more rocks than you, or you owe them some of your rocks, and that can't be good.

Those who think about money as a river believe that the energy of the water flowing is what turns the wheels of commerce. Whether the money flows from here to there, or there to here, only matters if your waterwheel is oriented a certain way. Accordingly, they tend to build wheels that work in either direction. They also know that water cycles back around, with precipitation and runoff both contributing. Water moves on its own, and there is an equilibrium if it's managed well.

I'm in the camp of the river people. Apparently, the governor of Illinois is in the camp of the rock people, and his state's higher education system is about to pay a stiff price for his hard-headedness.

The Democratic-led legislature and the state's new Republican governor are at loggerheads over the budget. The rock-oriented Republicans believe that deficits are bad, that every spending request must be balanced out by spending cuts, and so forth. The river-oriented Democrats don't worry about deficits as much because the flows of money are their focus, and spending is a flow.

This discrepancy in metaphors has led to dire consequences for state universities in Illinois:

  • Students are not receiving money from grants and scholarships
  • Faculty at all levels are being threatened with job cuts
  • Universities are not receiving state funds, which can be up to 1/3 of their budgets
  • Strong faculty are being wooed by universities outside of Illinois with job offers
  • Students are beginning to fill out paperwork to transfer outside of Illinois

The first three short-term issues are important, but the last two long-term issues may haunt Illinois for decades to come. Fiscal reputation is part of fiscal responsibility, and if a fiscal policy has as a consequence a high degree of unreliability, the damage can be enduring. Illinois already has a reputation for corruption (four of its last seven governors have ended up in prison). Now, it may be gaining a reputation for fiscal unpredictability.

Illinois is not the only state suffering from rock-headed budgeting. Kansas, Michigan, Louisiana, and Ohio are all suffering at the hands of budget warriors who believe that a state budget is like a household budget, and must balance. (It would be interesting to see whether their governors have credit cards, auto loans, and mortgages -- most household budgets run deficits of one kind or another pretty consistently.)

Debts and deficits have a long history in the fiscal thinking of the United States, one that many of these governors -- who I would venture fancy themselves something approaching "originalists" when it comes to the Founding Fathers -- aren't aware of or have forgotten. Alexander Hamilton believed that debt was virtuous, as it aligned the interests of the two parties -- the one owing the money wanted to retain the goodwill of the lender and pay it back, while the lender wanted the debtor to do well and pay the money back.

Meanwhile, the rock-based budgeting is proving its inadequacies, as tax cuts (meant to leave more rocks on the market) have left state coffers barren. In Kansas, the budget deficit has ballooned, and other Republican-run states are seeing the same effects. These states are literally saving themselves poor.

Austerity thinking is the enemy of growth. Saving rocks for a rainy day only makes sense if you're a river thinker, and want that rain to flow through your economy to drive growth. It's time for governors and other leaders to stop thinking of money as something you stock. It's time to let it flow.

The Impact Factor Lives!

You don't have to look hard to find a scientist or an editor disparaging the impact factor.

Certainly, the impact factor is a measure of limited value -- a journal's ratio of citations in one year to its citable output over the prior two years -- which is mostly relevant to editors and publishers, but also to librarians purchasing journals and authors selecting journals for submissions. It does not tell you how well a particular paper performed, or how important a particular researcher is. However, by showing us the ratio of citations to scholarly articles for the prior two-year period, it provides a manageable way to measure intellectual diffusion and uptake by a relevant audience -- other published researchers and academics. This number is then trended over time, and provides an interesting framework for measuring uptake and discussing quality.
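For reference, the standard two-year calculation can be written out as follows (using 2016 as an example year):

\[
\mathrm{IF}_{2016} \;=\; \frac{\text{citations received in 2016 by items the journal published in 2014--2015}}{\text{number of citable items the journal published in 2014--2015}}
\]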

It is a measure of a journal's punching power.

Over time, it has been extended to include five-year measures, citation graphs, Eigenfactor data, and so forth. But the core metric remains the source of consternation, possibly owing to its enduring power.

Some critics have said it is a bygone measurement because libraries, often purchasing "big deal" bundles, can't use it as a meaningful buying guide anymore. Others say it is moribund because it's so flawed -- in its mathematics, and because it comes from a pre-networked computational era. Others point to academia's misappropriation and misuse of it as a reason journals should abandon it. (Interesting aside -- typically, none of these critics will offer an alternative.)

Some of the objections sound downright sophisticated. At a recent meeting, a few prominent academics took issue with it because "it's an average, not a median," and because "it suggests false precision by going to three decimal places." However, a less prosecutorial assessment might lead you to some insights rather than accusations.

The "three decimal places" complaint.
We have to start this discussion with the fact that what began as an idea quickly morphed into a commercial product, and one that has grown especially quickly in the past 20 years as information flows have increased. More information led to a desire to differentiate flows, one from another. A ranking system helps get this done. And, as in any business, choices are made that reinforce viability. Often, these commercial choices are virtuous. They cannot be dismissed simply because they are commercial choices. When managing a business based on a ranking system, these choices mostly revolve around making and maintaining a ranking system that works. 

In this rather sensible context, taking the impact factor to three decimal places makes perfect sense. Why? Imagine trying to sell a valuation scheme that creates a lot of ties in rankings. It's not viable. It doesn't solve the customer's problem -- telling one thing from another, telling which is better, even if the difference is slight or the initial ranking is later modified or trumped by other factors. And when you have thousands of journals, a measure with a few decimal places helps reduce the number of ties in the rankings.

The need for differentiation in a ranking system leads to more precision among the measures. Stock market changes are stated in percentages that go to two decimal places, making them effectively four-decimal-place numbers. The same goes for most web analytics packages, which have percentages out to two decimal places. Universities and most K-12 schools take GPAs out to 2-3 decimal places. A great baseball batting average is 0.294 (three decimal places). Most sports' win percentages are pushed out to three decimal places.

The reason for all this precision is simple -- ties make ranking systems far less interesting, useful, and viable.
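
To make the tie-reduction point concrete, here is a minimal sketch using entirely hypothetical journals (the citation and article counts are randomly generated, not actual JCR data). It simply counts how many simulated journals end up sharing an identical score when the citation-to-article ratio is reported to one decimal place versus three; the count of ties drops sharply as the reported precision increases.

```python
import random
from collections import Counter

# Hypothetical simulation, not actual JCR data: generate citation and article
# counts for a set of journals, then count how many journals share an identical
# (i.e., tied) score at different levels of reported precision.
random.seed(7)

def simulate_journals(n=2000):
    """Return (citations, citable_items) pairs for n hypothetical journals."""
    journals = []
    for _ in range(n):
        items = random.randint(40, 400)           # citable items over two years
        citations = random.randint(0, items * 6)  # citations received in the measured year
        journals.append((citations, items))
    return journals

def tied_journal_count(journals, decimals):
    """Count journals whose rounded ratio is shared with at least one other journal."""
    scores = [round(c / i, decimals) for c, i in journals]
    frequency = Counter(scores)
    return sum(1 for s in scores if frequency[s] > 1)

journals = simulate_journals()
for decimals in (1, 3):
    print(f"{decimals} decimal place(s): {tied_journal_count(journals, decimals)} of {len(journals)} journals tied")
```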

So it should be no surprise that this was part of the thinking in going out to three decimal places:

. . . reporting to 3 decimal places reduces the number of journals with the identical impact rank. However, it matters very little whether, for example, the impact of JAMA is quoted as 24.8 rather than 24.831.

This last statement was refined usefully in a later paper:

The last statement is inaccurate [quoting as above], and it will be shown . . . that it has a profound effect particularly at the lower frequencies on ordinal rankings by the impact factor, on which most journal evaluations are based.

In other words, avoiding ties helps smaller journals stand alone, and stand out. Is that such a bad thing?

It's not an "average," it's a ratio.
A more objective assessment of the mathematics might also help you avoid calling the impact factor an average (to be fair, ISI/TR describes it as an "average" in its main explanations, which doesn't help). Instead of an average, however, the impact factor is a ratio* -- the ratio of citations in one year to citable objects from the prior two years. It is not the average number of citations. It is not the average number of articles. It is not the average of two ratios. Those would be different numbers. This is why the argument that it should be a median instead of an average rests on a flawed premise.

Consider this -- the ratio of people over 30 to people under 30 in a group may be stated as 500:400 or 10:8 or 5:4 or 1.25. The number 1.25 only tells you the ratio between the two age groups. Similarly, an impact factor of 1.250 only tells you the ratio of citations to articles, no average or median included.
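
A minimal numerical sketch, using made-up figures for a hypothetical journal, makes the distinction concrete: the single ratio of totals (the impact-factor-style calculation) is a different number from the average of two yearly ratios.

```python
# Made-up figures for a hypothetical journal, used only to illustrate that the
# impact factor is a single ratio of totals, not an average of two yearly ratios.

citations_to_year1_items = 300  # citations received this year to items published two years ago
citations_to_year2_items = 150  # citations received this year to items published last year
items_year1 = 100               # citable items published two years ago
items_year2 = 200               # citable items published last year

# One ratio of summed citations to summed citable items.
ratio_of_totals = (citations_to_year1_items + citations_to_year2_items) / (items_year1 + items_year2)

# The "average of two ratios" the text warns against -- a different number.
average_of_ratios = ((citations_to_year1_items / items_year1) +
                     (citations_to_year2_items / items_year2)) / 2

print(f"ratio of totals:   {ratio_of_totals:.3f}")    # 1.500
print(f"average of ratios: {average_of_ratios:.3f}")  # 1.875
```

With these inputs, the two figures differ (1.500 versus 1.875) because the ratio of totals weights each year by how many items it contributed, whereas averaging the two yearly ratios treats the years as equal.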

What about how skewed it is?
A corollary complaint can be that citations skew heavily to a few dominant papers, a skew which, it is sometimes argued, invalidates the metric. After all, the ratio is not predictive of what each paper will get. (Of course, to argue this, you first have to ignore that this is not what the impact factor was designed to calculate -- it is not predictive for authors or papers specifically; it is a journal-level metric.) But would any system that skews to a few big events therefore be invalid?

Perhaps not. There are similar sources of skew in academia, many of which are celebrated. For instance, if a Nobel Prize winner teaches or conducts research at a university, that is often touted as a sign of the quality of that university. Will each professor or post-doc or student at that university achieve the same level of success and win the Nobel Prize? Certainly not. But that's not the point. What these facts illustrate is that the university has an environment capable of producing a Nobel Prize winner. For ambitious students and researchers, that's a strong signal that speaks to their aspirations. 

Even within a career, not every year is as good as every other, and one really good year can make a career. Hit it big in the lab, win a teaching award, publish a great paper, do some great field work, or write an insightful editorial, and a scientist might leap from an obscure university to a top school, a government appointment, or national celebrity status. Does the fact that the next few decades might be lackluster invalidate the notoriety and honors? Certainly not. The accomplishment suggests the levels this person can reach, and that is the source of reputation -- they can reach those levels, and may do so again.

The bottom line is that inferring a promise of future results from past performance in academia is part of how academia works -- it is a culture of reputation. For journals, impact factor is a reasonable and useful measure of reputation (as we'll see below).

The impact factor is not dead.
Even if you were to accept the arguments denigrating the technical execution of the impact factor, journals should not abandon it, because it is not a dead metric. In fact, it's quite healthy.

Looking back at various tenures as a publisher and advisor to publishers over my career so far, I've found the impact factor to be a responsive metric, reflecting editorial and publishing improvements. You fix things, and it responds. Editors compete harder for papers, get righteous about discerning cutting-edge from "me too" papers, appear more at conferences, twist arms, and so forth. The publishing house does a better job with media placements and awareness campaigns so that more people in the community learn about the new scientific and research findings. In a few years, the impact factor climbs. There is a cause-and-effect relationship that strongly suggests that, from an editorial and publishing perspective, and therefore from a reader and purchasing perspective (and perhaps from an author perspective), the impact factor does a good job reflecting journal vibrancy and importance.

Some critics say that instead of looking at the impact factor, works should be evaluated on their merits by experts qualified to do so -- and everyone would agree with that. What these critics seem to forget is that the editorial practices that generally lead to improvements in impact factors are exactly what is desired -- expert editors and their expert editorial boards working harder and more aggressively to secure the best papers from the scientists doing the most interesting work. These are then evaluated, and a portion of them published. The papers are reviewed on their own merits by experts in the field. The journal is just doing the hard work of making the first-order selections.

Put forth a better editorial effort, and your impact factor generally increases.

Making the field more aware of the good science being published also drives impact factor legitimately. Citation begins with awareness. You can't cite what you don't know about, so using social and traditional media, meetings, SEO, and other ways to build awareness is an important publishing practice. Marry this with better papers that are more interesting and relevant, and you have a winning combination.

The impact factor seems to respond commensurately with these efforts. In some very competitive situations, where the editorial teams are evenly matched and equally competitive, you may only see a stalemate. But in fields where one journal takes the bit in its proverbial teeth while the others chew the proverbial grass, you can see a true performance difference within a fairly short amount of time.

If editors, libraries, readers, and authors had a measure that gave them a good way of quickly assessing the relative punching power of a journal they are considering -- that might show them which journals are headed up, which are headed down, and which are in a dead heat -- and this measure was fairly responsive to sound editorial and publishing practices, you'd suspect they'd want to use it. If it also made it easier to differentiate between smaller, lesser-known journals, that might also be good. And if it had a long track record that seemed to remain valid and provided good context, that might also be valuable.

Which is why it's very likely that, despite all the crepe being hung and predictions of its demise, given its responsiveness to solid editorial and publishing improvements and the signals to the larger library, author, and reader markets it provides, the impact factor . . . well, the impact factor lives . . . lives and breathes.

* Hat tip to BH for pointing out the ratio aspect.

The Branded House vs. the House of Brands

Brands are powerful signifiers of value. In a recent discussion, I was reminded of the success "the branded house" has achieved in the scientific and scholarly publishing world. A "branded house" exists where a singular brand is used across the majority of an organization's products. Nature provides a good example, with its 40+ journals in the Nature house.

There is a reason the branded house springs to mind more easily -- it is less diffuse than its cousin, the house of brands.

The house of brands might be most familiar if we go outside of our professional space and into the world of retail, where a house of brands like Procter & Gamble exists. P&G has 21 brands that each generate more than $1 billion in annual revenues -- brands in the P&G house, like Pampers, Tide, Pantene, Gillette, Crest, Always, and Downy, are familiar to nearly everyone, yet the P&G name sits in the background, a quiet presence overseeing the house of brands.

In professional and scholarly publishing, McGraw-Hill represents a house of brands, with textbooks and information services that deploy the familiar red-and-white McGraw-Hill logo on their spines but are better known by their domain-specific brands -- "Harrison's Internal Medicine" and the Access series.

The branding choice between the "branded house" and the "house of brands" is important. The "branded house" is easier to deliver, and more memorable, so may have a better ROI for smaller organizations addressing homogeneous audiences. The "house of brands" approach often develops over time as a history of mergers and acquisitions across multiple, large, disparate product sets and customer bases.

Whichever approach is taken, consistency and care make a big difference. Tend to your brands. Most firms underinvest in branding and brand management, yet this is an area that typically delivers remarkable returns on ongoing investment.

House of brands, or branded house -- either way, your brand is your most valuable asset. Please treat it accordingly.

The Dynamics of Funding and Paying

The recent news analysis of Sci-Hub by Kate Murphy in the New York Times provides an opportunity to discuss some important dynamics between the funding of research and paying for research reports and related materials.

In one section of her analysis, Murphy wrote:

. . . Elsevier, like other journal publishers, pays nothing to acquire researchers’ studies. Moreover, publishers don’t pay for the volunteer peer reviewers or editors. But they charge those same researchers, reviewers and editors, not to mention the public, whose tax dollars most likely funded the study in the first place, to read the resulting articles.

The shift from "pay" to "fund" is an important conceptual shift in economic and financial terms, and a key dynamic in the scholarly publishing marketplace. 

Many editors are paid. Murphy simply gets this wrong. Publishers typically buy out the time of major academics in order to keep funding at the university in place so the editor can be paid. For example, if an editorial job is estimated to be a "20% time" position (requiring a few hundred hours per year), the publisher would pay the university 20% of the academic's salary. For larger and busier journals, editors are full-time, with full salaries, benefits, and so forth.

By saying that publishers charge "those same researchers, reviewers and editors, not to mention the public," Murphy misses a chance to actually bring some data and nuance to the discussion about funding and paying. Only a fraction of scientists publish in any given year (more precisely, only a fraction are funded to perform research). In many fields where practitioners and clinicians predominate, reading is the primary information modality. Therefore, our information economy has a core asymmetry -- a few funded researchers, and a broader audience of interested readers. This market reality is essentially why the subscription model still holds sway -- when a few want to reach many, having the larger group pay spreads the costs and lowers prices for each participant in the system. Open access (OA) publishing has a business model (Gold OA) predicated on the inverse, and there are legitimate concerns that if it were to become predominant, expenses would hit a few major players (major research universities, major funders, major governments) inordinately. The UK has already had a taste of this with the RCUK's efforts of a few years ago. Therefore, when it comes to funding and paying in the scholarly information ecosystem, a core asymmetry should be acknowledged, one which makes a model of broadly shared costs more generally appealing.

Why don't journals pay their authors? Aside from clear concerns about motivations and further increases to the expenses within scholarly publishing, there don't seem to be any benefits to be had from doing so. One study of cash incentives based on publication in top-tier journals showed that while authors submitted more papers to top journals when given cash incentives, they published no more papers in those journals. The study also found that researchers receiving indirect incentives -- salary increases or career progression for published research, for example -- submitted more papers and had more published. A plausible explanation is that the indirect incentives led scientists to do better work, while the cash incentives encouraged scientists to submit papers in a way more akin to playing the lottery. Also, paying authors would only increase the overall costs (direct and administrative) of the system.

So why do researchers submit their papers without being paid directly to do so? There are two major incentives for researchers to publish: to get credit and to claim primacy. Only by publishing can scientists make their research their own, and gain primacy for it. Publishing is also an effective way to measure productivity and prevent shirking, as the economist Paula Stephan has noted.

We want funded research to lead to publications of findings. Therefore, the funding of research directly impacts what institutions and other purchasers pay in aggregate. Here, we get to a crucial relationship between funding and paying. Funding of research has tripled in the past 20 years when you compound the increases. This has led to a tripling of research outputs, which has led to three times as many published papers. These related volume increases have led to a tripling of what many institutions pay for access. Yet, publisher prices have only increased 9% over that period, versus a 67% increase in the CPI. This is an average. In the UK, subscription prices have actually fallen quite a bit, adjusted for volume. To absorb this increase in volume, many journals publish more papers than ever, and many new journals have been launched so new research findings can be published. This all leads to higher overall expenditures (paying more because there is more to buy), despite prices (adjusted for volume) only increasing 9% over the past 20 years.
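
A back-of-the-envelope sketch, using only the approximate figures cited above (roughly tripled volume, a roughly 9% volume-adjusted price increase, and a 67% rise in the CPI) and treating aggregate expenditure as volume times volume-adjusted unit price, illustrates how volume rather than unit price accounts for most of the growth in spending:

```python
# Back-of-the-envelope arithmetic using the approximate figures cited in the text.
# Aggregate expenditure is treated as (volume of published papers) x (volume-adjusted unit price).

volume_growth = 3.00   # published output roughly tripled over ~20 years
price_growth = 1.09    # volume-adjusted prices up roughly 9% over the same period
cpi_growth = 1.67      # general inflation (CPI) up roughly 67% over the same period

aggregate_spend_growth = volume_growth * price_growth

print(f"Aggregate expenditure growth: ~{aggregate_spend_growth:.2f}x")                     # ~3.27x
print(f"Growth attributable to volume alone: ~{volume_growth:.2f}x")                       # 3.00x
print(f"Real (CPI-adjusted) change in unit price: ~{price_growth / cpi_growth - 1:.0%}")   # ~-35%
```

Under these rough assumptions, aggregate spending roughly triples while real unit prices fall by about a third -- exactly the pattern described above: buyers are paying more because there is more to buy, not because each unit costs more.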

So, a more accurate statement would be:

Funding of science has tripled, generating three times as many papers, and this volume explains the increase in expenditures to access the bulk of the scientific literature. Publishers have actually been controlling their costs during this explosion of available research reports, and most of the increases in payments libraries and others are seeing can be explained by these volume increases, which are themselves explained by funding increases.

It's a dynamic that we will continue to wrestle with, especially as we continue to fund more STEM education, encourage more children to pursue STEM careers, and push for a world with more scientific research.

Unfortunately, at the same time, library funding (which impacts institutions' ability to pay for all this additional research) is not being maintained. Even with triple the research funding (much of it going to universities) and 200-500% increases in tuition (what students and their families are paying), funding of libraries has been falling as a share of university budgets for three decades. This seems an abdication of economic responsibilities at the university level, yet one that is rarely called into question. Libraries are paying out of an eroding funding base, despite their institutions being generally better off than ever and the increasingly vital role of the libraries' scholarly collections.

Another important nuance is that scholarly publishers pay for the infrastructure that supports the communication of research results -- from online systems to standards like ORCID and the DOI, to initiatives like CHORUS and HINARI. Paying for infrastructure also means paying the cost of rejections, the work to publish the next papers, the hiring and training of new editors, the expansion of titles to support the growth of scientific outputs, the maintenance and migration of archives, and so forth.

Therefore, to say that a publisher "pays nothing" to acquire new papers is a misstatement. You'd never say that a newspaper pays nothing to cover a sporting event or a traffic accident. The same holds true in our field, as we pay the salaries, systems costs, and infrastructure costs to be ready to evaluate dozens of papers per day, including personnel costs for acquisition editors, scientific editors, and so forth. Competition for papers can be fierce, and creating the venues, systems, reputations, and processes to support scholarly research publication certainly costs money. Publishers take on these risks on behalf of researchers, paying for all of this and more, with no guarantees of success.

In summary, despite its many profound shortcomings, the news analysis in the New York Times invites a discussion of some important facts, nuanced points, and interesting financial and economic realities:

As long as more scientific research is funded, the scientific community can expect to pay more for well-edited, carefully curated, independently vetted, and competitively placed research reports, no matter the underlying business model. We may slow the already low rate of price increases, but as long as more funding drives a higher volume of researchers with valid scientific findings seeking outlets that boost their career prospects, aggregate spending seems to track aggregate funding fairly reliably.

The Off Switch and Security

The March 14-20 issue of Bloomberg Businessweek has a section focusing on security issues, and a feature story on the same general theme. Overall, the articles made me much more likely to use the off switch tucked away on my smartphone -- the ability to switch into airplane mode and leave the network.

Moving most of our communications infrastructure online and making it digital has created an arms race on the security front, with hackers and malicious actors finding ways into systems around the world. Late last year, hackers knocked out power in Ukraine to about 80,000 residents for several hours. The outage might have lasted longer, but because the system is antiquated, authorities were able to reset it by flipping circuit breakers back into place by hand.

This insight, arrived at by happenstance in Ukraine, has also been reached by security experts by design -- analog safeguards should be part of critical systems. The nuclear industry has learned this, building in analog failsafes (the control rods that lower into the core if there is a general system failure, cooling it down). As one expert says:

You can't lie to analog equipment. You can't tell a valve that it's opened when it's closed. It's physics.

Adding analog failsafes represents an approach being called "defense in depth." Another expert explains the digital vulnerability and the need for these analog solutions:

Defense in depth means you have layers of protection. But digital, even when it claims to have multiple layers, is in a sense one layer. Penetrate that, and you could potentially no longer have another layer you need to penetrate.

You can see this lack of true layers in the case where the FBI wants Apple to crack open an iPhone used by terrorists. There is only one passcode to overcome, and it would take four programmers 6-10 hours each to bypass it. After that, the phone would be wide open, as would all the other iPhones, Apple contends. This thin layer is all that is protecting iPhone privacy worldwide. John Oliver has a tremendous segment on these issues.

As more things become connected -- pacemakers, insulin pumps, automobiles, mass transit controls, airplane control systems, prison door locks, home locks/thermostats/systems -- this single layer of security is stretched thinner, and there are more ways into it.

This leads to another story, this one about mobile payment systems, which have vulnerabilities of their own. One major issue is the existence of under-capitalized start-ups in the space, which leads me to one of my favorite quotes:

There's a lot of two engineers and a goat.

Some start-ups have been caught sending social security numbers in the open, and have been fined for it. The Federal Trade Commission is looking closely at these vendors and regulating them more strictly.

A third article focuses on yet another non-digital solution to dealing with security breaches -- human motivation. A cybersecurity startup called SquirrelWerkz is convinced that a good portion of security problems are not random but can be traced to competitors or rivals. By performing real-world investigations on top of digital sleuthing, they claim to be able to put up defenses against the most likely sources of malfeasance, which is more effective than trying to keep the world at bay.

But the article that has me thinking of using airplane mode more often is called, "The Democratization of Surveillance." The article explores the world of the International Mobile Subscriber Identity (IMSI) catcher, a device (also known as a Stingray or Hailstorm) that fools your cell phone into thinking it's a cell tower, then uses that connection to grab information, monitor calls, and so forth. Your phone has no idea it's being fooled, and behaves normally.

IMSI catchers are falling in price, and their appeal within law enforcement makes it difficult for lawmakers and courts to decide how to handle the devices. In India, huge scandals have occurred in which politicians, lawmakers, and celebrities were monitored for weeks on end, their call logs revealing sexual dalliances, dealmaking, and other nefarious behaviors. It's comparable to the Murdoch scandals of hacked voicemails, but much more pernicious, as it's more easily done and there are fewer clear legal or technological protections.

In a small number of states in the US, police are no longer allowed to use Stingray-like devices without getting an explicit warrant. But the laws are not uniform. As the reporter at Businessweek writes:

Most local police departments, though, still aren't bound by [a Justice Department directive requiring explicit language in a warrant]. Neither are foreign governments, which are widely suspected of using IMSI catchers here (as we are no doubt doing elsewhere).

Now that prices have fallen to the $1,500 range for these devices, concerns are that they will soon drop so far that consumers will have routine, retail access to them. There's even speculation that your phone could download an app that would turn it into an IMSI catcher, so you could monitor your neighbors, kids, and spouse.

Of course, there's an emerging countermeasures industry, but this is again just another arms race, with shorter times to the next step as technology and skills both become more widespread.

In this environment, it's good to remember that your smartphone has a couple of analog options -- airplane mode and off. These may be the best security measures you can take, especially if traveling abroad.

Finding the "And"

In "Built to Last," the fading classic of 1990s management advice, and in the world of improvisation, which tried to gain some vestige of a toehold in the management advice space of today, there is a concept which remains pretty useful -- the notion of "and" instead of "but."

Framing alternatives as implicitly forcing trade-offs can be a subtle way to derail forward momentum while also seeming wise and prudent. But trade-offs are actually less common than believed. There are many venues, audiences, options, author groups, and business extensions that can co-exist harmoniously, if not actually synergistically.

For example, attend nearly any editorial board meeting, and preserving quality will be contrasted with adding titles to the brand, as if there were an implicit trade-off -- the flagship would need to donate vital fluids of some kind in order for offspring to prosper, weakening the flagship.

Experience runs strongly counter to this presumption. Nature's strong portfolio is one of the most prominent examples of a flagship spinning off journals that any other publisher might justifiably think of as flagships themselves. At JBJS, adding new journals and products did not hinder the flagship's ability to increase its impact factor and continue to serve as the leading research journal in the field. The same goes for portfolios at JAMA, Lancet, ACC, IEEE, ACS, and so on.

But that's not to say that extending a portfolio necessarily enhances the flagship, although the dynamics of organizations pursuing both strategies do seem to provide a general lift. In reality, it seems the two activities can be pursued in parallel, as the techniques around portfolio growth don't have too much overlap with perpetuating and enhancing a flagship journal.

The same goes increasingly for non-journal initiatives spearheaded by publishers. Compartmentalized and treated appropriately, these can flourish without exacting a toll on the flagship or journals portfolio. It's a management challenge, and not an insoluble one.

A key "and" to achieve for any organization, but especially for editors and publishers, is the union between quantitative and qualitative information. Scientifically trained editors rightfully seek quantitative information, but businesses often run on qualitative information with spot checks in quant land.

Bringing editors and editorial teams to the point of considering "and" rather than responding with "but" often means the publisher has to take the lead, with pledges of resources, plans of action, compelling customer insights, and clear revenue projections. Enthusiasm needs to be cultivated.

Growth is an "and" proposal -- we will be what we are now "and" these other things; we will work in our current markets "and" these new markets. This is why getting to "and" is so important. 

The Game of Risk

The idea that "we're all publishers now" seems to have receded, as it's become clearer than ever that what publishers do is assume and manage risk on behalf of authors and readers.

Aside from the publishers many of us immediately think of when the word is used, as well as traditional publishers we don't always think of immediately (e.g., music publishers), there are new publishers in our midst. WordPress, Facebook, Instagram, and Twitter are publishers in that they assume risks for their authors and readers. They allow authors to perceive themselves as "publishing" because they have very high rates of acceptance.

This game of risk publishers play is increasingly difficult, and the rules keep changing. The Internet changed the rules in ways we're still figuring out, as the rule book is something the players have to discover as they play the game.

As an added complexity, it's unclear who exactly is inventing, influencing, and implementing the new rules. Technology companies, funders, government agencies, and the public -- both actual and talismanic -- have influenced the risk game significantly, and are revising the rules explicitly and implicitly, with intention and accidentally.

So, be glad we're not all publishers now, because publishing is a complicated game to play, and many novice players would simply lose outright very quickly.

Opportunists enter the game from time to time, and what offends the invested players is how these opportunists sit at the table, sometimes trying to fit in by dressing and acting like the other players, but without serious intentions. Or, if their intentions are serious, these are not the same intentions as the other players', much as card counters arrive at blackjack tables determined to win, but not really to play. And there is no "house" monitoring the game. Again, the rules and their enforcement require self-policing.

One thing about the game is clear -- the winners are those who last the longest. In that regard, there are many current contenders for endurance, with publishing houses and societies having spent decades if not centuries playing this game of risk.

With so many sources of new risk, how is your organization faring? What is your risk profile? Do you understand the new rules of the game?

Moving Beyond an Era of Fraud

In the movie "The Big Short," Mark Baum, one of the few investors who shorted the housing market and benefited financially from its near-collapse, says the following:

We live in an era of fraud in America. Not just in banking, but in government, education, religion, food, even baseball. . . . What bothers me isn't that fraud is not nice. Or that fraud is mean. For fifteen thousand years, fraud and short sighted thinking have never, ever worked. Not once. Eventually you get caught, things go south. When the hell did we forget all that? I thought we were better than this, I really did.

The disillusionment is palpable, and it is something burdening the world of journals currently. We worry that we're part of this same era, this same problem, with retractions increasing in number, papers citing supernatural designers, and Ouroboros reproducibility arguments. Of course, fraud in science is harmful in different ways, and science perhaps has more to lose than other professions whose members resort to fraud.

Incentives drive behavior, including committing fraud. So, it was surprising to see a story of cheating, fraud, and lying coming from the world of bridge in a recent issue of the New Yorker.

Apparently, no contest is immune to fraud.

Bridge has a long history of fraudsters, it turns out. It is a complicated game, and there is a lot of pride in winning, and a lot of prestige, which certainly sounds familiar. With two sets of partners competing, communication within each partnership is forbidden, and elaborate safeguards have evolved in contract bridge especially -- dividers under the table to prevent foot signals, screens across the table to prevent hand signals, and so forth.

Yet, a new approach is alleged, this time by two young players who seemingly came out of nowhere and have won an inordinate number of top tournaments.

You can see their technique in a video on kottke.org.

Pulling back to this perspective on fraud in bridge may help shed light on why fraud exists in science -- it's because when humans are involved in any incentivized system, fraud inevitably occurs. The goal has to be to minimize the amount of fraud and its harm, as it's impossible to eliminate it entirely. 

Are we doing what we can to minimize the amount of fraud in scientific publishing and communication? Are we minimizing its harm? Or can we do more?

Upside-down Economics

There has been a disturbing theme underlying the economy since 2007 -- dynamics everyone thought worked reliably and were inviolable don't seem to work anymore. Housing and real estate were supposedly the bedrock of the middle-class economy, and were safe, boring investments. Lower interest rates were supposed to drive lending and business growth. Lower unemployment was presumed to drive consumer spending. Lower oil prices have traditionally spurred spending and helped move the stock market upward. Higher corporate profits have customarily driven reinvestment in infrastructure, new lines of business, and so forth.

Instead, we have macroeconomic puzzles -- lower interest rates, yet higher rates of savings; lower oil prices sending shudders through the equity markets; an unprecedented housing market collapse and a tepid recovery; lower unemployment and stagnant consumer spending; large corporate cash stores being used for stock buybacks, if they're being used at all.

There are many factors feeding into this puzzling set of circumstances, but skittishness seems to be the overall theme. Individual consumers are paying down debt and bolstering their savings; businesses aren't seeing opportunities that fit their new, lower level of risk tolerance; and more opaque international markets (e.g., China), which themselves are looking skittish, are causing equity traders to read too much into decreased oil revenues.

It's a strange new liquidity trap, but like those we've seen in Japan, for example, it is psychological. After all, money itself is a construct, and therefore it matters very much how we think about it -- when to spend it, how to get more, and when to stand pat.

The Federal Reserve is seeing some success stimulating inflation, which was at a 2.7% annual rate in January 2016. This is an important part of escaping a liquidity trap, as inflation lifts prices, increases wages (especially in the lower tiers of the workforce), and stimulates lending. This trend will likely continue, which will become a factor for pricing in the academic marketplace, as well. Cost-of-living increases are often pegged to inflation, and businesses benchmark profitability up from this measure.

Until the economy is growing at a healthy pace again -- with inflation being a good reflection of that pace -- the muddled results of the past decade may continue. With inflation, we can see once again which way is up, and only then will some of the main economic levers respond sensibly when pulled.