Governance -- Alone on the Sidelines

We're in the midst of change at every level of our civilization -- from how and what we use and consume, to how we purchase things, to the infrastructure we rely upon, to our culture, and even nature itself.* The only constant is change. Except for governance.

Streaming music, digital publications, smartphones, streaming movies and television, online banking -- these and other changes represent how fashion, the everyday ways we get things done, has changed.

Online, cellular, https, encryption, passwords, rooftop solar, GPS -- these and other changes represent how our infrastructure has changed.

Amazon, Netflix, TurboTax, Spotify, PayPal, Square -- these and other changes represent how commerce has changed.

Gay marriage, legalized marijuana, income inequality, terrorism -- these and other changes represent how our culture has changed.

Global warming, Zika virus, MRSA, pythons in the Everglades -- these and other changes represent how nature has changed.

But finding examples of how governance has changed is more difficult, especially when it comes to academic and non-profit governance. Based on many observations both at the national level and across academia and society publishing, it seems governance is sitting out this 20-year period in which everything else is changing and adapting.

At the societal level, governance has withdrawn, especially in the US at the federal and state levels. From government shutdowns to budgetary stalemates to funding cuts for important long-term spending initiatives (infrastructure, research and development), the crisis in government/governance is palpable. The current election cycle in the US is another troubling indication.

But the US is not alone: austerity politics, the UK's threats to depart the EU, lax security in Belgium, scandals in Greece, a crumbling Autobahn in Germany, and other abdications of responsible governance exist in many places you would expect to do better.

At the level of universities and non-profits, governance bodies seem disconnected or lost. There are many reasons for this, including the fact that many of those involved in governance are highly insulated from the effects of their decisions, the honorary/ceremonial nature of governance service, and the lack of any sense of urgency or importance around the role.

It was only when I was recently giving a talk about change that this hit me squarely. Of all the layers of civilization in this model,* the only one I could not pin to direct and active change was governance. It's like governance is sitting out this decade or two.

The consequences of this are readily seen -- inadequate funding of people and projects; large stores of retained earnings without expenditure plans; self-protective and self-perpetuating organizations rather than transformative, responsive entities; and downstream negative effects on professionals and citizens, with inordinate negative effects on the youngest members of both groups.

There are efforts to wake governance up -- the campaign of Bernie Sanders, the push for the $15 minimum wage, efforts to control global warming, and initiatives to increase civil liberties and reduce civic dangers. But governance and government, both of which seem to have their eyes and ears covered, remain ineffectual and out of touch.

Until governance wakes up and actively engages with a rapidly changing world and civilization, we can expect more problems and more inadequacies. It is the layer of a changing world that is stuck in place, and one that needs a real push to get going again.

* This model is drawn from Stewart Brand's "The Clock of the Long Now" and its model of the moderating forces affecting civilizations.

The Pricing Challenge

Recently, a post resurfaced on the Scholarly Kitchen revisiting trends in library expenditures and journal prices. The findings show that prices have increased by only about 9% on a per-journal basis, while expenditures have tripled owing to the rapid increase in outputs -- which themselves have essentially tripled. These findings became clear once pricing data for digital licenses were used rather than traditional print journal prices.

While this is perhaps cause for praise, as publishers have kept their journal prices under control in the digital age, it is also cause for concern. The CPI increased by nearly 68% over the period covered by the pricing study -- a rough comparison suggesting publishers haven't been able to raise prices at a level matching general inflation in the overall economy. Instead, they've lowered their margins while making up for the pricing weakness with volume and efficiencies, including a lot of outsourcing and offshoring.
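
To see how these figures fit together, here is a minimal sketch of the arithmetic (the growth rates come from the discussion above; the decomposition itself is illustrative, not from the study):

    # Rough decomposition: expenditures = per-journal price x volume bought.
    # Growth figures are from the post above; the framing is illustrative.
    price_growth = 1.09   # per-journal prices up about 9% over the period
    volume_growth = 3.0   # outputs, and thus journals/articles bought, roughly tripled
    cpi_growth = 1.68     # general inflation up nearly 68% over the same period

    expenditure_growth = price_growth * volume_growth
    print(f"{expenditure_growth:.2f}x")  # ~3.27x -- expenditures roughly triple

    real_price_change = price_growth / cpi_growth
    print(f"{real_price_change:.2f}x")   # ~0.65x -- per-journal prices fell ~35% in real terms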

Volume and efficiencies can only carry you so far. There are only so many scientists and studies, and there are only so many services you can negotiate down or eliminate. Already, some are complaining that publishers aren't providing enough service as new standards and expectations hit researchers.

Yet, nobody is putting much new money into the research economy, and certainly not through the libraries. Tuition increases continue to outpace the decline in academic libraries' share of university budgets, but this cannot go on forever. The tuition burden is sure to reach a breaking point and stop growing, if it is not slowed dramatically first. When this happens, library budgets will shrink even more, unless their downward trend as a share of university spending is reversed.

One gambit that seems off the table for many is simply to introduce significant price increases. This has happened in the past, and while typically attended by public shaming and controversy, the prices largely seem to stick. Commercial publishers and the large non-profits have been the most savvy in this regard, leaving the smaller non-profits -- which are rightly worried about their ability to navigate forward from here -- out in the cold for the most part. These more cautious organizations tend to benchmark against what libraries and similar organizations signal. Perhaps it's time for them to grit their teeth and take the pricing plunge. After all, on the basis of quality and desirability, their products often seem underpriced to begin with.

Whether pricing is incremental, daring, or even discounted, risk is the name of the game. Are you pushing too far? Leaving money on the table? Devaluing your reputation? Pricing is never easy, but there's a fair amount of experience to suggest that moving prices northward has fewer downsides -- if you have the stomach for it -- than the alternatives.

Of Rocks, Rivers, and Poor Illinois

There are two basic ways people think about money -- either it's a rock or a river.

Those who think money is a bunch of rocks tend to want to store up the rocks and won't tolerate a rock debt or deficit. After all, that means someone else has more rocks than you, or you owe them some of your rocks, and that can't be good.

Those who think about money as a river believe that the energy of the water flowing is what turns the wheels of commerce. Whether the money flows from here to there, or there to here, only matters if your waterwheel is oriented a certain way. Accordingly, they tend to build wheels that work in either direction. They also know that water cycles back around, with precipitation and runoff both contributing. Water moves on its own, and there is an equilibrium if it's managed well.

I'm in the camp of the river people. Apparently, the governor of Illinois is in the camp of the rock people, and his state's higher education system is about to pay a stiff price for his hard-headedness.

The Democratic-led legislature and the state's new Republican governor are at loggerheads over the budget. The rock-oriented Republicans believe that deficits are bad, that every spending request must be balanced out by spending cuts, and so forth. The river-oriented Democrats don't worry about deficits as much because the flows of money are their focus, and spending is a flow.

This discrepancy in metaphors has led to dire consequences for state universities in Illinois:

  • Students are not receiving money from grants and scholarships
  • Faculty at all levels are being threatened with job cuts
  • Universities are not receiving state funds, which can be up to 1/3 of their budgets
  • Strong faculty are being wooed by universities outside of Illinois with job offers
  • Students are beginning to fill out paperwork to transfer outside of Illinois

The first three short-term issues are important, but the last two long-term issues may haunt Illinois for decades to come. Fiscal reputation is part of fiscal responsibility, and if a fiscal policy has as a consequence a high degree of unreliability, the damage can be enduring. Illinois already has a reputation for corruption (four of its last seven governors have ended up in prison). Now, it may be gaining a reputation for fiscal unpredictability.

Illinois is not the only state suffering from rock-headed budgeting. Kansas, Michigan, Louisiana, and Ohio are all suffering at the hands of budget warriors who believe that a state budget is like a household budget, and must balance. (It would be interesting to see whether their governors have credit cards, auto loans, and mortgages -- most household budgets run deficits of one kind or another pretty consistently.)

Debts and deficits have a long history in the fiscal thinking of the United States, one that many of these governors -- who, I would venture, fancy themselves something approaching "originalists" when it comes to the Founding Fathers -- aren't aware of or have forgotten. Alexander Hamilton believed that debt was virtuous, as it aligned the interests of the two parties -- the borrower wanted to retain the goodwill of the lender and repay the loan, while the lender wanted the debtor to do well and be able to pay the money back.

Meanwhile, rock-based budgeting is proving its inadequacies, as tax cuts (meant to leave more rocks in private hands) have left state coffers barren. In Kansas, the budget deficit has ballooned, and other Republican-run states are seeing the same effects. These states are saving themselves poor.

Austerity thinking is the enemy of growth. Saving rocks for a rainy day only makes sense if you're a river thinker, and want that rain to flow through your economy to drive growth. It's time for governors and other leaders to stop thinking of money as something you stock. It's time to let it flow.

The Impact Factor Lives!

You don't have to look hard to find a scientist or an editor disparaging the impact factor.

Certainly, the impact factor is a measure of limited value -- a journal's ratio of citations in one year to its scholarly output over two years -- which is mostly relevant to editors and publishers, but also to librarians purchasing journals and authors selecting journals for submissions. It does not tell you how well a particular paper performed, or how important a particular researcher is. However, by showing us the ratio of citations to scholarly articles for the prior two-year period, it provides a manageable way to measure intellectual diffusion and uptake by a relevant audience -- other published researchers and academics. This number is then trended over time, and provides an interesting framework for measuring uptake and discussing quality.
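
Concretely, the standard two-year calculation behind, say, a 2015 impact factor is the following ratio:

    \mathrm{IF}_{2015} = \frac{\text{citations received in 2015 by items published in 2013 and 2014}}{\text{citable items published in 2013 and 2014}}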

It is a measure of a journal's punching power.

Over time, it has been extended to include five-year measures, citation graphs, Eigenfactor data, and so forth. But the core metric remains the source of consternation, possibly owing to its enduring power.

Some critics have said it is a bygone measurement because libraries, often purchasing "big deal" bundles, can't use it as a meaningful buying guide anymore. Others say it is moribund because it's so flawed -- in its mathematics, and because it comes from a pre-networked computational era. Still others point to academia's misappropriation and misuse of it as a reason journals should abandon it. (Interesting aside -- typically, none of these critics offers an alternative.)

Some of the objections sound downright sophisticated. At a recent meeting, a few prominent academics took issue with it because "it's an average, not a median," and because "it suggests false precision by going to three decimal places." However, a less prosecutorial assessment might lead you to some insights rather than accusations.

The "three decimal places" complaint.
We have to start this discussion with the fact that what began as an idea quickly morphed into a commercial product, one that has grown especially fast in the past 20 years as information flows have increased. More information led to a desire to differentiate flows, one from another. A ranking system helps get this done. And, as in any business, choices are made that reinforce viability. Often, these commercial choices are virtuous, and they cannot be dismissed simply because they are commercial choices. For a business based on a ranking system, these choices mostly revolve around making and maintaining a ranking that works.

In this rather sensible context, taking the impact factor to three decimal places makes perfect sense. Why? Imagine trying to sell a valuation scheme that creates a lot of ties in rankings. It's not viable. It doesn't solve the customer's problem -- telling one thing from another, telling which is better, even if the difference is slight or the initial ranking is later modified or trumped by other factors. And when you have thousands of journals, a measure with a few decimal places helps reduce the number of ties in the rankings.

The need for differentiation in a ranking system leads to more precision among the measures. Stock market changes are stated in percentages that go to two decimal places, making them effectively four-decimal-place numbers. The same goes for most web analytics packages, which carry percentages out to two decimal places. Universities and most K-12 schools take GPAs out to 2-3 decimal places. A great baseball batting average is .294 (three decimal places). Most sports' win percentages are pushed out to three decimal places.

The reason for all this precision is simple -- ties make ranking systems far less interesting, useful, and viable.
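
A toy example makes the tie-breaking effect concrete (the journal names and citation counts below are invented for illustration):

    # Three invented journals whose impact-factor ratios tie at one
    # decimal place but rank cleanly at three.
    journals = {
        "Journal A": (2481, 1000),  # (citations, citable items)
        "Journal B": (2478, 999),
        "Journal C": (2492, 1004),
    }

    for name, (cites, items) in journals.items():
        ratio = cites / items
        print(f"{name}: {ratio:.1f} at one decimal, {ratio:.3f} at three")

    # One decimal: all three report 2.5 -- a three-way tie.
    # Three decimals: 2.481, 2.480, and 2.482 -- a clean ordinal ranking.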

So it should be no surprise that this was part of the thinking in going out to three decimal places:

. . . reporting to 3 decimal places reduces the number of journals with the identical impact rank. However, it matters very little whether, for example, the impact of JAMA is quoted as 24.8 rather than 24.831.

This last statement was refined usefully in a later paper:

The last statement is inaccurate [quoting as above], and it will be shown . . . that it has a profound effect particularly at the lower frequencies on ordinal rankings by the impact factor, on which most journal evaluations are based.

In other words, avoiding ties helps smaller journals stand alone, and stand out. Is that such a bad thing?

It's not an "average," it's a ratio.
A more objective assessment of the mathematics might also help you avoid calling the impact factor an average (to be fair, ISI/TR describes it as an "average" in its main explanations, which doesn't help). Instead of an average, however, the impact factor is a ratio* -- the ratio of citations in one year to citable objects from the prior two years. It is not the average number of citations. It is not the average number of articles. It is not the average of two ratios. Those would be different numbers. This is why arguments that it should be a median instead of an average rest on a flawed premise.

Consider this -- the ratio of people over 30 to people under 30 in a group may be stated as 500:400 or 10:8 or 5:4 or 1.25. The number 1.25 only tells you the relationship between the two groups. Similarly, an impact factor of 1.250 only tells you the ratio of citations to articles; no average or median is involved.
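
A small sketch underscores why "ratio" is the right word: the numerator counts citations to everything in the journal, including front matter, while the denominator counts only "citable" items, so the result is not a true per-article average. (The counts below are invented for illustration.)

    # Invented counts for one journal's two-year window.
    citations_to_articles = 2400    # citations to citable items (articles, reviews)
    citations_to_front_matter = 81  # citations to editorials, letters, news, etc.
    citable_items = 1000            # the denominator counts only citable items

    impact_factor = (citations_to_articles + citations_to_front_matter) / citable_items
    per_article_average = citations_to_articles / citable_items

    print(f"{impact_factor:.3f}")        # 2.481 -- the published ratio
    print(f"{per_article_average:.3f}")  # 2.400 -- a true average, a different number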

What about how skewed it is?
A corollary complaint can be that citations skew heavily to a few dominant papers, a skew which, it is sometimes argued, invalidates the metric. After all, the ratio is not predictive of what each paper will get. (Of course, to argue this, you first have to forget that this is not what the impact factor was designed to calculate -- it is not predictive for authors or papers specifically, but rather a journal-level metric). But would any system that skews to a few big events therefore be invalid?

Perhaps not. There are similar sources of skew in academia, many of which are celebrated. For instance, if a Nobel Prize winner teaches or conducts research at a university, that is often touted as a sign of the quality of that university. Will each professor or post-doc or student at that university achieve the same level of success and win the Nobel Prize? Certainly not. But that's not the point. What these facts illustrate is that the university has an environment capable of producing a Nobel Prize winner. For ambitious students and researchers, that's a strong signal that speaks to their aspirations. 

Even within a career, not every year is as good as every other, and one really good year can make a career. Hit it big in the lab, win a teaching award, publish a great paper, do some great field work, or write an insightful editorial, and a scientist might leap from an obscure university to a top school, a government appointment, or national celebrity status. Does the fact that the next few decades might be lackluster invalidate the notoriety and honors? Certainly not. The accomplishment suggests the levels this person can reach, and that is the source of reputation -- they can reach those levels, and may do so again.

The bottom line is that inferring a promise of future results from past performance in academia is part of how academia works -- it is a culture of reputation. For journals, impact factor is a reasonable and useful measure of reputation (as we'll see below).

The impact factor is not dead.
Even if you were to accept the arguments denigrating technical execution of the impact factor, journals should not abandon it, because it is not a dead metric. In fact, it's quite healthy.

Looking back over various tenures as publisher and advisor to publishers in my career so far, I've found the impact factor to be a responsive metric, reflecting editorial and publishing improvements. You fix things, and it responds. Editors compete harder for papers, get righteous about discerning cutting-edge from "me too" papers, appear more at conferences, twist arms, and so forth. The publishing house does a better job with media placements and awareness campaigns so that more people in the community learn about the new scientific and research findings. In a few years, the impact factor climbs. There is a cause-and-effect relationship that strongly suggests that, from an editorial and publishing perspective, and therefore from a reader and purchasing perspective (and perhaps from an author perspective), the impact factor does a good job reflecting journal vibrancy and importance.

It's said by some critics that instead of looking at impact factor, works should be evaluated on their merits by experts qualified to do so, and everyone would agree with that. What these critics seem to forget is that the editorial practices that generally lead to improvements in impact factors are exactly what is desired -- expert editors and their expert editorial boards working harder and more aggressively to secure the best papers from the scientists doing the most interesting work. These are then evaluated, and a portion of them published. The papers are reviewed on their own merits by experts in the field. The journal is just doing the hard work of making the first-order selections.

Put forth a better editorial effort, and your impact factor generally increases.

Making the field more aware of the good science being published also drives impact factor legitimately. Citation begins with awareness. You can't cite what you don't know about, so using social and traditional media, meetings, SEO, and other ways to build awareness is an important publishing practice. Marry this with better papers that are more interesting and relevant, and you have a winning combination.

The impact factor seems to respond commensurately with these efforts. In some very competitive situations, where the editorial teams are evenly matched, you may see only a stalemate. But in fields where one journal takes the bit in its teeth while the others chew the proverbial grass, you can see a true performance difference within a fairly short amount of time.

If editors, libraries, readers, and authors had a measure that gave them a good way of quickly assessing the relative punching power of a journal they are considering -- that might show them which journals are headed up, which are headed down, and which are in a dead heat -- and this measure was fairly responsive to sound editorial and publishing practices, you'd suspect they'd want to use it. If it also made it easier to differentiate between smaller, lesser-known journals, that might also be good. And if it had a long track record that seemed to remain valid and provided good context, that might also be valuable.

Which is why it's very likely that, despite all the crepe being hung and predictions of its demise, given its responsiveness to solid editorial and publishing improvements and the signals to the larger library, author, and reader markets it provides, the impact factor . . . well, the impact factor lives . . . lives and breathes.

* Hat tip to BH for pointing out the ratio aspect.

The Branded House vs. the House of Brands

Brands are powerful signifiers of value. In a recent discussion, I was reminded of the success "the branded house" has achieved in the scientific and scholarly publishing world. A "branded house" exists where a singular brand is used across the majority of an organization's products. Nature provides a good example, with its 40+ journals in the Nature house.

There is a reason the branded house springs to mind more easily -- it is less diffuse than its cousin, the house of brands.

The house of brands might be most familiar if we go outside of our professional space and into the world of retail, where a house of brands like Procter & Gamble exists. P&G has 21 brands that each generate more than $1 billion in annual revenues -- brands in the P&G house, like Pampers, Tide, Pantene, Gillette, Crest, Always, and Downy, are familiar to nearly everyone, yet the P&G name sits in the background, a quiet presence overseeing the house of brands.

In professional and scholarly publishing, McGraw-Hill represents a house of brands, with textbooks and information services that deploy the familiar red-and-white McGraw-Hill logo on their spines but are better known by domain-specific brands -- "Harrison's Internal Medicine" and the Access series.

The branding choice between the "branded house" and the "house of brands" is important. The "branded house" is easier to deliver and more memorable, so it may have a better ROI for smaller organizations addressing homogeneous audiences. The "house of brands" often develops out of a history of mergers and acquisitions across multiple, large, disparate product sets and customer bases.

Whichever approach is taken, consistency and care make a big difference. Tend to your brands. Most firms underinvest in branding and brand management, yet it is an area where ongoing investment typically yields remarkable returns.

House of brands, or branded house -- either way, your brand is your most valuable asset. Please treat it accordingly.

The Dynamics of Funding and Paying

The recent news analysis of Sci-Hub by Kate Murphy in the New York Times provides an opportunity to discuss some important dynamics between the funding of research and paying for research reports and related materials.

In one section of her analysis, Murphy wrote:

. . . Elsevier, like other journal publishers, pays nothing to acquire researchers’ studies. Moreover, publishers don’t pay for the volunteer peer reviewers or editors. But they charge those same researchers, reviewers and editors, not to mention the public, whose tax dollars most likely funded the study in the first place, to read the resulting articles.

The shift from "pay" to "fund" is an important conceptual shift in economic and financial terms, and a key dynamic in the scholarly publishing marketplace. 

Many editors are paid. Murphy simply gets this wrong. Publishers typically buy out the time of major academics in order to keep funding at the university in place so the editor can be paid. For example, if an editorial job is estimated to be a "20% time" position (requiring a few hundred hours per year), the publisher would pay the university 20% of the academic's salary. For larger and busier journals, editors are full-time, with full salaries, benefits, and so forth.

By saying that publishers charge "those same researchers, reviewers and editors, not to mention the public," Murphy misses a chance to actually bring some data and nuance to the discussion about funding and paying. Only a fraction of scientists publish in any given year (more precisely, only a fraction are funded to perform research). In many fields where practitioners and clinicians predominate, reading is the primary information modality. Therefore, our information economy has a core asymmetry -- a few funded researchers, and a broader audience of interested readers. This market reality is essentially why the subscription model still holds sway -- when a few want to reach many, having the larger group pay spreads the costs and lowers prices for each participant in the system. Open access (OA) publishing has a business model (Gold OA) predicated on the inverse, and there are legitimate concerns that if it were to become predominant, expenses would hit a few major players (major research universities, major funders, major governments) inordinately. The UK has already had a taste of this with the RCUK's efforts of a few years ago. Therefore, when it comes to funding and paying in the scholarly information ecosystem, a core asymmetry should be acknowledged, one which makes a model of broadly shared costs more generally appealing.
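
To make the asymmetry concrete, here is a stylized sketch (all numbers invented) of how the same fixed cost base looks when spread across many reading institutions versus concentrated on the few who publish:

    # Stylized, invented numbers: one journal with a fixed annual cost base.
    annual_cost = 2_000_000          # total yearly cost of running the journal
    subscribing_institutions = 4000  # the "many" who read
    published_papers = 500           # the "few" who are funded to write

    per_subscription = annual_cost / subscribing_institutions
    per_paper_charge = annual_cost / published_papers

    print(f"${per_subscription:,.0f} per subscribing institution")  # $500
    print(f"${per_paper_charge:,.0f} per published paper")          # $4,000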

Why don't journals pay their authors? Aside from clear concerns about motivations and further increases to the expenses within scholarly publishing, there don't seem to be any benefits to be had from doing so. One study of cash incentives based on publication in top-tier journals showed that while authors submitted more papers to top journals when given cash incentives, they published no more papers in those journals. The study also found that researchers receiving indirect incentives -- salary increases or career progression for published research, for example -- submitted more papers and had more published. A plausible explanation is that the indirect incentives led scientists to do better work, while the cash incentives encouraged scientists to submit papers in a way more akin to playing the lottery. Also, paying authors would only increase the overall costs (direct and administrative) of the system.

So why do researchers submit their papers without being paid directly to do so? There are two major incentives for researchers to publish: to get credit and to claim primacy. Only by publishing can scientists make their research their own, and gain primacy for it. Publishing is also an effective way to measure productivity and prevent shirking, as the economist Paula Stephan has noted.

We want funded research to lead to publications of findings. Therefore, the funding of research directly impacts what institutions and other purchasers pay in aggregate. Here, we get to a crucial relationship between funding and paying. Funding of research has tripled in the past 20 years when you compound the increases. This has led to a tripling of research outputs, which has led to three times as many published papers. These related volume increases have led to a tripling of what many institutions pay for access. Yet publisher prices have increased only 9% over that period, versus a 67% increase in the CPI. (This is an average; in the UK, subscription prices have actually fallen quite a bit, adjusted for volume.) To absorb the increase in volume, many journals publish more papers than ever, and many new journals have been launched so new research findings can be published. This all leads to higher overall expenditures -- paying more because there is more to buy -- despite prices (adjusted for volume) increasing only 9% over the past 20 years.
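
For a sense of what "tripled over 20 years" means as a growth rate, a quick back-of-the-envelope check:

    # The constant annual growth rate that triples a quantity over 20 years.
    annual_rate = 3.0 ** (1 / 20) - 1
    print(f"{annual_rate:.2%}")               # ~5.65% per year
    print(f"{(1 + annual_rate) ** 20:.2f}x")  # ~3.00x -- modest growth, compounded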

So, a more accurate statement would be:

Funding of science has tripled, generating three times as many papers, and this volume explains the increase in expenditures to access the bulk of the scientific literature. Publishers have actually been controlling their costs during this explosion of available research reports, and most of the increases in payments libraries and others are seeing can be explained by these volume increases, which are themselves explained by funding increases.

It's a dynamic that we will continue to wrestle with, especially as we continue to fund more STEM education, encourage more children to pursue STEM careers, and push for a world with more scientific research.

Unfortunately, at the same time, library funding (which impacts institutions' ability to pay for all this additional research) is not being maintained. Even with triple the research funding (much of it going to universities) and 200-500% increases in tuition (what students and their families are paying), funding of libraries has been falling as a share of university budgets for three decades. This seems an abdication of economic responsibilities at the university level, yet one that is rarely called into question. Libraries are paying out of an eroding funding base, despite their institutions being generally better off than ever and the increasingly vital role of the libraries' scholarly collections.

Another important nuance is that scholarly publishers pay for the infrastructure that supports the communication of research results -- from online systems to standards like ORCID and the DOI, to initiatives like CHORUS and HINARI. Paying for infrastructure also means paying the cost of rejections, the work to publish the next papers, the hiring and training of new editors, the expansion of titles to support the growth of scientific outputs, the maintenance and migration of archives, and so forth.

Therefore, to say that a publisher "pays nothing" to acquire new papers is a misstatement. You'd never say that a newspaper pays nothing to cover a sporting event or a traffic accident. The same holds true in our field, as we pay the salaries, systems costs, and infrastructure costs to be ready to evaluate dozens of papers per day, including personnel costs for acquisition editors, scientific editors, and so forth. Competition for papers can be fierce, and creating the venues, systems, reputations, and processes to support scholarly research publication certainly costs money. Publishers take on these risks on behalf of researchers, paying for all of this and more, with no guarantees of success.

In summary, despite many profound shortcomings, the news analysis in the New York Times invites a discussion of some important facts, nuanced points, and interesting financial and economic realities:

As long as more scientific research is funded, the scientific community can expect to pay more for well-edited, carefully curated, independently vetted, and competitively placed research reports, no matter the underlying business model. We may slow the already low rate of price increases, but as long as more funding drives a higher volume of researchers with valid scientific findings seeking outlets that boost their career prospects, aggregate spending seems to track aggregate funding fairly reliably.

The Off Switch and Security

The March 14-20 issue of Bloomberg Businessweek has a section focusing on security issues, and a feature story on the same general theme. Overall, the articles made me much more likely to use the off switch tucked away on my smartphone -- the ability to flip into airplane mode and leave the network.

Moving most of our communications infrastructure online and making it digital has created an arms race on the security front, with hackers and malicious actors finding ways into systems around the world. Late last year, hackers knocked out power in Ukraine to about 80,000 residents for several hours. The outage might have lasted longer, but because the system is antiquated, authorities were able to reset the system by clicking circuit breakers back into place by hand.

This insight, arrived at by happenstance in Ukraine, has also been occurring to security experts by design: analog safeguards should be part of systems. The nuclear industry has learned this, building in analog failsafes (the control rods that drop into the core and shut down the reaction if there is a general system failure). As one expert says:

You can't lie to analog equipment. You can't tell a valve that it's opened when it's closed. It's physics.

Adding analog failsafes represents an approach being called "defense in depth." Another expert explains the digital vulnerability and the need for these analog solutions:

Defense in depth means you have layers of protection. But digital, even when it claims to have multiple layers, is in a sense one layer. Penetrate that, and you could potentially no longer have another layer you need to penetrate.

You can see this lack of true layers in the case where the FBI wants Apple to crack open an iPhone used by terrorists. There is only one passcode to overcome, and it would take four programmers 6-10 hours each to bypass it. After that, the phone would be wide open, as would all the other iPhones, Apple contends. This thin layer is all that is protecting iPhone privacy worldwide. John Oliver has a tremendous segment on these issues.

As more things become connected -- pacemakers, insulin pumps, automobiles, mass transit controls, airplane control systems, prison door locks, home locks/thermostats/systems -- this single layer of security is stretched thinner, and there are more ways into it.

This leads to another story, about mobile payment systems, which have vulnerabilities of their own. One major issue is the existence of under-capitalized start-ups in the space, which leads me to one of my favorite quotes:

There's a lot of two engineers and a goat.

Some start-ups have been caught sending Social Security numbers in the clear, and have been fined for it. The Federal Trade Commission is looking closely at these vendors and regulating them more strictly.

A third article focuses on yet another non-digital approach to dealing with security breaches -- human motivation. A cybersecurity startup called SquirrelWerkz is convinced that a good portion of security problems can be traced to competitors or rivals, and is not random. By performing real-world investigations on top of digital sleuthing, the firm claims to be able to put up defenses against the most likely sources of malfeasance, which is more effective than trying to keep the whole world at bay.

But the article that has me thinking of using airplane mode more often is called, "The Democratization of Surveillance." The article explores the world of the International Mobile Subscriber Identity (IMSI) catcher, a device (also known as a Stingray or Hailstorm) that fools your cell phone into thinking it's a cell tower, then uses that connection to grab information, monitor calls, and so forth. Your phone has no idea it's being fooled, and behaves normally.

IMSI catchers are falling in price, and their appeal within law enforcement makes it difficult for lawmakers and courts to decide how to handle the devices. In India, huge scandals have occurred in which politicians and lawmakers and celebrities were monitored for weeks on end, their call logs revealing sexual dalliances, dealmaking, and other nefarious behaviors. It's comparable to the Murdoch scandals of hacked voicemails, but much more pernicious as it's more easily done and there are fewer clear legal or technology protections.

In a small number of states in the US, police are no longer allowed to use Stingray-like devices without getting an explicit warrant. But the laws are not uniform. As the reporter at Businessweek writes:

Most local police departments, though, still aren't bound by [a Justice Department directive requiring explicit language in a warrant]. Neither are foreign governments, which are widely suspected of using IMSI catchers here (as we are no doubt doing elsewhere).

Now that prices have fallen to the $1,500 range for these devices, concerns are that they will soon drop so far that consumers will have routine, retail access to them. There's even speculation that your phone could download an app that would turn it into an IMSI catcher, so you could monitor your neighbors, kids, and spouse.

Of course, there's an emerging countermeasures industry, but this is again just another arms race, with shorter times to the next step as technology and skills both become more widespread.

In this environment, it's good to remember that your smartphone has a couple of analog options -- airplane mode and off. These may be the best security measures you can take, especially if traveling abroad.

Finding the "And"

In "Built to Last," the fading classic of 1990s management advice, and in the world of improvisation, which tried to gain some vestige of a toehold in the management advice space of today, there is a concept which remains pretty useful -- the notion of "and" instead of "but."

Framing alternatives as implicitly forcing trade-offs can be a subtle way to derail forward momentum while also seeming wise and prudent. But trade-offs are actually less common than believed. There are many venues, audiences, options, author groups, and business extensions that can co-exist harmoniously, if not actually synergistically.

For example, attend nearly any editorial board meeting, and preserving quality will be contrasted with adding titles to the brand, as if there were an implicit trade-off -- as if the flagship would need to donate vital fluids of some kind for offspring to prosper, weakening itself in the process.

Experience runs strongly counter to this presumption. Nature's strong portfolio is one of the most prominent examples of a flagship spinning off journals that any other publisher might justifiably think of as flagships themselves. At JBJS, adding new journals and products did not hinder the flagship's ability to increase its impact factor and continue to serve as the leading research journal in the field. The same goes for portfolios at JAMA, Lancet, ACC, IEEE, ACS, and so on.

But that's not to say that extending a portfolio necessarily enhances the flagship, although the dynamics of organizations pursuing both strategies seem to provide a general lift. In reality, the two activities can be pursued in parallel, as the techniques around portfolio growth don't overlap much with those for perpetuating and enhancing a flagship journal.

The same goes increasingly for non-journal initiatives spearheaded by publishers. Compartmentalized and treated appropriately, these can flourish without exacting a toll on the flagship or journals portfolio. It's a management challenge, and not an insoluble one.

A key "and" to achieve for any organization, but especially for editors and publishers, is the union between quantitative and qualitative information. Scientifically trained editors rightfully seek quantitative information, but businesses often run on qualitative information with spot checks in quant land.

Bringing editors and editorial teams to the point of considering "and" rather than responding with "but" often means the publisher has to take the lead, with pledges of resources, plans of action, compelling customer insights, and clear revenue projections. Enthusiasm needs to be cultivated.

Growth is an "and" proposal -- we will be what we are now "and" these other things; we will work in our current markets "and" these new markets. This is why getting to "and" is so important. 

The Game of Risk

The idea that "we're all publishers now" seems to have receded, as it's become clearer than ever that what publishers do is assume and manage risk on behalf of authors and readers.

Aside from the publishers many of us immediately think of when the word is used, as well as traditional publishers we don't always think of immediately (e.g., music publishers), there are new publishers in our midst. WordPress, Facebook, Instagram, and Twitter are publishers in that they assume risks for their authors and readers. They allow authors to perceive themselves as "publishing" because they have very high rates of acceptance.

This game of risk publishers play is increasingly difficult, and the rules keep changing. The Internet changed the rules in ways we're still figuring out, as the rule book is something the players have to discover as they play the game.

As an added complexity, it's unclear who exactly is inventing, influencing, and implementing the new rules. Technology companies, funders, government agencies, and the public -- both actual and talismanic -- have influenced the risk game significantly, and are revising the rules explicitly and implicitly, with intention and accidentally.

So, be glad we're not all publishers now, because publishing is a complicated game to play, and many novice players would simply lose outright very quickly.

Opportunists enter the game from time to time, and what offends the invested players is how these opportunists sit at the table, sometimes trying to fit in by dressing and acting like the other players, but without serious intentions. Or, if their intentions are serious, these are not the same intentions as the other players', much as card counters arrive at blackjack tables determined to win, but not really to play. And there is no "house" monitoring the game. Again, the rules and their enforcement require self-policing.

One thing about the game is clear -- the winners are those who last the longest. In that regard, there are many current contenders for endurance, with publishing houses and societies having spent decades if not centuries playing this game of risk.

With so many sources of new risk, how is your organization faring? What is your risk profile? Do you understand the new rules of the game?

Moving Beyond an Era of Fraud

In the movie "The Big Short," Mark Baum, one of the few investors who shorted the housing market and benefited financially from its near-collapse, says the following:

We live in an era of fraud in America. Not just in banking, but in government, education, religion, food, even baseball. . . . What bothers me isn't that fraud is not nice. Or that fraud is mean. For fifteen thousand years, fraud and short sighted thinking have never, ever worked. Not once. Eventually you get caught, things go south. When the hell did we forget all that? I thought we were better than this, I really did.

The disillusionment is palpable, and it currently burdens the world of journals. We worry that we're part of this same era, this same problem, with retractions increasing in number, papers citing supernatural designers, and Ouroboros reproducibility arguments. Of course, fraud in science is harmful in different ways, and science perhaps has more to lose than other professions whose members resort to fraud.

Incentives drive behavior, including committing fraud. So, it was surprising to see a story of cheating, fraud, and lying coming from the world of bridge in a recent issue of the New Yorker.

Apparently, no contest is immune to fraud.

Bridge has a long history of fraudsters, it turns out. It is a complicated game, and there is a lot of pride in winning, and a lot of prestige, which certainly sounds familiar. With two sets of partners competing, communication within each twosome is forbidden, and elaborate safeguards have evolved, in contract bridge especially -- dividers under the table to prevent foot signals, screens across the table to prevent hand signals, and so forth.

Yet, a new approach is alleged, this time by two young players who seemingly came out of nowhere and have won an inordinate number of top tournaments.

You can see their technique in a video on kottke.org.

Pulling back to this perspective on fraud in bridge may help shed light on why fraud exists in science -- it's because when humans are involved in any incentivized system, fraud inevitably occurs. The goal has to be to minimize the amount of fraud and its harm, as it's impossible to eliminate it entirely. 

Are we doing what we can to minimize the amount of fraud in scientific publishing and communication? Are we minimizing its harm? Or can we do more?

Upside-down Economics

There has been a disturbing theme underlying the economy since 2007 -- dynamics everyone thought worked reliably and were inviolable don't seem to work anymore. Housing and real estate were supposedly the bedrock of the middle-class economy, and were safe, boring investments. Lower interest rates were supposed to drive lending and business growth. Lower unemployment was presumed to drive consumer spending. Lower oil prices have traditionally spurred spending and helped move the stock market upward. Higher corporate profits have customarily driven reinvestment in infrastructure, new lines of business, and so forth.

Instead, we have macroeconomic puzzles -- lower interest rates, yet higher rates of savings; lower oil prices sending shudders through the equity markets; an unprecedented housing market collapse and a tepid recovery; lower unemployment and stagnant consumer spending; large corporate cash stores being used for stock buybacks, if they're being used at all.

There are many factors feeding into this puzzling set of circumstances, but skittishness seems to be the overall theme. Individual consumers are paying down debt and bolstering their savings; businesses aren't seeing opportunities that fit their new, lower level of risk tolerance; and more opaque international markets (e.g., China), which themselves are looking skittish, are causing equity traders to read too much into decreased oil revenues.

It's a strange new liquidity trap, but like those we've seen in Japan, for example, it is psychological. After all, money itself is a construct, and therefore it matters very much how we think about it -- when to spend it, how to get more, and when to stand pat.

The Federal Reserve is seeing some success stimulating inflation, which was at a 2.7% annual rate in January 2016. This is an important part of escaping a liquidity trap, as inflation drives prices, increases wages in the lower tiers of the workforce especially, and stimulates loans. This trend will likely continue, which will become a factor for pricing in the academic marketplace, as well. Cost-of-living increases are often pegged to inflation, and businesses benchmark profitability up from this measure.

Until the economy is growing at a healthy pace again -- with inflation being a good reflection of that pace -- the muddled results of the past decade may continue. With inflation, we can see once again which way is up, and only then will some of the main economic levers respond sensibly once again when pulled.

The Subtle Power of Branding

Branding is one of the more advanced forms of business voodoo. There are many approaches to it -- emotional, analytical, strategic, aesthetic. All of them matter, and a powerful brand combines these approaches and others in a compelling and memorable manner. Ignoring branding means ignoring a potentially high-ROI business element. Embracing branding doesn't mean success, but improves the odds of long-term success.

Strategic brand initiatives can occur in a couple of ways, either to express a set strategy or to help facilitate the discovery of a latent strategy. All organizations have a strategy working at some level, but often haven't made it explicit. In these unexamined cases, an effective set of strategies can blend with less effective tactics, making it hard to know which is which. A strategic branding approach can help leaders differentiate between strategic and non-strategic activities, and then consolidate success into a powerful new brand expression. Of course, when strategies are already settled and clear, branding moves along more quickly into aesthetic and emotional spaces.

Emotional branding generates some of the most fascinating work in the area. There are a number of approaches, and those invoking archetypes are pretty convincing. One favorite example is how GE, representing the "creator" archetype, had the slogan, "We bring good things to life." This slogan captures the "creator" in spades. When GE recently changed its slogan to, "Imagination at work," the "creator" archetype remained in play, but the creator in question moved from life-giving to idea-creating. A subtle restatement that remains true to the company's archetype, it gives the organization a less religious or patronizing feel, and leans toward personal achievement and creative thinking.

Aesthetics matter, as well. Verizon's recent shift away from its well-established italic logotype, with its huge checkmark looming like an angry eyebrow, to a simpler wordmark with a small checkmark "ding" at the end is well-analyzed in a post written soon after the new logo appeared. What's interesting is how simple the new logo is, especially when viewed amid the other major carriers' logos, giving the Verizon logo a strength and confidence it didn't possess before. Aesthetics are contextual, after all. In the competitive landscape, what is your brand saying compared to others your customers encounter all the time?

Some branding approaches are more analytical/structural -- a "branded house" approach often requires structured branding, with an umbrella brand and subsidiary brands. The strategy is clear in this case -- support the structure of the business, and make the brand extensible.

Publishers live in a different branding space. Journal publishers see their brands mostly resolved into specific products with their own value elements -- unique audiences, impact factors, and editorial approaches. In the books world, the branding of authorship or series continuations can be extremely powerful, and publisher branding is usually small and regal. Company branding is usually more on the B2B side, not the B2C side.

Branding can cut through the clutter, prepare the path to sales, attract and reassure partners and customers, and create a consistency that's vital for long-term success. Is your brand well-managed? Does it reflect your strategy?

Sci-Hub -- Two Sides, Both Shrink

There has been good coverage of the Sci-Hub controversy -- from an overview of their technical approach from Silverchair to media coverage to blog posts.

One thing that was recently pointed out in a blog post by Stewart Lyman is that Sci-Hub is entirely dependent on the scientific publishing economic ecosystem remaining largely intact. Because the site leverages credentials from institutional subscriptions, these access points are vital to the site's ongoing relevance (and, with a couple of million new articles being generated each year, this is not a trivial point).

As Lyman writes:

The rebel movement won’t gain much traction unless researchers at Yale, Stanford, Oxford, Pfizer, and Genentech, etc. begin to switch over to Sci-Hub, and that’s not going to happen. These organizations will block this behavior because, though they hate paying for overpriced journals (e.g. Harvard paid $3.5 million in 2012 for these), they will stand firm in support of intellectual property rights. They will not be part of the revolution.

While Lyman is certainly correct, I think there is an important nuance to add. I wrote about this recently, and Lyman's essay provides a good chance to expand on the point in an important manner.

In my earlier post, I talked about how piracy might drive consolidation among publishers, something that is already moving ahead at a steady clip, if not accelerating. Many specialty societies are signing on with the big publishers, because competing on operational and sales excellence in a global, technological economy seems less and less feasible for small organizations with each passing year.

However, what I missed in the first post is that piracy might also consolidate the purchasers in the academic publishing market.

While Lyman notes that big institutions aren't going to violate IP laws they themselves benefit from, many smaller purchasers may have fewer qualms or restrictions when it comes to using sites like Sci-Hub to access content. Individuals certainly will have fewer qualms or restrictions.

Already, technology has consolidated purchasers in academic publishing, from widespread individual and uncoordinated departmental purchasers to fewer institutional licenses, coordinated organizational purchasing, and consortial approaches to consolidate buying power.

Should piracy effects allow smaller purchasers to exit the market, pricing for those remaining will increase to compensate. As large, consolidated sellers meet large, consolidated buyers, pricing battles will become more common, more often public, and messier overall.

But Sci-Hub or its ilk can't kill paid access -- paid access is what they're leveraging. If that ends, Sci-Hub ends. If they succeed in their end game, they fail. It's a paradox that may ultimately make this a tempest in a teapot when it comes to dire effects and the gutting of an industry.

However, with the extreme eliminated, it's likely that consolidation continues, unless a new technological card is dealt that changes the table, or legal recourse proves successful. But market dynamics are clear -- piracy drives consolidation, for both buyers and sellers.

Responsibility and Unpublished Research Results

We often hear the economic argument, "taxpayers paid for the research, so they deserve to see the results." This argument usually involves publishers.

But who is to blame if publishers never even see a manuscript to consider? Who is to blame if research is funded, patients are put at risk, and no outcomes are even recorded in a required government database?

A new study suggests that there's something important going on, with potentially two-thirds of clinical trial results in the US going unpublished and undocumented more than two years after the trials have concluded.

Speaking to the presumption that funded research results in published papers, the authors give us this quote:

While seemingly axiomatic that the results of clinical trials led by the faculty at leading academic institutions will undergo peer reviewed publication, our study found that 44% of such trials have not been published more than three, and up to seven, years after study completion.

In other words, can you believe that some of the studies from Dr. Prestigious didn't work?

But publication now differs from registration and from documentation, with databases like ClinicalTrials.gov in existence. Unfortunately, the authors of the paper fail in their discussions to draw a clear distinction between the two types of failure their study covers.

These two types of failure differ in vital ways. Failing to submit trial results to ClinicalTrials.gov has a different hurdle and different implications (reflecting different obligations) than failing to publish in a peer-reviewed journal.

It seems less excusable for data not to be submitted to ClinicalTrials.gov. After all, the only hurdle there is the work involved in reporting the results. However, researchers tell me the ClinicalTrials.gov hurdle is formidable: the site's interface and technical implementation make reporting results there a major task. Imagine a manuscript submission system that's twice as cumbersome. The community seems increasingly disenchanted with making the effort, and there is no carrot and no stick to keep them using it.

Yet, compliance here should be 100%. Instead, it's far lower, with some major academic centers having less than 10% of their clinical trial results reported in ClinicalTrials.gov. For example, Stanford's compliance rate for reporting results in ClinicalTrials.gov was 7.6% between 2007 and 2010 -- 10 trials out of 131. Meanwhile, 49.6% (65/131) were published. Overall, publication rates were higher than rates of compliance with depositing results in ClinicalTrials.gov.

Maybe publishers have a better carrot . . .

In covering the study, a story on NPR elicited a comment that supports this hypothesis:

I work at a contract research organization that has a large contract with NIH DAIT. We are required to report the clinical trial results to clinicaltrials.gov within one year of "last patient last visit." It is a challenging task but we have a process in place to accomplish this requirement. I don't think researchers deliberately try to hide findings. It takes experience to write acceptable endpoint descriptions, generate an xml file to report adverse events, and properly organize and format the results. When planning a clinical trial resources must be committed to publishing the results at the conclusion of the trial.

This is a usability problem, pure and simple, yet one that is clearly depriving scientific researchers and patients of information they may want or need. Where is the outrage over this poor user-interface design? Its effect may be far graver than any subscription barrier when it comes to taxpayer access to study results.

Then we have the data from the study under discussion here pertaining to the percentage of trials published in peer-reviewed journals. It's surprisingly low. But is this because researchers are too lazy to write up the results and submit them to journals? Or is it because the results underwhelmed?

There is a trade-off between publication rates and reproducibility, as I discussed recently in a post here. More publications of lower-quality studies (poorly powered, not predictive, weak hypotheses, weak generalizability) mean a lower rate of reproducibility. Perhaps the problem here isn't that so few of these studies are published -- it may be, instead, that too many unimpressive and unpromising studies are funded, started, and terminated after getting poor results.
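
The power side of that trade-off is easy to demonstrate. Below is a minimal simulation (Python with NumPy and SciPy; every parameter value is an illustrative assumption, not a figure from any study discussed here) showing that significant findings from underpowered studies replicate far less often than the same findings from well-powered ones:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def replication_rate(n_per_arm, effect=0.3, n_studies=2000, alpha=0.05):
        """Of the original studies that reach significance, how many replicate?"""
        replicated = attempts = 0
        for _ in range(n_studies):
            # Original study: the treatment arm has a true effect of `effect` SDs.
            a = rng.normal(effect, 1, n_per_arm)
            b = rng.normal(0, 1, n_per_arm)
            if stats.ttest_ind(a, b).pvalue < alpha:
                attempts += 1
                # Independent replication attempt at the same sample size.
                a2 = rng.normal(effect, 1, n_per_arm)
                b2 = rng.normal(0, 1, n_per_arm)
                replicated += stats.ttest_ind(a2, b2).pvalue < alpha
        return replicated / attempts

    print("Underpowered (n=20/arm): ", replication_rate(20))   # roughly 15%
    print("Well-powered (n=200/arm):", replication_rate(200))  # roughly 85%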

I once participated in a clinical trial that went nowhere. The side-effects of a biological agent were simply intolerable, so most participants dropped out, leaving the researchers with no publishable results. The side-effect was known; what wasn't known was that patients would stop taking the medication because of it. So why report that in the literature? It added nothing to the knowledge base, except that a silly side-effect hurt compliance. This isn't big news.

However, the study I participated in was preliminary, and little funding was squandered in learning what it taught. The authors of the paper discussed here counted papers, but did not calculate the amount of funding spent on trials without registered outcomes or published results. That would have been a more interesting number, and perhaps would have given us something better to chew on. After all, if most of the unpublished/undeposited studies were small, preliminary, and involved fewer patients and less funding, we might have a different potential explanation.

Studies underperform or disappoint for a number of reasons, some bizarre, some pedestrian, some worth pursuing. Not having published results from most of these is probably not doing damage in the larger scheme of things. However, not submitting the data to ClinicalTrials.gov is another issue entirely, and one we need to address. The usability issues with ClinicalTrials.gov may be scuttling a good idea, slowly but surely. Researchers dislike the site, and the benefits of compliance are elusive.

Whatever the cause, the discrepancy between publication and deposit is certainly worth contemplating.

The Start-up Shootout

Earlier this week, I was privileged to serve as one of three judges in the NFAIS session, "Start-up Shoot-out," in which four young or start-up companies presented, were peppered with difficult questions from the judges and the audience, and then waited to hear which of them had won the shoot-out -- and, with it, a free NFAIS webinar.

Loosely based on formats like ABC's "Shark Tank" and TechCrunch's "Startup Battlefield," the event was as fun and interesting as I'd hoped. Eric Swenson from Elsevier/SCOPUS did a marvelous job as our drill sergeant/moderator, keeping what could have been an unwieldy session tight and on-time. The other judges -- Chris Wink and James Phimister -- were excellent, bringing a nice style and great questions to the proceedings.

Going in, we were encouraged to indulge a bit in the theatrical aspects of the motif, which involved being a little edgy, asking hard questions, not letting presenters or one another pontificate, and so forth. This helped make for a session that felt, by scholarly meeting standards, kind of bruising, but in what I thought was a good way. After all, these firms are vying for viability, so tough questions await them, whether we ask them or not.

Volleying questions at the participants, who deserve eternal thanks for tolerating our "tough guy" approach, was interesting. We were told to keep our questions pointed and short, which helped. Rambling answers were rare, and we had full permission to cut off the few that emerged, which we had to do once or twice.

There were some puzzling answers as well. One I will never forget was "It's confidential," offered when a participant was asked how his product's business model worked. Given the motif, it was easy not to let that answer suffice. Since there are really only a few business models, this participant was interrogated again and again until we could at least sense a bit more of what was behind the screen. It did not engender confidence.

Determining a winner was surprisingly difficult. There are many factors to weigh when recommending investments, even mock investments, and the current economic climate, the payback period, and other dimensions all factored in.

Once we'd winnowed the four entrants down to the two strongest, it became a bit more of a coin toss. Ultimately, we unanimously felt that the earlier-stage company -- in which we felt an investor would get a bigger share at a lower cost, and which had the strongest network-effect potential and a viable freemium model -- seemed like the better choice, but only if we were willing to accept a 3-5 year investment window rather than a 1-3 year one.

Sessions like this are a great idea, one I hope other meetings consider. To me, this format is an improvement on the "flash session" model. However, a great moderator is key to making it work. Kudos to NFAIS and Eric Swenson for pulling it off.

How Healthy is Your Marketing Program?

The publishing landscape is becoming increasingly crowded and competitive. In this environment, a healthy marketing function is essential. Brands battle for authors. Societies strive to attract new members. Management seeks efficient spending and strong returns on investment. New digital and social media opportunities beckon. Are you ready?

Just as a doctor considers different data points in diagnosing the health of a patient, many variables determine the effectiveness of a publisher's marketing program. A full and impartial analysis can provide valuable insights to help publishers maximize their marketing spend.

To help, Caldera Publishing Solutions is pleased to announce a 30-day Marketing Effectiveness Assessment to assist publishers and their marketing teams.

While many factors play into what makes one brand or product more successful than another, marketing is the nexus for those factors. To succeed, you need information on customer awareness, engagement, and experience to fully evaluate the power of your brand. You also need to consider what you are measuring and how you are using the information you gather. Marketing efficiency and effectiveness -- for site licensing, for individual members and subscribers, for social media marketing, SEO and SEM, and more -- also need to be assessed.

Our Marketing Effectiveness Assessment will quickly, and with one flat fee, evaluate your:

1. Brand Power -- the perceived strength and consistency of your brand; the strength of the value proposition presented in all communications
2. Market Planning -- is a formal planning process/cycle in place? Is planning tactically focused, or strategic?
3. Marketing Structure and Roles -- what is marketing accountable for? What skillsets are in-house and/or handled by outside vendors?
4. Customer Experience -- what is it like for a customer to engage with your brand via your website, customer service, social media, and other channels?
5. Customer Engagement -- what is your marketing mix, and how are you engaging your audience?
6. Sales Enablement -- what tools, messaging, and communications are you providing?
7. Metrics and Analytics -- what are you measuring, and how are you using these metrics?

As part of this systematic evaluation of the individual components that drive marketing success, we will also make clear recommendations for improving your marketing programs, including efficiency of spend, brand positioning, and market engagement.

Improving your marketing program empowers your organization to identify and focus on areas of particular strength or weakness in its efforts to retain and expand its audience.

Contact us to learn more about this offering: contact@caldera-publishing.com.

Could Piracy Accelerate Consolidation?

The recent news that a researcher in Russia running a site called Sci-Hub has downloaded 48 million scholarly articles and is making them available free -- a protest against publishers who charge US$32 for a single article -- is a good opportunity to pause and consider exactly who is being hurt in this scenario.

From a financial and economic standpoint, collateral damage is entirely possible.

Alexandra Elbakyan, the Russian researcher behind Sci-Hub who is currently defying a US district court injunction, believes she and her ilk are mainly hurting large commercial publishers -- organizations she thinks exploit academic information illegitimately.

But is her approach likely to have that effect?

First, how Sci-Hub works:

The site works in two stages. First of all when you search for a paper, Sci-Hub tries to immediately download it from fellow pirate database LibGen. If that doesn't work, Sci-Hub is able to bypass journal paywalls thanks to a range of access keys that have been donated by anonymous academics (thank you, science spies). This means that Sci-Hub can instantly access any paper published by the big guys, including JSTOR, Springer, Sage, and Elsevier, and deliver it to you for free within seconds. The site then automatically sends a copy of that paper to LibGen, to help share the love.
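
As described, this is a simple cache-then-fetch flow. A minimal sketch of that control flow (in Python; every name below is a hypothetical stand-in for illustration, not Sci-Hub's actual code) might look like this:

    # Hypothetical stand-ins so the sketch runs; the description above says
    # the real system hits the LibGen mirror and publisher platforms.
    _cache = {}
    def libgen_lookup(doi): return _cache.get(doi)
    def pick_donated_credential(): return "donated-key"
    def fetch_via_publisher(doi, key): return f"PDF:{doi}"
    def libgen_store(doi, paper): _cache[doi] = paper

    def get_paper(doi):
        # Stage 1: check the LibGen mirror first (the "cache").
        paper = libgen_lookup(doi)
        if paper is not None:
            return paper
        # Stage 2: fetch from the publisher using a donated access key,
        # then deposit a copy back into LibGen for the next request.
        key = pick_donated_credential()
        paper = fetch_via_publisher(doi, key)
        libgen_store(doi, paper)
        return paper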

The article from Science Alert calls this system "ingenious," but that's flattering and naive. This kind of scheme is not new. As "ingenious" as an employee at a local hardware store who cuts an extra set of house keys, jots down the addresses, and uses these keys to enter homes around town whenever the owners are elsewhere? Or, in the old days, an "ingenious" employee would keep the carbons from credit card transactions and use these to make personal purchases? Or, more currently, an employee "ingeniously" swiping a credit card twice, once for the customer and once for themselves? The list goes on. Misused and misappropriated passwords aren't "ingenious" ideas, either.

Complaining about the US$32 per-article price isn't new, either. Unfiltered complaints like this always signal that the reporter hasn't done his or her homework and doesn't understand how publishing works -- odd, since it might literally pay to know. Again, there's nothing new here. Newspapers at the newsstand are much more expensive than home-delivered newspapers, for instance, because the newspaper publisher wants to encourage subscription, which is a better business model. So subscribers pay far less per copy than à la carte purchasers. The same goes for academic publishers, but even more so -- subscribers pay only a few cents per article for subscription access (whether through an individual subscription or a site license), while the price for single articles is set high to discourage à la carte usage. In addition, new solutions like article rentals have lowered pricing on the per-article front, and free access for developing economies is a long-standing practice among academic publishers.
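
The arithmetic behind that per-copy gap is worth a quick sketch (Python; the license fee and usage figures below are invented for illustration, not actual prices):

    # Hypothetical numbers for illustration only.
    site_license_fee = 25_000      # annual fee for a journal bundle, USD
    annual_downloads = 180_000     # articles downloaded under that license
    list_price = 32                # a la carte single-article price, USD

    cost_per_download = site_license_fee / annual_downloads
    print(f"Effective cost per article: ${cost_per_download:.2f}")        # ~$0.14
    print(f"A la carte premium: {list_price / cost_per_download:,.0f}x")  # ~230x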

As so often seems to be the case, the problem isn't nearly what Sci-Hub wants everyone to believe. Research papers are more available, and available at a lower cost per paper, than ever before, trends that are likely to continue.

What's more alarming is that the organizations Sci-Hub's activities will hurt aren't the ones it's after, and the effects are likely to strengthen large commercial publishers. Bizarrely, some individuals working at organizations that would be hurt are apparently helping the pirates.

Return to the days of Napster, or more recently of early music streaming services, and you begin to realize we've seen this story before. Instead of dozens of record stores and outlets, we now have a handful of digital sales and streaming services. Consolidation was the result of a disrupted marketplace, instigated by piracy.

But who did that hurt? In the case of music, the people most of us forgot about were the artists, who received no royalties at all for music downloaded illegally. Then there were the record stores, undercut by digital music piracy in a first blow from which they never recovered. What started as piracy ended up as a music economy that sells more songs than ever but makes far less money from those sales, forcing artists into a more performance-oriented mode, reducing the number of mid-tier artists with viable careers, and putting technology companies and producers in far greater control of the music industry.

Sci-Hub believes its actions are humbling publishers like Elsevier, which are obviously the primary target. However, it is also hurting other participants in the academic publishing economy, some of whom are apparently aiding and abetting:

Authors -- Books are also included in the materials Sci-Hub has purloined through access keys delivered by "science spies." When a publisher sells fewer books, royalties fall. While not often huge, these can be a nice supplement to academic pay. If the piracy is allowed to persist, not only would royalties be lower, but advances would fall. For journal authors, where rewards for publication are indirect, data about their articles' impact and influence will likely be diminished, especially altmetrics measures. This is an interesting aspect of the new interlinked impact infrastructure -- piracy undercuts its functioning.

Libraries who pay for site license access -- Despite the public shaming over US$32 articles and continued claims that institutions like Cornell and Harvard can't afford site licenses (despite multi-billion-dollar endowments, the fact that Cornell raised more than $11 million to support its $8.8 million library budget in 2015, and the fact that Harvard recently saved $25 million by restructuring staff and eliminating duplications in its system), the reality is that libraries pay very low per-article usage rates for most titles. The real problem here is that libraries are paying while Sci-Hub uses their access keys and paid access to purloin articles. Publishers and libraries have a mutual interest in accurate usage reporting. If Sci-Hub ever becomes a significant factor in access to articles, usage data become inaccurate, pricing inequities become more likely to emerge or be suspected, and both parties end up posturing or making pricing adjustments in the blind -- which could lead to irrational behavior. Meanwhile, Sci-Hub continues to bleed libraries using their own access keys.

Society publishers and specialty societies -- Focus solely on commercial publishers like Elsevier and SAGE, and you find dozens upon dozens of society journals within each publishing company. Sci-Hub is not ripping off articles from Elsevier or Wiley or SAGE, per se, but from the societies that use these commercial firms as publishing houses. Go beyond this, and you find the self-publishing non-profits with articles caught in Sci-Hub's scheme. In short, the majority of what is in Sci-Hub is most likely coming from non-profit societies, making this less a story of Robin Hood robbing the town's greedy sheriff and more a story of Robin Hood stealing from the town's hospitals and charities.

Universities -- Returning to the site licenses and libraries above, academic centers are clearly being hurt by extension: they have less to show for their expenditures, yet face no decrease in demands from faculty and researchers that they maintain access to key titles. Add to this the institutional repositories universities have invested in, which are now at risk of becoming even less viable.

Funders -- With Gold OA now a decent segment of paid publishing, venues like Sci-Hub could be viewed as just another distribution outlet for articles already paid for. However, usage of these articles isn't documented in the normal fashion given Sci-Hub's spare infrastructure, so funders and Gold OA publishers have a new blind spot around the value they're actually deriving from their funding. Accountability decreases, uncertainty increases, and APCs will likely rise if publishers of all stripes have to adjust to pirates on the waters.

Sci-Hub, and the "science spies" who make its piracy possible, are skewing the academic publishing economy in a way that will hurt not just large commercial publishers but authors, libraries, charities and societies, universities, and funders as well.

It's likely that the entities most vulnerable to machinations like those exhibited by Sci-Hub are the non-profit societies -- organizations with long histories of providing training, assistance, and career boosts for people like Alexandra Elbakyan, Sci-Hub's creator. In essence, the only people she's hurting are people like her, who will now have to pay more for articles (to offset the losses from her theft), more for society memberships, more for tuition, and so forth.

Economies respond to piracy by charging more to those who do and will pay, or by letting entities scuttled by pirates sink into Davy Jones's locker. Piracy eliminates jobs, suppresses economies, and can, at its most extreme, bifurcate an economy, as smaller entities are easily sunk while larger ones withstand the assault. With about 95% of academic publishers earning revenues of $25 million or less annually, most academic publishers need to preserve, if not build, revenues. That's difficult to do in the shadow of piracy.

Bottom line: In academic publishing, with all the other forces pushing consolidation, you can add piracy to the list.

Reproducibility Problems Run Deep

The reproducibility crisis continues to provide intriguing insights into how to get science back on track.

Though it was first flagged by some as a potential problem with peer review or an indictment of glamour journals, further explorations have found that the problems run much deeper than publishing and distribution outlets.

At the recent PSP meeting in Washington, DC, a speaker from the Global Biological Standards Institute (GBSI) explained how 15-36% of cell lines used in biological research are not authenticated. This could be a major contributor, if not the major contributor, to the reproducibility crisis. Other factors he flagged include poor study design, poor data analysis and reporting, and problems with laboratory protocols.

The existence of Eroom's Law (Moore's Law spelled backwards) is especially vexing, and points to fundamental problems that start well before any papers are written or submitted. Eroom's Law describes the approximate halving, every nine years between 1950 and 2010, of the number of new drug molecules approved by the FDA per billion dollars of inflation-adjusted R&D investment by the drug industry, despite huge gains in knowledge and brute-force research capacity (e.g., the ability to sequence genes or synthesize chemicals).
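
To see how steep that curve is, here's a quick back-of-the-envelope calculation (Python; it assumes only the halving-every-nine-years rule of thumb stated above):

    # Eroom's Law rule of thumb: drug approvals per inflation-adjusted
    # billion dollars of R&D halve roughly every 9 years.
    years = 2010 - 1950
    halvings = years / 9            # ~6.7 halvings over 60 years
    decline = 2 ** halvings         # ~100x drop in R&D efficiency

    print(f"Halvings since 1950: {halvings:.1f}")
    print(f"Approvals per R&D dollar fell roughly {decline:.0f}-fold")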

In a recent paper, analysts specializing in this area describe a set of profound and fundamental problems emanating from biomedical and pharmaceutical research:

• Pursuit of animal models with low predictive value
• Clinical conditions (e.g., Alzheimer's disease) that aren't yet described specifically enough for targeted therapies to have addressable therapeutic targets, but which are pursued nonetheless
• Ignoring "field observations" (i.e., physician reports) of what works, and pursuing reductionist predictive models instead
• Following management's demand for more R&D throughput rather than ensuring predictive values are sufficient to better ensure success (quantity over quality)
• Ignoring "domains of validity" for predictive models, and expanding or elaborating upon them inappropriately in a research project
• Using terminology without rigor, creating confusion or misinterpretations in cross-discipline teams

Journals exist to document and maintain the record of scientific achievements. When these achievements are underwhelming or fraught for whatever reason, the record will reflect this. These and other inquiries into the problem reiterate that the reproducibility crisis is a problem within science, which journals only reflect.

However, as part of the academic and research establishment, journals do have a role in helping to turn things around. More statistical analysis, and more demands for explanations of experiments' predictive value, of the predictive models used, and of their domains of validity, can all help. This means spending more time with each paper, and emphasizing quality over quantity.

As Derek Lowe wrote in his "In the Pipeline" blog at Science Translational Medicine:

If you want better, more reproducible papers, you’re going to have fewer of them. Shorter publication lists, fewer journals, and especially fewer lower-tier journals. The number of papers that are generated now cannot be maintained under more reproducible conditions . . .

Or, as the authors wrote in an important Science paper on reproducibility:

Correlational tests suggest that replication was better predicted by the strength of the original evidence than by characteristics of the original and replication teams.

In other words, better evidence is better evidence, and is more likely to be reproducible.

Unfortunately, until the underlying cultural norms that treasure quantity of publications over quality -- and the associated economic, financial, reputational, and career incentives all players tacitly support -- are fundamentally addressed and changed, we will continue to have problems reproducing weak science. Publishers can't solve these problems alone.

High Yield

In business, it's called "return on investment" or ROI. But with less jargon, what we mean is high-yield activities -- you put in some effort and get a lot for it.

Business activities that are high-yield are critical to success. Low-yield activities in succession will only exhaust and demoralize an organization. A breakthrough, a breathtaking success, a long-term portfolio play -- all these things have "high yield" written all over them.

Using a blend of staff and consultants is key to high-yield success. Even in the best organizations, staff can't bring all the skills and perspectives needed (and cross-functional staffing is not the same as diversified thinking, especially within the same organization).

Yield is also relative. A small company that generates a new $1 million product may be ecstatic, while this same revenue achievement would be middling for a larger organization. And spending too much money and time to get to the $1 million lowers or erases yield.
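
A quick sketch makes the point (Python; the revenue and cost figures are invented for illustration):

    def roi(gain, cost):
        """Return on investment: net gain relative to what was spent."""
        return (gain - cost) / cost

    # Hypothetical new product generating $1 million in revenue.
    print(f"Lean launch:    {roi(1_000_000, 200_000):.0%}")  # 400% -- high yield
    print(f"Bloated launch: {roi(1_000_000, 950_000):.0%}")  # ~5% -- yield nearly erased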

Long-term value also has to be factored in. If the revenue source resembles an annuity, the multiyear value can be significant. Imagine the yield around the journals acquired in distressed condition during the 1920s and 1930s, which now are multi-million-dollar entities. It took 30-40 years for trends to coalesce, but when they did, the societies with those properties were utterly transformed.

Acquisitions can be high-yield. Product development can be high-yield. What is usually not high-yield is keeping money in investment instruments, especially currently. So look around. There may be a new way to set the table for future success, a new initiative that takes little effort to launch if done right but which could generate tremendous returns. What high-yield plans does your business have today?

The Re-emergence of "Live," and Why It Matters

Recently, "Grease: Live" aired on Fox to the highest ratings for a live musical broadcast since the form re-emerged with the "Sound of Music" in 2013. Other live musical broadcasts have also garnered high ratings, including "The Wiz" in 2015 and "Peter Pan" in 2014.

Numerous competitive pressures and social trends have led to the re-emergence of the live event, including the practice of live-Tweeting and network television's desire to recapture its ratings dominance by creating event-driven viewing. After all, watching a live musical on DVR delay isn't nearly as visceral as watching it live, and some viewers are certainly looking for trainwrecks and missteps as much as anything (but to the television executive, trolls = audience). Social media's power to entangle audiences makes live events more fun and interesting, as anyone who has attended a recent conference can attest.

The appeal of live events also means new revenue potential.

The shift to digital downloads and streaming has decreased music industry and artist revenues from studio recordings, even as the volume of music consumed has increased. Packaged album sales are largely a thing of the past (unless you're Adele or another marquee artist). Albums let artists make money off 10-12 songs at a whack, rather than the 1-2 hits they might earn revenues from in the single-track era. This has led artists to make more live appearances and give more concerts to bolster their incomes, which is also feeding the re-emergence of live performance.

For singers (and perhaps for their fans), there is a potential downside to the greater number of live performances -- an increasing rate of vocal fatigue and injury. Throat surgery is becoming more common for singers, and voices are fading faster than they did when solid record sales could let a band rest for months while making good money. Now, they sing for their dinners more and more, and that's wearing them out, altering the careers of some major talents.

Reddit is another venue in which "live" has become a hotter commodity, via its Ask Me Anything (AMA) format. Webinars are more popular. Regional and local conferences and meet-ups are increasingly being used to extend the "live" experience. However, the challenges of exhaustion and over-extension must also be managed, as editors, authors, and other prominent ambassadors of brands are pulled into more and more settings and commitments.

The challenges for purveyors of fixed or recorded media are multi-faceted. The Internet has made fixed media highly discoverable and shifted business models from packaged goods (issues, albums) to à la carte sales (songs, articles). Expectations of "free" run rampant. The "live" approach creates a new package, this one temporally and physically based, allowing for the re-emergence of the packaged price or value exchange. But it is not without risk or responsibilities.