At Ambit, we spend a lot of time reading articles that cover a wide gamut of topics, including investment analysis, psychology, science, technology and philosophy. We have been sharing our favourite reads with clients under our weekly ‘Ten Interesting Things’ product. Among the most interesting topics covered in this week’s iteration are the myth of the miracle-working CEO, the basics of information, and the disturbing truth behind the world’s most expensive coffee.
Here are the ten most interesting pieces that we read in the week ended September 15, 2017.
1) Did dark matter kill the dinosaurs? [Source: newsweek.com]
Dark matter is an exotic form of matter that can only be detected by its gravitational pull on other objects—other than this, it is invisible. Most scientists believe that dark matter is made up of tiny, hard-to-detect particles called weakly interacting massive particles (or WIMPs). Amazingly, astronomers find that dark matter is five times more abundant than normal matter in the Universe. And now, new studies are suggesting that dark matter has actually affected the evolution of life on Earth.
In 1980, the science world was stunned when a team of researchers at Berkeley proposed that a massive meteor strike had wiped the dinosaurs and other fauna from the Earth 66 million years ago. Later, a giant impact crater of the same age was discovered buried under the Yucatan Peninsula. These discoveries forced scientists to consider that Earth was not isolated from its wider cosmic environment. Similarly, the great Permian extinction, 252 million years ago, destroyed up to 96% of existing species on land and in the sea. These numbers point to global environmental catastrophes as the cause of the mass extinctions, and only two geologic forces are thought to be capable of producing such global upheavals—the impact of large asteroids and comets, and episodic eruptions of massive floods of lava. Over the last three decades, some scientists have found a good correlation between mass extinctions and episodes of impacts and massive volcanism. Curiously, they have also turned up evidence that these events occur in a cycle of about 26 to 30 million years. This attracted the interest of astrophysicists, and several astronomical theories were proposed in which cosmic cycles affected Earth and life on the planet.
One theory links Earthly events to the motion of the solar system as it moves through the galaxy. It seems that these geologic cycles may be a result of the interactions of our planet with mysterious dark matter. Most dark matter can be found as huge haloes surrounding the disc-shaped spiral galaxies, like our own Milky Way. But in 2015 physicist Lisa Randall at Harvard, proposed that significant dark matter is concentrated along the central mid-plane of the galactic disk. During the cyclic movement of the sun and planets through the galaxy, we pass through the mid-plane about once every 30 million years. At these times, the dark matter concentrated there tugs on the myriad Oort cloud comets found at the edge of the solar system. This gravitational perturbation causes some of the loosely bound comets to fall into the zone of the inner planets, where some would collide with Earth, producing a roughly 30 million year cycle of impacts and associated mass extinctions.
An even more dramatic event involves Earth passing through large dense clumps of dark matter as it moves through the galactic plane region. Several astrophysicists, including Nobel laureate Frank Wilczek, proposed that some of the dark matter can actually be captured by Earth. Moreover, the build-up of dark matter particles in Earth’s core leads to their eventual mutual annihilation. This releases large amounts of energy—up to a thousand times the normal amount of heat in Earth’s interior—periodically heating the inner Earth and creating upward-moving currents of hot, pliable rock. The result may be pulses of geologic activity—volcanism, plate tectonic movements, sea-level variations and climate changes—spaced about 30 million years apart.
2) The myth of the miracle working CEO [Source: Financial Times]
Can chief executives make a difference to the companies they run? Sometimes they do, albeit not quite in the ways investors want or expect. For instance, for a decade after the financial crisis, Peter Crook, who was until last week the chief executive of Provident Financial, a British subprime lender, was seen by the company’s board as a superstar, smoothly piloting its affairs as the share price more than tripled. The remuneration committee paid royally to retain his services. Over the past five years, he earned £30m. Then Mr. Crook did something that really made a difference. Earlier this year, he reorganised Provident’s core home lending division, aiming to replace its army of old-fashioned door-to-door, commission-based agents with a smaller number of full-time, technology-enabled employees. The intention was sensible: to reduce costs and boost recoveries.
In practice, however, the initiative achieved the opposite. Provident’s part-timers turned out not to want to become full-time employees and defected, sometimes to competitors. The technology, unsurprisingly, experienced teething troubles. Two profit warnings followed in swift succession, and the second led to a scrapped dividend and a rout of the share price, which plummeted back to levels last seen in 2007, when Mr. Crook took over. Last week he left “with immediate effect”.
Several observations flow from this parable. One is that it explodes the picture of Mr. Crook as the unique author of Provident’s successes. For when he lost the services of some of its far less well remunerated (but actually crucial) front-line troops, it did not take long for the company to be dropped firmly in the soup. Similarly, how, one might ask, could the same individual be both the miracle-working boss worth all that performance-based pay and the bungler who mismanaged a restructuring so badly that he had to be shown the door? The answer, of course, is that Mr. Crook was neither. While his management qualities would have influenced Provident’s performance, the effects were much smaller than the remuneration committee recognised.
As with most CEOs, what most marked Mr. Crook’s tenure was, to a great extent, luck. He was a hero in the post-crisis years, when rivals such as the high street banks vacated Provident’s markets, allowing the company to make superior returns. He became a villain when competition increased, forcing him to embark upon the doomed restructuring. It was not that Mr. Crook changed or that his management virtues became shortcomings. His luck simply ran out. The real truth is that none of these people is irreplaceable. The same vagaries of chance explain why a study of Fortune magazine’s “most admired companies” finds that, over two decades, companies with the lowest ratings go on to earn much higher stock market returns than those at the top. The stellar performance that drew praise at the outset was mainly due to luck and hence bound to diminish. It is a process statisticians refer to as reversion to the mean.
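Reversion to the mean is easy to see in a toy simulation (ours, not the article’s): if measured performance is part stable skill and part one-off luck, the bosses or firms that top the table in one period will, on average, fall back in the next, because the extreme luck that put them there does not repeat.

```python
import random

def simulate(n=10_000, skill_sd=1.0, luck_sd=2.0, seed=1):
    """Performance = skill + luck. Track the year-1 top decile into year 2."""
    rng = random.Random(seed)
    skills = [rng.gauss(0, skill_sd) for _ in range(n)]
    year1 = [s + rng.gauss(0, luck_sd) for s in skills]  # luck draw #1
    year2 = [s + rng.gauss(0, luck_sd) for s in skills]  # fresh luck draw #2
    # Select the top 10% by year-1 performance, then measure them in year 2
    top = sorted(range(n), key=lambda i: year1[i], reverse=True)[: n // 10]
    avg1 = sum(year1[i] for i in top) / len(top)
    avg2 = sum(year2[i] for i in top) / len(top)
    return avg1, avg2

a1, a2 = simulate()
print(a1, a2)  # the year-2 average falls back toward the mean
```

Because only the skill component persists between periods, most of the top decile’s measured outperformance evaporates on the second draw, exactly the pattern the Fortune “most admired companies” study observed.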
These observations kick away the principal plank supporting very high pay for top executives: the belief that they are vital to the success of the enterprise, and that their loss would spell doom for shareholders. It also damns the complex performance-based packages that allow bosses to reach vertiginous totals. There is little evidence that these serve a purpose other than self-enrichment. A recent Lancaster University Business School survey found that, over a decade, the correlation between performance and pay among FTSE 350 company bosses was negligible. Returns on capital barely budged, yet pay rose 80%.
3) How a polymath transformed our understanding of information [Source: aeon.co]
Claude Shannon – mathematician, American, jazz fanatic, juggling enthusiast – is the founder of information theory, and the architect of our digital world. It was Shannon’s paper ‘A Mathematical Theory of Communication’ (1948) that introduced the bit, an objective measure of how much information a message contains. It was Shannon who explained that every communications system – from telegraphs and television to DNA and, ultimately, the internet – has the same basic structure. And it was Shannon who showed that any message could be compressed and transmitted via a binary code of 0s and 1s.
When Shannon joined Bell Labs, they were facing a fundamental challenge for communication-at-a-distance because of ‘noise’: unintended fluctuations that could distort the quality of the signal at some point between the sender and receiver. Conventional wisdom held that transmitting information was like transmitting power, and so the best solution was essentially to shout more loudly. But some people at the Labs thought the solution lay elsewhere. A number of Bell Labs mathematicians and engineers turned from telegraphs and telephones to the more fundamental matter of the nature of information itself. They began to think about information as measuring a kind of freedom of choice, in which the content of a communication is tied to the range of what it excluded. In 1924, the Labs engineer Harry Nyquist used this line of reasoning to show how to increase the speed of telegraphy. Three years later, his colleague Ralph Hartley took those results to a higher level of abstraction, describing how sending any message amounts to making a selection from a pool of possible symbols. For instance, in ‘Apples are red’, the first word eliminates other kinds of fruit and all other objects in general. The second directs attention to some property or condition of apples, and the third eliminates other possible colours.
On this view, the information value of a message depends in part on the range of alternatives that were killed off in its choosing. Symbols chosen from a larger vocabulary of options carry more information than symbols chosen from a smaller vocabulary, because the choice eliminates a greater number of alternatives. This means that the amount of information transmitted is essentially a function of three things: the size of the set of possible symbols, the number of symbols sent per second, and the length of the message. Enter Shannon’s 1948 paper, which set out two big ideas. The first is that information is probabilistic. We should begin by grasping that information is a measure of the uncertainty we overcome. What determines this uncertainty is not just the size of the symbol vocabulary but also the odds that any given symbol will be chosen. In a simple coin toss, for instance, there are two choices with equal odds. Such a coin, or ‘device with two stable positions’, stores one binary digit (or bit) of information. However, Shannon pointed out that most of our messages are not like fair coins. They are like weighted coins. A biased coin carries less than one bit of information, because the result of any flip is less surprising. He proved that human messages are more like weighted coins because the symbols we use aren’t chosen at random, but depend in probabilistic ways on what preceded them.
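Shannon’s coin-toss point can be sketched in a few lines of Python (our illustration, not from the article): the entropy H = −Σ p·log2(p) of a fair coin is exactly one bit, while a weighted coin, being less surprising, carries less.

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over the outcome probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

fair = entropy([0.5, 0.5])    # a fair coin stores exactly 1 bit
biased = entropy([0.9, 0.1])  # a weighted coin is less surprising: under 1 bit
print(fair, biased)
```

The same formula extends to a 27-symbol alphabet: if every symbol were equally likely, each would carry log2(27) ≈ 4.75 bits, and real English falls well short of that because its symbols are so unevenly weighted.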
This is where language enters the picture as a key conceptual tool. We communicate with one another by making ourselves predictable, within certain limits. Shannon demonstrated this point in the paper by doing an informal experiment in ‘machine-generated text’, playing with probabilities to create something resembling the English language from scratch. He opened a book of random numbers, put his finger on one of the entries, and wrote down the corresponding character from a 27-symbol ‘alphabet’ (26 letters, plus a space): XFOML RXKHRJFFJUJ ZLPWCFWKCYJ FFJEYVKCQSGHYD QPAAMKBZAACIBZLHJQD. This he called a ‘zero-order approximation’, in which every symbol is equally likely. English, of course, is not like this: certain characters are far more likely to occur in combination with others, a fact Shannon captured with bigrams and trigrams – ‘K’ is common after ‘C’, but almost impossible after ‘T’. Using these modifications he arrived at: IN NO IST LAT WHEY CRATICT FROURE BIRS GROCID PONDENOME OF DEMONSTURES OF THE REPTAGIN IS REGOACTIONA OF CRE. But just as letters exert ‘pull’ on nearby letters, words exert ‘pull’ on nearby words. Doing the same exercise for words instead of characters, he arrived at: THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE CHARACTER OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE LETTERS THAT THE TIME OF WHO EVER TOLD THE PROBLEM FOR AN UNEXPECTED.
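Shannon’s experiment is easy to reproduce in miniature. The sketch below is our illustration, not his procedure: the short corpus is a hypothetical stand-in for his source text, and the generator emits characters whose pairings follow the corpus’s bigram statistics, a rough analogue of his higher-order approximations.

```python
import random
from collections import defaultdict

def sample_bigram_text(corpus, length=40, seed=0):
    """Generate text whose letter pairs mimic the corpus's bigram frequencies."""
    rng = random.Random(seed)
    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)  # record every character observed after 'a'
    ch = rng.choice(corpus)
    out = [ch]
    for _ in range(length - 1):
        nxt = follows.get(ch)
        # fall back to a uniform draw if a character has no recorded successor
        ch = rng.choice(nxt) if nxt else rng.choice(corpus)
        out.append(ch)
    return "".join(out)

sample = sample_bigram_text("the quick brown fox jumps over the lazy dog ")
print(sample)
```

Feeding in a larger corpus, or conditioning on pairs of preceding characters instead of one, moves the output progressively closer to English, just as Shannon’s second- and third-order approximations did.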
On comparing the beginning of the experiment with its end – ‘XFOML RXKHRJFFJUJ’ versus ‘ATTACK ON AN ENGLISH WRITER’ – we might be tempted to say that ‘ATTACK ON AN ENGLISH WRITER’ is the more informative of the two phrases. But it would be better to call it more meaningful. In fact, it is meaningful to English speakers precisely because each character is less surprising, that is, it carries less (Shannon) information. Shannon highlighted that in the real world symbols are more predictable and don’t convey new information or surprise. They are exactly where we expect them to be, given our familiarity with ‘words, idioms, clichés and grammar’. Given a phrase like ‘A S-M-A-L-L O-B-L-O-N-G R-E-A-D-I-N-G L-A-M-P O-N T-H-E D’, we could relatively easily guess the next three letters as ESK. In fact, he concluded that up to 75% of written English text is redundant. Thus, to transmit messages more efficiently, he suggested compressing them so as to remove redundancy and leave in place the minimum number of symbols required to preserve the message’s essence. He did so by encoding our messages in a series of digital bits, each one represented by a 0 or 1. He showed that the speed with which we send messages depends not just on the kind of communication channel we use, but on the skill with which we encode our messages in bits. Moreover, he pointed the way toward some of those codes: those that take advantage of the probabilistic nature of information to represent the most common characters or symbols with the smallest number of bits.
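Shannon’s redundancy claim is what makes everyday compression work. As a rough illustration (ours, using zlib’s DEFLATE rather than any code Shannon himself proposed), highly redundant English-like text shrinks dramatically once its predictable structure is squeezed out:

```python
import zlib

# A deliberately redundant message: the same English phrase repeated 50 times
text = ("a small oblong reading lamp on the desk " * 50).encode()
compressed = zlib.compress(text, level=9)
ratio = len(compressed) / len(text)
print(f"{len(text)} bytes -> {len(compressed)} bytes (ratio {ratio:.2f})")
```

Real prose is less repetitive than this toy input, but the principle is the same: the more predictable the next symbol, the fewer bits a good code needs to spend on it.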
4) Mobile phone wars create a data puzzle [Source: Financial Times]
America’s telecom companies have showered their subscribers with “free” minutes. That’s great news for consumers, but a headache for the Federal Reserve. These price wars may be contributing to a distortion of the inflation data, complicating the Fed’s decision about when or whether to raise interest rates. This goes to the heart of an issue haunting Fed officials: the question of whether the two key assumptions they have used for decades to predict inflation are now breaking down. One is that consumer prices will rise when a central bank expands the monetary base. The second is that inflation will rise when the economy is growing fast enough to cut unemployment, the so-called Phillips curve.
These two assumptions would normally imply that inflation should be rising now. The Fed has expanded monetary policy dramatically since 2009. Unemployment has tumbled, amid steady economic growth. It now stands at 4.4%, well below the 4.7% threshold that the Fed considers to be the “sustainable” rate — meaning that labour shortages should be pushing up wages and prices. But that has not happened. Instead, the Fed’s favoured measure of core inflation (the core “personal consumption expenditure” index stripped of food and energy) has undershot its target rate of 2% for 59 months and in June it tumbled further to 1.4%. Some Fed officials blame this on short-term abnormal distortions. For example, John Williams, head of the San Francisco Fed, cites mobile phone price wars as one culprit; others point the finger at lower medical bills. But short-term issues clearly do not tell the whole tale. Inflation has been undershooting for a long time and prices are also weak elsewhere in the western world. Instead, most Fed officials suspect that structural factors are also at work. Demographics, for example, may play a role: older people consume less aggressively. The decline of unions and a shift to temporary, contingent work might have reduced the bargaining power of labour, undermining wage growth.
Add to this the bigger trend materialising around us: rapid digital innovation is expanding the productive capacity of our economic system in unexpected ways. This is changing price signals in a manner that economists and statisticians struggle to understand — or measure. Our statistical systems were developed for a 20th-century industrial world, where goods and services had tangible prices and consistent qualities. They can count goods and services from motor cars to massages well. But statisticians struggle to measure the impact of rapid product quality changes, such as when a $400 phone suddenly offers dramatically more services than a similarly priced one a year ago. The current statistical systems also fail to capture non-monetary transactions, such as the barter that takes place when consumers download “free” apps and use “free” cyber services in exchange for giving their data to technology companies for “free”.
These omissions now matter enormously. Not only does this digital activity represent a large and growing chunk of 21st-century economic activity, but it has consequences for price signals; digitisation seems to cut inflation. This has at least three implications. First, it suggests the central bankers need to overhaul the way they collect data, and persuade their governments to put resources into upgrading their statistical systems. Second, it implies that economists need to take some of the current data with a grain of salt. One issue puzzling central bankers is that the official statistics suggest productivity growth has only been 0.6% year on year since 2011; the average in the previous 40 years was 2%. This is puzzling, but might partly reflect mismeasurement. Third, the Fed might not need to be so alarmed by those falling personal consumption expenditure numbers. Yes, the behaviour of “real economy” prices might seem odd; but what is clear is that asset price inflation is surging. It is the latter which the Fed now needs to focus on, and curb with interest rate rises.
5) Why men don’t believe the data on gender bias in science [Source: wired.com]
Sex discrimination and harassment in tech, and in science more broadly, is a major reason why women leave the field. There has long been handwringing about why women are underrepresented in STEM (science, technology, engineering, and math), which has led to calls for increased mentoring, better family leave policies, and workshops designed to teach women how to negotiate like men. Last month, three senior researchers at the Salk Institute for Biological Studies in La Jolla filed lawsuits complaining of long-term gender discrimination; the complaints allege that women don’t have equal access to internal funding and promotions. These lawsuits highlight the real reason for the lack of women in science: Leaders in the field—men and sometimes women—simply don’t believe that women are as good at doing science. A vast literature of sociology research shows that, time after time, women in science are deemed to be inferior to men and are evaluated as less capable when performing similar or even identical work. This systemic devaluation of women results in an array of real consequences: shorter, less praiseworthy letters of recommendation; fewer research grants, awards, and invitations to speak at conferences; and lower citation rates for their research.
One early study evaluated postdoctoral fellowship applications in the biomedical sciences and found that the women had to be 2.5 times more productive than the men in order to be rated equally scientifically competent by the senior scientists evaluating their applications. The study finds that “gender discrimination of the magnitude we have observed… could entirely account for the lower success rate of female as compared with male researchers in attaining high academic rank.” A more recent study showed that science faculty at research-intensive universities were more likely to hire a male lab manager, mentor him, pay him more, and rate him as more competent than a female candidate with the exact same résumé. Another paper found that the faculty responded to emails from male prospective PhD students more than from female prospective students, showing that men have greater access to professors. These are just a few of the hundreds of peer-reviewed studies that clearly show, on average, the bar is set higher for women in science than for their male counterparts.
Given the enormous amount of data to support these findings, and given the field in question, one might think male scientists would use these outcomes to create a more level playing field. But a recent paper showed that in fact, male STEM faculty assessed the quality of real research that demonstrated bias against women in STEM as being low; instead the male faculty favoured fake research, designed for the purposes of the study in question, which purported to demonstrate that no such bias exists. Why do men in science devalue such research and the data it produces? If anyone should be willing to accept what the peer-reviewed research consistently shows and use it to correct the underlying assumptions, it should be scientists.
But it is in large part because they are scientists that they do not want to believe these studies. Scientists are supposed to be objective, able to evaluate data and results without being swayed by emotions or biases. This is a fundamental tenet of science. What this extensive literature shows is that, in fact, scientists are people, subject to the same cultural norms and beliefs as the rest of society. The systemic sexism and racism on display every day in this country also exist within the confines of science. Even more pernicious, however, is the understanding that results from reading these studies: the realisation that those who have succeeded in science (and in many fields) have not done so entirely due to their own innate brilliance. Statistically speaking, just being male automatically gives you a leg up.
6) Why do we work so hard? [Source: The Economist]
The author of this piece recalls the working life of his father who had his own accounting firm in Raleigh, North Carolina and how he enjoyed his work. However, when his father was a boy on the family farm, the tasks he did in the fields – the jobs many people still do – were gruelling and thankless. Unlike those times, today’s professionals do work that entails co-operation with talented people while solving complex, interesting problems – which is fun. And he finds that we can devote surprising quantities of time to it. His question, however, is whether we should do so much of it. One of the facts of modern life is that a relatively small class of people works very long hours and earns good money for its efforts. Nearly a third of college-educated American men, for example, work more than 50 hours a week. Some professionals do twice that amount, and elite lawyers can easily work 70 hours a week almost every week of the year. Work, in this context, means active, billable labour. But in reality, it rarely stops. It follows us home on our smartphones, tugging at us during an evening out or in the middle of our children’s bedtime routines. It makes permanent use of valuable cognitive space, and chooses odd hours to pace through our thoughts, shoving aside whatever might have been there before.
When in 1930 John Maynard Keynes mused that a century hence society might be so rich that the hours worked by each person could be cut to ten or 15 a week, he was just extrapolating. The working week was shrinking fast. Average hours worked dropped from 60 at the turn of the century to 40 by the 1950s. The combination of extra time and money gave rise to an age of mass leisure. It was an era in which work was largely a means to an end – the working class had become a leisured class. As productivity rose across the rich world, hourly wages for typical workers kept rising and hours worked per week kept falling – to the mid-30s, by the 1970s. But then something went wrong. Less-skilled workers found themselves forced to accept ever-smaller pay rises to stay in work. The bargaining power of the typical blue-collar worker eroded as technology and globalisation gave companies a whole toolkit of ways to squeeze labour costs. At the same time, the welfare state ceased its expansion and began to retreat. The income gains that might have gone to workers, and might have kept living standards rising even as hours fell, flowed instead to those at the top of the income ladder. Willingly or unwillingly, those lower down the ladder worked fewer and fewer hours. Those at the top, meanwhile, worked longer and longer.
Technology and globalisation mean that an increasing number of good jobs are winner-take-most competitions. Banks and law firms amass extraordinary financial returns, directors and partners within those firms make colossal salaries, and the route to those coveted positions lies through years of round-the-clock work. The number of firms with global reach, and of tech start-ups that dominate a market niche, is limited. Securing a place near the top of the income spectrum in such a firm, and remaining in it, is a matter of constant struggle and competition. This relentless competition increases the need to earn high salaries, for as well-paid people cluster together they bid up the price of the resources for which they compete. The dollars and hours pile up as we aim for a good life that always stays just out of reach. In moments of exhaustion we imagine simpler lives in smaller towns with more hours free for family and hobbies and ourselves. Given this picture, the obvious conclusion would be that overworked professionals are all miserable. On the contrary, the author argues, they are not. For his father’s generation, work was a means to an end; it was something you did to earn the money to pay for the important things in life. Life was what happened outside work, in the community.
As professional life has evolved over the past generation, it has become much more pleasant. Software and information technology have eliminated much of the drudgery of the workplace. He says, the pleasure lies partly in flow, in the process of losing oneself in a puzzle with a solution on which other people depend. The sense of purposeful immersion and exertion is the more appealing given the hands-on nature of the work: top professionals are the master craftsmen of the age, shaping high-quality, bespoke products from beginning to end. He says, the fact that our jobs now follow us around is not necessarily a bad thing, either. Workers in cognitively demanding fields, thinking their way through tricky challenges, have always done so at odd hours. Academics in the midst of important research, or admen cooking up a new creative campaign, have always turned over the big questions in their heads while showering in the morning or gardening on a weekend afternoon. If more people find their brains constantly and profitably engaged, so much the better.
Also, he says that the ‘simpler’ life away in the hills is not what it once was. He recollects how, in his childhood, the neighbourhood was rich with social interaction. Those elements of life persist, of course, but they are somewhat diminished, as Robert Putnam, a social scientist, observed in 1995 in “Bowling Alone: America’s Declining Social Capital”. He described the shrivelling of civic institutions, which he blamed on many of the forces that coincided with, and contributed to, our changing relationship to work: the entry of women into the workforce; the rise of professional ghettoes; longer working hours. Our professional and personal lives are intertwined more than ever with social networks made up not just of neighbours and friends, but also of clients and colleagues. There is a psychic value to the intertwining of life and work as well as an economic one – the society of people like us reinforces our belief in what we do. Working effectively at a good job builds up our identity and esteem in the eyes of others. However, this life has its impositions. It makes failure or error a more difficult, humiliating experience. Social life ceases to be a refuge from the indignities of work. The sincerity of relationships becomes questionable when people are friends of convenience.
7) The dangers of India’s Billionaire Raj [Source: Livemint]
James Crabtree, until recently the Financial Times’ man in Mumbai, has written a critique of India’s economic model in the wake of a paper published by the celebrated French economist Thomas Piketty and his co-author Lucas Chancel. Piketty & Chancel show that the share of national income taken by the top 1% of Indian income earners is now at its highest level since records began, when the British Raj started collecting income tax records in 1922. Piketty’s new Indian data suggests a pattern that is worryingly familiar. In the West, the relative wealth of the ultra-rich dipped in the mid-20th century before bouncing back in the last two decades. India now shows the same trend, albeit mostly for different reasons. The IMF’s research last year showed that India, alongside China, is the most unequal major economy in Asia. Harvard’s Michael Walton has shown that India has an unusually high proportion of national wealth held by its swelling ranks of billionaires. Piketty’s paper broadly supports this view, showing that the share of income held by the “0.001%” has also increased rapidly. The paper’s subtitle poses a question: Is India becoming a “Billionaire Raj”? The evidence for this is now overwhelming. James Crabtree poses two questions: why does it matter, and what can be done about it?
Over recent decades, India laboured under the misapprehension that it was an egalitarian nation. This was partly a hangover from the socialist era, when the rich still lived modestly by global standards. There were no Indians on Forbes’ annual billionaire rankings until the mid-1990s (now there are well over 100, more than in any other country bar America, China and Russia). There were methodological issues too, namely that research often focused on consumption rather than income or wealth, giving a false picture of inequality. Beneath this there lay a peculiar intellectual consensus. On the right, thinkers like the economist Jagdish Bhagwati argued that rapid growth mattered more than its distribution. But even on the left, Bhagwati’s rival Amartya Sen focused more on conditions at the bottom, and the fact that economic expansion had failed to boost indicators of human development. For both, the gap between rich and poor was a secondary concern.
There was a logic to Sen’s argument. Almost all successful economies in East Asia have grown rich by investing heavily in basic health and education, which helps poorer workers to move from farms to factories. Modern India more often looks like a Latin American economy, with a weak social safety net and yawning inequality. There are good reasons to be worried about this gap too. Mainstream economists often used to be relaxed about inequality, arguing that it at least did little to harm growth. But recent research has overturned this consensus, showing that unequal nations tend to grow more slowly and are more prone to financial instability. Unequal countries also find it harder to form the kind of social consensus needed for structural economic reforms, a point made by Harvard’s Dani Rodrik.
The reasons for Indian inequality are complicated. Some of it stems from positive factors linked to liberalisation, like entrepreneurs building large companies linked to global markets. Factors such as rising urbanisation and increasing returns to education also play a role. Still, India appears to be growing unequal more quickly and more starkly than most, a trend that will be hard to reverse later. It also leaves a dilemma. The gap between rich and poor is likely to grow if Modi ever succeeds in his ambition of hitting double-digit growth rates. Certainly, this was what happened during the 2007 boom, a period when India’s billionaire wealth rivalled Russia’s, and Reliance Industries’ chairman Mukesh Ambani was briefly thought to be the richest man in the world.
Fixing this problem so that growth is more broadly shared will be complicated. But there are obvious places to start, not least tax collection, in a country where an improbably tiny 48,000 people admitted to earning more than Rs1 crore in 2015. Beyond this, a far more radical agenda is needed: improving basic social services at the bottom, while using competition policy and regulation to stamp out crony capitalism and entrenched corporate power at the top.
8) The disturbing secret behind the world’s most expensive coffee [Source: National Geographic]
The world’s most expensive coffee is made from coffee beans that are partially digested and then pooped out by the civet, a catlike creature. A cup of kopi luwak, as it’s known, can sell for as much as $80 in the United States. Found in Southeast Asia and sub-Saharan Africa, the civet has a long tail like a monkey, face markings like a raccoon, and stripes or spots on its body. It plays an important role in the food chain, eating insects and small reptiles in addition to fruits like coffee cherries and mangoes, and being eaten in turn by leopards, large snakes, and crocodiles. At first the civet coffee trade boded well for these creatures. In Indonesia, the Asian palm civet, which raids commercial fruit farms, is often seen as a pest, so the growth in the kopi luwak industry encouraged local people to protect civets for their valuable dung. Their digestive enzymes change the structure of proteins in the coffee beans, which removes some of the acidity to make a smoother cup of coffee. But as civet coffee has gained popularity, and with Indonesia growing as a tourist destination where visitors want to see and interact with wildlife, more wild civets are being confined to cages on coffee plantations. In part, this is for coffee production, but it’s also so money can be made from civet-ogling tourists.
Civet dung, studded with partially digested coffee beans, used to be collected from the wild. Increasingly, civets are instead kept in cramped, unsanitary cages on coffee plantations. A BBC undercover investigation revealed in 2013 how coffee from caged civets in inhumane conditions ends up labeled as wild civet coffee in Europe.
Researchers from Oxford University’s Wildlife Conservation Research Unit and the London-based nonprofit World Animal Protection assessed the living conditions of nearly 50 wild civets held in cages at 16 plantations on Bali. The results, published in the journal Animal Welfare, paint a grim picture. From the size and sanitation of the cages to the ability of their occupants to act like normal civets, every plantation the researchers visited failed basic animal welfare requirements. Some of the civets were very thin, from being fed a restricted diet of only coffee cherries—the fruit that surrounds the coffee bean. Some were obese, from never being able to move around freely. And some were jacked up on caffeine. But what the researchers found most disturbing was the wire floor many of the animals were forced to stand, sit, and sleep on around the clock. Additionally, many of the civets had no access to clean water and no opportunity to interact with other civets. And they were exposed to daytime noise from traffic and tourists, which is particularly disturbing for these nocturnal animals.
All of this for a luxury item and seemingly a second-rate one, at that. Part of what makes kopi luwak so special, experts say, is that wild civets pick and choose the choicest coffee cherries to eat. Keeping civets in cages and feeding them any old cherries leads to an inferior product. Unfortunately, there’s now no way to tell whether a bag of kopi luwak was made from wild or caged civets.
9) What we learned from 5 million books – a look at Google’s Ngram viewer [Source: TED]
Two American researchers, Erez Lieberman Aiden and Jean-Baptiste Michel, open this TED talk with a startling conclusion: a picture is not worth a thousand words. In fact, they found some pictures that are worth 500 billion words. How did they get to this conclusion? They use Google’s Ngram viewer to cut across almost five million books – that is 500 billion words, a string of characters a thousand times longer than the human genome. Their aim was to compile statistics about the books. Take, for instance, “A gleam of happiness.” It’s four words (they call it a four-gram). They wanted to know how many times this particular four-gram appeared in books in 1801, 1802, 1803, all the way up to 2008. That gives a time series of how frequently the phrase was used over time. Doing this for all the words and phrases that appear in those books gave them a big table of two billion lines that tells us about the way culture has been changing. For instance, they showed how usage of the word ‘influenza’ peaked at times of big flu epidemics around the globe.
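The frequency counting the researchers describe is conceptually simple. The sketch below illustrates it on a tiny hypothetical corpus (the real Ngram data, of course, spans millions of books and is not reproduced here):

```python
def phrase_counts_by_year(books_by_year, phrase):
    """For each year, count how often a phrase (e.g. a four-gram)
    appears across all texts published that year."""
    phrase = phrase.lower()
    return {
        year: sum(text.lower().count(phrase) for text in texts)
        for year, texts in books_by_year.items()
    }

# Toy two-year "corpus" for illustration only
corpus = {
    1801: ["A gleam of happiness crossed his face."],
    1802: ["No gleam of happiness here.",
           "A gleam of happiness at last."],
}

series = phrase_counts_by_year(corpus, "gleam of happiness")
# series is a time series: {1801: 1, 1802: 2}
```

Plotting such a series over two centuries is exactly what the Ngram viewer does for any phrase.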
They also explored abstract concepts using Ngram. For instance, usage of the phrase ‘1950’ follows a peculiar pattern. For the vast majority of history, no one gave a damn about 1950. In the mid-40s, though, there started to be a buzz. People realised that 1950 was going to happen, and it could be big. But nothing got people interested in 1950 like the year 1950 itself. People were walking around obsessed. They couldn’t stop talking about all the things they did in 1950, all the things they were planning to do in 1950, all the dreams of what they wanted to accomplish in 1950. In fact, 1950 was so fascinating that for years thereafter, people just kept talking about all the amazing things that had happened, in ’51, ’52, ’53. Finally, in 1954, someone woke up and realised that 1950 had become somewhat passé. And just like that, the bubble burst. There have been other fads of this sort, and what the researchers found is that such bubbles burst faster and faster with each passing year. We are losing interest in the past more rapidly.
They also point towards some sobering results. For instance, the trajectory of Marc Chagall, an artist born in 1887, shows him becoming more and more famous, following the trajectory of any ordinarily famous person—except in German. If you look in German books, you see something completely bizarre, something you pretty much never see: he becomes extremely famous and then all of a sudden plummets, going through a nadir between 1933 and 1945, before rebounding afterward. And of course, what we are seeing is the fact that Marc Chagall was a Jewish artist in Nazi Germany. Such examples can be used to detect censorship using basic signal processing. Here’s a simple way to do it. They constructed a ‘suppression index’ by dividing a person’s actual fame by the expected fame (the average of their fame before and their fame after the period in question). If the suppression index is very, very small, the person may well have been suppressed. If it’s very large, they may have benefited from propaganda.
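The suppression index described above is a simple ratio, sketched below with hypothetical mention counts (the figures are illustrative, not from the researchers’ actual data):

```python
def suppression_index(fame_before, fame_during, fame_after):
    """Ratio of observed fame during a period to the fame expected
    from the periods before and after (their simple average).
    Values far below 1 hint at suppression; far above 1, at propaganda."""
    expected = (fame_before + fame_after) / 2
    return fame_during / expected

# Hypothetical yearly mention counts for a censored figure:
# 100 mentions/year before, 10 during the censored period, 120 after.
idx = suppression_index(fame_before=100, fame_during=10, fame_after=120)
# idx is about 0.09 -- far below 1, consistent with suppression
```

A value near 1 would mean the person’s fame during the period matched expectations, i.e. no anomaly to explain.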
10) Google invites users to check if you’re clinically depressed [Source: Financial Times]
Google has changed the way it responds to people who are searching for information on depression, and will now invite US users to “check if you’re clinically depressed” using a clinically validated screening questionnaire. Although nearly one in five US adults experiences mental illness in a given year, only 41% receive treatment. The intervention comes as people increasingly seek medical advice online: Google says one in 20 searches are health-related. It is also the latest public move by a technology business to take greater responsibility for content that users see on its platform, after criticism that companies such as Facebook and Google failed to help people distinguish verified from false information.
Facebook chief executive Mark Zuckerberg announced in March that the social network was experimenting with artificial intelligence to identify people experiencing suicidal thoughts and offer help. Google’s screening test is part of a wider effort by the company to make reliable health information readily available to its users. Google has a team, including a doctor, devoted to working on sensitive health-related searches. The company last week introduced a location-specific pollen counter to help allergy sufferers, and in 2016 launched a Body-Mass Index calculator.
A box of verified information about symptoms and treatments for clinical depression already tops US Google search results for “depression” or queries such as “do I have depression”. Google does this for other common conditions, including flu and tonsillitis, and symptoms such as headaches, using information provided by the Mayo Clinic. But for depression it has added a link inviting users to “check if you’re clinically depressed”. This takes searchers to a questionnaire widely used by doctors to measure levels of depressive symptoms. People who complete the test get a score indicating the severity of their symptoms, which can aid a physician’s diagnosis. Google said questionnaire responses would not be recorded or stored and it would not target advertising at users based on their answers.
– Saurabh Mukherjea is CEO (Institutional Equities) and Prashant Mittal is Analyst (Strategy and Derivatives) at Ambit Capital Pvt Ltd. Views expressed are personal.