
Physicist, Startup Founder, Blogger, Dad

Thursday, December 14, 2017

100 Billionaires In Beijing Alone



Real talk from former Australian Prime Minister Paul Keating on the strategic outlook for Australia in Asia, the rise of China, and the likely future military balance of power in the Pacific region.

More from the Australian strategic viewpoint. Balance of power in the Western Pacific.

From the YouTube transcript:
29:18 [Eventually... Total] Chinese GDP is twice as large as America's, so the idea that this great massive economy is going to be a strategic client of the United States, that they are kept in line by the US 7th Fleet, that the US 7th Fleet controls its coasts six miles off the ... territorial sea, is of course nonsense. But this is what the Pivot was all about. This is what Hillary Clinton and Barack Obama's Pivot was all about: the reestablishment of US power...

... you know, it's simply unreal, and if we try to remain party to that piece of nonsense, you know... That's not to say we don't need the US strategically in Asia as a balancing and conciliating power, we do. But if we are party to the nonsense that we will line up for the United States to maintain its strategic hegemony in Asia over China, we must have troubles...

Wednesday, December 13, 2017

Nature, Nurture, and Invention: analysis of Finnish data



What is the dominant causal mechanism for the results shown above? Is it that better family environments experienced by affluent children make them more likely to invent later in life? Is it that higher income fathers tend to pass on better genes (e.g., for cognitive ability) to their children? Obviously the explanation has important implications for social policy and for models of how the world works.

The authors of the paper below have access to patent, income, education, and military IQ records in Finland. (All males are subject to conscription.) By looking at brothers who are close in age but differ in IQ score, they can estimate the relative importance of common family environment (such as family income level or parental education level, which affect both brothers) versus the IQ difference itself. Their results suggest that cognitive ability has a stronger effect than shared family environment. Again, if one just looks at probability of invention versus family income or SES (see graph), one might mistakenly conclude that family environment is the main cause of increased likelihood of earning a patent later in life. In fact, higher family SES is also correlated with superior genetic endowments that can be passed on to the children.
The Social Origins of Inventors
Philippe Aghion, Ufuk Akcigit, Ari Hyytinen, Otto Toivanen
NBER Working Paper No. 24110
December 2017

In this paper we merge three datasets - individual income data, patenting data, and IQ data - to analyze the determinants of an individual's probability of inventing. We find that: (i) parental income matters even after controlling for other background variables and for IQ, yet the estimated impact of parental income is greatly diminished once parental education and the individual's IQ are controlled for; (ii) IQ has both a direct effect on the probability of inventing and an indirect impact through education. The effect of IQ is larger for inventors than for medical doctors or lawyers. The impact of IQ is robust to controlling for unobserved family characteristics by focusing on potential inventors with brothers close in age. We also provide evidence on the importance of social family interactions, by looking at biological versus non-biological parents. Finally, we find a positive and significant interaction effect between IQ and father income, which suggests a misallocation of talents to innovation.
From the paper:
... IQ has both a direct effect on the probability of inventing which is almost five times as large as that of having a high-income father, and an indirect effect through education ...

... an R-squared decomposition shows that IQ matters more than all family background variables combined; moreover, IQ has both a direct and an indirect impact through education on the probability of inventing, and finally the impact of IQ is larger and more convex for inventors than for medical doctors or lawyers. Third, to address the potential endogeneity of IQ, we focused on potential inventors with brothers close in age. This allowed us to control for family-specific time-invariant unobservables. We showed that the effect of visuospatial IQ on the probability of inventing is maintained when adding these controls.

More on the close brothers analysis (p.24).
We look at the effect of an IQ differential between the individual and close brother(s) born at most three years apart. This allows us to include family fixed effects and thereby control for family-level time-invariant unobservables, such as genes shared by siblings, parenting style, and fixed family resources. Table 4 shows the results from the regression with family fixed effects. The first column shows the baseline OLS results using the sample on brothers born at most three years apart. Notice that we include a dummy for the individual being the first born son in the family to account for birth-order effects. The second column shows the results from a regression where we introduce family fixed effects. We lose other parental characteristics than income due to their time-invariant nature. The main finding in Table 4 is that the coefficients on "IQ 91-95" and "IQ 96-100" [ these are percentiles, not IQ scores ] in Column 2 (i.e. when we perform the regression with family fixed effects) are the same as in the OLS Column 1. This suggests that these coefficients capture an effect of IQ on the probability of inventing which is largely independent of unobserved family background characteristics, as otherwise the OLS coefficients would be biased and different from the fixed effects estimates.
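To make the Column 1 vs. Column 2 comparison concrete, here is a minimal sketch in Python of the two regressions described above (pooled OLS versus family fixed effects). The file name and column names are hypothetical placeholders for illustration only, not the authors' actual variables or code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per individual, restricted to brothers born
# at most three years apart. All column names are illustrative.
#   inventor    - 1 if the individual is listed on a patent, else 0
#   iq_91_95    - dummy for IQ in the 91st-95th percentile
#   iq_96_100   - dummy for IQ in the 96th-100th percentile
#   first_born  - dummy for the first-born son (birth-order control)
#   family_id   - identifier shared by brothers
df = pd.read_csv("brothers.csv")

# Column 1 analogue: pooled OLS (linear probability model), no family effects.
ols = smf.ols("inventor ~ iq_91_95 + iq_96_100 + first_born", data=df).fit()

# Column 2 analogue: family fixed effects as family dummies. Anything constant
# within a family (parental education, shared genes, parenting style) is
# absorbed by the fixed effect.
fe = smf.ols("inventor ~ iq_91_95 + iq_96_100 + first_born + C(family_id)",
             data=df).fit()

# The paper's finding corresponds to these two sets of IQ coefficients
# being roughly equal.
print(ols.params[["iq_91_95", "iq_96_100"]])
print(fe.params[["iq_91_95", "iq_96_100"]])
```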

Note Added: Finland is generally more egalitarian than the US, both in terms of wealth distribution and access to education. But the probability of invention vs family income graph is qualitatively similar in both countries (see Fig 1 in the paper). The figure below is from recent US data; compare to the Finland figure at top.


Thanks to some discussion (see comments) I noticed that in the Finnish data the probability of invention seems to saturate at high incomes (see top figure, red circle), whereas it continues to rise strongly at top IQ scores (middle figure above; also perhaps in the US data above?). It would be interesting to explore this in more detail...

Friday, December 08, 2017

Recursive Cortical Networks: data efficient computer vision



Will knowledge from neuroscience inform the design of better AIs (neural nets)? These results from startup Vicarious AI suggest that the answer is yes! (See also this company blog post describing the research.)

It has often been remarked that evolved biological systems (e.g., a baby) can learn much faster and using much less data than existing artificial neural nets. Significant improvements in AI are almost certainly within reach...

Thanks to reader and former UO Physics colleague Raghuveer Parthasarathy for a pointer to this paper!
A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs

Science 08 Dec 2017: Vol. 358, Issue 6368, eaag2612
DOI: 10.1126/science.aag2612

INTRODUCTION
Compositionality, generalization, and learning from a few examples are among the hallmarks of human intelligence. CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart), images used by websites to block automated interactions, are examples of problems that are easy for people but difficult for computers. CAPTCHAs add clutter and crowd letters together to create a chicken-and-egg problem for algorithmic classifiers—the classifiers work well for characters that have been segmented out, but segmenting requires an understanding of the characters, which may be rendered in a combinatorial number of ways. CAPTCHAs also demonstrate human data efficiency: A recent deep-learning approach for parsing one specific CAPTCHA style required millions of labeled examples, whereas humans solve new styles without explicit training.

By drawing inspiration from systems neuroscience, we introduce recursive cortical network (RCN), a probabilistic generative model for vision in which message-passing–based inference handles recognition, segmentation, and reasoning in a unified manner. RCN learns with very little training data and fundamentally breaks the defense of modern text-based CAPTCHAs by generatively segmenting characters. In addition, RCN outperforms deep neural networks on a variety of benchmarks while being orders of magnitude more data-efficient.

RATIONALE
Modern deep neural networks resemble the feed-forward hierarchy of simple and complex cells in the neocortex. Neuroscience has postulated computational roles for lateral and feedback connections, segregated contour and surface representations, and border-ownership coding observed in the visual cortex, yet these features are not commonly used by deep neural nets. We hypothesized that systematically incorporating these findings into a new model could lead to higher data efficiency and generalization. Structured probabilistic models provide a natural framework for incorporating prior knowledge, and belief propagation (BP) is an inference algorithm that can match the cortical computational speed. The representational choices in RCN were determined by investigating the computational underpinnings of neuroscience data under the constraint that accurate inference should be possible using BP.

RESULTS
RCN was effective in breaking a wide variety of CAPTCHAs with very little training data and without using CAPTCHA-specific heuristics. By comparison, a convolutional neural network required a 50,000-fold larger training set and was less robust to perturbations to the input. Similar results are shown on one- and few-shot MNIST (modified National Institute of Standards and Technology handwritten digit data set) classification, where RCN was significantly more robust to clutter introduced during testing. As a generative model, RCN outperformed neural network models when tested on noisy and cluttered examples and generated realistic samples from one-shot training of handwritten characters. RCN also proved to be effective at an occlusion reasoning task that required identifying the precise relationships between characters at multiple points of overlap. On a standard benchmark for parsing text in natural scenes, RCN outperformed state-of-the-art deep-learning methods while requiring 300-fold less training data.

CONCLUSION
Our work demonstrates that structured probabilistic models that incorporate inductive biases from neuroscience can lead to robust, generalizable machine learning models that learn with high data efficiency. In addition, our model’s effectiveness in breaking text-based CAPTCHAs with very little training data suggests that websites should seek more robust mechanisms for detecting automated interactions.
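The "message-passing–based inference" above refers to belief propagation. As a toy illustration of that machinery only (this is in no way the RCN model itself), here is sum-product message passing on a three-variable chain, where BP returns exact marginals:

```python
import numpy as np

# Three binary variables in a chain x1 - x2 - x3 with unary evidence phi
# and a shared pairwise potential psi. On a tree, sum-product BP is exact.
psi = np.array([[1.0, 0.5],
                [0.5, 1.0]])                 # pairwise compatibility
phi = [np.array([0.7, 0.3]),                 # noisy evidence at each node
       np.array([0.5, 0.5]),
       np.array([0.2, 0.8])]

# Messages flowing toward x2 from both ends of the chain.
m_1_to_2 = psi.T @ phi[0]                    # sum over x1 of phi1(x1) psi(x1, x2)
m_3_to_2 = psi @ phi[2]                      # sum over x3 of psi(x2, x3) phi3(x3)

# Belief at x2 = local evidence times incoming messages, normalized.
belief = phi[1] * m_1_to_2 * m_3_to_2
belief /= belief.sum()
print(belief)                                # exact marginal p(x2 | evidence)
```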

Wednesday, December 06, 2017

AlphaZero: learns via self-play, surpasses best humans and machines at chess


AlphaZero taught itself chess through 4 hours of self-play, surpassing the best humans and the best (old-style) chess programs in the world.
Chess24: 20 years after Deep Blue defeated Garry Kasparov in a match, chess players have awoken to a new revolution. The AlphaZero algorithm developed by Google and DeepMind took just four hours of playing against itself to synthesise the chess knowledge of one and a half millennia and reach a level where it not only surpassed humans but crushed the reigning World Computer Champion Stockfish 28 wins to 0 in a 100-game match. All the brilliant stratagems and refinements that human programmers used to build chess engines have been outdone, and like Go players we can only marvel at a wholly new approach to the game. ...
ArXiv preprint:
Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm

The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.
Excerpt:
Finally, we analysed the chess knowledge discovered by AlphaZero. Table 2 analyses the most common human openings (those played more than 100,000 times in an online database of human chess games (1)). Each of these openings is independently discovered and played frequently by AlphaZero during self-play training. When starting from each human opening, AlphaZero convincingly defeated Stockfish, suggesting that it has indeed mastered a wide spectrum of chess play.

Tuesday, December 05, 2017

How Europe lost its tech companies



Some perspectives from a Berlin tech guy who has also worked in China.

To some extent Europe is like the Midwest of the US: a source of human capital for SV and other places. Europe and the Midwest have strong universities and produce talented individuals, but lack a mature tech ecosystem which includes access to venture funding, exits (acquisition by big established companies), and a culture of risk taking and innovation.

See also The next Silicon Valley? (another German guy):
My meeting in Beijing with Hugo Barra, who runs all international expansion for Xiaomi — the cool smartphone maker and highest-valued startup in China, at around $45 billion or so — was scheduled for 11 pm, but got delayed because of other meetings, so it started at midnight. (Hugo had a flight to catch at 6:30 am after that.)

In China, there is a company work culture at startups that's called 9/9/6. It means that regular work hours for most employees are from 9 am to 9 pm, six days a week. If you thought Silicon Valley has intense work hours, think again.

For founders and top executives, it's often 9/11/6.5. That's probably not very efficient and useful (who's good as a leader when they're always tired and don't know their kids?) but totally common.

Teams get locked up in hotels for weeks before a product launch, where they only work, sleep and work out, to drive 100 percent focus without distractions and make the launch date. And while I don't think long hours are any measure of productivity, I was amazed by the enormous hunger and drive. ...

Sunday, December 03, 2017

Big Ed


Today I came across a recent interview with Ed Witten in Quanta Magazine. The article has some nice photos like the one above. I was struck by the following quote from Witten ("It from Qubit!"):
When I was a beginning grad student, they had a series of lectures by faculty members to the new students about theoretical research, and one of the people who gave such a lecture was Wheeler. He drew a picture on the blackboard of the universe visualized as an eye looking at itself. I had no idea what he was talking about. It’s obvious to me in hindsight that he was explaining what it meant to talk about quantum mechanics when the observer is part of the quantum system. I imagine there is something we don’t understand about that.  [ Italics mine ]
The picture he refers to is reproduced below.


This question has been of interest to me since I was first exposed to quantum mechanics, although I put it off for a long time because quantum foundations is not considered a respectable area by most physicists! Of course it should be obvious that if quantum mechanics is to be a universal theory of nature, then observers like ourselves can't help but be part of the (big) quantum system.

See related posts Feynman and Everett, Schwinger on Quantum Foundations, Gell-Mann on Quantum Foundations, and Weinberg on Quantum Foundations.

Here's a similar figure, meant to represent the perspective of an observer inside the wavefunction of the universe (which evolves deterministically and unitarily; the degrees of freedom of the observer's mind are part of the Hilbert space of Psi; time runs vertically and Psi evolves into exp(-iHT) Psi while we are "inside" :-). The figure was drawn on the whiteboard of my University of Oregon office and persisted there for a year or more. I doubt any visitors (other than perhaps one special grad student) understood what it was about.
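In symbols (my shorthand for the whiteboard picture, nothing more): the state of the whole system, observer included, evolves unitarily,

$$|\Psi(t)\rangle \;=\; e^{-iHt}\,|\Psi(0)\rangle \;=\; \sum_i c_i(t)\,\big|\,\text{observer records outcome } i\,\big\rangle \otimes \big|\,\text{environment}_i\,\big\rangle ,$$

and the puzzle is why an observer inside one of the branches should assign probability $|c_i|^2$ to it, given that the evolution above is completely deterministic.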



For some powerful Witten anecdotes like the one below, see here. (If you don't know who Ed Witten is this should clarify things a bit!)
I met him in Boston in 1977, when I was getting interested in the connection between physics and mathematics. I attended a meeting, and there was this young chap with the older guys. We started talking, and after a few minutes I realized that the younger guy was much smarter than the old guys. He understood all the mathematics I was talking about, so I started paying attention to him. That was Witten. And I’ve kept in touch with him ever since.

In 2001, he invited me to Caltech, where he was a visiting professor. I felt like a graduate student again. Every morning I would walk into the department, I’d go to see Witten, and we’d talk for an hour or so. He’d give me my homework. I’d go away and spend the next 23 hours trying to catch up. Meanwhile, he’d go off and do half a dozen other things. We had a very intense collaboration. It was an incredible experience because it was like working with a brilliant supervisor. I mean, he knew all the answers before I got them. If we ever argued, he was right and I was wrong. It was embarrassing!

(Fields Medalist Michael Atiyah, on what it was like to collaborate with Witten)
The closest thing I have read to a personal intellectual history of Witten is his essay Adventures in Physics and Math, which I highly recommend. The essay addresses some common questions, such as What was Ed like as a kid? How did he choose a career in Physics? How does he know so much Mathematics? For example,
At about age 11, I was presented with some relatively advanced math books. My father is a theoretical physicist and he introduced me to calculus. For a while, math was my passion. My parents, however, were reluctant to push me too far, too fast with math (as they saw it) and so it was a long time after that before I was exposed to any math that was really more advanced than basic calculus. I am not sure in hindsight whether their attitude was best or not.
A great video, suggested by a commenter:

Thursday, November 30, 2017

CMSE (Computational Mathematics, Science and Engineering) at MSU



At Oregon I was part of an interdisciplinary institute that included theoretical physicists and chemists, mathematicians, and computer scientists. We tried to create a program (not even a new department, just an interdisciplinary program) in applied math and computation, but failed due to lack of support from higher administration. When I arrived at MSU as VPR I learned that the faculty here had formulated a similar plan for a new department. Together with the Engineering dean and the Natural Sciences dean we pushed it through and created an entirely new department in just a few years. This new department already has a research ranking among the top 10 in the US (according to Academic Analytics).

Computational Mathematics, Science and Engineering at MSU.


IQ (Institute for Quantitative Health Science and Engineering) at MSU



Chris Contag is the founding director of the Institute for Quantitative Health Science and Engineering and the chairperson of the new Department of Biomedical Engineering in the College of Engineering.

Contag was previously a professor in the Departments of Pediatrics, Radiology, Bioengineering and Microbiology and Immunology at Stanford University. He held the titles of associate chief of Neonatal and Developmental Medicine, director of Stanford’s Center for Innovation in In Vivo Imaging and co-director of the Molecular Imaging Program. Among the new faculty recruited to IQ are researchers previously on the faculties at Stanford, Harvard, and Johns Hopkins University.

Below are some photos from the annual progress report meeting I attended yesterday.






Monday, November 27, 2017

The nuclear physics of neutron star mergers at MSU's FRIB


Science reports on MSU's Facility for Rare Isotope Beams, which will probe the properties of nuclear matter.
Science: Last month, astronomers wowed the world when they announced that they had seen two neutron stars merge, apparently creating heavy elements such as gold and platinum and spewing them into space. Nuclear physicists here at Michigan State University (MSU) also cheered the find. They are building an atom smasher, the $730 million Facility for Rare Isotope Beams (FRIB), that could decipher exactly how those elements were forged in the inferno. “We were hoping to see an event like this someday,” says Witold Nazarewicz, an MSU theorist and FRIB's chief scientist.

First proposed in 1999, the project didn't get the greenlight for construction from the Department of Energy (DOE) until 2014. But since then, progress has been rapid. In what was a grassy patch behind MSU's existing nuclear physics laboratory now stands an imposing 200-meter-long building. In its basement, technicians are installing the first section of FRIB's 500-meter-long linear accelerator, which will shoot beams of nuclei ranging from hydrogen to uranium into a graphite target to blast out short-lived new isotopes. In this context, isotope is just another word for nucleus—one that makes for a better acronym.

The accelerator at the Facility for Rare Isotope Beams will create short-lived nuclei thought to be forged in neutron star mergers.

The project is on budget and ahead of schedule, and most of the major technological puzzles have been solved, says Thomas Glasmacher, FRIB's project director. “We don't have anything that we don't know how to do,” he says. Other physicists are impressed with the progress. “The moment they could, they ran with this project,” says Kate Jones, an experimental nuclear physicist at the University of Tennessee in Knoxville. “It's very impressive when you look down in the basement and see all the kit they've got.”

FRIB's nuclei will be key to understanding how neutron-star mergers make heavy elements. Spotted by gravitational-wave detectors in the United States and Italy and telescopes around the world, the violent collision produced an afterglow that over days faded from bright blue to dimmer red (Science, 20 October, p. 282). The light show jibed with astrophysicists' model of a so-called kilonova, in which the disintegrating neutron stars fling neutron-rich matter into space. The model predicts that in the debris, a chain of nuclear interactions known as the rapid neutron process, or r-process, quickly generates most of the elements heavier than iron. (Other elements emerge from supernova explosions and the deaths of smaller stars, from cosmic ray interactions, and also as leftovers lingering from the big bang.)

For astrophysicists, the observation marked a triumph for the kilonova model. For nuclear physicists, it's just the beginning. In the r-process, a nucleus gains weight by gobbling up one neutron after another. At the same time, the nucleus can change its chemical identity through radioactive beta decay, which turns a neutron into a proton and bumps the nucleus up the periodic table of elements. Exactly how the nucleus evolves depends on the speed of the decay and the probability that it will soak up another neutron.

Those parameters are poorly known. “Honestly, the nuclear physics is not in good shape,” says MSU nuclear astrophysicist Hendrik Schatz. “Most of the nuclei involved have not been identified and the theory has not been developed.” FRIB aims to change that by making as many of the neutron-laden nuclei as possible and measuring their masses and lifetimes. That might seem like a hopeless task, as the r-process involves dozens of intermediate nuclei. However, only a few key nuclei—the slowest decayers and absorbers—should act as bottlenecks to control the process and determine which elements are made in greatest abundance, Schatz explains.

Such data would better constrain models of heavy element production in neutron star mergers. The abundances could then be compared with those observed in the universe to determine whether merging neutron stars are the only astrophysical sites of the r-process, Jones says. Many astrophysicists have suggested as much, but that's a leap, she says. “It's very easy to say, ‘Oh, we've found the site for the r-process—Well done!’ In reality this is just opening a door.”

The 1400 physicists who have signed up to use FRIB will perform many other experiments, ranging from trapping and measuring the properties of a single exotic nucleus, to measuring a hail of novel nuclei scattering off a particular target nucleus. Data from the experiments will feed into a more comprehensive theory of the structure of the nucleus, Nazarewicz says. Physicists already have a fundamental theory of the innards of protons and neutrons, particles called quarks and gluons, and how they interact. But using that theory, known as quantum chromodynamics, to predict nuclear structure is effectively impossible: It is so computationally complex that supercomputers are needed just to simulate the proton and neutron.

To model the behavior of nuclei, theorists now rely on various approximate “effective theories” that work for some nuclei but not others. FRIB's grandest goal, Nazarewicz says, is to develop a deeper understanding that will enable theorists to weave these disparate and sometimes discordant theories together into a coherent whole.

First, researchers have to finish the accelerator. In September, they fired a test beam through its first section, made of copper cavities that work at room temperature. They are now installing the main accelerating modules, which are made of superconducting niobium and must be chilled with liquid helium to 2 K. Researchers hope to send beams through the cold accelerator next year. In 2021, they plan to tear down a wall and connect the finished accelerator to the existing lab so that new experiments can begin. ...
Note this is an old schematic, from 2008 or so.

Remarks on the Decline of American Empire



Some gloomy remarks on the decline of the American Empire.

1. US foreign policy over the last decades has been disastrous -- trillions of dollars and thousands of lives expended on Middle Eastern wars, culminating in utter defeat. This defeat is still not acknowledged among most of the media or what passes for intelligentsia in academia and policy circles, but defeat it is. Iran now exerts significant control over Iraq and a swath of land running from the Persian Gulf to the Mediterranean. None of the goals of our costly intervention have been achieved. We are exhausted morally, financially, and militarily, and still have not fully extricated ourselves from a useless morass. George W. Bush should go down in history as the worst US President of the modern era.

2. We are fortunate that the fracking revolution may lead to US independence from Middle Eastern energy. But policy elites have to fully recognize this possibility and pivot our strategy to reflect the decreased importance of the region. The fracking revolution is a consequence of basic research from decades ago (including investment from the Department of Energy) and the work of private sector innovators and risk-takers.

3. US budget deficits are a ticking time bomb, which cripple investment in basic infrastructure and also in research that creates strategically important new technologies like AI. US research spending has been roughly flat in inflation adjusted dollars over the last 20 years, declining as a fraction of GDP.

4. Divisive identity politics and demographic trends in the US will continue to undermine political cohesion and overall effectiveness of our institutions. ("Civilizational decline," as one leading theoretical physicist observed to me recently, remarking on our current inability to take on big science projects.)

5. The Chinese have almost entirely closed the technology gap with the West, and dominate important areas of manufacturing. It seems very likely that their economy will eventually become significantly larger than the US economy. This is the world that strategists have to prepare for. Wars involving religious fanatics in unimportant regions of the world should not distract us from a possible future conflict with a peer competitor that threatens to match or exceed our economic, technological, and even military capability.

However, I'm not sure that OBOR (One Belt One Road) and a focus on the "world island" of Eurasia will be a winning strategy for China. Mackinder's dream of a unified or even fully economically integrated world island will have to overcome the limitations (in human capital, institutions, culture, etc.) of the under-developed middle...


More McCoy and Mackinder. The RAND study on war with China that McCoy mentions in the video above is linked here.

See also The Stages of Empire:
The empires Glubb studied had a lifespan of about ten human generations, or two hundred and fifty years, despite changing factors such as technology. Glubb describes a pattern of growth and decline, with six stages: the Ages of Pioneers, Conquest, Commerce, Affluence, Intellect and Decadence. He pointedly avoided writing about India or China, focusing rather on middle and western Eurasia, stating that his knowledge was inadequate to the task.

Note that six stages in 10 generations means that significant change can occur over one or two generations -- a nation can pass from one age to the next, as I believe we have in America during my lifetime.

... There does not appear to be any doubt that money is the agent which causes the decline of this strong, brave and self-confident people. The decline in courage, enterprise and a sense of duty is, however, gradual. The first direction in which wealth injures the nation is a moral one. Money replaces honour and adventure as the objective of the best young men. Moreover, men do not normally seek to make money for their country or their community, but for themselves. Gradually, and almost imperceptibly, the Age of Affluence silences the voice of duty. The object of the young and the ambitious is no longer fame, honour or service, but cash. Education undergoes the same gradual transformation. No longer do schools aim at producing brave patriots ready to serve their country. [ Or to discover great things for all mankind! ] Parents and students alike seek the educational qualifications which will command the highest salaries. ...

Duty, Honor, Country:

The unbelievers will say they are but words, but a slogan, but a flamboyant phrase. Every pedant, every demagogue, every cynic, every hypocrite, every troublemaker, and I am sorry to say, some others of an entirely different character, will try to downgrade them even to the extent of mockery and ridicule.

The 21st century American reality (the Age of Decadence):

"Yeah, I calculated the NPV, and, you know, it's just not worth it for me. I really believe in your project, though. And, I share your passion. Good luck."

Tuesday, November 21, 2017

DOJ invokes Title VI against Harvard admissions


"Elections have consequences..." -- Barack Obama

See 20 years @15 percent: does Harvard discriminate against Asian-Americans?
CNN: The Justice Department is actively investigating Harvard University's use of race in its admissions policies and has concluded the school is "out of compliance" with federal law, according to documents obtained by CNN. ...

[Click through for DOJ letter to Harvard. Harvard refused to supply admissions data to DOJ as requested for Title VI investigation of bias against Asian-Americans.]
Wall Street Journal
WSJ: ... The Justice Department, whose Civil Rights Division is conducting the investigation into similar allegations, said in a letter to Harvard’s lawyers, dated Nov. 17 and reviewed by the Journal, that the school was being investigated under Title VI of the Civil Rights Act of 1964, which bars discrimination on the basis of race, color and national origin for organizations that receive federal funding. The letter also said the school had failed to comply with a Nov. 2 deadline to provide documents related to the university’s admissions policies and practices.

The department told Harvard it “may file a lawsuit” to enforce compliance if Harvard doesn’t hand over the documents by Dec. 1, according to a separate letter dated Nov. 17 from John M. Gore, the acting assistant attorney general for the Civil Rights Division.

... if a federal judge finds Harvard has violated Title VI, the court has broad authority to issue a remedy, such as ordering the university to change its admissions policies, the experts say.

Schools in violation of Title VI can also lose access to federal funds.
From DOJ web site:
TITLE VI OF THE CIVIL RIGHTS ACT OF 1964
42 U.S.C. § 2000D ET SEQ.
OVERVIEW OF TITLE VI OF THE CIVIL RIGHTS ACT OF 1964

Title VI, 42 U.S.C. § 2000d et seq., was enacted as part of the landmark Civil Rights Act of 1964. It prohibits discrimination on the basis of race, color, and national origin in programs and activities receiving federal financial assistance. As President John F. Kennedy said in 1963:

Simple justice requires that public funds, to which all taxpayers of all races [colors, and national origins] contribute, not be spent in any fashion which encourages, entrenches, subsidizes or results in racial [color or national origin] discrimination.

Monday, November 20, 2017

Saturday, November 18, 2017

Robot Overlords and the Academy


In a previous post Half of all jobs (> $60k/y) coding related? I wrote
In the future there will be two kinds of jobs. Workers will either

Tell computers what to do    
      or
Be told by computers what to do
I've been pushing Michigan State University to offer a coding bootcamp experience to all undergraduates who want it: e.g., Codecademy.com. The goal isn't to turn non-STEM majors into software developers, but to give all interested students exposure to an increasingly important and central aspect of the modern world.

I even invited the CodeNow CEO to campus to help push the idea. We're still working on it at the university -- painfully SLOWLY, if you ask me. But this fall I learned my kids are taking a class based on Codecademy at their middle school! Go figure.

(Image via 1, 2)

Wednesday, November 15, 2017

Behold, the Super Cow




Hmm... how do they compute the Net Merit and GTPI? (But, but, what about all of that missing heritability?)

See also

Applied genomics: the genetic "super cow"

Genomic prediction: no bull.

Attention climate virtue signalers: more efficient cows produce less methane per liter of milk! Drink milk from genetically engineered cows :-)

Friday, November 10, 2017

23andMe



I'm in Mountain View to give a talk at 23andMe. Their latest funding round was $250M on a (reported) valuation of $1.5B. If I just add up the Crunchbase numbers it looks like almost half a billion invested at this point...

Slides: Genomic Prediction of Complex Traits
Abstract: We apply methods from Compressed Sensing (L1-penalized regression; Donoho-Tanner phase transition with noise) to the UKBB dataset of 500k SNP genotypes. We construct genomic predictors for several complex traits. Our height predictor captures nearly all of the predicted SNP heritability for this trait -- thereby resolving the missing heritability problem. Actual heights of most individuals in validation tests are within a few cm of predicted heights. I also discuss application of these methods to polygenic disease risk: sparsity estimates (of the number of causal loci), combined with phase transition scaling analysis, allow estimates of the amount of case | control data required to construct good predictors.
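A minimal sketch of what the L1-penalized regression step looks like, using scikit-learn's LassoCV on a simulated genotype matrix. The scale, data, and hyperparameters here are toy stand-ins for illustration, not the actual UKBB pipeline:

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Simulated stand-in for the real data: n individuals, p SNPs coded 0/1/2,
# with a small set of causal variants. (Toy scale; the UKBB analysis uses
# hundreds of thousands of individuals and SNPs.)
rng = np.random.default_rng(0)
n, p, n_causal = 1000, 2000, 30
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)
beta = np.zeros(p)
beta[rng.choice(p, n_causal, replace=False)] = rng.normal(size=n_causal)
g = X @ beta
y = g + rng.normal(scale=g.std(), size=n)   # noise matched to signal, h2 ~ 0.5

# The L1 penalty drives most coefficients exactly to zero, recovering a sparse
# predictor once the sample size is large enough relative to the sparsity
# (the regime mapped out by the Donoho-Tanner phase transition).
model = LassoCV(cv=5, n_alphas=30).fit(X[:800], y[:800])
r = np.corrcoef(model.predict(X[800:]), y[800:])[0, 1]
print(f"nonzero SNPs: {(model.coef_ != 0).sum()}  validation corr: {r:.2f}")
```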
Here's how people + robots handle your spit sample to produce a SNP genotype:

Wednesday, November 08, 2017

Pocket AI from Beijing and Smartphones

I need to replace my old iPhone 6, and, predictably, this led me down the rabbit hole of learning about mobile phones, the mobile industry, and even mobile technologies. Some quick remarks: from the least to most expensive phones, Chinese companies are now competitive with industry leaders like Samsung and Apple. The Chinese market is hyper-competitive: small innovative startups (Oppo, OnePlus, etc.) compete with medium sized entities (e.g., Xiaomi, only recently a small startup itself) and giants like Huawei and Lenovo (Motorola). To gauge the landscape, watch phone reviews by Indian techies (or this guy in Germany), who tend to be very focused on cost performance and have access to handsets not sold in the US.

Here's a short video about OnePlus which also explains a bit about the Shenzhen hardware ecosystem:




Huawei's Kirin 970 chipset includes a dedicated "Neural Processing Unit" (NPU), optimized for the matrix operations used in machine learning. An NPU allows the phone to execute ML code for tasks such as image and voice recognition, language translation, etc. without relying on cloud connectivity. At the moment it is mostly a marketing gimmick, but one can imagine that in a few years (perhaps earlier!) the NPU could be as important to the phone experience as the GPU.

Here's a review of the Mate 10 Pro, Huawei's $1k flagship phone, with a brief demo of some of the AI features:



The NPU appears to be based on technology licensed from a small Beijing startup, Cambricon. The founder is an alumnus of the Special Class for Gifted Young at the University of Science and Technology of China. I've reviewed many Physics PhD applications from 19-year-old graduates of this program. There is an SV bidding war over chip designers in this area, ever since the advent of Google's proprietary TPU (and its software package TensorFlow), which accounts for most of Google's computation at data centers around the world.

Here's a quick demo of text recognition and machine translation from Chinese to English:




Some marketing video about the AI processor:




From cat recognition to Her or Joi? How long? I was recently offered the opportunity to be a beta tester for a startup that is building a smartphone AI assistant. I was intrigued but didn't want to give them access to all of my information...




PS One of the reasons I am leaving iOS for Android is that Google Assistant is getting very good, whereas in my experience Siri is terrible!

Wednesday, November 01, 2017

The Future is Here: Genomic Prediction in MIT Technology Review


MIT Technology Review reports on our startup Genomic Prediction. Some basic points worth clarifying:

1. GP's first product, announced at the annual ASRM (American Society for Reproductive Medicine) meeting this week, tests for chromosomal abnormalities. It is a less expensive but more accurate version of existing tests.



2. The polygenic product, to be launched in 2018, checks for hundreds of known single-gene ("Mendelian") disease risks, and will likely have some true polygenic predictive capabilities. This last part is the main emphasis of the story, but it is just one component of the overall product offering. The article elides a lot of challenging laboratory work on DNA amplification, etc.

3. GP will only deliver results requested by an IVF physician. It is not a DTC (Direct to Consumer) company.

4. All medical risk analysis proceeds from statistical data (analyzing groups of people) to produce recommendations concerning a specific individual.

5. I am on the Board of Directors of GP but am not an employee of the company.

MIT Technology Review

Eugenics 2.0: We’re at the Dawn of Choosing Embryos by Health, Height, and More


Will you be among the first to pick your kids’ IQ? As machine learning unlocks predictions from DNA databases, scientists say parents could have choices never before possible.


Nathan Treff was diagnosed with type 1 diabetes at 24. It’s a disease that runs in families, but it has complex causes. More than one gene is involved. And the environment plays a role too.

So you don’t know who will get it. Treff’s grandfather had it, and lost a leg. But Treff’s three young kids are fine, so far. He’s crossing his fingers they won’t develop it later.

Now Treff, an in vitro fertilization specialist, is working on a radical way to change the odds. Using a combination of computer models and DNA tests, the startup company he’s working with, Genomic Prediction, thinks it has a way of predicting which IVF embryos in a laboratory dish would be most likely to develop type 1 diabetes or other complex diseases. Armed with such statistical scorecards, doctors and parents could huddle and choose to avoid embryos with failing grades.

IVF clinics already test the DNA of embryos to spot rare diseases, like cystic fibrosis, caused by defects in a single gene. But these “preimplantation” tests are poised for a dramatic leap forward as it becomes possible to peer more deeply at an embryo’s genome and create broad statistical forecasts about the person it would become.

The advance is occurring, say scientists, thanks to a growing flood of genetic data collected from large population studies. ...

Spotting outliers

The company’s plans rely on a tidal wave of new knowledge showing how small genetic differences can add up to put one person, but not another, at high odds for diabetes, a neurotic personality, or a taller or shorter height. Already, such “polygenic risk scores” are used in direct-to-consumer gene tests, such as reports from 23andMe that tell customers their genetic chance of being overweight.

For adults, risk scores are little more than a novelty or a source of health advice they can ignore. But if the same information is generated about an embryo, it could lead to existential consequences: who will be born, and who stays in a laboratory freezer.

“I remind my partners, ‘You know, if my parents had this test, I wouldn’t be here,’” says Treff, a prize-winning expert on diagnostic technology who is the author of more than 90 scientific papers.

Genomic Prediction was founded this year and has raised funds from venture capitalists in Silicon Valley, though it declines to say who they are. Tellier, whose inspiration is the science fiction film Gattaca, says the company plans to offer reports to IVF doctors and parents identifying “outliers”—those embryos whose genetic scores put them at the wrong end of a statistical curve for disorders such as diabetes, late-life osteoporosis, schizophrenia, and dwarfism, depending on whether models for those problems prove accurate. ...

This week, Genomic Prediction manned a booth at the annual meeting of the American Society for Reproductive Medicine. That organization, which represents fertility doctors and scientists, has previously said it thinks testing embryos for late-life conditions, like Alzheimer’s, would be “ethically justified.” It cited, among other reasons, the “reproductive liberty” of parents.

... Hsu’s prediction is that “billionaires and Silicon Valley types” will be the early adopters of embryo selection technology, becoming among the first “to do IVF even though they don’t need IVF.” As they start producing fewer unhealthy children, and more exceptional ones, the rest of society could follow suit.

“I fully predict it will be possible,” says Hsu of selecting embryos with higher IQ scores. “But we’ve said that we as a company are not going to do it. It’s a difficult issue, like nuclear weapons or gene editing. There will be some future debate over whether this should be legal, or made illegal. Countries will have referendums on it.”

Thursday, October 26, 2017

The Physicist and the Neuroscientist: A Tale of Two Connectomes



This is video of an excellent talk on the human connectome by neuroscientist Bobby Kasthuri of Argonne National Lab and the University of Chicago. (You can see me sitting on the floor in the corner :-)

The story below is for entertainment purposes only. No triggering of biologists is intended.
The Physicist and the Neuroscientist: A Tale of Two Connectomes

Steve burst into Bobby's lab, a small metal box under one arm. Startled, Bobby nearly knocked over his Zeiss electron microscope.

I've got it! shouted Steve. My former student at DeepBrain sent me one of their first AGI's. It's hot out of their 3D neuromorphic chip printer.

This is the thing that talks and understands quantum mechanics? asked Bobby.

Yes, if I just plug it in. He tapped the box -- This deep net has 10^10 connections! Within spitting distance of our brains, but much more efficient. They trained it in their virtual simulator world. Some of the algos are based on my polytope paper from last year. It not only knows QM, it understands what you mean by "How much is that doggie in the window?" :-)

Has anyone mapped the connections?

Sort of, I mean the strengths and topology are determined by the training and algos... It was all done virtually. Printed into spaghetti in this box.

We've got to scan it right away! My new rig can measure 10^5 connections per second!

What for? It's silicon spaghetti. It works how it works, but we created it! Specific connections... that's like collecting postage stamps.

No, but we need to UNDERSTAND HOW IT WORKS!

...

Why don't you just ask IT? thought Steve, as he left Bobby's lab.
More Bobby, with more hair.

Wednesday, October 25, 2017

AlphaGo Zero: algorithms over data and compute



AlphaGo Zero was trained entirely through self-play -- no data from human play was used. The resulting program is the strongest Go player ever by a large margin, and is extremely efficient in its use of compute (running on only 4 TPUs).
Previous versions of AlphaGo initially trained on thousands of human amateur and professional games to learn how to play Go. AlphaGo Zero skips this step and learns to play simply by playing games against itself, starting from completely random play. In doing so, it quickly surpassed human level of play and defeated the previously published champion-defeating version of AlphaGo by 100 games to 0.
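Schematically (my summary of the published AlphaGo Zero setup, not anything beyond it): a single network $f_\theta(s) = (\mathbf{p}, v)$ outputs move probabilities and a position evaluation; MCTS guided by the network produces sharpened move probabilities $\boldsymbol{\pi}$ and, at the end of each self-play game, an outcome $z$; the parameters are then updated by gradient descent on

$$\ell = (z - v)^2 \;-\; \boldsymbol{\pi}^{\top}\log \mathbf{p} \;+\; c\,\lVert\theta\rVert^2 ,$$

so each iteration of self-play is a policy-improvement step: search improves on the raw network policy, and the network learns to imitate the improved policy and predict the eventual winner.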
Rapid progress from a random initial state is rather amazing, but perhaps something we should get used to given that:

1. Deep Neural Nets are general enough to learn almost any function (i.e., high dimensional mathematical function) no matter how complex
2. The optimization process is (close to) convex

A widely discussed AI mystery: how do human babies manage to learn (language, intuitive physics, theory of mind) so quickly and with relatively limited training data? AlphaGo Zero's impressive results are highly suggestive in this context -- the right algorithms make a huge difference.

It seems certain that great things are coming in the near future...

Sunday, October 22, 2017

Steven Weinberg: What's the matter with quantum mechanics?



In this public lecture Weinberg explains the problems with the two predominant interpretations of quantum mechanics, which he refers to as Instrumentalist (e.g., Copenhagen) and Realist (e.g., Many Worlds). The term "interpretation" may be misleading because what is ultimately at stake is the nature of physical reality. Both interpretations have serious problems, but the problem with Realism (in Weinberg's view, and my own) is not the quantum multiverse, but rather the origin of probability within deterministic Schrodinger evolution. Instrumentalism is, of course, ill-defined nutty mysticism 8-)

Physicists will probably want to watch this at 1.5x or 2x speed. The essential discussion is at roughly 22-40min, so it's only a 10 minute investment of your time. These slides explain in pictures.

See also Weinberg on Quantum Foundations, where I wrote:
It is a shame that very few working physicists, even theoreticians, have thought carefully and deeply about quantum foundations. Perhaps Weinberg's fine summary will stimulate greater awareness of this greatest of all unresolved problems in science.
and quoted Weinberg:
... today there is no interpretation of quantum mechanics that does not have serious flaws. 
Posts on this blog related to the Born Rule, etc., and two of my papers:
The measure problem in many worlds quantum mechanics

On the origin of probability in quantum mechanics

Dynamical theories of wavefunction collapse are necessarily non-linear generalizations of Schrodinger evolution, which lead to problems with locality.

Among those who take the Realist position seriously: Feynman and Gell-Mann, Schwinger, Hawking, and many more.

Thursday, October 19, 2017

Talking Ta-Nehisi Coates, Seriously?



Glenn Loury is Merton P. Stoltz Professor of the Social Sciences, Department of Economics, Brown University. John McWhorter is Associate Professor of English and Comparative Literature at Columbia University, where he teaches linguistics, American studies, philosophy, and music history.
Loury (@19min): "He's a good writer but not a deep thinker, and he's being taken seriously as if he was a deep thinker... he's talented I mean there's not any doubt about that but the actual analytical content of the argument, there are gaping holes in it..."
On the dangers of Identity Politics:
Loury (@21min): Coates' immersion in a racialist conception of American society ... everything through the lens of race ... is the mirror image or the flip side of a white nationalist conception of American society in which everything is viewed in terms of race. Williams in the review includes extensive reportage from his interview of Richard Spencer, the white nationalist leader ... and has Spencer saying back to him, in effect: I'm glad that people are eating up Ta-Nehisi, I'm glad that they're taking it in, because it's a thoroughly racialized conception. It's racial essentialism at its utmost, and that primes them: they really believe in race, these liberals who are reading Coates, and that means I can flip them, says Richard Spencer. The day will come, given their belief in race, when I can persuade them that they're white. Coates wants them to regret and lament and eschew the fact that they're white. Richard Spencer dreams of a day in which, seeing themselves as white, they'll get tired of hating themselves and flip over to the side of being proud ...
I've been reading Coates for years, since he was a relatively unknown writer at The Atlantic. Here are very good Longform Podcast interviews which explore his early development: 2015, 2014, 2012.

Mentioned in the discussion: Thomas Chatterton Williams, New York Times, How Ta-Nehisi Coates Gives Whiteness Power.

More links.

Tuesday, October 17, 2017

Super Green Smoothie


Spinach (see picture)
Handful of frozen blueberries
Small handful of nuts (pecans, almonds, etc.)
1/2 scoop protein powder
1-2 cups milk

Makes 2 large glasses of nutritious green super smoothie. Give the other one to your spouse or kid or roommate, or just use half the recipe  :-)

Rinse out the blender container immediately with warm water for easy clean up.

Most of the volume is spinach, so calorie density is low, while antioxidant and nutritional content is high.

Smoothie diet: drink one glass (~250 calories, 20g protein), wait 15 minutes, all hunger will vanish for 90+ minutes.



(Photo quality is meh because I took these using a $40 Moto E (Android) that I have been experimenting with. Over Xmas last year I researched cheap Android phones for my kids. Lots of very good devices for ~$100 or less. The carrier / data costs dwarf the cost of the handset.)

Monday, October 09, 2017

Blade Runner 2049: Demis Hassabis (Deep Mind) interviews director Villeneuve



Hassabis refers to AI in the original Blade Runner, but it is apparent from the sequel that replicants are merely genetically engineered humans. AI appears in Blade Runner 2049 in the form of Joi. There seems to be widespread confusion, including in the movie itself, about whether to think about replicants as robots (i.e., hardware) with "artificial" brains, or simply superhumans engineered (by manipulation of DNA and memories) to serve as slaves. The latter, while potentially very alien psychologically (detectable by Voight-Kampff machine, etc.), presumably have souls like ours. (Hassabis refers to Rutger Hauer's decision to have Roy Batty release the dove when he dies as symbolic of Batty's soul escaping from his body.)

Dick himself seems a bit imprecise in his use of the term android (hardware or wet bioware?) in this context. "Electric" sheep? In a bioengineered android brain that is structurally almost identical to a normal human's?

Q&A at 27min is excellent -- concerning the dispute between Ridley Scott and Harrison Ford as to whether Deckard is a replicant, and how Villeneuve handled it, inspired by the original Dick novel.







Addendum: Blade Runner, meet Alien

The Tyrell-Weyland connection

Robots (David, of Alien Prometheus) vs Genetically Engineered Slaves (replicants) with false memories



Saturday, October 07, 2017

Information Theory of Deep Neural Nets: "Information Bottleneck"



This talk discusses, in terms of information theory, how the hidden layers of a deep neural net (thought of as a Markov chain) create a compressed (coarse grained) representation of the input information. To date the success of neural networks has been a mainly empirical phenomenon, lacking a theoretical framework that explains how and why they work so well.
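In standard information bottleneck notation (my summary of Tishby's framework): the layers form a Markov chain $X \rightarrow T_1 \rightarrow \cdots \rightarrow T_k \rightarrow \hat{Y}$, so by the data-processing inequality each successive layer can only lose information about the input, and an optimal internal representation $T$ trades compression against prediction by minimizing

$$\mathcal{L}\big[p(t\mid x)\big] \;=\; I(X;T) \;-\; \beta\, I(T;Y),$$

where $\beta$ controls how many bits of relevance to the label $Y$ must be kept per bit of compression of the input $X$.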

At ~44min someone asks how networks "know" to construct (local) feature detectors in the first few layers. I'm not sure I followed Tishby's answer but it may be a consequence of the hierarchical structure of the data, not specific to the network or optimization.
Naftali (Tali) Tishby נפתלי תשבי

Physicist, professor of computer science and computational neuroscientist
The Ruth and Stan Flinkman professor of Brain Research
Benin school of Engineering and Computer Science
Edmond and Lilly Safra Center for Brain Sciences (ELSC)
Hebrew University of Jerusalem, 96906 Israel

I work at the interfaces between computer science, physics, and biology which provide some of the most challenging problems in today’s science and technology. We focus on organizing computational principles that govern information processing in biology, at all levels. To this end, we employ and develop methods that stem from statistical physics, information theory and computational learning theory, to analyze biological data and develop biologically inspired algorithms that can account for the observed performance of biological systems. We hope to find simple yet powerful computational mechanisms that may characterize evolved and adaptive systems, from the molecular level to the whole computational brain and interacting populations.
Another Tishby talk on this subject.

Tuesday, October 03, 2017

A Gentle Introduction to Neural Networks



"A gentle introduction to the principles behind neural networks, including backpropagation. Rated G for general audiences."

This is very well done. If you have a quantitative background you can watch it at 1.5x or 2x speed, I think :-)

A bit more on the history of backpropagation and convexity: why is the error function convex, or nearly so?
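For anyone who wants the chain-rule bookkeeping spelled out, here is a minimal numpy version of the forward and backward passes, training a toy two-layer network on XOR. It is an illustrative sketch only, unrelated to the video's own material:

```python
import numpy as np

# Toy two-layer network trained by backpropagation on XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.0, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1)); b2 = np.zeros(1)
lr = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)              # hidden activations, shape (4, 8)
    out = sigmoid(h @ W2 + b2)            # network output, shape (4, 1)
    # backward pass for squared-error loss L = 0.5 * sum((out - y)**2)
    d_out = (out - y) * out * (1 - out)   # dL/d(pre-activation of output)
    d_h = (d_out @ W2.T) * h * (1 - h)    # dL/d(pre-activation of hidden)
    # gradient-descent updates
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))           # typically approaches [0, 1, 1, 0]
```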

Friday, September 29, 2017

The Vector Institute


I've waxed enthusiastic before about Thought Vectors:
... the space of concepts (primitives) used in human language (or equivalently, in human thought) ... has only ~1000 dimensions, and has some qualities similar to an actual vector space. Indeed, one can speak of some primitives being closer or further from others, leading to a notion of distance, and one can also rescale a vector to increase or decrease the intensity of meaning.

... we now have an automated method to extract an abstract representation of human thought from samples of ordinary language. This abstract representation will allow machines to improve dramatically in their ability to process language, dealing appropriately with semantics (i.e., meaning), which is represented geometrically.
Apparently I am not the only one (MIT Technology Review):
... The Vector Institute, this monument to the ascent of ­Hinton’s ideas, is a research center where companies from around the U.S. and Canada—like Google, and Uber, and Nvidia—will sponsor efforts to commercialize AI technologies. Money has poured in faster than Jacobs could ask for it; two of his cofounders surveyed companies in the Toronto area, and the demand for AI experts ended up being 10 times what Canada produces every year. Vector is in a sense ground zero for the now-worldwide attempt to mobilize around deep learning: to cash in on the technique, to teach it, to refine and apply it. Data centers are being built, towers are being filled with startups, a whole generation of students is going into the field.

... words that have similar meanings start showing up near one another in the space. That is, “insane” and “unhinged” will have coordinates close to each other, as will “three” and “seven,” and so on. What’s more, so-called vector arithmetic makes it possible to, say, subtract the vector for “France” from the vector for “Paris,” add the vector for “Italy,” and end up in the neighborhood of “Rome.” It works without anyone telling the network explicitly that Rome is to Italy as Paris is to France.

... Neural nets can be thought of as trying to take things—images, words, recordings of someone talking, medical data—and put them into what mathematicians call a high-dimensional vector space, where the closeness or distance of the things reflects some important feature of the actual world. Hinton believes this is what the brain itself does. “If you want to know what a thought is,” he says, “I can express it for you in a string of words. I can say ‘John thought, “Whoops.”’ But if you ask, ‘What is the thought? What does it mean for John to have that thought?’ It’s not that inside his head there’s an opening quote, and a ‘Whoops,’ and a closing quote, or even a cleaned-up version of that. Inside his head there’s some big pattern of neural activity.” Big patterns of neural activity, if you’re a mathematician, can be captured in a vector space, with each neuron’s activity corresponding to a number, and each number to a coordinate of a really big vector. In Hinton’s view, that’s what thought is: a dance of vectors.

... It is no coincidence that Toronto’s flagship AI institution was named for this fact. Hinton was the one who came up with the name Vector Institute.
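The vector-arithmetic trick in the excerpt above is easy to try yourself. Here is a minimal sketch using gensim's downloader and a pretrained embedding; the specific model name is an assumption on my part (any pretrained word-embedding model will do), not something from the article.

```python
# Minimal sketch of the word-vector arithmetic described above:
# vector("paris") - vector("france") + vector("italy") is near vector("rome").
# The pretrained model name is an assumption; any word embedding works.
import gensim.downloader as api

kv = api.load("glove-wiki-gigaword-100")   # downloads the vectors on first use

# Words with similar meanings have nearby vectors.
print(kv.most_similar("insane", topn=3))

# Analogy via vector arithmetic (this particular model is lowercased).
print(kv.most_similar(positive=["paris", "italy"], negative=["france"], topn=3))
```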
See also Geoff Hinton on Deep Learning (discusses thought vectors).

Thursday, September 28, 2017

Feynman, Schwinger, and Psychometrics

Slate Star Codex has a new post entitled Against Individual IQ Worries.
I write a lot about the importance of IQ research, and I try to debunk pseudoscientific claims that IQ “isn’t real” or “doesn’t matter” or “just shows how well you do on a test”. IQ is one of the best-studied ideas in psychology, one of our best predictors of job performance, future income, and various other forms of success, etc.

But every so often, I get comments/emails saying something like “Help! I just took an IQ test and learned that my IQ is x! This is much lower than I thought, and so obviously I will be a failure in everything I do in life. Can you direct me to the best cliff to jump off of?”

So I want to clarify: IQ is very useful and powerful for research purposes. It’s not nearly as interesting for you personally.
I agree with Scott's point that while g is useful as a crude measurement of cognitive ability, and a statistical predictor of life outcomes, one is better off adopting the so-called growth mindset. ("Individuals who believe their talents can be developed through hard work, good strategies, and input from others have a growth mindset.")



Inevitably the question of Feynman's IQ came up in the discussion. I wrote to Scott about this (slightly edited):
Dear Scott,

I enjoyed your most recent SSC post and I agree with you that g is better applied at a statistical level (e.g., by the Army to place recruits) than at an individual level.

I notice Feynman came up again in the discussion. I have written more on this topic (and have done more research as well). My conclusions are as follows:

1. There is no doubt Feynman would have scored near the top of any math-loaded test (and he did -- e.g., the Putnam).

2. I doubt Feynman would have scored near the ceiling on many verbally loaded tests. He often made grammatical mistakes, spelling mistakes (even of words commonly used in physics), etc. He occasionally did not know the *meanings* of terms used by other people around him (even words commonly used in physics).

3. By contrast, his contemporary and rival Julian Schwinger wrote and spoke in elegant, impeccable language. People often said that Schwinger "spoke in entire paragraphs" that emerged well-formed from his mouth. My guess is that Schwinger was a more balanced type for that level of cognitive ability. Feynman was verbally creative, colorful, a master communicator, etc. But his score on the old SAT-V might not have been above the top few percentiles.

Part of the reason more people know about Feynman than Schwinger is not just that Feynman was more colorful and charismatic. In fact, very little that Schwinger ever said or wrote was comprehensible to people below a pretty high IQ threshold, whereas Feynman expressed himself simply and intuitively. I think this has a bit to do with their verbal IQs. Even really smart physics students have an easier time understanding Feynman's articles and lectures than Schwinger's!

Schwinger had read (and understood) all of the existing literature on quantum mechanics while still a high school student -- this loads on V, not just M. Feynman's development path was different, partially because he had trouble reading other people's papers.

Schwinger was one of the subjects in Anne Roe's study of top scientists. His verbal score was above +4 SD. I think it's extremely unlikely that Feynman would have scored that high.

See links below for more discussion, examples, etc.

Hope you are enjoying Berkeley!

Best,
Steve

Feynman's Cognitive Style

Feynman and the Secret of Magic

Feynman's War

Schwinger meets Rabi

Roe's Scientists

Here are some (accessible) Schwinger quotes I like.
The pressure for conformity is enormous. I have experienced it in editors’ rejection of submitted papers, based on venomous criticism of anonymous referees. The replacement of impartial reviewing by censorship will be the death of science.


Is the purpose of theoretical physics to be no more than a cataloging of all the things that can happen when particles interact with each other and separate? Or is it to be an understanding at a deeper level in which there are things that are not directly observable (as the underlying quantized fields are) but in terms of which we shall have a more fundamental understanding?


To me, the formalism of quantum mechanics is not just mathematics; rather it is a symbolic account of the realities of atomic measurements. That being so, no independent quantum theory of measurement is required -- it is part and parcel of the formalism.

[ ... recapitulates usual von Neumann formulation: unitary evolution of wavefunction under "normal" circumstances; non-unitary collapse due to measurement ... discusses paper hypothesizing stochastic (dynamical) wavefunction collapse ... ]

In my opinion, this is a desperate attempt to solve a non-existent problem, one that flows from a false premise, namely the vN dichotomization of quantum mechanics. Surely physicists can agree that a microscopic measurement is a physical process, to be described as would any physical process, that is distinguished only by the effective irreversibility produced by amplification to the macroscopic level. ...

(See Schwinger on Quantum Foundations ;-)
Schwinger survived both Feynman and Tomonaga, with whom he shared the Nobel prize for quantum electrodynamics. He began his eulogy for Feynman: "I am the last of the triumvirate ..."

Tuesday, September 26, 2017

The Vietnam War, Ken Burns and Lynn Novick



Ken Burns' Vietnam documentary is incredibly good. Possibly the best documentary I've ever seen. It's a heartbreaking tragedy, with perspectives from all sides of the conflict: Americans and North and South Vietnamese, soldiers from both sides, war protestors, war planners, and families of sons and daughters who died in the war.

I was a child when the war was winding down, so the America of the documentary is very familiar to me.

Here's the PBS web page from which you can stream all 18 hours. I have been watching the version that contains unedited explicit language and content (not broadcast).

Tuesday, September 19, 2017

Accurate Genomic Prediction Of Human Height

I've been posting preprints on arXiv since its beginning ~25 years ago, and I like to share research results as soon as they are written up. Science functions best through open discussion of new results! After some internal deliberation, my research group decided to post our new paper on genomic prediction of human height on bioRxiv and arXiv.

But the preprint culture is nascent in many areas of science (e.g., biology), and it seems to me that some journals are not yet fully comfortable with the idea. I was pleasantly surprised to learn, just in the last day or two, that most journals now have official policies that allow online distribution of preprints prior to publication. (This has been the case in theoretical physics since before I entered the field!) Let's hope that progress continues.

The work presented below applies ideas from compressed sensing, L1 penalized regression, etc. to genomic prediction. We exploit the phase transition behavior of the LASSO algorithm to construct a good genomic predictor for human height. The results are significant for the following reasons:
We applied novel machine learning methods ("compressed sensing") to ~500k genomes from UK Biobank, resulting in an accurate predictor for human height which uses information from thousands of SNPs.

1. The actual heights of most individuals in our replication tests are within a few cm of their predicted height.

2. The variance captured by the predictor is similar to the estimated GCTA-GREML SNP heritability. Thus, our results resolve the missing heritability problem for common SNPs.

3. Out-of-sample validation on ARIC individuals (a US cohort) shows the predictor works on that population as well. The SNPs activated in the predictor overlap with previous GWAS hits from GIANT.
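To give a concrete feel for the machinery (this is not our actual pipeline), here is a minimal sketch of L1-penalized regression on a simulated genotype matrix: a sparse set of causal SNPs, a LASSO fit, and the predicted-versus-actual correlation on held-out samples. The dimensions, sparsity, heritability, and penalty value are illustrative assumptions, and scikit-learn's Lasso stands in for the custom solvers used on UK Biobank scale data.

```python
# Minimal sketch of L1-penalized (LASSO) genomic prediction on simulated data.
# All dimensions and parameters are toy-scale illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 2000, 5000, 50            # individuals, candidate SNPs, causal SNPs
h2 = 0.5                            # fraction of phenotype variance from genetics

# Genotypes coded as minor-allele counts (0/1/2), standardized per SNP.
maf = rng.uniform(0.05, 0.5, size=p)
X = rng.binomial(2, maf, size=(n, p)).astype(float)
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Sparse effects vector: only s of the p SNPs are causal.
beta = np.zeros(p)
beta[rng.choice(p, size=s, replace=False)] = rng.normal(size=s)

g = X @ beta
g *= np.sqrt(h2 / g.var())                          # scale genetic component to h2
y = g + rng.normal(scale=np.sqrt(1 - h2), size=n)   # phenotype = genetics + noise

# Hold out a validation set. In the paper the penalty (lambda) is scanned and
# support recovery shows a phase transition; here we just fix one value.
train, test = slice(0, 1600), slice(1600, n)
model = Lasso(alpha=0.02, max_iter=20000)
model.fit(X[train], y[train])

r = np.corrcoef(model.predict(X[test]), y[test])[0, 1]
print(f"activated SNPs: {(model.coef_ != 0).sum()}, validation correlation: {r:.2f}")
```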
The scatterplot figure below gives an immediate feel for the accuracy of the predictor.
Accurate Genomic Prediction Of Human Height
(bioRxiv)

Louis Lello, Steven G. Avery, Laurent Tellier, Ana I. Vazquez, Gustavo de los Campos, and Stephen D.H. Hsu

We construct genomic predictors for heritable and extremely complex human quantitative traits (height, heel bone density, and educational attainment) using modern methods in high dimensional statistics (i.e., machine learning). Replication tests show that these predictors capture, respectively, ∼40, 20, and 9 percent of total variance for the three traits. For example, predicted heights correlate ∼0.65 with actual height; actual heights of most individuals in validation samples are within a few cm of the prediction. The variance captured for height is comparable to the estimated SNP heritability from GCTA (GREML) analysis, and seems to be close to its asymptotic value (i.e., as sample size goes to infinity), suggesting that we have captured most of the heritability for the SNPs used. Thus, our results resolve the common SNP portion of the “missing heritability” problem – i.e., the gap between prediction R-squared and SNP heritability. The ∼20k activated SNPs in our height predictor reveal the genetic architecture of human height, at least for common SNPs. Our primary dataset is the UK Biobank cohort, comprised of almost 500k individual genotypes with multiple phenotypes. We also use other datasets and SNPs found in earlier GWAS for out-of-sample validation of our results.
This figure compares predicted and actual height on a validation set of 2000 individuals not used in training: males + females, actual heights (vertical axis) uncorrected for gender. For training we z-score by gender and age (due to Flynn Effect for height). We have also tested validity on a population of US individuals (i.e., out of sample; not from UKBB).


This figure illustrates the phase transition behavior at fixed sample size n and varying penalization lambda.


These are the SNPs activated in the predictor -- about 20k in total, uniformly distributed across all chromosomes; vertical axis is effect size of minor allele:


The big-picture implication is that heritable complex traits controlled by thousands of genetic loci can, with enough data and analysis, be predicted from DNA. I expect that with good genotype and phenotype data from a million individuals we could achieve similar success with cognitive ability. We've also analyzed the sample size requirements for disease risk prediction, and they are similar (i.e., ~100 times the sparsity of the effects vector; so ~100k cases + controls for a condition affected by ~1000 loci).


Note Added: Further comments in response to various questions about the paper.

1) We have tested the predictor on other ethnic groups and there is an (expected) decrease in correlation that is roughly proportional to the "genetic distance" between the test population and the white/British training population. This is likely due to different LD structure (SNP correlations) in different populations. A SNP that tags the true causal genetic variation in the European population may not be a good tag in, e.g., the Chinese population. We may report more on this in the future. Note that despite the reduction in power, our predictor still captures more height variance than any other existing model for S. Asians, Chinese, Africans, etc.

2) We did not explore the biology of the activated SNPs because that is not our expertise. GWAS hits found by SSGAC, GIANT, etc. have already been connected to biological processes such as neuronal growth, bone development, etc. Plenty of follow up work remains to be done on the SNPs we discovered.

3) Our initial reduction of candidate SNPs to the top 50k or 100k is simply to save computational resources. The L1 algorithms can handle much larger values of p, but keeping all of those SNPs in the calculation is extremely expensive in CPU time, memory, etc. We tested computational cost vs benefit in improved prediction from including more (>100k) candidate SNPs in the initial cut but found it unfavorable. (Note, we also had a reasonable prior that ~10k SNPs would capture most of the predictive power.)

4) We will have more to say about nonlinear effects, additional out-of-sample tests, other phenotypes, etc. in future work.

5) Perhaps most importantly, we have a useful theoretical framework (compressed sensing) within which to think about complex trait prediction. We can make quantitative estimates for the sample size required to "solve" a particular trait.
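As a back-of-the-envelope illustration of point 5, here is a tiny calculation using the rule of thumb quoted above (required sample size roughly 100 times the sparsity of the effects vector); the 100x factor and the example trait are taken from the text, everything else is just arithmetic.

```python
# Back-of-the-envelope estimate using the rule of thumb quoted above:
# required sample size ~ 100 x sparsity of the effects vector.
def required_sample_size(n_loci, factor=100):
    """Rough number of individuals (e.g., cases + controls) needed."""
    return factor * n_loci

# The example from the text: a condition affected by ~1000 loci.
print(required_sample_size(1000))   # -> 100000, i.e. ~100k cases + controls
```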

I leave you with some remarks from Francis Crick:
Crick had to adjust from the "elegance and deep simplicity" of physics to the "elaborate chemical mechanisms that natural selection had evolved over billions of years." He described this transition as, "almost as if one had to be born again." According to Crick, the experience of learning physics had taught him something important — hubris — and the conviction that since physics was already a success, great advances should also be possible in other sciences such as biology. Crick felt that this attitude encouraged him to be more daring than typical biologists who tended to concern themselves with the daunting problems of biology and not the past successes of physics.
