What Does It Mean for AI to Understand? | Quanta Magazine


Remember IBM's Watson, the AI Jeopardy! champion? A 2010 promotion proclaimed that Watson "understands natural language with all its ambiguity and complexity." However, as we saw when Watson subsequently failed spectacularly in its quest to revolutionize medicine with artificial intelligence, a veneer of linguistic facility is not the same as actually comprehending human language.

Natural language understanding has long been a major goal of AI research. At first, researchers tried to manually program everything a machine would need to make sense of news stories, fiction or anything else humans might write. This approach, as Watson showed, was futile: it's impossible to write down all the unwritten facts, rules and assumptions required for understanding text. More recently, a new paradigm has been established: Instead of building in explicit knowledge, we let machines learn to understand language on their own, simply by ingesting vast amounts of written text and learning to predict words. The result is what researchers call a language model. When based on large neural networks, like OpenAI's GPT-3, such models can generate uncannily humanlike prose (and poetry!) and seemingly perform sophisticated linguistic reasoning.

But has GPT-3, trained on text from thousands of websites, books and encyclopedias, transcended Watson's veneer? Does it really understand the language it generates and ostensibly reasons about? This is a topic of stark disagreement in the AI research community. Such discussions used to be the purview of philosophers, but in the past decade AI has burst out of its academic bubble into the real world, and its lack of understanding of that world can have real and sometimes devastating consequences. In one study, IBM's Watson was found to propose "multiple examples of unsafe and incorrect treatment recommendations." Another study showed that Google's machine translation system made significant errors when used to translate medical instructions for non-English-speaking patients.

How can we determine in practice whether a machine can understand? In 1950, the computing pioneer Alan Turing tried to answer this question with his famous "imitation game," now called the Turing test. A machine and a human, both hidden from view, would compete to convince a human judge of their humanness using only conversation. If the judge couldn't tell which one was the human, then, Turing asserted, we should consider the machine to be thinking and, in effect, understanding.

Unfortunately, Turing underestimated the propensity of humans to be fooled by machines. Even simple chatbots, such as Joseph Weizenbaum's 1960s ersatz psychotherapist Eliza, have fooled people into believing they were conversing with an understanding being, even when they knew that their conversation partner was a machine.

In a 2012 paper, the computer scientists Hector Levesque, Ernest Davis and Leora Morgenstern proposed a more objective test, which they called the Winograd schema challenge. This test has since been adopted in the AI language community as one way, and perhaps the best way, to assess machine understanding, though as we'll see, it is not perfect. A Winograd schema, named for the language researcher Terry Winograd, consists of a pair of sentences, differing by exactly one word, each followed by a question. Here are two examples:

Sentence 1: I poured water from the bottle into the cup until it was full.
Question: What was full, the bottle or the cup?
Sentence 2: I poured water from the bottle into the cup until it was empty.
Question: What was empty, the bottle or the cup?

Sentence 1: Joe's uncle can still beat him at tennis, even though he is 30 years older.
Question: Who is older, Joe or Joe's uncle?
Sentence 2: Joe's uncle can still beat him at tennis, even though he is 30 years younger.
Question: Who is younger, Joe or Joe's uncle?

In each sentence pair, the one-word difference can change which thing or person a pronoun refers to. Answering these questions correctly seems to require commonsense understanding. Winograd schemas are designed precisely to test this kind of understanding, alleviating the Turing test's vulnerability to unreliable human judges or chatbot tricks. In particular, the authors designed a few hundred schemas that they believed were "Google-proof": A machine shouldn't be able to use a Google search (or anything like it) to answer the questions correctly.
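The structure described above can be sketched in a few lines of code. This is a minimal illustration, not the format of any actual benchmark; the field names and the scoring function are my own for this example.

```python
# A minimal sketch of a Winograd schema pair and how a predictor
# might be scored against it. Field names are illustrative only.

from dataclasses import dataclass

@dataclass
class WinogradSchema:
    sentence: str        # sentence containing the ambiguous pronoun
    pronoun: str         # the pronoun to resolve ("it", "he", ...)
    candidates: tuple    # the two possible referents
    answer: str          # the correct referent

SCHEMA_PAIR = [
    WinogradSchema(
        "I poured water from the bottle into the cup until it was full.",
        "it", ("the bottle", "the cup"), "the cup"),
    WinogradSchema(
        "I poured water from the bottle into the cup until it was empty.",
        "it", ("the bottle", "the cup"), "the bottle"),
]

def score(predict, schemas):
    """Fraction of schemas a predictor resolves correctly."""
    return sum(predict(s) == s.answer for s in schemas) / len(schemas)

# A predictor that always picks the first candidate gets exactly one
# of the twin sentences right -- the 50% chance level for a pair.
assert score(lambda s: s.candidates[0], SCHEMA_PAIR) == 0.5
```

Because the two sentences in a pair have opposite answers, any predictor that ignores the changed word can get at most one of the two right, which is what makes the twin design attractive.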

These schemas were the subject of a competition held in 2016 in which the winning program was correct on only 58% of the sentences, hardly a better result than if it had guessed. Oren Etzioni, a leading AI researcher, quipped, "When AI can't determine what 'it' refers to in a sentence, it's hard to believe that it will take over the world."

However, the ability of AI programs to solve Winograd schemas rose quickly due to the advent of large neural network language models. A 2020 paper from OpenAI reported that GPT-3 was correct on nearly 90% of the sentences in a benchmark set of Winograd schemas. Other language models have performed even better after training specifically on these tasks. At the time of this writing, neural network language models have achieved about 97% accuracy on a particular set of Winograd schemas that are part of an AI language-understanding competition known as SuperGLUE. This accuracy roughly equals human performance. Does this mean that neural network language models have attained humanlike understanding?

Not necessarily. Despite the creators' best efforts, those Winograd schemas were not actually Google-proof. These challenges, like many other current tests of AI language understanding, sometimes permit shortcuts that allow neural networks to perform well without understanding. For example, consider the sentences "The sports car passed the mail truck because it was going faster" and "The sports car passed the mail truck because it was going slower." A language model trained on a huge corpus of English sentences will have absorbed the correlation between "sports car" and "fast," and between "mail truck" and "slow," and so it can answer correctly based on those correlations alone rather than by drawing on any understanding. It turns out that many of the Winograd schemas in the SuperGLUE competition allow for these kinds of statistical correlations.
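The shortcut can be made concrete with a toy example. The co-occurrence counts below are made up for illustration; the point is that a rule which simply picks the noun more associated with the cue word answers both sentences correctly without representing the sentence's meaning at all.

```python
# Toy illustration of the statistical shortcut: choose the referent
# whose noun co-occurs more often with the key adjective in a corpus.
# The counts here are invented for the example.

cooccurrence = {
    ("sports car", "faster"): 980, ("sports car", "slower"): 60,
    ("mail truck", "faster"): 40,  ("mail truck", "slower"): 310,
}

def shortcut_answer(candidates, cue_word):
    """Pick the candidate most associated with the cue word."""
    return max(candidates, key=lambda c: cooccurrence.get((c, cue_word), 0))

candidates = ("sports car", "mail truck")
print(shortcut_answer(candidates, "faster"))  # sports car
print(shortcut_answer(candidates, "slower"))  # mail truck
```

Both outputs happen to be the correct answers, which is exactly the problem: on schemas like this one, correlation alone suffices, so a high score proves nothing about understanding.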

Rather than give up on the Winograd schemas as a test of understanding, a group of researchers from the Allen Institute for Artificial Intelligence decided instead to try to fix some of their problems. In 2019 they created WinoGrande, a much larger set of Winograd schemas. Instead of several hundred examples, WinoGrande contains a whopping 44,000 sentences. To obtain that many examples, the researchers turned to Amazon Mechanical Turk, a popular platform for crowdsourcing work. Each (human) worker was asked to write several sentence pairs, with some constraints to ensure that the collection would contain diverse topics, though now the sentences in each pair could differ by more than one word.

The researchers then attempted to eliminate sentences that could allow statistical shortcuts by applying a relatively unsophisticated AI method to each sentence and discarding any that were too easily solved. As expected, the remaining sentences presented a much harder challenge for machines than the original Winograd schema collection. While humans still scored very high, neural network language models that had matched human performance on the original set scored much lower on the WinoGrande set. This new challenge seemed to redeem Winograd schemas as a test for commonsense understanding, as long as the sentences were carefully screened to ensure that they were Google-proof.
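The screening idea above can be sketched as a simple filtering loop. This is a deliberate simplification of WinoGrande's actual pipeline; the example data and the one-trick solver are invented for illustration.

```python
# Rough sketch of the screening step described above: run a weak
# solver over candidate examples and keep only the ones it fails,
# on the theory that those resist cheap statistical shortcuts.

def filter_hard_examples(examples, weak_solver):
    """Discard any example the weak solver already answers correctly."""
    return [ex for ex in examples if weak_solver(ex) != ex["answer"]]

# Tiny demo with a solver that only knows one stereotyped association.
examples = [
    {"question": "faster: sports car or mail truck?", "answer": "sports car"},
    {"question": "Who is older, Joe or Joe's uncle?", "answer": "Joe's uncle"},
]
easy_solver = lambda ex: "sports car" if "faster" in ex["question"] else "unknown"
remaining = filter_hard_examples(examples, easy_solver)
print(len(remaining))  # 1: only the example the weak solver failed survives
```

One consequence of filtering sentence by sentence, noted later in the article, is that a filter like this can drop one member of a twin pair while keeping the other.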

However, another surprise was in store. In the almost two years since the WinoGrande collection was published, neural network language models have grown ever larger, and the larger they get, the better they seem to score on this new challenge. At the time of this writing, the current best programs, which have been trained on terabytes of text and then further trained on thousands of WinoGrande examples, get close to 90% correct (humans get about 94% correct). This increase in performance is due almost entirely to the increased size of the neural network language models and their training data.

Have these ever-larger networks finally attained humanlike commonsense understanding? Again, it's not likely. The WinoGrande results come with some important caveats. For example, because the sentences relied on Amazon Mechanical Turk workers, the quality and coherence of the writing is quite uneven. Also, the unsophisticated AI method used to weed out non-Google-proof sentences may have been too unsophisticated to spot all possible statistical shortcuts available to a huge neural network, and it was only applied to individual sentences, so some of the remaining sentences ended up losing their "twin." One follow-up study showed that neural network language models, tested on twin sentences only and required to be correct on both, are much less accurate than humans, showing that the earlier 90% result is less significant than it seemed.

So, what to make of the Winograd saga? The main lesson is that it is often hard to determine from their performance on a given challenge whether AI systems truly understand the language (or other data) that they process. We now know that neural networks often use statistical shortcuts, instead of actually demonstrating humanlike understanding, to obtain high performance on the Winograd schemas as well as on many of the most popular "general language understanding" benchmarks.

The crux of the problem, in my view, is that understanding language requires understanding the world, and a machine exposed only to language cannot gain such an understanding. Consider what it means to understand "The sports car passed the mail truck because it was going slower." You need to know what sports cars and mail trucks are, that cars can pass one another, and, at an even more basic level, that vehicles are objects that exist and interact in the world, driven by humans with their own agendas.

All this is knowledge that we humans take for granted, but it's not built into machines or likely to be explicitly written down in any of a language model's training text. Some cognitive scientists have argued that humans rely on innate, pre-linguistic core knowledge of space, time and many other essential properties of the world in order to learn and understand language. If we want machines to similarly master human language, we will need to first endow them with the primordial principles humans are born with. And to assess machines' understanding, we should start by assessing their grasp of these principles, which one might call "infant metaphysics."

Training and evaluating machines for baby-level intelligence may seem like a giant step backward compared to the prodigious feats of AI systems like Watson and GPT-3. But if true and trustworthy understanding is the goal, this may be the only path to machines that can genuinely comprehend what "it" refers to in a sentence, and everything else that understanding it entails.

