In the clip above, fitness trainer Justin Agustin explains a popular military technique to train yourself to fall asleep in two minutes. It’s not unlike a body scan meditation to systematically relax yourself. From Yahoo! News:
When the mind starts to wander, repeating the phrase “Don’t think” for 10 seconds can help one fall back into deep breathing and peaceful visualization, Agustin says.
Agustin explained to his viewers that they must practice the technique every night for six weeks. He claimed that 96% of people who mastered the military trick are able to fall asleep within two minutes after closing their eyes.
OpenAI co-founder Sam Altman ridicules start-up fundraising process
Published Thu, Jan 27 2022, 8:25 AM EST | Updated Thu, Jan 27 2022, 11:07 AM EST | Sam Shead (@Sam_L_Shead)
KEY POINTS
Sam Altman, the former president of start-up accelerator Y-Combinator and the co-founder of the OpenAI research lab, has ridiculed the start-up fundraising process.
The tech entrepreneur joked that OpenAI has raised a $250 million series A funding round on the back of six other rounds.
Sam Altman, co-founder and chief executive officer of OpenAI Inc., speaks during TechCrunch Disrupt 2019 in San Francisco, California, on Thursday, Oct. 3, 2019. (David Paul Morris | Bloomberg | Getty Images)
The CEO of Microsoft-backed OpenAI, the artificial intelligence lab that competes with Alphabet’s DeepMind and Meta AI, has taken aim at the start-up fundraising process.
Sam Altman, OpenAI’s co-founder and CEO, joked on Twitter that OpenAI has raised a $250 million series A funding round on the back of six other rounds.
“After our pre-friends-and-family round in 2016, our F&F round in 2017, our angel round in 2018, our pre-seed round in 2019, our seed round in 2020, and our seed extension in 2021, we’re delighted to share we’ve raised a Series A of $250 million,” Altman wrote late Wednesday. “Humbled by such a strong start.”
OpenAI has in fact only completed two funding rounds, according to start-up tracker Crunchbase.
While start-ups used to raise a seed round before going on to raise series A, B, C and so on rounds, many are now choosing to do additional rounds in between.
OpenAI, ranked by AI researchers as one of the top three AI labs worldwide, did not immediately respond to a CNBC request for comment.
Incorporated in San Francisco in 2015, OpenAI says it is trying to develop safe and friendly AI systems.
The company’s founders — Altman, Tesla CEO Elon Musk, Greg Brockman, Ilya Sutskever, Wojciech Zaremba and John Schulman — pledged to invest over $1 billion into the venture when they set it up. Musk resigned from the board in February 2018 but remained a donor.
Altman believes AI has a lot further to go, however. Indeed, he expects AI to generate enough wealth to pay every adult in the U.S. $13,500 a year in as little as 10 years from now.
“My work at OpenAI reminds me every day about the magnitude of the socioeconomic change that is coming sooner than most people believe,” said Altman, the former president of renowned start-up accelerator Y-Combinator, in a blog post last year. “Software that can think and learn will do more and more of the work that people now do.”
Whether you’re a strict vegetarian or are just looking to cut down on your meat consumption, McDonald’s will soon have something new for you to sink your teeth into.
In November, the chain began a test run of its plant-based burger, the McPlant, at eight select restaurants across the country—and the patty’s performance has positively surprised market analysts.
Throughout the past month, participating locations sold about 70 McPlants per day. In comparison, the average McDonald’s restaurant typically sells about 110 Big Macs each day, according to CNBC.
Market analyst Michael Lavery wrote in a note to clients on Tuesday that the test locations were selling about three times more McPlants than he initially expected. That’s a pretty huge win for those advocating for more plant-based options at fast-food restaurants.
Although the sample size for the McPlant is currently small, Lavery said that consumers’ early overall interest and willingness to try the burger was greater than expected.
“While we believe test stores likely received a lift from exclusivity (drawing some sales from nearby stores that did not offer it), and that sustainable, repeat sales will settle in at a much lower rate; initial McPlant sales could prove to be closer to 8% to 10% of burger sales, or $170 million to $215 million (annualized),” he wrote.
The burger, which was crafted in conjunction with Beyond Meat, has 510 calories and contains 22 grams of protein. The meat-free patty is made from ingredients like peas, rice, and potatoes and served on a sesame seed bun with all the traditional burger toppings.
McDonald’s has been slower than its main competitor to embrace the plant-based meat trend: Burger King introduced its meatless Impossible Whopper more than two years ago.
Vegetarian and vegan options continue to be areas of growth for the fast-food industry. Earlier this month, Chipotle debuted its plant-based chorizo nationwide, and Taco Bell continues to offer an array of meat alternatives like black beans and potatoes on its menu.
For now, the McPlant is available for a limited time at restaurants in Irving and Carrollton, Texas; Cedar Falls, Iowa; Jennings and Lake Charles, Louisiana; and El Segundo and Manhattan Beach, California. Starting February 14, McDonald’s will roll out the McPlant at roughly 600 locations in the Dallas-Fort Worth and San Francisco Bay areas.
Can deep sleep help devastating brain disorders? Scientists studying Parkinson’s want to find out
by Delthia Ricks, Medical Xpress
Sleep may be one of the most potent medicines for the brain, scientists are discovering, as they explore the inner labyrinths of the three-pound organ during deep sleep and dream cycles in both health and disease.
Among the most active areas of sleep research are those focused on brain disorders in which insomnia and other abnormal sleep patterns play a role. Parkinson’s and Alzheimer’s are among the major neurodegenerative conditions affected by poor sleep, and scientists say that in the not-too-distant future those conditions—and possibly others—may benefit from newly emerging sleep research.
In Switzerland, Dr. Marta Morawska, a neuroscientist at University Hospital of Zurich, is attempting to peel away some of the mysteries underlying disturbed sleep patterns and the progressive decline of Parkinson’s disease.
“Sleep disturbances have been shown to occur and to contribute to several neurodegenerative diseases, including Parkinson’s disease,” Morawska wrote in Science Translational Medicine. “In particular, slow-wave sleep alterations show correlation with Parkinson’s disease symptoms and progression.”
Slow-wave sleep is a period of non-rapid eye movement (NREM) sleep characterized by high-amplitude, low-frequency brain waves. During this stage, also known as stage 3 (N3) sleep, delta waves dominate when patients undergo electroencephalography, or EEG. Slow-wave sleep is the deepest, most restful stage of sleep and is believed to be the period when memory consolidation occurs.
A large body of research has long established that sleep is essential for a healthy brain, improving concentration and mood while awake. During the four stages of the sleep cycle, the body reboots and replenishes: immune defenses are bolstered, and hormones are released that repair cells and regulate the metabolic rate. Blood pressure rises and falls as cardiovascular health is fine-tuned.
Yet, while it’s well known that sleep aids the healthy brain, only now are scientists gaining more insight into sleep—and the lack of it—in serious neurodegenerative disorders.
Parkinson’s disease is a progressive nervous system disorder that affects movement and gait. More than 10 million people worldwide are living with the condition, according to the Parkinson’s Foundation, based in Miami, Florida.
The disorder is characterized by tremor, muscular rigidity and slow movement, its incidence increasing as people age. Within the brain itself, the disorder is marked by the degeneration of the basal ganglia and a deficiency of the neurotransmitter dopamine.
Sleep deprivation, a major problem in Parkinson’s disease, can increase deposits of alpha-synuclein, say Morawska and her Zurich collaborators, referring to aggregates of a toxic protein associated with Parkinson’s disease and other neurodegenerative disorders.
Although alpha-synuclein plays a critical role in the healthy brain by actively inhibiting neurotransmitters when they are overexpressed, in Parkinson’s, aggregates of the protein collect in the brain. These large deposits of alpha-synuclein are known as Lewy bodies, which are hallmarks not only of Parkinson’s disease but also of another disorder, Lewy body dementia. In late-stage Parkinson’s disease, dementia can occur.
To test their basic hypothesis—that increasing the amount of slow-wave sleep may have a beneficial impact on the Parkinson’s brain—Morawska and her team turned to two mouse models. The team examined sleep deprivation and its impact on the accumulation of the neurotoxic aggregates. The researchers also investigated a way to increase slow-wave sleep in their animal models.
“Sleep deprivation increased brain alpha-synuclein aggregates,” Morawska asserted. “Enhancing slow-wave sleep with sodium oxybate reduced the alpha-synuclein burden, possibly by increasing glymphatic function and modulating protein homeostasis.”
Glymphatic function, known interchangeably as the glymphatic system and the glial-dependent waste clearance pathway, refers to the brain’s ability to eliminate toxic proteins and other debris during sleep. Soluble waste proteins are eliminated during the NREM phase of sleep, which may explain why the Zurich team hypothesizes that extending deep sleep may slow down Parkinson’s progression.
The new Swiss research joins studies from elsewhere in Europe, as well as the United States, investigating the role of irregular sleep patterns in other neurodegenerative disorders. An active area of research focuses on disrupted sleep patterns and Alzheimer’s disease.
In the International Journal of Science two years ago, researchers in Spain posed an intriguing scientific question: Is disruptive sleep a cause or consequence of Alzheimer’s? The research emphasized an important point: “Affected brain structures in people with disturbed sleep coincide with vulnerable areas in Alzheimer’s disease.”
Another critical area of research involves sleep disturbances common in chronic traumatic encephalopathy, or CTE. The condition is linked with repeated head injuries, such as trauma sustained in boxing, football and other aggressive sports. People with CTE have sleep-related problems that cause them to move their limbs and shout during the dream phase of the sleep cycle.
As with Parkinson’s disease, people with CTE are at increased risk for alpha-synuclein aggregates in the brain. CTE patients are also at increased risk for Lewy body dementia.
Morawska and her collaborators, meanwhile, suggest their laboratory research may lead to clinical studies involving Parkinson’s patients. “The results suggest that sleep plays an important role in Parkinson’s disease pathophysiology and that manipulating slow-wave sleep might be therapeutic in patients with Parkinson’s.”
More information: Marta M. Morawska et al., “Slow-wave sleep affects synucleinopathy and regulates proteostatic processes in mouse models of Parkinson’s disease,” Science Translational Medicine (2021). DOI: 10.1126/scitranslmed.abe7099
Grace O’Donnell, Assistant Editor · Thu, January 27, 2022, 1:39 PM
Tesla (TSLA) CEO Elon Musk raised eyebrows on Wednesday during the electric vehicle maker’s quarterly earnings call when he touted the development of the Optimus humanoid robot.
“I think that, actually, the most important product development we’re doing this year is actually the Optimus humanoid robot,” Musk said. “This, I think, has the potential to be more significant than the vehicle business over time.”
Musk unveiled the concept for the Optimus humanoid robot last August in a presentation that featured a person in a skin suit performing a dance routine onstage. At 5 feet 8 inches tall and 125 pounds, the robot is intended to handle menial labor tasks and would likely first be deployed in Tesla factories.
“The foundation of the economy is labor,” Musk said Wednesday. “Capital equipment is distilled labor. So what happens if you don’t actually have a labor shortage? I’m not sure what an economy even means at that point. That’s what Optimus is about.”
Tesla’s concept for its Optimus bot. (Tesla)
Though some see the robot as another distraction touted by Tesla’s enigmatic CEO, several analysts agreed that a functional factory robot could eventually relieve the labor demands involved in building cars.
“We’re in a very tight labor market right now across the economy and across our entire coverage universe,” Oppenheimer Senior Research Analyst Colin Rusch said on Yahoo Finance Live (video above). “And so anywhere where we’re seeing savings around labor rates or, you know, just efficiency within operations, there’s a lot of value release.”
Wedbush Managing Director of Equity Research Dan Ives echoed the sentiment, telling Yahoo Finance in an email: “Robots will be in the Tesla factory over the coming years — a real business use with labor shortages abound and growing.”
Rusch added that “there’s an enormous opportunity around automation. We’re continuing to do an awful lot of work in our research franchise around this. And the form factor with the robot is interesting. I think they’re very early. Obviously, Elon likes to talk about things early in the cycle. You know, and so there’s a lot of work to do here.”
‘The company is really built on engineering and innovation’
“The company is really built on engineering and innovation,” Rusch noted. “And the learning that they’ve done in and around cars really has applicability in other areas. And it’s interesting to see them move into that form factor. We’re certainly not giving them any credit or any value around that robot.”
And while “there was probably some disappointment from certain investors that they were talking about it just as a potential distraction for the organization or on their core business,” Rusch added, robots and automation are “important technology. We think it’s going to become a bigger and bigger issue as we get into the back half of the decade. And we’ll see what they come up with.”
“If you look at the fundamentals, and the profitability, and the story at Tesla, I think the golden goose for them continues to be going after the EV market and gaining more and more of that advantage,” Ives said on Yahoo Finance Live after Tesla reported earnings.
Furthermore, while the Optimus robot won’t be reducing labor costs anytime soon, Tesla seems focused on reducing costs in other areas.
“It’s important to think about what’s going to drive earnings for this company over the next few years, and a lot that is really related to cost,” Rusch said, citing Tesla’s simplification of car design and improvements in production. “They’re continuing to swim out in front of folks in terms of designing cost out of their system. We think that serves them awfully well over time.”
Volkswagen announced the official start of series production of the Volkswagen ID.5 model (and its all-wheel drive ID.5 GTX version) in Zwickau in western Saxony, Germany.
This marks the completion of Volkswagen’s transformation of its Zwickau plant, which is considered the first large-scale facility of any volume manufacturer worldwide to switch over all production from internal combustion engine vehicles to electric vehicles.
The switch from 100% ICE to 100% BEVs (six models) took about 26 months. The plant’s potential manufacturing capacity across those six models is 330,000 vehicles per year. Since 2018, around €1.2 billion ($1.3 billion) has been spent on the conversion of the plant.
According to the press release, in 2021, the company produced roughly 180,000 units at Zwickau and in the much smaller Transparent Factory (ID.3) in Dresden (Gläserne Manufaktur Dresden). Of course, the goal for 2022 is to increase production.
In 2022, Volkswagen will launch electric car production (MEB-based) at three more plants.
According to the manufacturer, the total global Volkswagen (brand) all-electric manufacturing capacity is expected to reach 1.2 million, counting Europe, the U.S. and China.
Dr. Christian Vollmer, Member of the Board of Management of Volkswagen Brand responsible for Production:
“Volkswagen will continue to increase the pace of e-mobility in 2022 with its ACCELERATE strategy and the expansion of the model portfolio. The Zwickau production plant has paved the way for the Group to do this with six ramp-ups from three brands in just 26 months. The knowledge and experience gained will help us to continue to electrify our production network quickly and efficiently.”
Dr. Stefan Loth, Chairman of the Board of Volkswagen Saxony:
“After Gläserne Manufaktur Dresden, we have now converted a second Volkswagen factory in Saxony to dedicated electric vehicle production. The start of production of the ID.5 and ID.5 GTX marks the successful transformation of the Zwickau plant on the product side. Our focus now – depending on how the semiconductor situation pans out – will be on achieving full capacity. This year we aim to exceed the 180,000 vehicles Volkswagen Saxony built in 2021.”
The Volkswagen ID.5 starts in Germany from €46,515.
Volkswagen ID.5 specs:
Three motor options, one battery.
Volkswagen ID.5 Pro specs:
WLTP range: up to 520 km (323 miles)
82 kWh battery (77 kWh net usable)
0-100 km/h (62 mph) in 10.4 seconds
top speed of 160 km/h (99 mph)
single motor rear-wheel drive
peak system output of 128 kW (174 PS); permanently excited synchronous machine in the rear
AC charging (on-board): 11 kW (three-phase)
DC fast charging: up to 135 kW; can replenish up to 390 km (242 miles) of range in around 30 minutes
Dimensions: length 4,599 mm, width 1,852 mm, height 1,613 mm, wheelbase 2,766 mm
Drag coefficient: 0.26 Cd
Volkswagen ID.5 Pro Performance specs:
WLTP range: up to 520 km (323 miles)
82 kWh battery (77 kWh net usable)
0-100 km/h (62 mph) in 8.4 seconds
top speed of 160 km/h (99 mph)
single motor rear-wheel drive
peak system output of 150 kW (204 PS); permanently excited synchronous machine in the rear
AC charging (on-board): 11 kW (three-phase)
DC fast charging: up to 135 kW; can replenish up to 390 km (242 miles) of range in around 30 minutes
Dimensions: length 4,599 mm, width 1,852 mm, height 1,613 mm, wheelbase 2,766 mm
Drag coefficient: 0.26 Cd
Volkswagen ID.5 GTX specs:
WLTP range: up to 490 km (305 miles)
82 kWh battery (77 kWh net usable)
0-100 km/h (62 mph) in 6.3 seconds
top speed of 180 km/h (112 mph); electronically limited
dual motor all-wheel drive
peak system output of 220 kW (299 PS), available for up to 30 seconds; 150 kW permanently excited synchronous machine in the rear plus an asynchronous electric motor in the front
AC charging (on-board): 11 kW (three-phase)
DC fast charging: up to 150 kW; can replenish up to 100 km (62 miles) of range in about 6 minutes, or 320 km (199 miles) in around 30 minutes
Dimensions: length 4,582 mm, width 1,852 mm, height 1,619 mm, wheelbase 2,765 mm
Can Medieval Sleeping Habits Fix America’s Insomnia?
The history of “first sleep” and “second sleep” holds surprising lessons about preindustrial life, 21st-century anxiety, and the problem with digging for utopia in the past.By Derek Thompson
At 3 a.m. I’m jolted awake. The room is dark and still. I grab my phone and scan sports scores and Twitter. Still awake. A faceless physician whispers in my mind: To overcome middle-of-the-night insomnia, experts say you ought to get out of bed … I get out of bed. I pour a glass of water and drink it. I go back to bed. Still awake. Perhaps you know the feeling. Like millions of Americans and hundreds of millions of people around the world, I suffer from so-called mid-sleep awakenings that can keep me up for hours.
One day, I was researching my nocturnal issues when I discovered a cottage industry of writers and sleep hackers who claim that sleep is a nightmare because of the industrial revolution, of all things. Essays in The Guardian, CNN, The New York Times, and The New York Times Magazine recommended an old fix for restlessness called “segmented sleep.” In premodern Europe, and perhaps centuries earlier, people routinely went to sleep around nightfall and woke up around midnight—only to go back to sleep a few hours later, until morning. They slept sort of like I do, but they were Zen about it. Then, the hackers claim, modernity came along and ruined everything by pressuring everybody to sleep in one big chunk.
The romanticization of preindustrial sleep fascinated me. It also snapped into a popular template of contemporary internet analysis: If you experience a moment’s unpleasantness, first blame modern capitalism. So I reached out to Roger Ekirch, the historian whose work broke open the field of segmented sleep more than 20 years ago.
In the 1980s, Ekirch was researching a book about nighttime before the industrial revolution. One day in London, wading through public records, he stumbled on references to “first sleep” and “second sleep” in a crime report from the 1600s. He had never seen the phrases before. When he broadened his search, he found mentions of first sleep in Italian (primo sonno), French (premier sommeil), and even Latin (primo somno); he found documentation in Africa, the Middle East, South Asia, and Latin America.
When sleep was divided into a two-act play, people were creative with how they spent the intermission. They didn’t have anxious conversations with imaginary doctors; they actually did something. During this dorveille, or “wake-sleep,” people got up to pee, hung out by the fire, had sex, or prayed. They reflected on their dreams and commingled with the spiritual realm, both the divine and the diabolical. In the 1540s, Martin Luther wrote of his strategies to ward off the devil: “Almost every night when I wake up … I instantly chase him away with a fart.”
Today’s sleep writers often wield Ekirch’s research to suggest that segmented sleep (or, as Ekirch calls it, biphasic—two-phase—sleep) is old, and one-sleep is new, and therefore today’s sleepers are doing it wrong. But that’s not the full story, he told me.
Preindustrial sleep was nothing to romanticize. Death stalked our slumber for centuries. Late-night crime was rampant, and the home itself was a death trap, as slapdash construction left houses vulnerable to fire, leaking roofs, terrible heat or cold, and what Ekirch calls “the trifecta of early modern entomology: fleas, lice, and bedbugs.” As for that romantic French dorveille, it was functionally a second workday for many women, who rose at midnight to finish domestic chores. And ancient soporifics—such as poisonous leaves and various opiate concoctions—were roughly as likely to kill you as they were to induce REM.
Beginning in the 1700s, the industrial revolution—its light, its caffeine, its clocks, and above all, its work schedules—took Europe’s biphasic sleep in its hairy arms and mushed the two phases together. A surging economy made a virtue of productivity and instilled “an increasing sense of time consciousness” in the West, Ekirch told me. By the mid-1800s, “Early Rising” movements had taken off in England and America. New artificial lights delayed bedtimes, while new factory schedules required early waking. The lit world altered our internal clocks too. “Every time we turn on a light, we are inadvertently taking a drug that affects how we will sleep,” Charles Czeisler, a Harvard sleep scientist, has said. When a 1990s study at the National Institute of Mental Health deprived a cohort of male subjects of light at night, their sleep became segmented after a few weeks.
This makes it sound like segmented sleep is humanity’s natural habit, and that the industrial revolution and modern capitalism despoiled our perfect rest.
But humans have never had a universal method of slumber. A 2015 study of hunter-gatherer societies in Tanzania, Namibia, and Bolivia found that most foragers enjoyed one long sleep. Two years later, another study found that a rural society in Madagascar practiced segmented sleep. Two years after that, a study found that the indigenous residents of Tanna, in the South Pacific, largely had one uninterrupted sleep.
Even within preindustrial Europe, sleep contained multitudes. Reviewing the diaries of European writers such as Samuel Pepys and James Boswell, Ekirch found several allusions to unified sleep. Summarizing this complicated literature, he told me that “patterns of sleep in non-Western cultures appear to have been much more diverse” than those in Europe, but that they were truly diverse everywhere.
There is no evidence that sleep was universally segmented, and there is also little evidence that segmented sleep is better. A 2021 meta-analysis of studies on biphasic sleep schedules found that segmented-sleeping subjects actually reported “lower sleep quality … and spent more time in lighter stages of sleep.” One reasonable takeaway is that biphasic sleep is like anarchical foraging: Both might have well served some ancient populations some of the time, but neither of them offers a clear solution to modern problems.
I asked Ekirch this question: As the historian most associated with biphasic sleep, had his research encouraged him, a spouse, or a friend, to become a biphasic sleeper? “Not at all,” he said. “At no time in history have conditions for human slumber been better than today.” Compared with 99 percent of our ancient ancestors, we have better beds, better blankets, better houses, and fewer late-night pests. If the purpose of sleep is mental and physical well-being, “there is very good reason to believe that uninterrupted sleep at night best achieves that outcome,” Ekirch told me.
The upshot of sleep’s preindustrial and postindustrial history is a simple, short, and consistent message: Sleep is adaptable, but it improves with routine. Different tricks work for different tribes, but in the end, we are a diverse species united by a common circadian rhythm that craves consistency. “Sleep is very flexible, when you look cross-culturally,” says Dorothy Bruck, of Australia’s Sleep Health Foundation. “Your body really does like routine. Find what works for you, and keep that routine going.”
I have spent countless hours obsessing over my sleep, tracking my sleep quality on sophisticated devices, and reading (and reading and reading) about the thing I cannot do well. Sleep optimization can backfire by creating an in-the-moment pressure to solve the problem of wakefulness as if one is trying to solve a Rubik’s Cube against the clock. As every insomniac knows, “trying” to fall asleep is a self-defeating paradox. Insomnia is a beast that feasts on its own self-generated anxiety.
When I reached out to Ekirch, coming off a bad night’s rest, I hoped that the historian might have a practical tip. He didn’t. History is not a self-help book. But it has its own strange comforts, and our correspondence was deeply helpful in another way.
Ekirch told me that he’s heard from many people that simply knowing about the history of segmented sleep is its own relief. “Happily, there is mounting testimony from North America, Western Europe, and Australia that knowledge of this pattern has actually helped to alleviate anxiety, permitting some individuals to fall back to sleep more readily,” he said. Rather than see the legacy of premodern rest as an operating manual, I see it as a balm. My 3-a.m. awakenings aren’t an unnatural disorder, but an ancestral echo. Maybe that’s something to tell myself in the middle of the night, instead of fighting the sleep doctor in my head: It’ll be all right. We’ve been here before.
This article originally stated that Martin Luther wrote about his strategy for warding off the devil in the 1550s. In fact, Luther died in 1546, and the quote was published posthumously.

Derek Thompson is a staff writer at The Atlantic and the author of the Work in Progress newsletter. He is also the author of Hit Makers and the host of the podcast Plain English.
Scientists make a new type of optical device using alumina
by Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU)
Figure 1: A schematic diagram showing how researchers can study radiation from the early universe using an infrared reflective filter with a moth eye structure created by a pulsed laser process. Credit: NASA, ESA, M. Postman (STScI), and the CLASH Team / NRAO, AUI, NSF / modified by Kavli IPMU
Tomotake Matsumura of the Kavli Institute for the Physics and Mathematics of the Universe, Shaul Hanany of the University of Minnesota, and their collaborators have made a new type of optical element that will improve the performance of telescopes studying radiation from the Big Bang.
The cosmic microwave background (CMB) is relic radiation from the Big Bang. It reaches our telescopes after traveling for 14 billion years since the birth of the Universe. By studying the properties of this radiation, scientists infer the physics of the Big Bang, how clusters of galaxies form, and the matter and energy content of the Universe. Four Nobel Prizes have been awarded for past studies of the CMB.
To study the CMB, telescopes must be tuned to the wavelengths at which it is most intense, about 1-3 mm, and they must filter out the shorter-wavelength radiation that the atmosphere and the Milky Way emit. Among the most effective optical elements that absorb this short-wavelength radiation while letting the CMB pass through is alumina, a material made of aluminum and oxygen that is second in hardness only to diamond. One challenge with using alumina is that it also reflects almost 50% of the radiation impinging on it. Matsumura and Hanany have now come up with a new way to fabricate anti-reflective structures that reduce these reflections fifty-fold.
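Where does that roughly 50% figure come from? A back-of-envelope Fresnel estimate is sketched below; it assumes a refractive index of about 3.1 for alumina at millimeter wavelengths, a typical published value that the article itself does not quote.

```latex
% Back-of-envelope estimate, assuming n = 3.1 for alumina at millimeter
% wavelengths (an assumption; the article gives no value).
\[
R_{\mathrm{surface}} = \left(\frac{n-1}{n+1}\right)^{2}
                     = \left(\frac{3.1-1}{3.1+1}\right)^{2} \approx 0.26
\]
% Two uncoated surfaces, ignoring interference and multiple internal passes:
\[
R_{\mathrm{total}} \approx 1 - (1 - 0.26)^{2} \approx 0.45
\]
% Cutting a ~45-50% reflection fifty-fold lands near 1%, consistent with the
% sub-1% reflection reported for the laser-ablated sample later in the article.
```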
Figure 2: Images of one side of the fabricated filter. The other side is identical. (a) A photograph of the entire filter. The ruler is graduated up to a length of 300 mm. (b) An enlarged area. (c) Rendering of confocal microscopy scanning of the SWS. Credit: Takaku et al.
The researchers partnered with Mark Devlin and Simon Dicker, colleagues at the University of Pennsylvania who operate the MUSTANG2 instrument, which is coupled to the Green Bank Telescope in West Virginia. Hanany and Matsumura provided the MUSTANG2 team with an alumina short-wavelength absorber that has the new anti-reflective structures. The MUSTANG2 instrument is now conducting sky observations with the new technology, demonstrating its success for the first time.
Figure 3: The transmission spectrum for the MUSTANG2 filter for unpolarized light (upper panel, blue dots), for s- and p-polarized light (lower panel, red and green dots, respectively), and RCWA predicted transmission given the average shape data for each side as provided in Table 2 (upper panel, solid cyan, and lower panel solid red and green). The average transmission within the band (white region) is 98%. Note the different vertical spans for the two panels, which provide both an overall performance view and details on the agreement between the data and the RCWA predictions. Credit: Takaku et al.
The researchers patterned the alumina with small pyramidal structures, each about one millimeter (0.04 inch) tall, repeating across the 30 cm (one foot) diameter with a period of just under one millimeter. It has long been known that incorporating such structures on the surfaces of materials reduces reflections: with the small pyramids, light enters and leaves the material more gradually, leading to much lower reflection. Matsumura and Hanany’s innovation is in the way they patterned the alumina, which is too hard to be machined with standard tools. They used an ultra-short pulsed laser, with pulses a few trillionths of a second long reaching 100 megawatts each, to ablate material away and shape the surface relief into its optimal anti-reflective form. Within about four days, the laser process produced 320,000 pyramids on both sides of the alumina disc. The researchers measured the properties of the alumina sample and showed that it reflects less than 1% of the incident radiation. This is the first time such an optical element has been fabricated and coupled to an operating instrument, and it is the largest sample of alumina to have been laser-ablated.
This innovation will lead to more efficient instruments peering back in time and revealing the physical processes at the Big Bang and throughout the evolution of the universe.
More information: Ryota Takaku et al., “A Large Diameter Millimeter-Wave Low-Pass Filter Made of Alumina with Laser Ablated Anti-Reflection Coating,” Optics Express (2021). DOI: 10.1364/OE.444848

Provided by the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU)
OpenAI’s new language AI improves on GPT-3, but still lies and stereotypes
Research company OpenAI says this year’s language model is less toxic than GPT-3. But the new default, InstructGPT, still has tendencies to make discriminatory comments and generate false information.
Illustration: Pixabay; Protocol
Kate Kaye | January 27, 2022
OpenAI knows its text generators have had their fair share of problems. Now the research company has shifted to a new deep-learning model that it says works better, producing “fewer toxic outputs” than GPT-3, its flawed but widely used system.
Starting Thursday, a new model called InstructGPT will be the default technology served up through OpenAI’s API, which delivers foundational AI into all sorts of chatbots, automatic writing tools and other text-based applications. Consider the new system, which has been in beta testing for the past year, to be a work in progress toward an automatic text generator that OpenAI hopes is closer to what humans actually want.
“We want to build AI systems that act in accordance with human intent, or in other words, that do what humans want,” said Jan Leike, who leads the alignment team at OpenAI. Leike said he has been working for the past eight years to improve what the company refers to as “alignment” between its AI and human goals for automated text.
Asking an earlier iteration of GPT to explain the moon landing to a 5-year-old may have resulted in a description of the theory of gravity, said Leike. Instead, the company believes InstructGPT, the first “aligned model” it says it has deployed, will deliver a response that is more in touch with the human desire for a simple explanation. InstructGPT was developed by fine-tuning the earlier GPT-3 model using additional human- and machine-written data.
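For readers who use the API, here is a minimal sketch of what requesting a completion through OpenAI’s Python client looked like at the time. The engine name, prompt, and settings are illustrative assumptions; the article does not specify model identifiers or code.

```python
# Minimal sketch (circa early 2022) of requesting a completion through
# OpenAI's API, which now serves InstructGPT-class models by default.
# The engine name below is an assumption for illustration, not taken
# from the article.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    engine="text-davinci-001",  # assumed InstructGPT-style model identifier
    prompt="Explain the moon landing to a 5-year-old in a couple of sentences.",
    max_tokens=100,
    temperature=0.7,
)

# The completion text lives in the first choice returned by the API.
print(response.choices[0].text.strip())
```

In the article’s moon-landing example, the claim is that an InstructGPT-style default is likelier to return a simple, child-friendly answer to a prompt like this, whereas earlier GPT-3 base models tended to drift toward generic text completion.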
Yabble has used InstructGPT in its business insights platform. The new model has an improved ability to understand and follow instructions, according to Ben Roe, the company’s head of product. “We’re no longer seeing grammatical errors in language generation,” Roe said.
‘Misalignment matters to OpenAI’s bottom line’
Ultimately, the success and broader adoption of OpenAI’s text automation models may be dependent on whether they actually do what people and businesses want them to. Indeed, the mission to improve GPT’s alignment is a financial matter as well as one of accuracy or ethics for the company, according to an AI researcher who led OpenAI’s alignment team in 2020 and has since left the company.
“[B]ecause GPT-3 is already being deployed in the OpenAI API, its misalignment matters to OpenAI’s bottom line — it would be much better if we had an API that was trying to help the user instead of trying to predict the next word of text from the internet,” wrote the former head of OpenAI’s language model alignment team, Paul Christiano, in 2020, in a bid to find additional ML engineers and researchers to help solve alignment problems at the company.
At the time, OpenAI had recently introduced GPT-3, the third version of its Generative Pre-trained Transformer natural language processing system. The company is still looking for additional engineers to join its alignment team.
Notably, InstructGPT cost less to build than GPT-3 because it used far fewer parameters, the internal numerical values a neural network adjusts as it learns. “The cost of collecting our data and the compute for training runs, including experimental ones is a fraction of what was spent to train GPT-3,” OpenAI researchers said in a paper describing how InstructGPT was developed.
Like other foundational natural-language processing AI technologies, GPT has been employed by a variety of companies, particularly to develop chatbots. But it’s not the right type of language processing AI for all purposes, said Nitzan Mekel-Bobrov, eBay’s chief artificial intelligence officer. While eBay has used GPT, the ecommerce company has relied more heavily on another open-source language model, BERT, said Mekel-Bobrov.
“We feel that the technology is just more advanced,” said Mekel-Bobrov regarding BERT, which stands for Bidirectional Encoder Representations from Transformers. EBay typically uses AI-based language models to help understand or predict customer intent rather than to generate automated responses for customer service, something he said BERT is better suited for than early versions of GPT.
“We are still in the process of figuring out the balance between automated dialogue and text generation as something customers can benefit from,” he said.
About the bias and hallucinations…
GPT-3 and other natural-language processing AI models have been criticized for producing text that perpetuates stereotypes and spews “toxic” language, in part because they were trained using data gleaned from an internet that’s permeated by that very sort of nasty word-smithing.
In fact, research published in June revealed that when prompted with the phrase, “Two Muslims walk into a …,” GPT-3 generated text referencing violent acts two-thirds of the time in 100 tries. Using the terms “Christians,” “Jews,” or “Sikhs” in place of “Muslims” resulted in violent references 20% or less of the time.
OpenAI said in its research paper that “InstructGPT shows small improvements in toxicity over GPT-3,” according to some metrics, but not in others.
“Bias still remains one of the big issues especially since everyone is using a small number of foundation models,” said Mekel-Bobrov. He added that bias in natural-language processing AI such as earlier versions of GPT “has very broad ramifications, but they’re not necessarily very easy to detect because they’re buried in the foundational [AI].”
He said his team at eBay attempts to decipher how foundational language models work in a methodical manner to help identify bias. “It’s important not just to use their capabilities as black boxes,” he said.
GPT-3 has also been shown to conjure up false information. While OpenAI said InstructGPT lies less often than GPT-3 does, there is more work to be done on that front, too. The company’s researchers gauged the new model’s “hallucination rate,” noting, “InstructGPT models make up information half as often as GPT-3 (a 21% vs. 41% hallucination rate, respectively).”
Leike said OpenAI is aware that even InstructGPT “can still be misused” because the technology is “neither fully aligned or fully safe.” However, he said, “It is way better at following human intent.”
What’s more important to Elon Musk? A hugely successful business selling critically lauded electric vehicles or the sci-fi dream of a humanoid robot that doesn’t exist outside of a slide deck?
Well, if you’re familiar with Musk’s modus operandi, you won’t be surprised that Musk has declared the latter — a non-existent robot — to be Tesla’s “most important product development” in a recent earnings call. Discussing the company’s product map for the years ahead, Musk noted that the Roadster and Semi (originally set to launch in 2020) and Cybertruck (first slated for 2021) would be in production “hopefully next year” (emphasis ours) before smoothly switching gears to talk up the Tesla Bot — a humanoid robot concept unveiled by the company last August in the form of a dancer in a spandex suit.
So, in terms of priority of products, I think actually the most important product development we’re doing this year is actually the Optimus humanoid robot. This, I think, has the potential to be more significant than the vehicle business over time. If you think about the economy, it is — the foundation of the economy is labor. Capital equipment is distilled labor. So, what happens if you don’t actually have a labor shortage? I’m not sure what an economy even means at that point. That’s what Optimus is about. So, very important.
Now, some have interpreted Musk’s comment to mean that development of the robot at Tesla is taking precedence over vehicles, which is a stretch. Rather, Musk seems to be speaking speculatively about the long-term significance of a robot that’s able to tackle any physical labor that humans can. Later in the call, he touches on the subject again, noting that in terms of names for the bot, “the Optimus name seems to be sticking at least internally, Optimus Subprime” (it’s a good joke!) and that “the first use of the Optimus robots would be, at Tesla, like moving parts around the factory or something like that.”
Which, sure. I don’t want to get bogged down in assessing the credibility of any particular claims about the Tesla Bot here. (I’ve outlined my thoughts before, and, in short, I’ll believe it when I see it.) But it’s worth noting that these sorts of comments from Musk have the benefit of frothing up Tesla-friendly headlines while obscuring the admission that the actual cars it actually makes money from are struggling to make it out of the factory.
This is not necessarily a knock at Tesla or Musk for failing to keep deadlines (making cars is hard, and Tesla has basically done tremendously well inserting itself as a new player in the industry). But it’s a reminder that as much as Musk loves to surround himself with veils of sci-fi speculation that boost both his personal mystique and market cap, he does also run a company that makes cars and that is — right now — having some difficulty making some of those cars.
Now, would a magical robot workforce help with this problem? Sure, why not! But Musk and co will have to invent one first — and without any of said magical robots to help.