https://www.lifehacker.com.au/2019/10/our-favorite-features-in-the-new-firefox-70-browser/

The Best Features In The New Firefox 70 Browser


It’s time to update to a brand-new version of Firefox Quantum. While your browser will eventually do this for you, I recommend forcing the issue by clicking the hamburger icon, clicking on “Help,” and then clicking on “About Firefox.” And while Firefox 70 downloads to your desktop or laptop, here’s a quick look at what’s new.

Saving even more of your Mac’s battery

Thanks to some improvements in Firefox’s compositing pipeline — which you can read about here, if you want the geeky details — the browser now uses far less power than before, a 3x improvement in some cases. I think this illustration from Mozilla says it all:

Illustration: Mozilla

And if you’re curious whether Firefox’s improvements — switching to partial compositing — are live on other platforms, Mozilla had this to say:

“Firefox uses partial compositing on some platforms and GPU combinations, but not on all of them. Notably, partial compositing is enabled in Firefox on Windows for non-WebRender, non-Nvidia systems on reasonably recent versions of Windows, and on all systems where hardware acceleration is off. Firefox currently does not use partial compositing on Linux or Android.”

Read a report of all the trackers Firefox blocks for you

Not only is Firefox now blocking cross-site tracking cookies on social sites like LinkedIn, Facebook and Twitter — a brand-new feature — but you can now view your very own Privacy Protections report that shows you everything Firefox has been blocking in the background. To view it, simply click on the shield icon next to your address bar and select “Show report.”

Screenshot: David Murphy

Not only will you get a somewhat detailed breakdown of how many trackers Firefox blocked over the past week, but you’ll also get a quick overview of how many data breaches the email address associated with your Firefox account has been involved in lately — if you also sign up for that free feature.

Photo: David Murphy

If you want to be a real baller, I recommend setting about:protections as your browser homepage within Firefox’s settings (in the Home section). That way, whenever you click the home icon — or tap ALT+Home on your keyboard — you’ll be able to jump right to your privacy report. I figure that’s more useful than a blank page, right?

Make even more secure passwords for sites

Firefox is making it even easier to generate secure passwords for new accounts (or pipe in your older, saved, secure passwords when you’re logging in) via the browser’s Lockwise feature. As Mozilla writes:

“Now, when you create an account you’ll be auto-prompted to let Lockwise generate a safe password, which you can save directly in the Firefox browser. For current accounts, you can right click in the password field to access securely generated passwords through the fill option. All securely generated passwords are auto-saved to your Firefox Lockwise account.”

That’s a pretty standard inclusion for any kind of password manager. What I also like, though, is the redesigned Lockwise dashboard, which you can access by clicking on the hamburger icon and clicking on “Logins and Passwords.”

Not only does this screen make it easy to delete, create, and update your passwords, but you’ll also receive a notification if any passwords currently stored in Lockwise have been involved in any kind of data breach. It’s analogous to 1Password’s Watchtower feature, which I love.

Even more Windows users get WebRender

You might not know what WebRender is, but you’ll feel its effects if you can use it — improved performance and stability as a result of your GPU rendering web content instead of your CPU. In Firefox 70, even more Windows users are going to get a chance to use WebRender, as Mozilla has opened the feature up to those running Intel integrated graphics and a display resolution lower than 1920-by-1200 pixels.

 

https://www.theguardian.com/technology/2019/oct/24/mind-reading-tech-private-companies-access-brains

Mind-reading tech? How private companies could gain access to our brains

Social media companies can already use online data to make reliable guesses about pregnancy or suicidal ideation – and new BCI technology will push this even further

Brain-computer interface technology is being followed by research institutions and technology companies alike. Illustration: Guardian Design/The Guardian

It’s raining on your walk to the station after work, but you don’t have an umbrella. Out of the corner of your eye, you see a rain jacket in a shop window. You think to yourself: “A rain jacket like that would be perfect for weather like this.”

Later, as you’re scrolling on Instagram on the train, you see a similar-looking jacket. You take a closer look. Actually, it’s exactly the same one – and it’s a sponsored post. You feel a sudden wave of paranoia: did you say something out loud about the jacket? Had Instagram somehow read your mind?

While social media’s algorithms sometimes appear to “know” us in ways that can feel almost telepathic, ultimately their insights are the result of a triangulation of millions of recorded externalized online actions: clicks, searches, likes, conversations, purchases and so on. This is life under surveillance capitalism.

As powerful as the recommendation algorithms have become, we still assume that our innermost dialogue is internal unless otherwise disclosed. But recent advances in brain-computer interface (BCI) technology, which integrates cognitive activity with a computer, might challenge this.

In the past year, researchers have demonstrated that it is possible to translate directly from brain activity into synthetic speech or text by recording and decoding a person’s neural signals, using sophisticated AI algorithms.

While such technology offers a promising horizon for those suffering from neurological conditions that affect speech, this research is also being followed closely, and occasionally funded, by technology companies like Facebook. A shift to brain-computer interfaces, they propose, will offer a revolutionary way to communicate with our machines and each other, a direct line between mind and device.

But will the price we pay for these cognitive devices be an incursion into our last bastion of real privacy? Are we ready to surrender our cognitive liberty for more streamlined online services and better targeted ads?

A BCI is a device that allows for direct communication between the brain and a machine. Foundational to this technology is the ability to decode neural signals that arise in the brain into commands that can be recognized by the machine.

Because neural signals in the brain are often noisy, decoding is extremely difficult. While the past two decades have seen some success decoding sensory-motor signals into computational commands – allowing for impressive feats like moving a cursor across a screen with the mind or manipulating a robotic arm – brain activity associated with other forms of cognition, like speech, has remained too complex to decode.

But advances in deep learning, an AI technique that mimics the brain’s ability to learn from experience, are changing what’s possible. In April this year, a research team at the University of California, San Francisco, published results of a successful attempt at translating neural activity into speech via a deep-learning powered BCI.

The team placed small electronic arrays directly on the brains of five people and recorded their brain activity, as well as the movement of their jaws, mouths and tongues as they read out loud from children’s books. This data was then used to train two algorithms: one learned how brain signals instructed the facial muscles to move; the other learned how these facial movements became audible speech.

Once the algorithms were trained, the participants were again asked to read out from the children’s books, this time merely miming the words. Using only data collected from neural activity, the algorithmic systems could decipher what was being said, and produce intelligible synthetic versions of the mimed sentences.
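The two-stage pipeline described above can be sketched conceptually. The function bodies below are placeholders, not the researchers' actual models; the real UCSF decoders are neural networks trained on ECoG recordings, and all names here are illustrative assumptions:

```python
# Conceptual sketch of the two-stage brain-to-speech decoding pipeline.
# Both "models" are stand-in arithmetic transforms for illustration only.

def decode_articulation(neural_signals):
    """Stage 1 (illustrative): neural activity -> estimated vocal-tract movements."""
    return [s * 0.5 for s in neural_signals]  # placeholder transform

def synthesize_speech(articulation):
    """Stage 2 (illustrative): estimated movements -> audio-like output."""
    return [a + 1.0 for a in articulation]    # placeholder transform

def brain_to_speech(neural_signals):
    # Chaining the stages is why mimed (silent) reading still works:
    # only neural data enters the pipeline, never actual audio.
    return synthesize_speech(decode_articulation(neural_signals))
```

The key design point is that neither stage ever sees the participant's voice at decoding time, which is what allowed the mimed sentences to be reconstructed.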

According to Gopala Anumanchipalli, a speech scientist who led the study, the results point a way forward for those suffering from “locked in” conditions, like amyotrophic lateral sclerosis or brain stroke, where the patient is conscious but cannot voluntarily move the muscles that correspond to speech.

“At this stage we are using participants who can speak so this is only proof of concept,” he said. “But this could be transformative for people who have these neurological disabilities. It may be possible to restore their communication again.”

But there are also potential applications for such technology beyond medicine. In 2017, Facebook announced that it would be investing in the development of non-invasive, wearable BCI that would allow Facebook users to “type with their brains”.

Since then, Facebook has funded research to achieve this goal, including a study by the same lab at the University of California, San Francisco. In this study, participants listened to multiple-choice questions and responded aloud with answers while signals were recorded directly from their brains, which served as input data to train decoding algorithms. After this, participants listened to more questions and again responded aloud, at which point the algorithms translated the one-word answers into text on a screen in real time.

While Facebook eagerly reported that these results indicated a step towards their goal of creating a device that will “let people type just by imagining the words they want to say”, according to Marc Slutzky, professor of neurology at Northwestern University, this technology is still a long way from what most people commonly understand as “mind-reading”.

State-of-the-art BCIs can only decode the neural signals associated with attempted speech, or the physical act of articulation, Slutzky told me. Decoding “imagined” speech, which is what Facebook ultimately wants to achieve, would require translating from abstract thoughts into language, which is a far more confounding problem.

“If someone imagines saying a sentence in their head but doesn’t at least attempt to physically articulate it, it is unclear how and where in the brain the imagined sentence is conceived,” he said.

Indeed, while many philosophers of language in the 20th century proposed that we think in sentence-like strings of language, use of brain imaging technology like electroencephalography (EEG) and electrocorticography (ECoG) has since revealed that thinking more probably happens in a complex combination of images and associations.

According to John Dylan Haynes, professor of neuroscience at the Charité Universitätsmedizin in Berlin, it is possible to decode and read out some of these signals to some degree, but this is still far from mind-reading. “That would require a full understanding of the language of the brain,” he said. “And to be very clear, we don’t fully understand the language of the brain.”

But even if BCI technology can’t directly read minds, that doesn’t mean a device couldn’t be used to reveal valuable and sensitive data about an individual. The structural brain scans recorded when someone is connected to a BCI, Haynes said, can reveal with reasonable accuracy whether someone is suffering from certain diseases or whether they have some other cognitive impairment.

While the management of this collateral data is heavily regulated in research institutes, Haynes told me that no such regulations are in place for technology companies. Observing how some companies have, over the past decade, transformed troves of personal data into profit while displaying a wanton attitude to securing such data makes Haynes wary of the growing consumer BCI industry. “I’d be very careful about giving up our cognitive information to companies,” he said.

According to Marcello Ienca, a research fellow at ETH Zurich who evaluates the ethics of neuro-technology, the implications of private companies gaining access to cognitive data should be carefully considered.

“We have already reached a point where analysts at social media companies can use online data to make reliable guesses about pregnancy or suicidal ideation,” he said.

“Once consumer BCIs become widespread and we have enough brain recordings in the digital eco-system, this incursion into parts of ourselves that we thought were unknowable is going to be even more pronounced.”

For some, however, the development of BCI technology is not only about the potential consumer applications, but more profoundly about merging humans with machines. Elon Musk, for example, has said that the driving impetus in starting his own BCI company, Neuralink, which wants to weave the brain with computers using flexible wire threads, is to “achieve a symbiosis with artificial intelligence”.

Adina Roskies, professor of philosophy at Dartmouth College, says that while such a “cyborg future” might seem compelling, it raises thorny ethical questions around identity and moral responsibility. “When BCIs decode neural activity into some sort of action [like moving a robot arm] an algorithm is included in the cognitive process,” she explained. “As these systems become more complex and abstract, it might become unclear as to who the author of some action is, whether it is a person or machine.”

As Christian Herff, professor in the department of neurosurgery at Maastricht University explained to me, some of the systems currently capable of translating neural activity into speech incorporate techniques that are similar to predictive texting. After brain signals are recorded, a predictive system, not unlike those that power Siri and Alexa, tells the algorithm which words can be decoded and in what order they should go. For example, if the algorithm decodes the phrase “I is” the system might change that to “I am”, which is a far more likely output.
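A toy version of that rescoring step might look like the following. The bigram table, the confusion list, and the function names are invented for illustration; the real systems use far richer neural language models, not a lookup table:

```python
# Toy language-model rescoring: pick the decoded word's most probable
# replacement given the previous word. All probabilities are made up.

BIGRAM_PROB = {
    ("i", "am"): 0.30,
    ("i", "is"): 0.001,
    ("i", "was"): 0.15,
}

# Words the (hypothetical) neural decoder tends to confuse with each other.
CONFUSABLE = {"is": ["is", "am", "was"]}

def rescore(prev_word, decoded_word):
    """Return the candidate the language model finds most probable."""
    candidates = CONFUSABLE.get(decoded_word, [decoded_word])
    return max(candidates, key=lambda w: BIGRAM_PROB.get((prev_word, w), 0.0))

print(rescore("i", "is"))  # "I is" is rescored to the likelier "I am"
```

This is exactly where Roskies's concern below comes in: the output reflects what is statistically likely, which is not always what the person actually intended.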

“In the case where people can’t articulate their own words, these systems will help produce some sort of verbalization that they presumably want to produce,” Roskies said. “But given what we know about predictive systems, you can at least imagine cases in which these things produce outputs that don’t directly reflect what the person intended.”

In other words, instead of reading our thoughts, these devices might actually do some of the thinking for us.

Roskies emphasized that we are still a fair way off such a reality, and that oftentimes, companies overstate technological capability for the sake of marketing. “But I do believe that the time to start thinking through some of the ethical implications of these systems is now,” she said.

 

https://www.futurity.org/cellular-sleep-aging-2191692-2/

DOES CELLULAR SLEEP HOLD THE KEY TO AGING?

(Credit: Getty Images)

New information about cellular sleep could lead to interventions in the aging process.

As we age, more and more of our cells enter a coma-like state, called senescence, and can no longer divide. Accumulation of senescent cells impairs normal tissue function, which further promotes aging. By contrast, many other cells in our body exist in a sleep-like state, called quiescence. These cells can wake up to divide in response to a trigger, like a wound, for example. Reversible quiescence is critical to tissue repair and stability.

Researchers found that, contrary to traditional understanding, as cells fall into deep sleep, they risk slipping into complete shutdown.

“Before, people thought cells slept to protect themselves from going into a coma-like state, thinking these two things are opposite,” says study coauthor Guang Yao, associate professor in the molecular and cellular biology department at the University of Arizona, who leads the lab that produced the research. “But we have demonstrated that sleep has different levels, and if it goes too deep, it will eventually go into a coma-like state of shutdown.”

Researchers also discovered that patterns of gene expression can signal just how deeply a cell is sleeping. Learning to manipulate the depth of cellular sleep could lead to aging interventions.

“If you understand the mechanics of aging, you might be able to reverse or slow it,” says Kotaro Fujimaki, doctoral student and first author of the paper in the Proceedings of the National Academy of Sciences.

DEEPER CELLULAR SLEEP

The researchers gradually pushed cells into increasingly deep sleep using pharmaceuticals. Gene expression patterns revealed that as cell sleep deepened, its ability to break down and recycle cellular material, called the lysosomal-autophagy function, also fell. Consequently, cells underwent harmful chemical instability and stress, which can eventually cause a coma-like state.

Alternatively, when the researchers cranked up this cellular recycling function, they observed reduced cellular stress. The cells also progressively moved into a shallower sleep, making them easier to reawaken and divide.

“By changing this recycler function, we can modulate the depth of cellular sleep,” Fujimaki says.

By analyzing how gene expression patterns changed as cells went into deeper sleep, the team created a predictive model able to assign cells sleep depth scores from 1 to 10, which can be applied to various cell types. Using this model, the team also identified cells in the coma-like state and undergoing aging. This suggests that cells in deep sleep share gene expression features with those in shutdown and aging.
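As a loose illustration of the scoring idea only — the marker programs, weights, and the mapping below are invented, not the paper's actual model — one could imagine mapping expression features to a bounded depth score like this:

```python
# Illustrative sketch: score cellular sleep depth (1 = shallow, 10 = deep)
# from two toy gene-expression features. Weights are invented.

def sleep_depth_score(autophagy_level, stress_level):
    """Deep sleep in the study correlated with lower lysosomal-autophagy
    activity and higher stress-response expression, so this toy score
    rises as autophagy falls and stress rises (inputs in [0, 1])."""
    raw = 5.0 - 4.0 * autophagy_level + 4.0 * stress_level  # made-up weights
    return max(1.0, min(10.0, raw))  # clamp to the 1-10 scale

print(sleep_depth_score(autophagy_level=0.9, stress_level=0.1))  # shallow
print(sleep_depth_score(autophagy_level=0.1, stress_level=0.9))  # deep
```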

IN SEARCH OF THE ‘DIMMER SWITCH’

Like a dimmer switch, cellular sleep ultimately exists on a spectrum of cellular function between on-state (division) and off-state (shutdown), a new paradigm proposed by the research team.

“The dimmer controls how difficult it is to wake the cell back up for tissue repair and regeneration,” Yao says.

The researchers believe revealing the dimmer switch that connects deep cellular sleep to the coma-like shutdown lays a foundation for developing novel strategies—by sliding the dimmer up—to slow aging.

Additional coauthors are from the University of Arizona; Peking University in China; and the University of Pittsburgh.

Support for the work came from the National Science Foundation, the National Institutes of Health, and the Chinese National Science and Technology Major Project and the Guangdong Province Key Research and Development Program.

Source: University of Arizona

https://thenextweb.com/syndication/2019/10/24/scientists-are-trying-to-build-a-conscious-machine-heres-why-it-will-never-work/

Scientists are trying to build a conscious machine — here’s why it will never work

Many advanced artificial intelligence projects say they are working toward building a conscious machine, based on the idea that brain functions merely encode and process multisensory information. The assumption goes, then, that once brain functions are properly understood, it should be possible to program them into a computer. Microsoft recently announced that it would spend US$1 billion on a project to do just that.

So far, though, attempts to build supercomputer brains have not even come close. A multi-billion-dollar European project that began in 2013 is now largely understood to have failed. That effort has shifted to look more like a similar but less ambitious project in the U.S., developing new software tools for researchers to study brain data, rather than simulating a brain.

Some researchers continue to insist that simulating neuroscience with computers is the way to go. Others, like me, view these efforts as doomed to failure because we do not believe consciousness is computable. Our basic argument is that brains integrate and compress multiple components of an experience, including sight and smell – which simply can’t be handled in the way today’s computers sense, process and store data.

Brains don’t operate like computers

Living organisms store experiences in their brains by adapting neural connections in an active process between the subject and the environment. By contrast, a computer records data in short-term and long-term memory blocks. That difference means the brain’s information handling must also be different from how computers work.

The mind actively explores the environment to find elements that guide the performance of one action or another. Perception is not directly related to the sensory data: A person can identify a table from many different angles, without having to consciously interpret the data and then ask its memory if that pattern could be created by alternate views of an item identified some time earlier.

Another perspective on this is that the most mundane memory tasks are associated with multiple areas of the brain – some of which are quite large. Skill learning and expertise involve reorganization and physical changes, such as changing the strengths of connections between neurons. Those transformations cannot be replicated fully in a computer with a fixed architecture.

Computation and awareness

In my own recent work, I’ve highlighted some additional reasons that consciousness is not computable.

Werner Heisenberg. Bundesarchiv, Bild 183-R57262/Wikimedia Commons, CC BY-SA
Erwin Schrödinger. Nobel Foundation/Wikimedia Commons
Alan Turing. Wikimedia Commons

A conscious person is aware of what they’re thinking, and has the ability to stop thinking about one thing and start thinking about another – no matter where they were in the initial train of thought. But that’s impossible for a computer to do. More than 80 years ago, pioneering British computer scientist Alan Turing showed that there is no general way to prove whether any particular computer program will stop on its own – and yet that ability is central to consciousness.

His argument is based on a trick of logic in which he creates an inherent contradiction: Imagine there was a general process that could determine whether any program it analyzed would stop. The output of that process would be either “yes, it will stop” or “no, it won’t stop.” That’s pretty straightforward. But then Turing imagined that a crafty engineer wrote a program that included the stop-checking process, with one crucial element: an instruction to keep the program running if the stop-checker’s answer was “yes, it will stop.”

Running the stop-checking process on this new program would necessarily make the stop-checker wrong: If it determined that the program would stop, the program’s instructions would tell it not to stop. On the other hand, if the stop-checker determined that the program would not stop, the program’s instructions would halt everything immediately. That makes no sense – and the nonsense gave Turing his conclusion: there can be no way to analyze a program and be entirely certain that it can stop. So it’s impossible to be certain that any computer can emulate a system that can definitely stop its train of thought and change to another line of thinking – yet certainty about that capability is an inherent part of being conscious.
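Turing's diagonal argument can be sketched in a few lines of code, assuming (for contradiction) a hypothetical general stop-checker `halts` existed; the names here are illustrative:

```python
# Sketch of Turing's diagonal argument. `halts` is the hypothetical
# general stop-checker that the argument proves cannot exist.

def halts(program, arg):
    """Hypothetical oracle: True if program(arg) eventually stops.
    Turing proved no such general procedure can exist."""
    raise NotImplementedError

def crafty(program):
    # The engineer's trick: do the opposite of whatever the checker predicts.
    if halts(program, program):
        while True:  # checker said "it will stop" -> run forever
            pass
    # checker said "it won't stop" -> stop immediately

# Asking whether crafty(crafty) halts forces the contradiction: whichever
# answer halts() gives, crafty(crafty) does the opposite, so no correct
# general halts() can exist.
```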

Even before Turing’s work, German quantum physicist Werner Heisenberg showed that there is a distinct difference between the nature of a physical event and an observer’s conscious knowledge of it. This was interpreted by Austrian physicist Erwin Schrödinger to mean that consciousness cannot come from a physical process, like a computer’s, that reduces all operations to basic logic arguments.

These ideas are confirmed by medical research findings that there are no unique structures in the brain that exclusively handle consciousness. Rather, functional MRI imaging shows that different cognitive tasks happen in different areas of the brain. This has led neuroscientist Semir Zeki to conclude that “consciousness is not a unity, and that there are instead many consciousnesses that are distributed in time and space.” That type of limitless brain capacity isn’t the sort of challenge a finite computer can ever handle.

https://venturebeat.com/2019/10/24/apple-seeks-ar-headset-patent-for-clear-lenses-that-can-turn-opaque/

Apple seeks AR headset patent for clear lenses that can turn opaque

An example of photochromic lenses.

Image Credit: Eyeguard

As Apple’s augmented reality glasses continue moving closer to their reported 2020 launch date, the company’s patents are providing a paper trail of some of the key concepts its engineers have been working on. Today, a newly published patent application establishes how the company could transition its AR glasses into either a VR mode or higher-contrast AR mode, using photochromic lenses with opacity controls, much like modern smart windows.

Discussing its invention solely in terms of a head-mounted display, Apple refers to the technology as an adjustable opacity system — a layer within the lenses that uses ultraviolet light to control the level of lens transparency from clear to opaque. Sunglasses with a more basic form of the same concept became popular 20 years ago, automatically shifting to darker tints when they went outside and were exposed to the sun and becoming more transparent when worn indoors.

The system Apple is proposing isn’t passive or even necessarily full-lens in scope; rather, it can “selectively darken portions of the real-world light from view.” Miniature mirrors can direct ultraviolet light to individual pixels within the lens, dimming or fully blocking light “to allow improved contrast when displaying computer-generated content over the real-world objects.”

Current AR glasses are typically stuck with a single level of lens transparency that can make digital content look ghostly, while providing a simultaneous “mixed” view of real and digital elements. Assuming Apple’s system is as capable as the patent suggests, the lenses could go partially opaque in a manner that boosts the brightness of digital objects, or potentially fully opaque in a manner akin to VR glasses.

Whether the photochromic lenses appear in Apple’s AR glasses remains to be seen. While it’s fairly easy to apply for a patent on a technology like this, the concept contemplates the use of pixel-sized micro-mirrors and potentially even a heating element, which might drive up the headset’s cost, size, and electronic complexity, much like using retinal projectors instead of basic screens. On the other hand, implementing the feature could make the glasses useful under more conditions.

If Apple’s rollout proceeds according to the expected schedule, we’ll know more next year. Some support for stereoscopic AR glasses was recently discovered in beta versions of iOS 13.

https://electrek.co/2019/10/24/tesla-increase-vehicle-power-range-software-update/

Tesla has announced plans to increase vehicle power and range through a new software update coming in the next few weeks.

When Tesla launched the $35,000 Model 3 earlier this year, the automaker surprised many by announcing that it would increase the range of all existing Long-Range Model 3 vehicles delivered to date.

CEO Elon Musk said at the time:

There’s also some things we’ve been able to do for existing customers that are pretty cool. Tesla is as much a software company as a hardware company and we’ve been able to via firmware improve the range of the long-range rear wheel drive car from 310 miles to 325 miles. This will affect all customers, including those that were all long range cars shipped to date and new cars. So both existing and new customers will get a 15 mile range increase from 310 to 325.

Tesla ended up pushing the update in March — although it didn’t affect all the Model 3 Long Range vehicles the same way.

During a conference call with analysts after Tesla’s Q3 2019 earnings, Musk said that they have more improvements coming through software updates:

I forgot to mention, we’re also expecting there’s going to be an over-the-air improvement that will improve the power of the Model S, X, and 3. That’s, by the way, coming in a few weeks. It should be in the order of 5% power improvement due to improved firmware.

Tesla VP of technology Drew Baglino said that they have found ways to optimize the motor control, and it should result in about “5% improvement for all Model 3 customers and 3% for Model S and Model X” customers.

Musk also said that the upcoming update will also bring improvements to the range, single-pedal driving, Supercharging speed, comfort, and feel.

They didn’t specify which variants of each model will get the improvements, beyond the fact that Supercharging speed is going to improve for the Model 3 Standard Range and Standard Range Plus.

As for the motor optimizations, if they are going to affect both the Model S/X and the Model 3, they will likely apply to the more recent “Raven” Model S and X, which have a motor similar to the Model 3’s.

Electrek’s Take

The idea of a car receiving performance improvements through software updates is impressive, but we weren’t particularly impressed by the range increase for Model 3 Long Range RWD.

We previously reported on how Tesla played with EPA ratings to advertise all Model 3 versions with 310-mile range, even though the Long Range version was able to get more.

So they probably could have always advertised the car with more range.

However, this is a lot more impressive.

It sounds like Tesla has found ways to safely push their electric motors higher and even make them more efficient.

I am looking forward to seeing exactly how it’s going to affect the vehicles.

https://www.sciencenews.org/article/lab-grown-organoids-more-stressed-out-than-actual-brain-cells

Lab-grown organoids are more stressed-out than actual brain cells

Brainlike clumps of cells don’t behave like cells taken from tissue

Three-dimensional clusters of brain cells grown in flasks, a type of organoid (one shown), show signs of stress, a study shows.

CHICAGO — Brain cells grown into clumps in flasks are totally stressed-out and confused. Cells in these clumps have ambiguous identities and make more stress molecules than cells taken directly from human brains, researchers reported October 22 at the annual meeting of the Society for Neuroscience.

These cellular clumps are grown using stem cells made from skin or blood, which under the right conditions can be coaxed into forming three-dimensional clusters of brain cells. These clusters, a type of organoid, are thought to re-create some aspects of early human brain development, a period that is otherwise difficult to study (SN: 2/20/18).

The new results highlight underappreciated differences between these organoids and the human brains they are designed to mimic. “Most of the papers out there are extolling the virtues of these things,” says study coauthor Arnold Kriegstein, a developmental neurobiologist at the University of California, San Francisco. But the new study reveals “significant issues that nobody has addressed yet.”

Kriegstein and colleagues compared genetic activity in human cells from brain tissue in early development with human cells grown in an organoid. Cells in the organoids had more active genes involved in stress responses. What’s more, these organoid cells didn’t fit into the neat categories of cells in actual brain tissue. Instead, some of the organoid cells showed features of two distinct categories simultaneously. “They are not normal,” Kriegstein says.

Data from other labs showed the same stressed-out gene behavior in organoid cells, says study coauthor Aparna Bhaduri, a developmental neurobiologist also at UCSF. “It’s a universal phenomenon,” she says.

The findings are “scientifically satisfying” because they draw attention to a challenge the organoid field faces, says neuroscientist Michael Nestor of the Hussman Institute for Autism in Baltimore. “There’s been a lot of hype,” about brain organoids’ potential, he says. “I’m excited too, but we’ve got to take a step back. I think this work does that.”

Abnormal human organoid cells became a little bit more normal when implanted into a more hospitable environment — mice’s brains. After growing for several weeks in a more normal environment with a blood supply, the organoid cells seemed less stressed. And the cells no longer seemed as confused about their identities.

The researchers don’t know exactly what causes the abnormalities in the organoid cells. It might have to do with the nourishing liquid that surrounds the blobs, or even differences in the mechanical forces that press against them.

Nestor says that with refinements, organoids grown in lab dishes can better approximate certain aspects of brain development. “There may be some mix of small molecules or media or temperature regulation that will get you there,” he says. “It’s just that it is going to take a while to figure out what that secret sauce is.”

https://electrek.co/2019/10/23/tesla-cybertruck-pickup-elon-musk-best-product-ever/

While Elon Musk didn’t want to talk about the upcoming Tesla pickup truck today, he still managed to hype the vehicle up by saying that the ‘Cybertruck’ could be Tesla’s best product ever.

Many people, including financial analysts, are eager to learn more about the upcoming Tesla vehicle.

During the conference call following Tesla’s Q3 2019 earnings, Elon Musk didn’t want to comment about Tesla’s pickup truck when asked by an analyst.

Yet the CEO went on to say that he thinks the pickup truck, which he is now calling the Tesla ‘Cybertruck’, might end up being the automaker’s “best product ever.”

That’s despite the fact that Tesla is currently selling three successful vehicles and several industry-leading energy products.

However, Musk did concede that he might be wrong about the pickup.

The CEO surprised many when he said that the Tesla Pickup Truck will have a ‘really futuristic-like cyberpunk Blade Runner’ design without explaining what that meant other than saying that ‘it won’t be for everyone’.

On top of the comments not being clear, Musk didn’t really help anyone when he released a very cryptic teaser image for the pickup truck during the Model Y unveiling earlier this year.

He admitted that the design will not be for everyone, but he personally loves it. He even said that it looks like an “armored personnel carrier from the future.”

As for the specs, Musk has also been hyping those up for the pickup truck.

Tesla’s CEO has previously solicited suggestions for features to add to the Tesla truck under development, and he revealed some planned features, like an option for 400 to 500 miles of range, a Dual Motor all-wheel-drive powertrain with dynamic suspension, as well as ‘300,000 lbs of towing capacity’.

Earlier this summer, he said that the Tesla Pickup truck will cost less than $50,000 and ‘be better than a Ford F150’.

Musk said that Tesla plans to unveil the vehicle next month.

Electrek’s Take

To be fair, inventors often think that their next product is their best. It’s a good way to stay motivated.

However, the fear here is that pickup trucks have quite traditional designs, and while it’s OK to move away from that, it could add another layer of difficulty to converting pickup truck buyers to electric if the design is wildly different from what they are used to.

But let’s see what it actually looks like before panicking.

Also, I think it’s interesting that Elon has now referred to the Tesla pickup truck as the ‘Cybertruck’ on a few occasions recently.

Is that what we’re calling it now? Let us know what you think in the comment section below.

https://www.livescience.com/why-some-people-need-less-sleep.html

Why Do Some People Need Less Sleep Than Others?


We all wish we could get by on less sleep, but one father and son actually can—without suffering any health consequences and while actually performing on memory tests as well as, or better than, most people.

To understand this rare ability, researchers at the University of California, San Francisco, first identified a genetic mutation—in both individuals—that they thought might deserve the credit. Then the scientists intentionally made the same small genetic spelling mistake in mice. The mice also needed less sleep, remembered better and suffered no other ill effects, according to a study published Oct. 16 in Science Translational Medicine.

Although a medication with the same benefits will not be available anytime soon—and might never materialize—the idea is incredibly appealing: take a pill that replicates whatever the father and son’s body does and sleep less, with no negative repercussions.

“I find the concept of a gene product that might potentially provide protection against comorbid disorders of restricted sleep tantalizing,” says Patrick Fuller, an associate professor of neurology at Harvard Medical School and Beth Israel Deaconess Medical Center in Boston, who was not involved with the work. “If true, this would indeed have ‘potential therapeutic implications,’ as well as provide another point of entry for exploring and answering the question ‘Why do we sleep?’ which remains [one] of the greatest mysteries in neuroscience.”

But as Jamie Zeitzer, an associate professor in the department of psychiatry and behavioral sciences at Stanford University, notes, “There often are trade-offs.” Zeitzer says he worries that even if a drug like this could be produced without causing significant side effects, it would still have social consequences. Some individuals might be forced or pressured to take medication so they could work more hours. Even if people will not need as much sleep, they will still need downtime, he insists.

The study’s senior author, Ying-Hui Fu, a professor of neurology at U.C.S.F., says it is far too early for such fantasies. Instead she is interested in better understanding the mechanisms of healthy sleep to help prevent diseases ranging from cancer to Alzheimer’s.

“These people sleep more efficiently,” she says of the father-son pair. “Whatever function sleep is doing for us, it takes us eight [hours to feel rested], but it takes them six or four hours. If we can figure out why they are more efficient, we can use that knowledge to help everybody to be more efficient.”

The subjects, who live on the East Coast, reached out to Fu’s team after hearing about a previous publication of its work. She would not reveal any more information about them to protect their privacy, except that they are fully rested after four to six hours of sleep instead of the more typical seven to nine. Also, Fu says, the duo and others with similar mutations are more optimistic, more active and better at multitasking than the average person. “They like to keep busy. They don’t sit around wasting time,” she says.

If most people sleep less than their body needs, that deficit will affect memory and performance, in addition to measures of health, Fu notes. Many think they can get away with five hours of sleep on weeknights and compensate for the loss on weekends—but few actually can. “Your perception is skewed, so you don’t really know your performance is not as good,” she says. “That’s why people think [adequate sleep] doesn’t matter. But actually, it does. If you test them, it’s obvious.”

Joking about her own academic experience, Fu adds, “All those nights that I stayed up to study, it would have been better to go to sleep.” That’s not true of the father and son, who genuinely needed just 5.5 and 4.3 hours of sleep each night, respectively, the new paper showed.

Stanford’s Zeitzer praises the study’s design, saying, “Starting with humans and going to rodents and then back is great.” Mice, he adds, are not ideal models because they regulate sleep differently than humans. And many individuals believe they are short sleepers but, when put in a lab, turn out to slumber the typical seven to nine hours.

People are naturally short sleepers if they rest a relatively brief time even when given the chance to sleep in on weekends or vacations. “If you get extra sleep when you have the opportunity, it’s generally a good sign that you need more sleep,” Zeitzer says.

Jerome Siegel, a professor of psychiatry at the University of California, Los Angeles, Center for Sleep Research, says he is comfortable with Fu’s group’s main finding: that the neuropeptide S receptor 1 (NPSR1) gene is important in regulating sleep. But it is likely only one small piece in a very complex process, he adds. And he is not convinced by the connection between sleep and memory the group claims. Sleep may have many functions, but there is no indication, he says, that needing less of it somehow boosts memory or cognition. “We consolidate memory while we sleep and while we’re awake, even when we’re anesthetized,” he says. “It’s not something that just occurs during sleep.”

The mechanism of action of the newly discovered mutation is not entirely clear. Fu and her team used a molecular probe to explore how the protein made by the father and son’s mutant NPSR1 gene differs from that made by a normal gene. The mutation, they found, makes the receptor more sensitive and active. The specifics of that process, Fu says, still have to be worked out.

Fu and her collaborators previously discovered two other genes involved in sleep. They are continuing to explore the mechanisms behind these genes, she says, adding that their work would progress faster if they had more financial support.

Fu says once she and her colleagues can find about 10 pieces of the genetic puzzle, “each piece can serve as a point to build upon. And hopefully, someday we can know the whole picture.”

This article was first published at ScientificAmerican.com. © ScientificAmerican.com. All rights reserved. Follow Scientific American on Twitter @SciAm and @SciamBlogs. Visit ScientificAmerican.com for the latest in science, health and technology news.

https://www.sciencenews.org/article/algae-inside-blood-vessels-could-act-as-oxygen-factories

Algae inside blood vessels could act as oxygen factories

An unconventional way to get O₂ to nerve cells might one day aid stroke patients

Light-sensitive live algae (a type of cyanobacteria called Synechocystis) injected into a tadpole move through its blood vessels, tinging them green.

S. ÖZUGUR, H. STRAKA

CHICAGO — It’s a strange mash-up, but it works: Algae living inside tadpoles’ blood vessels can pump out oxygen for nearby oxygen-starved nerve cells.

Using algae as local oxygen factories in the brain might one day lead to therapies for strokes or other damage from too little oxygen, researchers from Ludwig-Maximilians University Munich said October 21 at the annual meeting of the Society for Neuroscience.

“In the beginning, it sounds really funny,” says neurobiologist Suzan Özugur. “But it works, so why not? I think it has great potential.” Even more futuristic possibilities include using algae in the veins of astronauts on long-haul space missions, says neurobiologist Hans Straka.

Straka, Özugur and their colleagues had been bubbling oxygen into severed tadpole heads to keep nerve cells active. But in talks with botanists, Straka got the idea to use algae instead. “I wouldn’t call it crazy, but unconventional, let’s say.”

The researchers injected either green algae (Chlamydomonas reinhardtii) or cyanobacteria (Synechocystis) into tadpoles’ blood vessels, creating an eerie greenish animal. Both algae species make oxygen in response to light shining through the tadpoles’ translucent bodies.

Cyanobacteria (green) in a tadpole’s blood vessels produce oxygen in response to light. S. ÖZUGUR, H. STRAKA

When the researchers depleted the oxygen in the liquid surrounding a disembodied tadpole head, eye nerves fell silent and stopped firing signals. But a few minutes after a flash of algae-activating light, the nerves started firing signals again, the researchers found.

So far, reactions to the work range from “Frankenstein to ‘Wow, that’s really cool,’” says Straka.

It’s not clear how long the algae can survive in the blood vessels. Nor is it clear how well animals — including people — would tolerate the extra guests.

The discovery is unlikely to be used in the clinic, says neuroscientist Kathleen Cullen of Johns Hopkins University. But it does “motivate further exploration of unconventional approaches to advance the treatments for brain hypoxia, including stroke.”

Straka’s team plans to study whether the algae can do other jobs in the brain. The algae might also be able to supply nerve cells with glucose, or even molecules that influence nerve cell behavior, he says.