https://massivesci.com/articles/brain-machine-interface-brain-waves-ai-algorithm-text-speech/

A new machine can translate brain activity directly into written sentences

Translating up to 50 sentences at once, it’s about as accurate as human transcription

Thiago Arzua

Neuroscience

Medical College of Wisconsin

You’ve probably been there: wanting to text someone quickly, but your hands are busy, maybe holding the groceries or cooking.

Siri, Alexa, and other virtual assistants have provided one new layer of interaction between us and our devices, but what if we could move beyond even that? This is the premise of some brain-machine interfaces (BMIs). We’ve covered these at Massive before, along with some of the potential and limitations surrounding them.

Using BMIs, people are able to move machines and control virtual avatars without moving a muscle. This is usually done by accessing the region of the brain responsible for a specific movement and then decoding that electrical signal into something a computer can understand. One area that was still hard to decode, however, was speech itself.

But now, scientists from the University of California, San Francisco have reported a way to translate human brain activity directly into text.

Joseph Makin and his team used recent advances in a type of algorithm that deciphers and translates one language into another (the kind that forms the basis of a lot of machine-translation software). Building on those improvements, the scientists designed a BMI that is able to translate a full sentence’s worth of brain activity into an actual written sentence.

Four participants, who already had brain implants for treating seizures, trained this computer algorithm by reading sentences out loud for about 30 minutes while the implants recorded their brain activity. The algorithm is built around a type of artificial intelligence that looks at information that needs to be in a specific order to make sense (like speech) and makes predictions about what comes next.

In that sense, the AI learns sentences and is then able to create a representation of which regions of the brain are being activated, in what order, and with what intensity, to produce that sentence. This is the encoder part of the BMI.

The encoder is followed by a different AI that is able to understand that computer-generated representation and translate it into text: the decoder. This encoder-decoder duo is doing for speech what other BMIs do for movement: taking a specific set of brain signals and transforming them into something computers understand and can act on.
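For readers who want a more concrete picture, here is a minimal, hypothetical sketch in Python (PyTorch) of the generic encoder-decoder pattern the article is describing: one recurrent network compresses a sequence of neural-signal features into a summary, and a second one emits word tokens from that summary. All names, layer sizes, and dimensions below are invented for illustration; this is not the actual architecture from the UCSF study.

```python
# Minimal, hypothetical sketch of the encoder-decoder ("seq2seq") idea described
# above. Dimensions and names are invented; this is NOT the exact UCSF model.
import torch
import torch.nn as nn

class BrainToTextSeq2Seq(nn.Module):
    def __init__(self, n_channels=256, hidden=400, vocab_size=250):
        super().__init__()
        # Encoder: reads a sequence of neural-activity feature vectors
        # (one vector per time step, one value per recording channel).
        self.encoder = nn.GRU(n_channels, hidden, batch_first=True)
        # Decoder: emits one word token at a time from the encoder's summary.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.to_vocab = nn.Linear(hidden, vocab_size)

    def forward(self, neural_seq, word_tokens):
        # neural_seq: (batch, time_steps, n_channels) brain-activity features
        # word_tokens: (batch, sentence_len) indices into the word vocabulary
        _, summary = self.encoder(neural_seq)      # compress the whole sentence
        out, _ = self.decoder(self.embed(word_tokens), summary)
        return self.to_vocab(out)                  # scores over the word vocabulary

# Toy usage: one "sentence" of 100 time steps from 256 electrodes, 8 words long.
model = BrainToTextSeq2Seq()
scores = model(torch.randn(1, 100, 256), torch.randint(0, 250, (1, 8)))
print(scores.shape)  # torch.Size([1, 8, 250])
```

The key design point is the two-stage split: the encoder never has to know about words, and the decoder never has to know about electrodes; they only share the intermediate representation.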

This interface was able to translate 30 to 50 sentences at a time, with an error rate similar to that of professional speech transcription. The team also ran another test in which they trained the BMI on speech from one participant before training it on another. This increased the accuracy of the overall translation, showing that the whole algorithm could be used and improved by multiple people. Lastly, based on the information gathered by the brain implants, the study also expanded our knowledge of how very specific areas of the brain are activated when we speak.

[Image: Lines of computer code. Markus Spiske on Unsplash]

In the realm of BMIs, the ideal is always to be able to take a single brain signal and translate it directly into computer code, cutting out any intermediary steps. However, for most BMIs, including those for speech, that is a huge challenge. The speech BMIs available before this study were only able to distinguish small chunks of speech, like individual vowels and consonants – and even then with an accuracy of only about 40%.

One of the reasons why this new BMI is more efficient than past attempts is a shift of focus. Instead of small chunks of speech, the researchers focused on entire words. So instead of having to distinguish between specific sounds – like “Hell,” “o,” “Th,” “i,” “a,” and “go” – the machine can use full words – “Hello” and “Thiago” – to understand the difference between them.

Although the best-case scenario would be to train the algorithm on the full scope of the English language, for this study the authors constrained the available vocabulary to 250 different words. Maybe not enough to cover the complete works of Shakespeare, but definitely an improvement over most existing BMIs. Most BMIs currently use some form of virtual keyboard, with the person moving a virtual cursor with their mind and “typing” on this keyboard, one character at a time.
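A toy example (mine, not the paper’s) of why a small, closed word vocabulary makes the decoder’s job easier: each prediction becomes a choice among a few hundred known words rather than an assembly of many small sound chunks.

```python
# Toy illustration of word-level decoding targets with a closed vocabulary.
# The words and the tiny vocabulary here are invented; the study's vocabulary
# had roughly 250 words.
CLOSED_VOCAB = ["hello", "thiago", "is", "holding", "the", "groceries"]
word_to_id = {w: i for i, w in enumerate(CLOSED_VOCAB)}

def word_targets(sentence):
    """Map a sentence onto indices in the closed vocabulary (one decision per word)."""
    return [word_to_id[w] for w in sentence.lower().split()]

print(word_targets("Hello Thiago"))  # [0, 1] -> two decisions
print(list("hello thiago"))          # character-level: ~12 separate decisions instead
```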

[Image: A neon sign in the shape of a speech bubble that says "hello". Adam Solomon on Unsplash]

There is a pretty glaring difference between reading brain activity from a brain implant and anything we could do on a larger scale. However, this study opens up fascinating new directions. The algorithm was trained on about 30 minutes of speech, but the implants will still be there. By continuing to collect data, scientists might be able to build a library’s worth of training sets for BMIs, which, as the study showed, could then be transferred to someone else. There is also the possibility of expanding this work to different languages, which would teach us more about how speech and its representations in the brain might vary across languages.

Much more research will be needed to transform this technology into something that we could all use. It’s likely that this technology will first be used to improve the lives of paralyzed patients, and for other clinical applications. There, the benefits are immense, enabling communication with a speed and accuracy we have not yet been able to achieve.

Whether this becomes another function of Siri or Alexa will depend mostly on what advances we see in capturing brain activity, especially if we’re able to do so without brain implants.

https://www.forbes.com/sites/davidphelan/2020/04/12/exciting-new-apple-iphone-software-feature-revealed-in-new-report/#75d75c8c63f4

Exciting New Apple iPhone Feature Revealed, Report Claims

The Apple iPhone 12 may come with a redesigned iOS, with wider use of widgets on the home screen alongside regular app icons and – even more excitingly – a whole new way to use the Control Center.


Filipe Espósito reported for 9to5Mac that the widget capability is expected in iOS 14, but a new rumor suggests the story is more complicated and that an even more intriguing feature is on its way at the same time.

Apple’s next iPhone software, iOS 14, is expected to be revealed in June when the online-only Worldwide Developers Conference takes place. Thanks to several reports from 9to5Mac, which has seen an early build of iOS 14, many of the coolest features have already leaked out, such as a new way to use apps, and seven upgrades to the next Apple Watch.

Widgets, one of those words that gets weirder the more you write it, are already available on iPhones. Widgets are those icons which do more than a regular app icon, adjusting their content dynamically. A weather widget is a simple example. Where the Weather app icon on the iPhone merely shows that sun peeping hopefully out from behind a cloud, the Weather widget has up-to-date meteorological information (though this isn’t always as optimistic as the app icon).

By the way, there are also app icons which change as well, but very few: the Calendar app icon always shows the right date, and the Clock app has the only other dynamic Apple icon, displaying the correct time, even down to a moving second hand.

But on the iPhone, you can only get to widgets through the Today view, found by swiping left from the first page of apps, for instance.

So, to have widgets sitting on the Home Screen alongside regular app icons would be a very big change for Apple to make. Not to mention an exciting one. Internally, this feature has its own name: Avocado.

However, before you start holding your breath in anticipation, a new report has suggested that things may not be as far advanced as previously thought.

According to Jon Prosser, he of Front Page Tech and a tipster who is on fire with solid Apple rumors on topics as diverse as iPhone 12 and Apple AirPower just now, the arrival of Avocado may be further off.

Well, that last line says iOS 15 is more probable, and iOS 15 is over a year away. It’ll be a shame if it’s that far off, but don’t give up hope. Prosser’s next tweet is thankfully more cautious.

Okay, let’s hope so.

But it’s one line in the first tweet that properly intrigues me: Third-party widgets in Control Center also an option.

That’s very exciting.

Control Center, as you’ll know, is that gray screen of widgets that appears when you swipe your finger down the home screen from the top right. It has direct switches to adjust or toggle, like the torch and screen brightness, alongside more advanced panels. A quick touch of the Wi-Fi icon turns it on or off, while a long-press opens the connectivity panel. And that’s not the end of it: long-press the Wi-Fi button there and it opens the menu of available networks.

It’s very useful.

So, just imagine how helpful it would be if you could add other widgets to Control Center, beyond Apple’s choices. Your most-used widgets from third-parties could join the party.

For instance, the Apple TV widget is there now. How handy would it be if a similar widget could be there for an Amazon Fire TV box? Or a direct button for a Nest Thermostat, say? The possibilities are wide-ranging.

https://global.chinadaily.com.cn/a/202004/13/WS5e93cb0ea3105d50a3d15b45.html

Chinese firm YMTC unveils 128-layer flash memory chip

By Ma Si | chinadaily.com.cn | Updated: 2020-04-13 10:14

Yangtze Memory Technologies Co Ltd introduces its 128-layer flash memory chip on Monday. [Photo provided to China Daily]

Chinese chip maker Yangtze Memory Technologies Co Ltd officially introduced its 128-layer flash memory chip on Monday, marking a key step in the country’s efforts to grow its homegrown semiconductor sector.

YMTC said its 128-layer flash memory 3D NAND chip, a type of high-end non-volatile memory, has passed sample verification on a solid-state drive platform through cooperation with multiple controller partners.

The chip, named X2-6070, has achieved the highest bit density and highest capacity among all 3D NAND flash memory products currently on the market, YMTC claimed.

Grace Gong, senior vice-president of marketing and sales at YMTC, said in a statement: “We are able to achieve these results today because of the incredible synergy created through seamless collaboration with our global industry partners, as well as remarkable contributions from our employees.”

“With the launch of Xtacking 2.0, YMTC is now capable of building a new business ecosystem where our partners can play to their strengths and we can achieve mutually beneficial results,” Gong added.

Xtacking is the company’s in-house developed chip architecture, and it is the basis for the company’s high-end 64-layer flash memory chips, which entered volume production last year.

In its 128-layer line of products, Xtacking has been upgraded to version 2.0, which is bringing more benefits to flash memory, the company said.

“This product will first be applied to consumer-grade solid-state drives and will eventually be extended into enterprise-class servers and data centers in order to meet the diverse data storage needs of the 5G and AI era,” Gong added.

YMTC is a unit of Chinese semiconductor giant Tsinghua Unigroup, and it is part of the company’s key push to reduce China’s heavy reliance on the foreign semiconductor industry.

https://www.theregister.co.uk/2020/04/13/ai_roundup/

Google Cloud’s AI recog code ‘biased’ against black people – and more from ML land

Including: Yes, that nightmare smart toilet that photographs you mid… er, process

Roundup Here’s your latest summary of recent machine-learning developments.

Google Cloud’s computer vision algos are accused of being biased: An experiment probing Google’s commercial image recognition models, via its Vision API, revealed the possibly biased nature of its training data.

Algorithm Watch fed an image of someone with dark skin holding a temperature gun into the API, and the object was labelled as a “gun.” But when a photo of someone with fair skin holding the same object was fed into the cloud service, the temperature gun was recognized as an “electronic device.”

To verify the difference in labeling was caused by the difference in skin color, the experiment was repeated with the image of the darker-skinned person tinted using a salmon-colored overlay. Google Cloud’s Vision API said the temperature gun in the altered picture was, bizarrely, a “monocular.”
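For context, here is roughly what a label-detection request against the Vision API looks like with Google’s Python client. The image file names are invented, the exact class names can differ slightly between library versions, and running it requires Google Cloud credentials; treat it as a sketch of the kind of call Algorithm Watch would have made rather than their exact code.

```python
# Rough sketch of querying Google Cloud's Vision API for image labels, the kind
# of call behind the Algorithm Watch experiment. Requires the google-cloud-vision
# client and Google Cloud credentials; class names vary slightly by library version.
from google.cloud import vision

def get_labels(image_path):
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    # Each label comes back with a confidence score; a bias audit compares these
    # labels across otherwise-identical images that differ only in skin tone.
    return [(label.description, label.score) for label in response.label_annotations]

# Hypothetical usage: the two images from the experiment (paths invented here).
print(get_labels("dark_skin_thermometer.jpg"))
print(get_labels("light_skin_thermometer.jpg"))
```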

Tracy Frey, director of product strategy and operations at Google, apologized, and called the results “unacceptable,” though denied the mistake was down to “systemic bias related to skin tone”.

“Our investigation found some objects were mis-labeled as firearms and these results existed across a range of skin tones. We have adjusted the confidence scores to more accurately return labels when a firearm is in a photograph,” she told Algorithm Watch.

Intel and Georgia Tech win four-year DARPA AI contract: Research has shown adversarial examples fool machine-learning systems into making wrong decisions – such as mistaking toasters for bananas and vice-versa – by confusing them with maliciously crafted data. So far, adversarial examples that hoodwink a particular AI system will fail to trick any other AI, even if they are similar, due to their narrow nature.
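To make “adversarial example” less abstract, here is a minimal sketch of the classic fast gradient sign method (FGSM), one standard recipe for crafting such inputs against an image classifier. It illustrates the general idea only; it is not the specific attack or defense work funded under GARD.

```python
# Minimal FGSM sketch: nudge an image in the direction that most increases the
# classifier's loss, so a nearly identical picture gets confidently mislabeled.
# Illustrative only; not the specific attacks or defenses studied under GARD.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Perturb each pixel by +/- epsilon along the sign of the loss gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

The perturbation is tiny per pixel, which is why the altered image looks unchanged to a human while the model’s prediction flips.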

However, DARPA set up the Guaranteeing Artificial Intelligence Robustness against Deception (GARD) program to fund research into adversarial examples that are able to fool multiple similar machine-learning systems at once. Now, researchers at Intel and the Georgia Institute of Technology are leading that effort.

“Intel is the prime contractor in this four-year, multimillion-dollar joint effort to improve cybersecurity defenses against deception attacks on machine learning models,” it announced this week.

The eggheads will look at how to make machine-learning models more robust against adversarial attacks in realistic settings for “potential future attacks.”

UCSD to analyze coronavirus lung scans with machine learning: AI researchers at the University of California, San Diego are studying chest X-rays to look for telltale signs of pneumonia associated with the COVID-19 coronavirus.

It’s unclear whether COVID-19 lung infections are particularly distinctive compared to other diseases, so it’s difficult to use machine learning as a diagnostic tool.

But researchers at UCSD can help combat the disease by suggesting patients with early signs of pneumonia be tested for COVID-19. “Patients may present with fever, cough, shortness of breath, or loss of smell,” Albert Hsiao, an associate professor of radiology at UCSD’s School of Medicine, told The Register.

“Depending on the criteria, they may or may not be eligible for RT-PCR testing for COVID-19. False negative rate on RT-PCR is estimated around 70 per cent in some studies, so it can be falsely reassuring.

“However, if we see signs of COVID-19 pneumonia on chest x-ray, which may be picked up by the AI algorithm, we may decide to test patients with RT-PCR who have not yet been tested, or re-test patients who have had a negative RT-PCR test already. Some patients have required 4 or more RT-PCR tests before they ultimately turn positive, even when x-ray or CT already show findings.”

You can read more about that study here.

You don’t want to know what this AI smart toilet uses to identify its users: Get this, a team of researchers at Stanford University have built a smart toilet packed full of cameras and sensors to analyze your urine and stool samples with machine learning algorithms.

If that’s not bad enough, this high-tech lav is able to capture the data and link it back to the right user by identifying people with pictures of their, er, let’s just refer to the study’s abstract.

“Each user of the toilet is identified through their fingerprint and the distinctive features of their anoderm, and the data are securely stored and analysed in an encrypted cloud server.”

Sod it, if you haven’t worked out what that means, it’s a picture of your butthole. Yes, as soon as you sit down to do your business on this toilet, the data will be stored under the right user profile, because multiple people might use the same toilet in your house. All of that will be sent to a cloud server somewhere and stored so that you can monitor the health of your bowels.

“We know it seems weird, but as it turns out, your anal print is unique,” said Sanjiv Gambhir, a radiology professor at Stanford University, who led the study. “The scans — both finger and nonfinger — are used purely as a recognition system to match users to their specific data. No one, not you or your doctor, will see the scans.”

He believes that the AI toilet might help people with irritable bowel syndrome, prostate cancer, or kidney failure. Seung-Min Park, a senior research scientist at Stanford University’s School of Medicine, told El Reg that they aim to sell their odd contraption for “a few hundreds of dollars”.

The researchers are already working on “version 2” to offer more features, like measuring glucose levels in urine or blood in stool samples.

https://www.roadtovr.com/unity-webxr-exporter-plugin-mozilla-update/

Mozilla Updates the Unity WebXR Exporter to Run VR Apps in the Browser

WebXR is an open standard which allows VR apps to run directly from web browsers. While the tools for building WebXR apps are designed to be familiar to web developers, many VR developers use game engine tools like Unity to build their apps. With the Unity WebXR Exporter, developers now have the option of targeting browsers as their publishing platform, making their app easily accessible on the web.

WebXR is pretty magical. It makes it possible to create headset-agnostic VR experiences that can be accessed as easily as clicking a link. Take Moon Rider, for instance, a web-based VR rhythm game. Or how about Mozilla Hubs, a social VR chatroom that allows people with and without headsets to chat, draw, and share.


As neat as WebXR is, the tools to build this kind of content are still evolving. While a framework like A-Frame is a great starting point, it appeals more to web developers (being based on HTML) than game developers (who are used to working in game engines).

Unity is one of the most popular game engines for building VR content, including some of the biggest VR games out there like Beat Saber.

Luckily, Mozilla’s free Unity WebXR Exporter makes it easy for game developers already using the engine to build WebXR apps. The tool has actually been around for some time, but hadn’t been updated since 2018 as the earlier ‘WebVR’ standard evolved into the newer ‘WebXR’ standard. Now Mozilla has released a revamped version of the tool that’s ripe and ready for WebXR.

Mozilla detailed the updated Unity WebXR Exporter on its blog, pointing to the tool’s open-source code and updated documentation on GitHub, as well as a published demo scene.

The company says that the Unity WebXR Exporter supports Unity 2018.4 (LTS) and all versions of Unity 2019. Support for Unity 2020 is “planned once the new Unity APIs settle down.”

Because WebXR apps can be visited from virtually any device, Mozilla recommends developers build WebXR apps in Unity using the Universal Render Pipeline (previously known as the Lightweight Render Pipeline) to maintain high performance.

http://news.mit.edu/2020/gamma-radiation-found-ineffective-in-sterilizing-n95-masks-0410

Gamma radiation found ineffective in sterilizing N95 masks

Nuclear scientists and biomedical researchers team up to investigate whether treatment with gamma radiation could make N95 masks more reusable.

Leda Zimmerman | Department of Nuclear Science and Engineering
April 10, 2020

The research described in this article has been published on a preprint server but has not yet been peer-reviewed by scientific or medical experts.

In mid-March, members of the Department of Nuclear Science and Engineering (NSE) joined forces with colleagues in Boston’s medical community to answer a question of critical importance during the Covid-19 pandemic: Can gamma irradiation sterilize disposable N95 masks without diminishing the masks’ effectiveness?

This type of personal protective equipment (PPE), which offers protection against infectious particles like coronavirus-laden aerosols, is in desperately short supply worldwide, and medical professionals in Covid-19 hotspots are already rationing the masks. Gamma radiation is commonly used to sterilize hospital foods and equipment surfaces, as well as much of the public’s food supply, and there has been significant interest in determining if it could allow N95 masks to be reused and address the expanding scarcity.

In a study uploaded on March 28 to medRxiv, the preprint server for health sciences, researchers announced their results: N95 masks subjected to cobalt-60 gamma irradiation for sterilization pass a qualitative fit test but lose a significant degree of filtration efficiency. This form of sterilization compromises the masks’ ability to protect medical providers from Covid-19.

The study, NSE’s first research effort related to the pandemic, also drew on the expertise of MIT’s Office of Environment, Health, and Safety.

“One of our students thought gamma irradiation might be a cool solution to a big problem, and I really wanted it to work,” says Michael Short, the Class of ’42 Associate Professor of Nuclear Science and Engineering, one of the study’s coauthors. “But we quickly recognized that the data went against the hypothesis.”

Team members believe these negative results nevertheless contribute to the larger effort to combat the pandemic. “There has never been a time when negative results are more significant,” notes study lead and co-author Avilash Cramer SM ’18, a fifth-year doctoral candidate in the Harvard-MIT Program in Health Sciences and Technology studying radiation physics. “Publishing as quickly as we can means that others working on the same problem can direct their energies in different directions.”

Fast-track research

While they may not have produced the desired outcome, the researchers nevertheless pulled off a study remarkable for its speed and multidisciplinary cooperation — a process inspired and shaped by the immediate threat of the Covid-19 pandemic. “The study took nine days from start to finish,” says Short. “It was the fastest I’ve ever done anything, by orders of magnitude.”

The dire reality of an N95 shortage in the United States sparked widespread concerns early in March. “It had already hit New York, and was on its way to Massachusetts, and President [L. Rafael] Reif wanted to know if we could do something to masks to permit their reuse,” recounts Short. “We looked into different methods, and noticed the idea of using gamma radiation was popping up in a lot of places.”

Cramer was losing sleep worrying about his classmates, medical residents at Boston-area hospitals already in the thick of treating Covid-19 patients. “After reading the literature, it was clear there wasn’t a lot of good research out there regarding reusing masks,” he says. “The sky was falling in hospitals with equipment shortages everywhere, and while others had shown gamma rays could inactivate viruses, I wanted to demonstrate one way or the other if they damage the masks themselves.”

N95 masks are manufactured through a variety of proprietary processes using wool, glass particles, and plastics, with 1-2 percent copper and/or zinc. Viewed under a scanning electron microscope, these masks reveal a matrix of fibers with openings of approximately 1 micron. Because the filtering occurs through an electrostatic, rather than mechanical, process, a mask can repel or trap smaller incoming particles. This includes at least 95 percent of airborne particles 0.3 microns or larger in size, such as the airborne droplets that can convey the Covid-19 virus.

A call for multidisciplinary action

On March 11, Cramer emailed several contacts in the radiation physics community in search of a gamma irradiation source. Among the group was Short, who, among many other things, has some experience irradiating plastics. Cramer had worked with Short on previous research ventures, and was familiar with NSE from his time serving as a teaching assistant for an NSE class, Radiation Biophysics (22.055), taught by his PhD advisor, Rajiv Gupta, a physician at Massachusetts General Hospital and an associate professor of radiology at Harvard Medical School.

Short instantly responded to Cramer, offering the campus Cobalt-60 irradiation facility, a source of gamma radiation. “I had an exemption to work on campus and thought, let’s just do it: irradiate and sterilize the masks, then see if they can be used again,” says Short.

With support and guidance from Gupta, also a study co-author, Cramer paused his doctoral work (on low-cost radiology solutions for rural areas), and began writing up a research protocol and drafting additional researchers.

The experiment began on Saturday, March 14, and the first results emerged the next Thursday.

Short gathered the masks from his and a collaborator’s laboratory, keeping a handful for this study before donating the rest (a few hundred) to Beverly Hospital. In Building 6, Short and Mitchell Galanek of MIT Environmental Health and Safety placed the masks into the shielded ring of Cobalt-60, subjecting one group of masks to 10 kilograys (kGy) and another to 50 kGy of gamma radiation (a kilogray is a unit of absorbed radiation dose). One control group of masks was left unirradiated.

Short then biked the masks to Brigham and Women’s Hospital. There, resident and study co-author Sherry H. Yu, who had signed onto the study after receiving a single emailed invitation, carried out a series of qualitative fit tests. These tests, designed by the U.S. Occupational Safety and Health Administration, establish whether a mask fits securely to someone’s face and screens out potentially harmful aerosolized particles. Yu’s N95 mask-wearing guinea pig was Short himself.

“I spent three hours in a back room at the Brigham in the midst of Covid craziness trying to taste a nebulized sugar solution,” says Short. For this test, saccharin vapor is sprayed into a hood and collar assembly fitted over the head of a subject wearing an N95 mask. By moving their face from side to side and reading a passage, the subject simulates facial movements that might displace or detach the mask and render it less effective. If, after all these motions, a subject cannot taste the sweet mist, the N95 passes. All of Short’s gamma irradiated masks passed the qualitative fit test.

“We thought, Awesome, we’ve done it,” recalls Short. “But colleagues from the Greater Boston biomedical community told us the fit test wasn’t good enough — we needed to assess filter efficiency as well.”

Flawed filtering

Fortunately, the right kind of experimental setup existed just next door at MIT — in the laboratory of Ju Li, Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering. Li and doctoral student Enze Tian (both study coauthors) signed on to shepherd the next phase of the study, using an apparatus that shoots sodium chloride particles of different sizes into the N95 masks. The device, normally used to test the protective properties of the Li lab’s masks against tiny metal fragments and nanoparticles, revealed the disappointing results.

“The sterilized masks lost two-thirds of their filtering efficiency, essentially turning N95 into N30 masks,” says Cramer. But why the deterioration?
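A rough back-of-envelope reading of that “N95 into N30” shorthand (my arithmetic, not a figure from the paper): if a mask that captures 95 percent of 0.3-micron particles loses two-thirds of its filtration efficiency, then 0.95 × (1 − 2/3) ≈ 0.32, i.e., roughly 30 percent of those particles captured, hence the informal “N30” label.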

“Our hypothesis is that ionizing radiation of whatever kind likely decharges the electrostatic filtration of the mask,” says Gupta. “The mechanical filtration of gauze can trap some particles, but radiation interferes with the electrostatic filter’s ability to repel or capture particles of 0.3 microns.”

Gupta is nevertheless pleased by the study’s results. “Even with lowered efficiency, these N95 masks are much better than the surgical masks we use,” he says. “Instead of throwing out N95 masks, they could be sterilized and used as N30 masks for the kind of procedures I do all day long.”

Cramer, who is continuing to explore other N95 mask sterilization methods, believes the study’s results serve a larger purpose: “Adding one more data point to the global understanding of how to clean devices is important — it’s the purest example of the scientific method I’ve ever had the fortune to be part of.”

“Every piece of our hastily assembled machine worked perfectly,” says Short. “We demonstrated that when a crisis hits, scientists can come together for the greater good and do what needs to happen.”

https://www.androidpolice.com/2020/04/13/google-assistant-adds-native-support-for-tvs-set-top-boxes-and-media-remotes/

Google Assistant adds native support for TVs, set-top boxes, and media remotes

Google Assistant already supports more than 60 device types, from smart lights and thermostats to more eccentric ones like dehydrators, pergolas, or fireplaces. Now it’s officially adding three new ones: TVs, media remotes, and set-top boxes.

You may be thinking that Assistant has already been able to control these for years, and you’re right. Support for the Nvidia Shield, Logitech Harmony remotes, DISH Hopper DVRs, Android TVs, Sling box, and more has been rolling out since 2018. The only new thing here is that the documentation has finally been added so that any device maker could check it and know how to implement this properly. This isn’t the first time Google has allowed some devices to be added before their type was officially documented (Nest Secure before security systems, August Lock before smart locks, etc…), so it doesn’t surprise us.

For all three device types, Google offers several common traits: devs can choose to let users select apps, change input methods, control playback, set volume, and obviously turn the device on/off, all with voice commands. Hopefully, these controls will soon be accessible within the Google Home app as well. Currently, my Shield and Harmony remotes don’t allow for any real controls besides on/off from the Home app, despite being recognized properly as TVs and remotes. However, on my Lenovo Smart Display, I can see a better UI for each, with plenty of controls. It’s an inconsistent experience, and I always wonder why the app doesn’t offer the same.
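For developers wondering what that documentation amounts to, below is a hedged sketch of the kind of device entry a maker might return in a smart home SYNC response for a TV. The type and trait identifiers are written from memory of Google’s public smart home docs and should be checked against the current documentation; the id, name, and other values are invented for illustration.

```python
# Hedged sketch of a SYNC-response device entry for a TV in Google's smart home
# Actions API. Type and trait identifiers are quoted from memory of the public
# docs and should be verified; the id, name, and structure are simplified.
tv_device = {
    "id": "living-room-tv",                        # hypothetical device id
    "type": "action.devices.types.TV",             # new TV device type
    "traits": [
        "action.devices.traits.OnOff",             # power on/off
        "action.devices.traits.Volume",            # set/adjust volume
        "action.devices.traits.InputSelector",     # switch inputs (HDMI 1, HDMI 2, ...)
        "action.devices.traits.AppSelector",       # "open Netflix", etc.
        "action.devices.traits.TransportControl",  # play/pause/stop
    ],
    "name": {"name": "Living room TV"},
    "willReportState": True,
}
```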

https://www.forbes.com/sites/gordonkelly/2020/04/12/google-chrome-81-tab-groups-tab-management-update-chrome-browser/#78030a23404e

Google Just Gave Millions Of Users A Reason To Keep Chrome

Google has given Chrome users plenty of reasons to quit its browser recently, including controversial changes, security problems, data concerns and rivals offering greater privacy. But now Google has introduced a great reason to stay.


After a surprise release U-turn last month, Google has now rolled out Chrome 81 and it brings a wide roll-out of ‘Tab Groups’, the company’s biggest change to how Chrome tabs work since the browser launched 11 years ago.

As the name suggests, Tab Groups is a new way to organize the mass of tabs you have open at any one time in Chrome and, not only is it ingenious, it couldn’t be simpler to use:

  • Right click/double-tap on any tab
  • Select ‘Add to new group’
  • Drag associated tabs into that group

From here, there are plenty of customisation options. Click on the group header (a colored dot is used by default) to customize the group name (tip: keep them short to avoid wasting valuable tab space), change the group color, ungroup tabs or close all tabs in the group. Once you create your first group, you can also right click on any tab to ungroup it or move it to any of your existing groups.

Like all the best ideas, Tab Groups is wonderfully simple and it will change the way you use Chrome. Notably, you will no longer need to keep separate browser windows open for different projects; they can all coexist in a single window because each project is clearly defined. This alone will save you hundreds of wasted clicks a day.

Needless to say, tab management in Chrome is long overdue and numerous third-party extensions have sprung up to fill the gap over the years. That said, while no one solution will suit all users, Google’s approach is the best I’ve seen. Furthermore, while I’m also a user of other Chromium-based browsers like Brave, for now, this feature is only found in Google Chrome.

Chrome 81 is rolling out for Windows, Mac and Linux right now. If for any reason you don’t see Tab Groups after updating (Help > About), you can manually enable them by using this flag in the Chrome address bar: chrome://flags/#tab-groups

So, if you haven’t tried Tab Groups yet, I urge you to do it right now.

I am an experienced freelance technology journalist. I have written for Wired, The Next Web, TrustedReviews, The Guardian and the BBC in addition to Forbes.

https://scitechdaily.com/nerve-agent-antidote-under-development-to-protect-soldiers-and-public/

Nerve Agent Antidote Under Development to Protect Soldiers and Public


Southwest Research Institute has received funding from the Medical CBRN Defense Consortium (MCDC) administered by Advanced Technology International to develop a nerve agent antidote for emergency use on the battlefield or to protect public health.

The use of nerve agents continues to be a significant threat to both military and civilian populations. This prototype medication could serve as a countermeasure against a nerve agent attack. SwRI will lead the development of the antidote under the $9.9 million, five-year program, and will collaborate with the University of Pittsburgh on synthesis and compound design, with support from the Defense Threat Reduction Agency (DTRA).

“This antidote improves on the current standard of care, importantly, its ability to reverse the effects of the toxin in the central nervous system,” said SwRI’s Dr. Jonathan Bohmann, a principal scientist in SwRI’s Pharmaceutical and Bioengineering Department. “The antidote will eventually be administered through an autoinjector, which allows for rapid and effective treatment in the field. It would work much like an Epinephrine Auto-Injector or EpiPen® administered during a severe allergy attack. The initial goal of the project is to support our warfighters; however, this treatment could also eventually be administered for civilian use.”

In the design of the new medication, SwRI will use a computer-based drug design software platform called Rhodium™. SwRI developed Rhodium, a proprietary docking-simulation software tool, to enhance drug design and safety while reducing costs and speeding up development time. The Institute offers Rhodium as a service to clients.

SwRI is one of 193 industry, government and nonprofit organizations supporting the medical countermeasures sector in MCDC. This sector was founded to support U.S. Department of Defense needs in areas of infectious diseases, chemical threats and other medical countermeasures for military personnel.

SwRI’s Chemistry and Chemical Engineering Division is ISO 9001:2015 certified, meeting international quality standards for product development from initial design through production and service. SwRI scientists support drug development from discovery to clinical trials in FDA-inspected Current Good Manufacturing Practice facilities.

Rhodium supports drug development and screening for antibiotics as well as preventative treatments such as vaccines. The software also predicts adverse drug reactions and side effects.

https://futurism.com/the-worlds-first-cyborgs-humanitys-next-evolutionary-phase-is-here

The World’s First Cyborgs: Humanity’s Next Evolutionary Phase Is Here

It’s the stuff of sci-fi films.

ALICE GRECZYN

 

In a small, dark experiment room, Bill, a wheelchair-bound tetraplegic, stares intently at a simulated arm on a computer screen. Two tentacle-like cables protrude from his skull and hook into a nearby computer, which sends messages to electrodes implanted in his arm and hand. If the experiment is successful, Bill will move his limbs again.

This early scene from Futurism’s newly released documentary I AM HUMAN sets the stage for jaw-dropping revelations to come. With this technology, Bill may someday be able to move other things with his brain signals. You know, telekinesis. Welcome to the future.

Though Bill doesn’t resemble the cyborgs we’re used to seeing in movies, the image is just as compelling, and representative of a much larger real-world phenomenon. In fact, Bill is one of many first-wave pioneers ushering in a biotechnological revolution: presently, more than 200,000 people in the world have digital chip technology implanted in their brains.

Most of these people are Parkinson’s patients, who undergo deep brain stimulation (DBS) surgery with the hope of ameliorating tremor and other symptoms. DBS has been a course of treatment for decades now, and opened the door for further trials into brain implants for a host of other ailments, including obesity, addiction, obsessive compulsive disorder, and depression.

In the film, we see firsthand the dramatic impact of this technology on Anne, an artist struggling with Parkinson’s. We also meet Stephen, a blind retiree who undergoes retinal prosthesis surgery (in sci-fi fashion, the implant is known as the Argus II). Like the several hundred other blind patients with “bionic vision,” Stephen can currently see only the outlines and edges of objects. Progress, however, is unbounded, and accelerating. Within a few years, greater definition and infrared, heat-mapping vision will be just a software upgrade away.

For Bill, Anne, and Stephen, three ordinary people robbed of basic function, risking their brains is a brave effort to preserve their humanity, but their decisions carry thrilling implications for us all. What happens when anyone can upgrade their body? What aspects of our humanity will we change? Who will decide who goes forth into our species’ next evolutionary phase, and who gets left behind?

You can imagine scientists, investors, and ethicists have quite a debate on their hands. While I AM HUMAN acknowledges concerns about “playing God,” it challenges fear-driven narratives surrounding human-machine evolution with unflinching optimism, grounded in the real-life stories of people whose lives may directly benefit from such scientific breakthroughs.

Watch I AM HUMAN now on your favorite streaming platform.