Advanced brain organoid could model strokes, screen drugs

A functional blood brain barrier enables the discovery and testing of new drugs that can cross into the brain
May 29, 2018

These four marker proteins (top row) are involved in controlling entry of molecules into the brain via the blood brain barrier. Here, the scientists illustrate one form of damage to the blood brain barrier in ischemic stroke conditions, as revealed by changes (bottom row) in these markers. (credit: WFIRM)

Wake Forest Institute for Regenerative Medicine (WFIRM) scientists have developed a 3-D brain organoid (tiny artificial organ) that could have potential applications in drug discovery and disease modeling.

The scientists say this is the first engineered tissue equivalent to closely resemble normal human brain anatomy — containing all six major cell types found in normal brain tissue, including neurons and immune cells.

The advanced 3-D organoids promote the formation of a fully cell-based, natural, and functional version of the blood brain barrier (a semipermeable membrane that separates the circulating blood from the brain, protecting it from foreign substances that could cause injury).

The new artificial organ model can help improve understanding of disease mechanisms at the blood brain barrier (BBB), the passage of drugs through the barrier, and the effects of drugs once they cross the barrier.

Faster drug discovery and screening

The shortage of effective therapies and the low success rate of investigational drugs are (in part) due to the fact that we do not have human-like tissue models for testing, according to senior author Anthony Atala, M.D., director of WFIRM. “The development of tissue-engineered 3D brain tissue equivalents such as these can help advance the science toward better treatments and improve patients’ lives,” he said.

The development of the model opens the door to speedier drug discovery and screening. That applies both to diseases like HIV, where pathogens hide in the brain, and to the modeling of neurological conditions such as Alzheimer’s disease, multiple sclerosis, and Parkinson’s disease, with the goal of better understanding their pathways and progression.

“To date, most in vitro [lab] BBB models [only] utilize endothelial cells, pericytes and astrocytes,” the researchers note in a paper. “We report a 3D spheroid model of the BBB comprising all major cell types, including neurons, microglia, and oligodendrocytes, to recapitulate more closely normal human brain tissue.”

So far, the researchers have used the brain organoids to measure the effects of (mimicked) strokes on impairment of the blood brain barrier, and have successfully tested the permeability (the ability of molecules to pass through the BBB) of large and small molecules.

Reference: Scientific Reports (open access). Source: Wake Forest Institute for Regenerative Medicine.

Mozilla Firefox joins Chrome, Safari in making it easier to build sophisticated websites

You may not care about web components, but you’ll like what they can do for the web.

With Mozilla’s flip of a virtual switch, life got easier for the people who make websites and the people who use them, which is to say, everybody.

On Monday, Mozilla accepted an update for its Firefox browser that enables technology called web components. You probably won’t directly care about them unless you’re a programmer. But you’ll almost assuredly care about what they mean for intricate websites: fewer problems, faster loading and quicker improvements.

Google’s Chrome team started pushing web components more than five years ago. But browser makers only gradually embraced the two big pieces, called Shadow DOM and Custom Elements. Shadow DOM makes it possible to isolate chunks of code so they don’t disturb other parts of website software, while Custom Elements let programmers create their own custom website foundations.
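A minimal sketch shows how the two pieces fit together. This is a generic illustration with an invented `user-card` tag, not code from any browser vendor or site mentioned here:

```html
<!-- Custom Elements: register an invented <user-card> tag with the browser. -->
<user-card name="Ada"></user-card>

<script>
  class UserCard extends HTMLElement {
    connectedCallback() {
      // Shadow DOM: this markup and styling are isolated from the page,
      // so the page's CSS can't break the component, and vice versa.
      const shadow = this.attachShadow({ mode: "open" });
      shadow.innerHTML = `
        <style>p { color: teal; }</style>
        <p>Hello, ${this.getAttribute("name")}!</p>`;
    }
  }
  customElements.define("user-card", UserCard);
</script>
```

The `attachShadow` and `customElements.define` calls are the standard APIs behind the two specifications that Chrome and Safari implement and that Firefox is now enabling.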

Chrome was the first to support web components, but Apple’s Safari followed suit in 2016 and 2017. Microsoft has pledged to add support to its Edge browser but hasn’t done so yet. Firefox supports Custom Elements, but on Monday, Shadow DOM support arrived in the Nightly test version.

Web components are overkill for basic websites. More advanced sites can benefit, however, and some big ones such as YouTube already use web components. If you visit such a site with a browser that doesn’t support web components, it’ll likely be slower or limited.

“Web development got super hard,” said Mozilla Chief Product Officer Mark Mayo. “It’s now going to be a lot easier, so we should see better, faster web pages.”

Firefox Chief Product Officer Mark Mayo (credit: Stephen Shankland/CNET)

Web components only work in the Nightly test version of Firefox for now, but they’re scheduled to arrive in the main version of the browser in September. They join a host of other developer-focused Firefox improvements arriving this year that Mozilla is using to try to restore its cachet with the web programmers who were instrumental to the browser’s rise a decade ago.

With web components, developers can create website building blocks and then widely reuse them without worrying they’ll cause problems that’ll stop you from actually using that website. One example: Websites often have tabs to visually represent different sections, and web components let developers more easily create that interface, reuse it on another project or even copy it from other websites that already have figured it out.

“For big companies with many teams and complex products, it’s huge,” said Alex Russell, a senior programmer at Chrome who’s worked for years to modernize the web.

Web components technology particularly helps with big libraries of pre-written software called frameworks, which are widely used in today’s web programming. Frameworks, like React from Facebook and Angular from Google, make it easier to build websites, but parts of one framework can’t be used with parts of another. As a result, programming on the web is “balkanized,” Russell said.

Mozilla’s Mayo sees it as a big step forward, too.

“It’s the basis of a safer, faster, more productive development model for the web,” Mayo said. “You don’t get all three of those being advanced at once very often.”

Environmental noise paradoxically preserves the coherence of a quantum system

May 30, 2018, RIKEN

Quantum computers promise to advance certain areas of complex computing. One of the roadblocks to their development, however, is the fact that quantum phenomena, which take place at the level of atomic particles, can be severely affected by environmental “noise” from their surroundings. In the past, scientists have tried to maintain the coherence of the systems by cooling them to very low temperatures, for example, but challenges remain. Now, in research published in Nature Communications, scientists from the RIKEN Center for Emergent Matter Science and collaborators have used dephasing to maintain quantum coherence in a three-particle system. Normally, dephasing causes decoherence in quantum systems.

Quantum phenomena are generally restricted to the atomic level, but there are cases—such as laser light and superconductivity—in which the coherence of quantum states allows them to be expressed at the macroscopic level. This is important for the development of quantum computers. However, quantum states are also extremely sensitive to the environment, which destroys the coherence that makes them meaningful.

The group, led by Seigo Tarucha of the RIKEN Center for Emergent Matter Science, set up a system of three quantum dots in which electron spins could be individually controlled with an electric field. They began with two entangled electron spins in one of the end quantum dots, while keeping the center dot empty, and transferred one of these spins to the center dot. They then swapped the center dot spin with a third spin in the other end dot using electric pulses, so that the third spin was now entangled with the first. The entanglement was stronger than expected, and based on simulations, the researchers realized that the noise around the system was, paradoxically, helping the entanglement to form.

According to Takashi Nakajima, the first author of the study, “We discovered that this derives from a phenomenon known as the ‘quantum Zeno paradox,’ or ‘Turing paradox,’ which means that we can slow down a quantum system by the mere act of observing it frequently. This is interesting, as it means that environmental noise, which normally makes a system incoherent, here made the system more coherent.”
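The effect Nakajima describes can be illustrated with a toy two-level system. This is not the paper’s model; it just shows the textbook quantum Zeno scaling, in which splitting the same evolution time into more and more projective measurements drives the probability of staying in the initial state toward 1:

```python
import math

def survival_probability(omega, total_time, n_measurements):
    """Toy quantum Zeno model: a two-level system oscillating at Rabi
    frequency omega is projectively measured n_measurements times during
    total_time. After each interval dt, the chance of still being found
    in the initial state is cos^2(omega * dt / 2); each measurement
    resets the evolution, so the product approaches 1 as n grows."""
    dt = total_time / n_measurements
    p_stay = math.cos(omega * dt / 2) ** 2
    return p_stay ** n_measurements

# With more frequent observation, the system is increasingly "frozen":
for n in (1, 10, 100, 1000):
    print(n, round(survival_probability(omega=1.0, total_time=math.pi, n_measurements=n), 4))
```

With a single measurement the system has fully flipped, while a thousand measurements over the same interval keep it almost entirely in its initial state.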

Tarucha, the leader of the team, says, “This is a very exciting finding, as it could potentially help to accelerate research into scaling up semiconductor quantum computers, allowing us to solve scientific problems that are very tough on conventional computing systems.”

Nakajima says, “Another area that is very interesting to me is that a number of biological systems, such as photosynthesis, that operate within a very noisy environment take advantage of macroscopic quantum coherence, and it is interesting to ponder if a similar process may be taking place.”

More information: Takashi Nakajima et al, Coherent transfer of electron spin correlations assisted by dephasing noise, Nature Communications (2018). DOI: 10.1038/s41467-018-04544-7

Garbage In, Garbage Out: machine learning has not repealed the iron law of computer science

Pete Warden writes convincingly about computer scientists’ focus on improving machine learning algorithms, to the exclusion of improving the training data that the algorithms interpret, and how that focus has slowed the progress of machine learning.

The problem is as old as data-processing itself: garbage in, garbage out. Assembling the large, well-labeled datasets needed to train machine learning systems is a tedious job (indeed, the whole point and promise of machine learning is to teach computers to do this work, which humans are generally not good at and do not enjoy). The shortcuts we take to produce datasets come with steep costs that are not well-understood by the industry.

For example, in order to teach a model to recognize attractive travel photos, Jetpac paid low-waged Southeast Asian workers to label pictures. These workers had a very different idea of a nice holiday than the wealthy people who would use the service they were helping to create: for them, conference reception photos of people in suits drinking wine in air-conditioned international hotels were an aspirational ideal — I imagine that for some of these people, the beach and sea connoted grueling work fishing or clearing brush, rather than relaxing on a sun-lounger.

Warden says that people who are trying to improve vision systems for drones and other robots run into problems using the industry-standard ImageNet dataset, because those images were taken by humans, not drones, and humans take pictures in ways that are significantly different from the way machines do — different lenses, framing, subjects, vantage points, etc.

Warden’s advice is for machine learning researchers to sit with their training data: sift through it, hand-code it, review it and review it again. Do the hard, boring work of making sure that PNGs aren’t labeled as JPGs, retrieve the audio samples that were classified as “other” and listen to them to see why the classifier barfed on them.

It’s an important lesson for product design, but even more important when considering machine learning’s increasing role in adversarial uses like predictive policing, sentencing recommendations, parole decisions, lending decisions, hiring decisions, etc. These datasets are just as noisy and faulty and unfit for purpose as the datasets Warden cites, but their garbage out problem ruins peoples’ lives or gets them killed.

Here’s an example that stuck with me, from a conversation with Patrick Ball, whose NGO did a study of predictive policing. The police are more likely to discover and arrest perpetrators of domestic violence who live in row-houses, semi-detached homes and apartment buildings, because the most common way for domestic violence to come to police attention is when a neighbor phones in a complaint. Abusers who live in detached homes get away with it more than their counterparts in homes with a party wall.

Train a machine learning system with police data, and it will overpolice people in homes with shared walls (who tend to be poorer), and underpolice people in detached homes (who tend to be richer). No one benefits from that situation.
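Ball’s point can be sketched numerically. The simulation below uses invented numbers, not data from the study: both housing types offend at exactly the same rate, but incidents in shared-wall homes are more likely to be overheard and reported, and the resulting “police data” is skewed anyway:

```python
import random

def simulate_arrest_data(n_homes=100_000, offense_rate=0.05, seed=42):
    """Toy model of reporting bias: every home type offends at the same
    rate, but incidents in shared-wall homes are reported (and thus land
    in police data) more often, because neighbors overhear them."""
    report_prob = {"shared_wall": 0.6, "detached": 0.2}  # invented numbers
    rng = random.Random(seed)
    arrests = {"shared_wall": 0, "detached": 0}
    for _ in range(n_homes):
        for housing in arrests:
            offended = rng.random() < offense_rate
            if offended and rng.random() < report_prob[housing]:
                arrests[housing] += 1
    return arrests

data = simulate_arrest_data()
# Roughly three times as many recorded incidents for shared-wall homes,
# even though the true offense rate was identical by construction.
print(data)
```

A model trained on `data` would learn the reporting pattern, not the offending pattern.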

There are almost always model errors that have bigger impacts on your application’s users than the loss function captures. You should think about the worst possible outcomes ahead of time and try to engineer a backstop to the model to avoid them. This might just be a blacklist of categories you never want to predict, because the cost of a false positive is so high, or you might have a simple algorithmic set of rules to ensure that the actions taken don’t exceed some boundary parameters you’ve decided. For example, you might keep a list of swear words that you never want a text generator to output, even if they’re in the training set, because it wouldn’t be appropriate in your product.
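The backstop Warden describes can be as simple as a thin wrapper around the model’s output. This is a generic sketch with invented label names and thresholds, not code from his post:

```python
# Wrap the model so that certain predictions can never reach the user,
# whatever the model says. Labels and threshold are illustrative only.
BLOCKED_LABELS = {"slur", "medical_diagnosis"}   # never predict these
MAX_CONFIDENT_ACTION = 0.95                      # cap acted-on confidence

def backstopped_predict(model_output):
    """model_output: list of (label, score) pairs from some classifier.
    Returns the best allowed prediction, or a safe fallback."""
    allowed = [(label, min(score, MAX_CONFIDENT_ACTION))
               for label, score in model_output
               if label not in BLOCKED_LABELS]
    if not allowed:
        return ("unknown", 0.0)   # safe fallback when everything is blocked
    return max(allowed, key=lambda pair: pair[1])

print(backstopped_predict([("slur", 0.99), ("greeting", 0.40)]))
```

The blocklist and the confidence cap are both deliberately dumb rules that sit outside the model, so a bad model revision can’t route around them.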

It’s not always so obvious ahead of time what the bad outcomes might be though, so it’s essential to learn from your mistakes in the real world. One of the simplest ways to do this, once you have a half-decent product/market fit, is to use bug reports. When people use your application, and they get a result they don’t like from the model, make it easy for them to tell you. If possible get the full input to the model but if it’s sensitive data, just knowing what the bad output was can be helpful to guide your investigation. These categories can be used to choose where you gather more data, and which classes you explore to understand their current label quality.

Once you have a new revision of your model, have a set of inputs that previously produced bad results and run a separate evaluation on those, in addition to the normal test set. This rogues gallery works a bit like a regression test, and gives you a way to track how well you’re improving the user experience, since a single model accuracy metric will never fully capture everything that people care about. By looking at a small number of examples that prompted a strong reaction in the past, you’ve got some independent evidence that you’re actually making things better for your users.

If you can’t capture the input data to your model in these cases because it’s too sensitive, use dogfooding or internal experimentation to figure out what inputs you do have access to produce these mistakes, and substitute those in your regression set instead.
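The rogues-gallery idea reduces to running a second evaluation over saved failure cases. A minimal sketch, with a made-up stand-in classifier and made-up examples:

```python
def evaluate(model, examples):
    """Fraction of (input, expected_label) pairs the model gets right."""
    correct = sum(model(text) == label for text, label in examples)
    return correct / len(examples)

# Normal held-out test set, plus a "rogues gallery" of inputs that
# previously produced bad results. All data here is invented.
test_set = [("the movie was great", "pos"), ("awful plot", "neg")]
rogues_gallery = [("not bad at all", "pos"), ("great, another delay", "neg")]

def model_v2(text):
    # Hypothetical stand-in for a real sentiment classifier.
    negative_cues = ("not bad", "another delay", "awful")
    return "neg" if any(cue in text for cue in negative_cues) else "pos"

print("test accuracy:  ", evaluate(model_v2, test_set))
print("rogues accuracy:", evaluate(model_v2, rogues_gallery))
```

Tracking the two numbers separately is the point: here the model looks perfect on the ordinary test set while still failing half of the inputs users actually complained about.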

Why you need to improve your training data, and how to do it [Pete Warden]

Researchers Are Training a Robot Butler to Do the Chores You Hate in a Sims-Inspired Virtual House

Researchers are teaching machines to get stuff done using video simulations, a database of chores, and a virtual home reminiscent of your favorite time-wasting video game. The end goal? Teaching robots the same way you teach yourself how to install a toilet: instructional videos.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), the University of Toronto, McGill University, and the University of Ljubljana released a paper detailing the methods by which they taught computers to accomplish a greater range of activities by watching instructional videos. The researchers used simulated videos with virtual human characters, along with a database of 3,000 crowdsourced tasks the program can choose from. The AI then mimics the tasks seen in the video, along with everything each task entails.


The researchers created video simulations set in a furnished home (with a living room, kitchen, dining room, bedroom, and home office), surprisingly similar to houses in The Sims. The artificial agents would watch the videos and attempt to execute the tasks demonstrated. Researchers have so far successfully executed about 1,000 of the available crowdsourced actions.

As for learning new tricks, it’s certainly possible, “as long as the task is described as a program with a series of steps that it can understand,” according to MIT CSAIL’s Adam Conner-Simons.

Turning on the TV is easy for a human to understand, but the simple command lacks the instructions a robot would deem necessary in order to execute the task. You can’t turn on the TV if you don’t hit the power button; you can’t hit the power button unless you’re in front of it; you can’t be in front of it until you walk over to it. You get the idea.
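That chain of preconditions is exactly the kind of “program with a series of steps” the system consumes. A minimal sketch of the idea — the step names and data structure here are invented for illustration, not the researchers’ actual format:

```python
# Each step names the step that must happen immediately before it;
# a goal is expanded by walking the chain back and reversing it.
PRECONDITION = {
    "press_power_button": "stand_in_front_of_tv",
    "stand_in_front_of_tv": "walk_to_tv",
    "walk_to_tv": None,  # no prerequisite: the agent can just do it
}

def plan(goal):
    """Expand a goal into an ordered, executable list of steps."""
    steps = []
    step = goal
    while step is not None:
        steps.append(step)
        step = PRECONDITION[step]
    return list(reversed(steps))

print(plan("press_power_button"))
```

The one-line command “turn on the TV” unfolds into walking to the TV, standing in front of it, and pressing the power button, in that order.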

Eventually, researchers hope to teach robots how to accomplish tasks simply by showing them actual instructional videos you might find on YouTube, for example. It also means you could eventually talk to your in-home smart speaker, instructing your Google Assistant on how exactly to dim your lights, play your tunes, and set the mood for dinner without manually entering each step.

When I asked Conner-Simons about real-world applications, I suggested a robot could help someone crack open a cold one with the boys. He said that “isn’t exactly the first use case that the team had in mind,” but the ability to move household items would be a valuable skill. “We envision that a system like this could have important implications for people with limited mobility, such as the elderly or the disabled,” he said.

But what about the lazy?

This AirPods wrist holder looks goofy as heck


Here’s an idea: a wristband for AirPods. It’s simple and dumb, and it basically allows you to forgo your AirPods case. You can buy this accessory, which comes from a company called Elago, for $14.99 on Amazon. This is the same company that created the retro Mac iPhone stand. It has lots of accessory ideas.

This AirPods holder also fits over a standard Apple Watch band, so you can always carry your AirPods next to your watch. Worn that way, it looks better than the band on its own; you shouldn’t wear the band with just AirPods, because it doesn’t look great.

I get that carrying a case around is annoying, but have some pride in yourself and don’t wear your AirPods on your wrist.

Apple’s HomePod is launching in Canada for $449 on June 18

After launching in the U.S. back in February, Apple’s HomePod is finally making its way to Canada on June 18th for $449. The smart speaker features an Apple-designed woofer that offers deep, clean bass, as well as an array of seven beam-forming tweeters that feature high frequency acoustics and directional control. The speaker also includes Apple’s A8 chip, the same processor featured in the iPhone 6 and iPhone 6 Plus. While the HomePod has been praised by critics for its acoustic calibration system and audio quality either rivalling or surpassing the Sonos One — depending on who you ask — the device has been heavily criticized for its lack of third-party service support.

This means that while speakers like the Google Home Max and Amazon’s Echo series of devices are compatible with third-party platforms like Spotify, the HomePod only works with Apple Music. Siri on the HomePod has also been criticized for its limited functionality in comparison to competing assistants like Amazon Alexa and Google Assistant. Further, the HomePod can’t handle simple tasks that other smart home speakers can: it isn’t able to place phone calls directly from the speaker, set multiple timers at once, or, perhaps most significantly, distinguish between multiple users’ voices.

Apple says that Canadian French language support is coming as a free software update to the HomePod later this year. Calendar support is also coming to Canada, France and Germany later this year, according to the tech giant. Along with a Canadian release date for the HomePod, Apple has also revealed new iOS 11.4 features, including AirPlay 2 multi-room audio support.

Bye, Chrome: Why I’m switching to Firefox and you should too

The time has come.

Bye, Chrome: Why I’m switching to Firefox and you should too

You’re probably sick of hearing about data and privacy by now–especially because, if you live in the United States, you might feel like there’s very little you can do to protect yourself from giant corporations feeding off your time, interests, and personal information.

So how do you walk the line between taking advantage of the internet’s many benefits and protecting yourself from the corporate interests that aim to use your data for gain? This is the push-and-pull I’ve had with myself over the past year, as I’ve grappled with the revelations that Cambridge Analytica obtained the personal data of more than 50 million Americans, courtesy of Facebook, and used it to manipulate people in the 2016 elections. I’ve watched companies shut down their European branches because Europe’s data privacy regulations invalidate their business models. And given the number of data breaches that have occurred over the past decade, there’s a good chance that malicious hackers have my info–and if they don’t, it’s only a matter of time.

[Screenshot: Mozilla]

While the amount of data about me may not have caused harm in my life yet–as far as I know–I don’t want to be the victim of monopolistic internet oligarchs as they continue to cash in on surveillance-based business models. What’s a concerned citizen of the internet to do? Here’s one no-brainer: Stop using Chrome and switch to Firefox.

Google already runs a lot of my online life–it’s my email, my calendar, my go-to map, and all my documents. I use Duck Duck Go as my primary search engine because I’m aware of how much information about myself I voluntarily give to Google in so many other ways. I can’t even remember why I decided to use Chrome in the first place. The browser has become such a default for American internet users that I never even questioned it. Chrome has about 60% of the browser market, and Firefox has only 10%. But why should I continue to use the company’s browser, which acts as literally the window through which I experience much of the internet, when its incentives–to learn a lot about me so it can sell advertisements–don’t align with mine?

Firefox launched in 2004. It’s not a new option among internet privacy wonks. But I only remembered it existed recently while reporting on data privacy. Unlike Chrome, Firefox is run by Mozilla, a nonprofit organization that advocates for a “healthy” internet. Its mission is to help build an internet in an open-source manner that’s accessible to everyone–and where privacy and security are built in. Contrast that to Chrome’s privacy policy, which states that it stores your browsing data locally unless you are signed in to your Google account, which enables the browser to send that information back to Google. The policy also states that Chrome allows third-party websites to access your IP address and any information that site has tracked using cookies. If you care about privacy at all, you should ditch the browser that supports a company using data to sell advertisements and enabling other companies to track your online movements for one that does not use your data at all.

Though Mozilla itself is a nonprofit, Firefox is developed within a corporation owned by the nonprofit. This enables the Mozilla Corporation to collect revenue to support its development of Firefox and other internet services. Ironically, Mozilla supports its developers using revenue from Google, which pays the nonprofit to have Google Search as Firefox’s default search engine. That’s not its sole revenue: Mozilla also has other agreements with search engines around the world, like Baidu in China, to be the default search engine in particular locations. But because it relies on these agreements rather than gathering user data so it can sell advertisements, the Mozilla Corporation has a fundamentally different business model than Google. Search providers pay Mozilla, rather than Mozilla having to create revenue out of its user base. It’s more of a subscription model than a surveillance model, and users always have the choice to change their search engine to whichever they prefer.

I spoke to Madhava Enros, the senior director of Firefox UX, and Peter Doljanski, a product manager for Firefox, to learn more about how Mozilla’s browser builds privacy into its architecture. Core to their philosophy? Privacy and convenience don’t have to be mutually exclusive.

Instead, Firefox’s designers and developers try to make the best decision on behalf of the user, while always leaning toward privacy first. “We put the user first in terms of privacy,” Doljanski says. “We do not collect personally identifiable data, not what you do or what websites you go to.”

That’s not just lip service, like it often is when companies like Facebook claim that users are in control of their data. For instance, Firefox protects you from being tracked by advertising networks across websites, which has the lovely side effect of making sites load faster. “As you move from website to website, advertising networks essentially follow you so they can see what you’re doing so they can serve you targeted advertisements,” Doljanski says. “Firefox is the only browser out of the box that prevents that from happening.” The browser’s Tracking Protection feature automatically blocks a list of common trackers, something you need a specific, third-party browser extension to do on Chrome.
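List-based tracking protection boils down to checking each outgoing request’s host against a blocklist before the request is made. A rough sketch — the tracker domains below are invented, and Firefox’s real list comes from Disconnect:

```python
from urllib.parse import urlparse

# Illustrative blocklist; a real one contains thousands of entries.
TRACKER_DOMAINS = {"tracker.example", "ads.example"}

def is_blocked(request_url):
    """True if the request targets a listed tracker domain or any of
    its subdomains; such requests are simply never sent."""
    host = urlparse(request_url).hostname or ""
    return any(host == domain or host.endswith("." + domain)
               for domain in TRACKER_DOMAINS)

print(is_blocked("https://cdn.tracker.example/pixel.gif"))  # blocked
print(is_blocked("https://news.example/article"))           # allowed
```

Blocking the request outright, rather than loading it and discarding the data, is also why pages load faster with tracking protection on.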

The “out of the box” element of Firefox’s privacy protection is crucial. Chrome does give you many privacy controls, but the default for most of them is to allow Google to collect as much information about you as possible. For instance, Google Chrome gives users the option to tell every website you go to not to track you, but it’s not automatically turned on. Firefox offers the same function to add a “Do Not Track” tag to every site you visit–but when I downloaded the browser, the default was set to “always.”

[Screenshot: Mozilla]

Because Chrome settings that don’t encourage privacy are the default, users are encouraged to leave them as they are from the get-go, and likely don’t understand what data Google vacuums up. Even if you do care, reading through Google Chrome’s 13,500-word privacy white paper, which uses a lot of technical jargon and obfuscates exactly what data the browser is tracking, isn’t helpful either. When I reached out to Google with questions about what data Chrome tracks, the company sent me that white paper but didn’t answer any of my specific questions.

One downside to using Firefox is that many browser extensions are built primarily for Chrome–my password manager luckily has a Firefox extension but it often causes the browser to crash. However, Mozilla also builds extensions you can use exclusively on Firefox. After the Facebook and Cambridge Analytica firestorm, Firefox released an extension called the Facebook Container, which allows you to browse Facebook or Instagram normally, but prevents Facebook from tracking where you went when you left the site–and thus stops the company from tracking you around the web and using that information to build out a more robust personal profile of you.

[Screenshot: Mozilla]

Firefox isn’t even Mozilla’s most private browser. The nonprofit also has a mobile-only browser called Firefox Focus that basically turns Firefox’s private browsing mode (akin to incognito browsing on Chrome, but with much less data leakage) into a full-fledged browser on its own. Privacy is built right into Focus’s UX: There’s a large “erase” button on every screen that lets you delete all of your history with a single tap. Focus and Firefox’s private browsing mode also have a feature called “origin referrer trimming,” where the browser automatically deletes the information about which site you’re coming from when you land on the next page. “The user doesn’t need to think about that,” Doljanski says. “It’s not heavily advertised, but it’s the little decisions we make along the way that meant the user doesn’t have to make the choice”–or even know what origin referrer trimming is in the first place.
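Origin referrer trimming itself is a simple transformation. A sketch of it, assuming the standard scheme-plus-host definition of an origin:

```python
from urllib.parse import urlparse

def trim_referrer_to_origin(referrer_url):
    """Before the browser sends the Referer header, strip everything
    after the origin (scheme + host), so the next site learns only which
    site you came from, not which page or query string you were on."""
    parts = urlparse(referrer_url)
    return f"{parts.scheme}://{parts.netloc}/"

print(trim_referrer_to_origin("https://example.com/account/orders?id=123"))
```

The destination site sees only `https://example.com/`, with the path and the order ID stripped away.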

Firefox Focus [Screenshot: Mozilla]

Many of these decisions, both in Firefox and in Focus, are to guard against what Enros calls the “uncanny valley” of internet browsing–when ads follow you around the internet for weeks. “I buy a toaster, and now it feels like the internet has decided I’m a toaster enthusiast and I want to hear about toasters for the rest of my life,” he says. “It’s not a scary thing. I’m not scared of toasters, but it’s in an uncanny valley in which I wonder what kinds of decisions they’re making about me.”

Ultimately, Firefox’s designers have the leeway to make these privacy-first decisions because Mozilla’s motivations are fundamentally different from Google’s. Mozilla is a nonprofit with a mission, and Google is a for-profit corporation with an advertising-based business model. To a large degree, Google’s business model relies on users giving up their data, making it incompatible with the kind of internet that Firefox is mission-bound to build. It comes back to money: While Firefox and Chrome ultimately perform the same service, the browsers’ developers approached their design in a radically different way because one organization has to serve a bottom line, and the other doesn’t.

That also means Firefox’s mission is aligned with its users. The browser is explicitly designed to help people like me navigate the convenience versus privacy conundrum. “To a great degree, people like us need solutions that aren’t going to detrimentally impact our convenience. This is where privacy is often difficult online,”  Doljanski says. “People say, go install this VPN, do this and do that, and add all these layers of complexity. The average user or even tech-savvy user that doesn’t have the time to do all these things will choose convenience over privacy. We try to make meaningful decisions on behalf of the user so we don’t need to put something else in front of them.”

When GDPR, the most sweeping privacy law in recent years, went into effect last week, we saw firsthand how much work companies were requiring users to do–just think of all those opt-in emails. Those emails are certainly a step toward raising people’s awareness about privacy, but I deleted almost all of them without reading them, and you probably did, too. Mozilla’s approach is to make the best decision for users’ privacy in the first place, without requiring so much effort on the users’ part.

Because who really spends any time in their privacy settings? Settings pages aren’t a good UX solution to providing clear information about how data is used, which is now required in Europe because of GDPR. “Control can’t mean the responsibility to scrutinize every possible option to keep yourself safe,” Enros says. “We assume a position to keep you safe, and then introduce more controls for experts.”

Firefox doesn’t always work better than Chrome–sometimes it’ll freeze on my older work computer, and I do need to clear my history more frequently so the browser doesn’t get too slow. But these are easy trade-offs to make, knowing that by using Firefox, my data is safe with me.