https://www.teslarati.com/tesla-model-3-evs-gas-cars-noisemaker-law-uk/

Tesla, other EVs required to have ‘traditional engine’ sound to meet new EU rule

Electric vehicles such as Teslas are known for being incredibly quiet. Without an internal combustion engine’s controlled explosions under the hood, electric cars are capable of operating in near-total silence. According to a new EU rule, that is about to change.

A new EU rule coming into force on Monday requires new electric vehicles to be equipped with a pedestrian noisemaker. The regulation follows concerns that low-emission vehicles such as battery-electric cars are simply too quiet for the road, making them a risk for pedestrians, cyclists, and visually impaired individuals (among others). Under the new rules, a car’s Acoustic Vehicle Alert System (AVAS) must be engaged when reversing or traveling below 12 mph (19 km/h).
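The triggering rule itself is simple enough to express in a few lines of code. The sketch below is purely illustrative — only the 12 mph (19 km/h) threshold, the reversing condition, and the driver-deactivation option come from the ruling; the function and parameter names are hypothetical, not any manufacturer’s API.

```python
# Minimal sketch of the AVAS triggering rule described above.
# The 19 km/h threshold and reversing condition come from the EU ruling;
# everything else (names, structure) is made up for illustration.

AVAS_SPEED_THRESHOLD_KMH = 19  # roughly 12 mph

def avas_should_sound(speed_kmh: float, reversing: bool, driver_override: bool) -> bool:
    """Return True when the pedestrian noisemaker must be active."""
    if driver_override:          # the ruling lets drivers deactivate the sound as necessary
        return False
    return reversing or speed_kmh < AVAS_SPEED_THRESHOLD_KMH

# Creeping through a car park at 8 km/h -> sound on
print(avas_should_sound(8.0, reversing=False, driver_override=False))   # True
# Cruising at 50 km/h -> sound off
print(avas_should_sound(50.0, reversing=False, driver_override=False))  # False
```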

In its ruling, the EU noted that vehicles usually back up or travel at low speeds in areas that are near people, such as city streets or crosswalks, though the new regulation does allow drivers to deactivate their vehicles’ pedestrian noisemakers as necessary. The EU also noted that by 2021, all electric vehicles — not just new models coming to market — must be equipped with an AVAS.

Quite interestingly, the ruling mentioned that the noise-emitting devices would give EVs a sound similar to that of a “traditional engine,” according to a BBC report. This particular detail is notable, since making electric vehicles sound like conventional cars will likely result in some level of noise pollution, something EVs could otherwise largely avoid.

Road noise, after all, is considered the second most harmful environmental stressor in Europe, surpassed only by air pollution. The European Federation for Transport and Environment (also known as Transport and Environment or T&E), for one, notes that vehicle noise is a “major cause, not only of hearing loss, but also of heart disease, learning problems in children and sleep disturbance.”

On the flip side, pedestrian noisemakers do make it far easier for the visually impaired to detect where vehicles are on the road. This was highlighted by Guide Dogs for the Blind, a charity that has complained about the absence of sounds from low-emission vehicles. Guide Dogs welcomed the new EU ruling, though the group noted that it would be better if EVs were required to produce sound at all speeds.

The UK’s Minister of State at the Department for Transport, Michael Ellis, noted in a statement to the publication that the ruling came about as a result of the government wanting the “benefits of green transport to be felt by everyone,” while considering the safety needs of people at the same time. “This new requirement will give pedestrians added confidence when crossing the road,” he said.

While the new ruling’s consideration for the visually impaired is admirable, it is interesting to see the EU regulations require electric vehicles to sound like traditional gas cars. EVs, after all, could have their own unique sound, as can be heard in the Jaguar I-PACE, the Audi e-tron, and even the prototype units of the Porsche Taycan. Even Tesla seems to be working on an AVAS, as hinted at by what appears to be a speaker grille on a Model 3’s underbody. Nevertheless, when Tesla rolls out its vehicles’ pedestrian noisemakers, one can be fairly confident they will be designed to minimize noise pollution, and they will most likely not simulate the sound of a “traditional” engine.

https://www.macworld.com/article/3405842/target-is-selling-the-homepod-for-the-lowest-price-we-ve-ever-seen.html

Target is selling the HomePod for the lowest price we’ve ever seen

Target is knocking the HomePod’s price down to $200, and that’s a price that sounds right on target.


There has literally never been a better time to buy a HomePod. Recently, Apple released a big patch that made up for a number of the disappointments at the device’s launch, and today Target is knocking down the HomePod’s price to $199.99. That’s a full $50 down from the $250 Apple has been selling it for since April. It’s also a full $149 down from the $349 it sold for at launch, and it’s a price that’s much more in line with what you’re getting with Apple’s smart speaker.

If you’ve been on the fence about getting one, this is a good price. I’ll even go so far as to say that this is the price Apple probably should have sold it for at launch. Our own Michael Simon had a similar take earlier this year.

The HomePod now lets you make calls, set timers, add calendar events, and call up music and playlists just by using Siri. It also lets you search for songs based on lyrics, and it’s got a phenomenal sound system that makes your music sound intense regardless of where you’re standing in a room. As an apartment dweller, my only real complaint is that its subwoofer is a little too good, to the point that I worry my neighbors can hear the HomePod thumpin’ two floors down even at a reasonable volume. (And sadly, there’s no real equalizer.)

If you want more information, be sure to check out our story above that details the changes, as well as our review following its launch last year.

https://www.npr.org/2019/06/28/734034327/algorithmic-intelligence-has-gotten-so-smart-its-easy-to-forget-it-s-artificial

Algorithmic Intelligence Has Gotten So Smart, It’s Easy To Forget It’s Artificial

Computers use algorithms to do everything from adding up a column of figures to resizing a window.

Algorithms were around for a very long time before the public paid them any notice. The word itself is derived from the name of a 9th-century Persian mathematician, and the notion is simple enough: an algorithm is just any step-by-step procedure for accomplishing some task, from making the morning coffee to performing cardiac surgery.

Computers use algorithms for pretty much everything they do — adding up a column of figures, resizing a window, saving a file to disk. But all those things usually just happen the way they’re supposed to. We don’t have to think about what’s going on under the hood.

But algorithms got harder to ignore when they started taking over tasks that used to require human judgment — deciding which criminal defendants get bail, winnowing job applications, prioritizing stories in a news feed. All at once the media are full of disquieting headlines like “How to Manage Our Algorithmic Overlords” and “Is the Algorithmification of the Human Experience a Good Thing?”

Ordinary muggles may not know exactly how an algorithm works its magic, and a lot of people use the word just as a tech-inflected abracadabra. But we’re reminded every day how unreliable these algorithms can be. Ads for vitamin supplements show up in our mail feed, while wedding invitations are buried in the junk file. An app sends us off a crowded highway and lands us bumper-to-bumper on local streets.

OK — these are mostly just inconveniences. But they shake our confidence in the algorithms that are doing more important work. How can I trust Facebook’s algorithms to get hate speech right when they’ve got other algorithms telling advertisers that my interests include The Celebrity Apprentice, beauty pageants and the World Wrestling Entertainment Hall of Fame?

It’s hard to resist anthropomorphizing these algorithms — we endow them with insight and intellect, or with human frailties like bad taste and bias. Disney literally personified the algorithm in its 2018 animated movie Ralph Breaks the Internet, in the form of a character who has the title of Head Algorithm at a video-sharing site. She’s an imperious fashionista who recalls Meryl Streep in The Devil Wears Prada as she sits at a desk swiping through cat videos and saying “no,” “no,” “yes.”

Tech companies tend to foster that anthropomorphic illusion when they tout their algorithms as artificial intelligence or just AI. To most people, that term evokes the efforts to create self-aware beings capable of reasoning and explaining themselves, like Commander Data of Star Trek or HAL in 2001: A Space Odyssey.

That was the aim of what computer scientists call “good old-fashioned” AI. But AI now connotes what’s called “second-wave AI” or “narrow AI.” That’s a very different project, focused on machine learning. The idea is to build systems that can mimic human behavior without having to understand it. You train an algorithm in something like the way psychologists have trained pigeons to distinguish pictures of Charlie Brown from pictures of Lucy. You give it a pile of data — posts that Facebook users have engaged with, comments that human reviewers have classified as toxic or benign, messages tagged as spam or not spam, and so on. The algorithm chews over thousands or millions of factors until it can figure out for itself how to tell the categories apart or predict which posts or videos somebody will click on. At that point you can set it loose in the world.

These algorithms can be quite adept at specific tasks. Take a very simple system I built with two colleagues some years ago that could sort out texts according to their genre. We trained an algorithm on a set of texts that were tagged as news articles, editorials, fiction, and so on, and it masticated their words and punctuation until it was pretty good at telling them apart — for instance, it figured out for itself that when a text contained an exclamation point or a question mark, it was more likely to be an editorial than a news story. But it didn’t understand the texts it was processing or have any concept of the difference between an opinion and a news story, any more than those pigeons know who Charlie Brown and Lucy are.
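A toy version of that kind of classifier is easy to reproduce with off-the-shelf tools. The sketch below is not the system described above — the training texts are invented and the setup is deliberately minimal — but it shows the same idea: count words and punctuation, fit a simple model on labeled examples, and let it guess the genre of unseen text.

```python
# Toy text-genre classifier in the spirit of the system described above.
# The training texts are invented placeholders; a real experiment would use
# thousands of labeled documents.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "The council approved the budget on Tuesday after a brief debate.",
    "Officials confirmed the road will reopen next month.",
    "Can we really afford to ignore this? The answer should outrage everyone!",
    "It is time for the city to act, and act now!",
]
labels = ["news", "news", "editorial", "editorial"]

# The token pattern keeps '!' and '?' as features, mirroring the punctuation cue
model = make_pipeline(
    CountVectorizer(token_pattern=r"[A-Za-z]+|[!?]"),
    MultinomialNB(),
)
model.fit(texts, labels)

print(model.predict(["Why should taxpayers put up with this?"]))  # likely 'editorial'
```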

The University of Toronto computer scientist Brian Cantwell Smith makes this point very crisply in a forthcoming book called The Promise of Artificial Intelligence, arguing that such systems have no concept of spam or porn or extremism or even of a game — rather, those are just elements of the narratives we tell about them.

 

These algorithms are really triumphs of intelligent artifice: ingenious systems that can mindlessly simulate human judgment. Sometimes they do that all too well, when they reproduce the errors in judgment they were trained on. If you train a credit rating algorithm on historical lending data that’s infected with racial or gender bias, the algorithm is going to inherit that bias, and it won’t be easy to tell. But they can also fail in alien ways that betray an unhuman weirdness. You think of the porn filters that block flesh-colored pictures of pigs and puddings, or those notorious image recognition algorithms that were identifying black faces as gorillas.

So it’s natural to be wary of our new algorithmic overlords. They’ve gotten so good at faking intelligent behavior that it’s easy to forget that there’s really nobody home.

https://www.cnx-software.com/2019/06/29/new-raspberry-pi-4-vli-firmware-lowers-temperature/

New Raspberry Pi 4 VLI Firmware Lowers Temperature by 3-5°C

The other day I tested the Raspberry Pi 4 with a heatsink, since previous multi-threaded benchmarks clearly made the board throttle when running them without any cooling solution.

The guys at the Raspberry Pi Foundation somehow noticed my post, and I received an email from Eben Upton explaining that a new Raspberry Pi 4 VLI firmware had “some thermal optimizations that are not installed by default on early production units.” I did not understand “VLI” at first, but eventually realized it referred to the firmware for the VIA VL805 PCIe USB 3.0 controller on the board.

The Raspberry Pi Foundation provided me with a test version of the firmware, which they’ll release in the next few days, or weeks after testing is completed.

Now if you’re going to test a platform that will throttle due to overheating, it’s very important that you do so at a constant room temperature. I work in an office where the air conditioner is set to 28°C, so that’s about the temperature I have here.

Before going on with the test, I installed rpi-monitor to get nice CPU temperature charts later on.
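rpi-monitor draws the charts; if you just want raw numbers alongside a benchmark run, a few lines of Python polling the stock vcgencmd tool will do. This is only a convenience sketch, not what rpi-monitor does internally.

```python
# Quick-and-dirty CPU temperature logger for a Raspberry Pi.
# Uses the stock `vcgencmd measure_temp` command; run it alongside a benchmark.
import subprocess
import time

def cpu_temp_c() -> float:
    out = subprocess.check_output(["vcgencmd", "measure_temp"], text=True)
    # output looks like: temp=61.2'C
    return float(out.strip().split("=")[1].rstrip("'C"))

if __name__ == "__main__":
    while True:
        print(f"{time.strftime('%H:%M:%S')}  {cpu_temp_c():.1f}°C")
        time.sleep(10)
```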

Let’s run sbc-bench without a heatsink, using the old VLI firmware (version 00013701).

7-zip never completed, as it was killed three times due to running out of memory. Maybe this happened because, with a 1GB RPi 4, we’re right at the limit. Enabling ZRAM may help.

But we do have our temperature data for the full benchmark. We started at 67°C idle, and the spike to over 80°C (11:26 to 11:30) occurs exactly during the 7-zip multi-threaded benchmark:

CPU temperature during sbc-bench

Now let’s install the new firmware from a terminal on the Raspberry Pi 4.

For reference, the tool can also be used to back up the firmware and to write to any location in the EEPROM.

If you mess up you’ll lose USB connectivity, but you would then have to SSH into the device (or use a serial console) and re-run the tool to flash an older firmware to recover. It’s unclear whether early adopters will have to update the firmware manually, or whether it will be done automatically as part of the update process. That’s one of the reasons I can’t share the files now.

Raspberry Pi 4 VLI Firmware Idle Temperature

Nevertheless, it does seem to have some effect on idle temperature. Previously I got just under 65°C, but now I get just above 61°C once it stabilizes, so the new firmware does lower the temperature by 3 to 4°C thanks to lower power consumption. Sadly, I can’t measure the latter as my power meter is dead.

Now let’s run sbc-bench again, without a heatsink but with the new VLI firmware (version 0137a8).

This time, all three 7-zip runs completed for some reason, and while throttling still occurred, it did so to a lesser extent, and the temperature was clearly lower during the single-threaded benchmarks (~70°C vs 75°C with the old firmware).

Raspberry Pi 4 VLI-805 Firmware Temperature

For reference, the 7-zip benchmark score with a heatsink averaged 5,397 points, and without a heatsink on the old VLI firmware 4,423 points, but the “no heatsink with new VLI firmware” results are much better at 5,298 points. You’ll also note the first two runs were as good as the results with a heatsink, but the last one dropped to just under 5,000 points, so for full load over an extended period of time a heatsink is still recommended for full performance. It’s still impressive what a new firmware can achieve.

You may wonder what the Raspberry Pi Foundation has changed. Thomas Kaiser may have found the reason in advance: with the new firmware, ASPM (Active-State Power Management) now seems to be enabled.

This was not the case with the old VLI firmware. The full lspci output can be found here.
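If you want to check the ASPM state on your own board, it shows up in lspci’s verbose output. Here is a small helper, assuming the pciutils package (which provides lspci) is installed; it is just a convenience filter, not anything the Foundation ships.

```python
# Report the ASPM (Active-State Power Management) lines from lspci.
# Requires the pciutils package; run with sudo for full link details.
import subprocess

def aspm_lines() -> list:
    out = subprocess.check_output(["lspci", "-vv"], text=True)
    return [line.strip() for line in out.splitlines() if "ASPM" in line]

if __name__ == "__main__":
    for line in aspm_lines():
        print(line)
```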


https://hackaday.com/2019/06/28/power-to-the-pi-4-some-chargers-may-not-make-the-grade/

POWER TO THE PI 4: SOME CHARGERS MAY NOT MAKE THE GRADE

The Raspberry Pi 4 has been in the hands of consumers for a few days now, and while everyone seems happy with their new boards, there are some reports of certain USB-C power supplies not powering them. It has been speculated that the cause may lie in the use of pulldown resistors on the configuration channel (CC) lines behind the USB-C socket on the Pi: one may have been used where two are required. Supplies named include some Apple MacBook chargers, and there is a suggestion that the Pi may not be the only device these chargers fail to power.

Is this something you should be worried about? Almost certainly not. The Pi folks have tested their product with a wide variety of chargers but it is inevitable that they would be unable to catch every possible one. If your charger is affected, try another one.

What it does illustrate is the difficulty faced by anybody bringing a new electronic product to market, no matter how large or small they are as an organisation. It’s near-impossible to test for every possible use case; indeed, similar slip-ups have happened with previous Pi models. You may remember that the Raspberry Pi 2 could be reset by a camera flash or, if you have a very long memory, that the earliest boards had an unseemly fight between two 1.8 V lines that led to a hot USB chip. Neither of those minor quirks dented the boards’ ability to get the job done.

Mistakes happen. Making the change to USB-C from the relative simplicity of micro-USB is a big step for all concerned, and it would be a surprise were it to pass entirely without incident. We’re sure that in time there will be a revised Pi 4, and we’d be interested to note what they do in this corner of it.

https://www.cnet.com/how-to/create-a-google-home-routine-for-every-part-of-your-day/

Create a Google Home routine for every part of your day

Whether you’re just waking up or on your way home from work, Google can make it all easier.


There are routines for almost every part of your day.

Chris Monroe/CNET

When you open the Google Home app, tap Routines to get started with setting them up. If you tap any of the individual routines (Bedtime, Good Morning, I’m home or Leaving home), Google Assistant will open on your phone and tell you a few things you can set up in each routine. For example, the Assistant can set your alarm, tell you what’s on your calendar tomorrow and play some relaxing nature sounds for your bedtime routine.

The point of a Google Routine is to save you time. One phrase will activate your routine and all the tasks associated with it, so you don’t have to do six things when you wake up, go to bed, leave the house or get home.

Set a routine

Routines > Manage Routines

  • Choose the routine you want to customize
  • Choose a command to trigger your routine (you can also set a custom command)
  • Choose the order that you want your Assistant to accomplish tasks in
  • Tapping the gear icon next to most of these options will let you customize further
  • Tapping add action will let you add extra tasks
  • Tap the check mark icon to save your preferences

Good Morning Routine

Routines > Manage Routines > Good Morning

The phrases you can say to start your morning routine include “Good morning,” “Tell me about my day” or “I’m up,” but you have to preface it with “OK, Google.”

Your Assistant will greet you and start your routine in the order you customized it, like turning on the lights (make sure your light switch is on), then telling you what time it is and more. Google can also tell you about the day’s weather forecast, what your work commute looks like and what’s on your calendar.

If you set it up, Google will start a music playlist, the news, the radio, a podcast and more. You can only pick one, so there’ll be no hodgepodge of Metallica and NPR going at the same time.

Bedtime routine

Routines > Manage Routines > Bedtime

The phrases you can say to start your bedtime routine include “Bedtime,” “Goodnight” or “Time to hit the hay,” but again you have to preface it with “OK, Google.”

As with your Good Morning Routine, your Assistant will greet you and start your routine in the order you customized it — turning off the lights (make sure your light switch is on), setting an alarm for tomorrow and more. Google can also tell you about tomorrow’s weather forecast and what’s on your calendar.

Why not also end the day with a music playlist or sleep sounds (e.g. rain, ocean, thunderstorms, a crackling fireplace, white noise) and set them on a timer so they don’t play all night?

I’m home routine

Routines > Manage Routines > I’m home

Say “I’m home” or “I’m back” and Google can start the I’m Home routine that you’ve set up. The more Google gadgets you have, the more stuff you can do with it. Your routine can adjust lights, the thermostat, remind you of what you need to do at home (wash the dishes, walk the dog, etc.) and then play your music or the news like during your Good Morning routine.

Google can also “broadcast” that you’re home (or any message that you want), just tap Broadcast I’m Home. When I tested this feature, a fanfare song started playing in my house and Google announced that I was home. It then gave my husband the chance to reply with a message. Depending on how many people are in your house and where the devices are, you could have some fun (or be very annoying!).

Leaving home routine

Routines > Manage Routines >  Leaving home

The phrases you can say to start your leaving home routine include “I’m leaving” or “I’m heading out,” but remember to preface them with “OK, Google.” You’ll mostly need more gadgets for this one to be useful, unless you just want to hear Google tell you to have a great day.

The Leaving Home routine can shut off your lights, adjust your Nest thermostat, lock your doors and arm your security system.


Commuting to Work routine

Routines > Manage Routines >  Commuting to work

The commuting to work routine is similar to the Leaving Home routine, just more specific. When you tap the gear icon, Google lets you input your work address. Say “Let’s go to work” and Assistant will also tell you about the weather, commute info and what you have on your calendar. Before you leave, it can adjust the thermostat, shut off the lights and play music, a podcast or the news for your drive. It’ll play on your phone, but if you have Bluetooth on, it will play in your car.

Commuting home routine

Routines > Manage Routines >  Commuting home

The commuting home routine is similar to the commuting to work routine, but it has a few more options. If you added your work address to Commuting to Work, it’ll be on your Commuting Home routine, so you’ll know the traffic home.

Say “Let’s go home” and Assistant can help you read and send texts (hands-free of course), adjust your lights and thermostat at home, and broadcast to your family that you’re on your way home. And of course, what’s a commute home without music, news or a podcast? Google can do that too.


https://www.theregister.co.uk/2019/06/28/ai_3d_simulations_universe/

That this AI can simulate universes in 30ms is not the scary part. It’s that its creators don’t know why it works so well

Saves time, only a little accuracy lost, unexpectedly understands dark matter


Neural networks can build 3D simulations of the universe in milliseconds, compared to days or weeks when using traditional supercomputing methods, according to new research.

To study how stuff interacts in space, scientists typically build computational models to simulate the cosmos. One simulation approach – known as N-body simulation – can be used to recreate phenomena ranging from smaller events, such as the collapse of molecular clouds into stars, to a giant system, such as the whole universe, obviously to varying levels of accuracy and resolution.

The individual interactions between each of the millions or billions of particles or entities in these models have to be repeatedly calculated to track their motion over time. This requires heavy amounts of compute power, and it takes several days or weeks for a supercomputer to return the results.

For impatient boffins, there’s now some good news. A group of physicists, led by eggheads at the Center for Computational Astrophysics at the Flatiron Institute in New York, USA, decided to see if neural networks could speed things up a bit.

They successfully built a deep neural network dubbed the deep density displacement model, or D3M, according to a paper published this week in the Proceedings of the National Academy of Sciences of the United States of America. The model creates simulations of the universe after being given a set of displacement vectors for the particles in the system: these vectors, simply put, define the direction and distance the particles should be heading as the universe expands. It then turns these vectors into images that show how the particles of matter actually move, under the effects of gravity, and clump together to form webs of galaxy filaments over time.


A total of 8,000 universe simulations, each containing 32,728 particles spread over a virtual space spanning 600 million light years, were produced by traditional software and used to train D3M. In other words, it was taught how particles interact from thousands of traditional universe simulations, so that, during inference, for a given arbitrary input set of particles and displacement vectors, it can use its gained intuition to produce an output. It wasn’t told the physics equations behind the interactions, which the traditional non-AI simulators are programmed to calculate; instead it gains an intuition on what’s expected from an input set.

The accuracy of the neural network is judged by how similar its outputs are to the ones created by two more traditional N-body simulation systems, FastPM and 2LPT, when all three are given the same inputs. When D3M was tasked with producing 1,000 simulations from 1,000 sets of input data, it had a relative error of 2.8 per cent compared to FastPM, and 9.3 per cent compared to 2LPT, for the same inputs. That’s not too bad, considering it takes the model just 30 milliseconds to crank out a simulation. Not only does that save time, but it’s also cheaper since less compute power is needed.
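For the curious, a relative error of this sort is essentially a normalized difference between the two codes’ outputs. The NumPy sketch below illustrates such a comparison; the arrays are random placeholders, and the paper’s exact metric may be defined somewhat differently.

```python
# Compare two particle displacement fields with a simple relative-error metric.
# The random arrays stand in for real simulation outputs; the paper's exact
# definition of relative error may differ from this sketch.
import numpy as np

rng = np.random.default_rng(0)
reference = rng.normal(size=(32_768, 3))                                # e.g. FastPM displacements
prediction = reference + rng.normal(scale=0.03, size=reference.shape)   # e.g. D3M output

rel_error = np.linalg.norm(prediction - reference) / np.linalg.norm(reference)
print(f"relative error: {rel_error:.1%}")   # ~3% for this synthetic example
```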

To their surprise, the researchers also noticed that D3M seemed to be able to produce simulations of the universe from conditions that weren’t specifically included in the training data. During inference tests, the team tweaked input variables such as the amount of dark matter in the virtual universes, and the model still managed to spit out accurate simulations despite not being specifically trained for these changes.

“It’s like teaching image recognition software with lots of pictures of cats and dogs, but then it’s able to recognize elephants,” said Shirley Ho, first author of the paper and a group leader at the Flatiron Institute. “Nobody knows how it does this, and it’s a great mystery to be solved.

“We can be an interesting playground for a machine learner to use to see why this model extrapolates so well, why it extrapolates to elephants instead of just recognizing cats and dogs. It’s a two-way street between science and deep learning.”

The source code for the neural networks can be found here.

https://medicalxpress.com/news/2019-06-social-engagement-amyloid-cognitive-decline.html

Study connects low social engagement to amyloid levels and cognitive decline


Social relationships are essential to aging well; research has shown an association between lack of social engagement and increased risk of dementia. A new study by investigators from Brigham and Women’s Hospital found that higher brain amyloid-β in combination with lower social engagement in elderly men and women was associated with greater cognitive decline over three years. The results of the study were published last month in the American Journal of Geriatric Psychiatry.

“Social engagement and cognitive function are related to one another and appear to decline together,” said senior author Nancy Donovan, MD, chief of the Division of Geriatric Psychiatry at the Brigham. “This means that social engagement may be an important marker of resilience or vulnerability in older adults at risk of cognitive impairment.”

The investigators sampled 217 men and women enrolled in the Harvard Aging Brain Study, a longitudinal observational study looking for early neurobiological and clinical signs of Alzheimer’s disease. The participants, aged 63-89, were cognitively normal, but some individuals showed high levels of amyloid-β protein, a pathologic hallmark of Alzheimer’s disease detected with neuroimaging techniques.

The investigators used standard questionnaires and examinations to assess participants’ social engagement (including activities such as spending time with friends and family) and cognitive performance at baseline and three years later.

Social engagement was particularly relevant to cognition in participants with evidence of Alzheimer’s disease brain changes. The researchers report that, among cognitively normal older adults with high levels of amyloid-β, those who had lower social engagement at baseline showed steeper cognitive decline than those who were more socially engaged. This association was not observed in those with low amyloid-β.

Donovan and her team used a standard measure of social engagement that did not capture all the intricacies of digital communication or the qualitative aspects of relationships. They reported that a more contemporary and comprehensive assessment of social engagement could be a valuable outcome measure in future clinical trials of Alzheimer’s disease.

The team noted that future studies with follow-up periods longer than three years may further gauge cognitive decline over time and help untangle the complex mechanisms of Alzheimer’s disease.

“We want to understand the breadth of this issue in older people and how to intervene to protect high-risk individuals and preserve their health and well-being,” said Donovan.




More information: Kelsey D. Biddle et al, “Social Engagement and Amyloid-β-Related Cognitive Decline in Cognitively Normal Older Adults,” The American Journal of Geriatric Psychiatry (2019). DOI: 10.1016/j.jagp.2019.05.005

https://www.zdnet.com/article/mit-were-building-on-julia-programming-language-to-open-up-ai-coding-to-novices/

MIT: We’re building on Julia programming language to open up AI coding to novices

MIT claims a win with probabilistic-programming system Gen in democratizing AI and spreading innovation for all.

 

MIT, where the popular Julia language was born, has created a probabilistic-programming system called ‘Gen’, which it says will make it easier for newbies to get started with computer vision, robotics, and statistics.

Gen is part of Julia, which MIT researchers debuted in 2012 and which over the past year has become one of the world’s most popular languages, currently sitting in 44th place on the Tiobe programming language index, just behind official Android language Kotlin, Microsoft’s JavaScript superset TypeScript, and Mozilla-created Rust.

According to MIT, Gen’s creators “incorporate several custom modeling languages into Julia” to create the new AI programming system, which allows users to create AI models and algorithms “without having to deal with equations or manually write high-performance code”.

But the system can also be used for more complex tasks, such as prediction, which may be of use to more technically competent researchers, according to MIT.

The name ‘Gen’ comes from the system’s purpose of filling a gap in “general-purpose” probabilistic programming, according to a paper by MIT researchers.

“Existing systems are impractical for general-purpose use,” they write.

“Some systems provide restricted modeling languages that are only suitable for specific problem domains. Others provide ‘universal’ modeling languages that can represent any model, but support only a limited set of inference algorithms that converge prohibitively slowly.”

The system allows coders to create a program that, for example, can infer 3-D body poses and therefore simplify computer vision tasks for use in self-driving cars, gesture-based computing, and augmented reality.
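Gen itself is built on Julia, so the snippet below is only a Python analogue meant to show what probabilistic programming buys you: you write a forward model, and a generic inference routine (here, crude self-normalized importance sampling) recovers a latent quantity from noisy data. None of this is Gen’s actual API.

```python
# A tiny Python stand-in for the probabilistic-programming workflow:
# define a generative model, then infer a latent parameter from noisy data.
# This is NOT Gen's API (Gen is a Julia package); it is only an illustration.
import math
import random

XS = [0.0, 1.0, 2.0, 3.0]
NOISE = 0.5

def model(slope):
    """Forward model: noisy observations of y = slope * x."""
    return [slope * x + random.gauss(0, NOISE) for x in XS]

def infer_slope(observed, n_samples=20_000):
    """Self-normalized importance sampling over a uniform prior on the slope."""
    slopes, log_ws = [], []
    for _ in range(n_samples):
        slope = random.uniform(-5, 5)                         # draw from the prior
        log_w = sum(-((y - slope * x) ** 2) / (2 * NOISE**2)
                    for x, y in zip(XS, observed))            # Gaussian log-likelihood
        slopes.append(slope)
        log_ws.append(log_w)
    m = max(log_ws)
    ws = [math.exp(lw - m) for lw in log_ws]                  # numerically stable weights
    return sum(w * s for w, s in zip(ws, slopes)) / sum(ws)   # posterior mean estimate

data = model(slope=2.0)
print(infer_slope(data))   # should land near 2.0
```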

It combines graphics rendering, deep-learning, and types of probability simulations in a way that improves a probabilistic programming system that MIT developed in 2015 after being granted funds from a 2013 Defense Advanced Research Projects Agency (DARPA) AI program.

The idea behind the DARPA program was to lower the barrier to building machine-learning algorithms for things like autonomous systems.

“One motivation of this work is to make automated AI more accessible to people with less expertise in computer science or math,” says lead author of the paper, Marco Cusumano-Towner, a PhD student in the Department of Electrical Engineering and Computer Science.

“We also want to increase productivity, which means making it easier for experts to rapidly iterate and prototype their AI systems.”

In the same way that Microsoft claims it is ‘democratizing AI’, the MIT researchers are aiming to enable data science for everyone.

MIT also claims to be one-upping Google’s popular AI framework, TensorFlow, which helps users create algorithms “without doing much math” and relies on a Python language API.

MIT says TensorFlow is “narrowly focused on deep-learning models” and might not be fully delivering on AI’s potential.


https://www.cnet.com/news/startup-packs-all-16gb-wikipedia-onto-dna-strands-demonstrate-new-storage-tech/

Startup packs all 16GB of Wikipedia onto DNA strands to demonstrate new storage tech

Biological molecules will last a lot longer than the latest computer storage technology, Catalog believes.


Startup Catalog has stored all 16GB of English-language Wikipedia on DNA contained in this vial.

Catalog

Computer storage technology has moved from wires with magnets to hard disks to 3D stacks of memory chips. But the next storage technology might use an approach as old as life on earth: DNA. Startup Catalog announced Friday it’s crammed all of the text of Wikipedia’s English-language version onto the same genetic molecules our own bodies use.

It accomplished the feat with its first DNA writer, a machine that would fit easily in your house if you first got rid of your refrigerator, oven and some counter space. And although it’s not likely to push aside your phone’s flash memory chips anytime soon, the company believes it’s useful already to some customers who need to archive data.

DNA strands are tiny and tricky to manage, but the biological molecules can store data other than the genes that govern how a cell becomes a pea plant or a chimpanzee. Catalog uses prefabricated synthetic DNA strands that are shorter than human DNA, but it uses a lot more of them so it can store much more data.

Relying on DNA instead of the latest high-tech miniaturization might sound like a step backward. But DNA is compact, chemically stable — and given that it’s the foundation of Earth’s biology, it’s arguably far less likely to become obsolete the way the spinning magnetized platters of hard drives and CDs are disappearing today, just as floppy drives already vanished.

Who’s in the market for this kind of storage? Catalog has one partner to announce, the Arch Mission Foundation that’s trying to store human knowledge not just on Earth but even elsewhere in the solar system — like on Elon Musk’s Tesla Roadster that SpaceX launched into orbit. Beyond that, Catalog isn’t ready to say who other customers might be or if it’ll charge for its DNA writing service.

Catalog’s DNA writing machine can write data at a rate of 4 megabits per second, but the company hopes to make it at least a thousand times faster.

Catalog

“We have discussions underway with government agencies, major international science projects that generate huge amounts of test data, major firms in oil and gas, media and entertainment, finance, and other industries,” the company said in a statement.

Catalog, based in Boston, has its own device that can write data right into DNA at 4 megabits per second. Optimizations should triple that rate, letting people record 125 gigabytes in a single day — about as much as a higher-end phone can store.
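That day-long figure checks out if you multiply the tripled rate out over 24 hours; here is a quick back-of-the-envelope calculation, in Python for convenience.

```python
# Back-of-the-envelope check of the "125 GB per day" figure quoted above,
# assuming the tripled write rate of 12 megabits per second.
rate_bits_per_s = 3 * 4e6          # 4 Mbit/s, tripled
seconds_per_day = 24 * 60 * 60

bits_per_day = rate_bits_per_s * seconds_per_day
gigabytes_per_day = bits_per_day / 8 / 1e9
print(f"{gigabytes_per_day:.0f} GB per day")   # ~130 GB, in line with the ~125 GB claim
```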

Conventional DNA sequencing products already for sale in the biotechnology market can read the DNA data back. “We think this whole new use case for sequencing technology will help [drive] down cost quite a bit,” Catalog said, arguing that the computing business is a potentially much larger market.

Two MIT graduate students, Chief Executive Hyunjun Park and Chief Technology Innovation Officer Nathaniel Roquet, founded Catalog in 2016.

Catalog uses an addressing system that means customers can use large data sets. And even though DNA stores data in long sequences, Catalog can read information stored anywhere using molecular probes. In other words, it’s a form of random-access memory like a hard drive, not sequential access like the spools of magnetic tape you might remember from the heyday of mainframe computers a half century ago.
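Conceptually, that addressing scheme behaves like a key-value lookup over chunks of data rather than a scan of the whole archive. The toy sketch below illustrates the distinction; nothing in it reflects Catalog’s actual encoding.

```python
# Illustration of random access via an address index versus a sequential scan.
# The "strands" here are just Python byte strings; nothing reflects Catalog's
# actual DNA encoding scheme.
data = b"All of English Wikipedia, hypothetically..."
CHUNK = 8

# "Write": split the payload into addressed chunks, like tagging DNA strands.
archive = {addr: data[i:i + CHUNK] for addr, i in enumerate(range(0, len(data), CHUNK))}

# Random access: jump straight to chunk 3 without reading chunks 0-2,
# the way a molecular probe pulls out only strands with a matching address.
print(archive[3])

# Sequential access (tape-style) would instead walk every chunk in order:
print(b"".join(archive[a] for a in sorted(archive)))
```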

Although DNA data can be disrupted by cosmic rays, Catalog argues that it’s a more stable medium than the alternatives. After all, we’ve got DNA from animals that went extinct thousands of years ago. How much do you want to bet that the USB thumb drive in your desk drawer will still be usable even 25 years from now?