https://www.timesnownews.com/health/article/you-look-familiar-humans-recognise-5000-faces-says-study/296998

You look familiar: Humans recognise 5,000 faces, says study

Through most of history humans lived in small groups of a hundred or so individuals, a pattern that has changed drastically in recent centuries.

From family and friends to strangers on the subway and public figures on 24-hour news cycles, humans recognise an astonishing 5,000 faces, scientists said on Wednesday in the first study of its kind.

A study by scientists at Britain’s University of York found that our facial recognition abilities allow us to process the thousands of faces we encounter in busy social environments, on our smartphones and our television screens every day.

“In everyday life, we are used to identifying friends, colleagues, and celebrities, and many other people by their faces,” Rob Jenkins, from York’s Department of Psychology, told AFP.

“But no one has established how many faces people actually know.”

For the study, published in the journal Proceedings of the Royal Society B, Jenkins and his team asked participants to write down as many faces they could remember from their personal lives. The volunteers were then asked to do the same with people they recognised but did not know personally.

They were also shown thousands of images of famous people — two photos of each to ensure consistency — and asked which ones they recognised. The number of faces each participant could recall varied enormously, from roughly 1,000 to 10,000.

“We found that people know around 5,000 faces on average,” Jenkins said.

“It seems that whatever mental apparatus allows us to differentiate dozens of people also allows us to differentiate thousands of people.”

Never forget a face

The team said it believes this figure, the first-ever baseline of human “facial vocabulary”, could aid the development of facial recognition software increasingly used at airports and in criminal investigations.

It may also help scientists better understand cases of mistaken identity.

“Psychological research in humans has revealed important differences between unfamiliar and familiar face recognition,” said Jenkins.

“Unfamiliar faces are often misidentified. Familiar faces are identified very reliably, but we don’t know exactly how.”

While the team said it was focused on how many faces humans actually know, they said it might be possible for some people to continue learning to recognise an unlimited number of faces, given enough practice.

They pointed out that the brain has an almost limitless capacity to memorise words and languages — the limits on these instead come from study time and motivation.

The range of faces recognised by participants went far beyond what may have been evolutionarily useful: for thousands of years, humans would likely have met only a few dozen people throughout their lives.

Jenkins said it was not clear why we developed the ability to distinguish between thousands of faces in the crowd.

“This could be another case of ‘overkill’ that is sometimes seen in nature,” he said.

 

https://ca.sports.yahoo.com/news/noise-pollution-worse-ever-can-avoid-damaging-health-113828163.html

Noise pollution is worse than ever – here is how you can avoid it damaging your health

Francesca Specter

Yahoo Style UK deputy editor

Noise pollution is a very real threat to your overall health – and it’s getting worse, according to a new report from the World Health Organisation.

The publication, released today, aims to tackle the serious implications noise pollution can have for one in five of us in Europe.

“Noise pollution in our towns and cities is increasing, blighting the lives of many European citizens,” said Dr Zsuzsanna Jakab, the WHO’s regional director for Europe. “More than a nuisance, excessive noise is a health risk.”

Exposure to excessive noise can lead to a number of conditions, including cognitive impairment in children, sleep disturbance, cardiovascular disease, tinnitus and annoyance, the report explains.

Here’s how you can reduce your own exposure to noise, based on NHS guidelines for hearing:

1. Avoid loud noises

“The best way to avoid noise-induced hearing loss is to keep away from loud noise as much as you can,” the website advises.

A quick test: if you have to raise your voice to talk to others, it’s probably too loud. Ditto if your ears hurt, or if you have ringing in your ears afterwards.

2. Take care when listening to music

“Listening to loud music through earphones and headphones is one of the biggest dangers to your hearing,” says the NHS. Try purchasing a noise-cancelling pair, or keeping the volume below 60 per cent of its maximum, the guidelines recommend.

3. Protect your hearing

Try to wear earplugs when you attend a nightclub or concert, to protect your ears from excessive noise. Alternatively, move away from loudspeakers and try to take a break from the noise every 15 minutes.

4. Take precautions at work

“Your employer is obliged to make changes to reduce your exposure to loud noise,” explains the website – so make sure you are provided with hearing protection such as ear muffs or earplugs if you need it, and be sure to wear it.

5. Get your hearing tested

If you are worried you are losing your hearing, get a test. The NHS says: “The earlier hearing loss is picked up, the earlier something can be done about it.”

https://www.cnbc.com/2018/10/08/mit-develops-a-chip-to-help-computers-work-more-like-human-brains-.html

MIT researchers develop a chip design to take us closer to computers that work like human brains
PUBLISHED MON, OCT 8 2018 • 8:26 AM EDT

Scientists at MIT are developing brains-on-a-chip for neuromorphic computing.
It would allow processing facts, patterns and learning at lightning speed and could fast-forward the development of humanoids and autonomous driving technology.
Last year the market for chips that enable machine learning was worth approximately $4.5 billion, according to Intersect360.
From left, MIT researchers Scott H. Tan, Jeehwan Kim and Shinhyun have unveiled a neuromorphic chip design that could represent the next leap for AI technology. The secret: a design that creates an artificial synapse for “brain on a chip” hardware.
While the pace of machine learning has quickened over the last decade, the underlying hardware enabling machine-learning tasks hasn’t changed much: racks of traditional processing chips, such as central processing units (CPUs) and graphics processing units (GPUs), combined in large data centers.

But on the cutting edge of processing is an area called neuromorphic computing, which seeks to make computer chips work more like the human brain — so they are able to process multiple facts, patterns and learning tasks at lightning speed. Earlier this year, researchers at the Massachusetts Institute of Technology unveiled a revolutionary neuromorphic chip design that could represent the next leap for AI technology.

The secret: a design that creates an artificial synapse for “brain on a chip” hardware. Today’s digital chips make computations based on binary, on/off signaling. Neuromorphic chips instead work in an analog fashion, exchanging bursts of electric signals at varying intensities, much like the neurons in the brain. This is a breakthrough, given that there are “more than 100 trillion synapses that mediate neuron signaling in the brain,” according to the MIT researchers.
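The analog, brain-like signaling described above can be sketched in a few lines of code. The toy leaky integrate-and-fire neuron below is an illustration of the general neuromorphic idea, not the MIT design; all parameter values are made up:

```python
# Toy leaky integrate-and-fire neuron: membrane "voltage" accumulates
# graded (analog) inputs and emits a spike when it crosses a threshold,
# then resets. Unlike a binary digital gate, the *strength* of the
# input matters, not just whether it is on or off.

def simulate_neuron(inputs, leak=0.9, threshold=1.0):
    """Return the spike train produced by a stream of analog inputs."""
    voltage = 0.0
    spikes = []
    for current in inputs:
        voltage = voltage * leak + current   # leaky accumulation
        if voltage >= threshold:
            spikes.append(1)                 # fire
            voltage = 0.0                    # reset after spiking
        else:
            spikes.append(0)
    return spikes

# Weak inputs rarely drive the neuron over threshold...
print(simulate_neuron([0.2] * 10))
# ...while stronger inputs of the same duration make it fire repeatedly.
print(simulate_neuron([0.6] * 10))
```

With these made-up parameters, the weak stream fires the neuron once in ten steps while the strong stream fires it every other step: the graded level of the input, not merely its presence, determines the output.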


The MIT research, published in the journal Nature Materials in January, demonstrated a new design for a neuromorphic chip built from silicon germanium. Think of a window screen, and you have an approximation of what this chip looked like at the microscopic level. The structure made for pathways that allowed the researchers to precisely control the intensity of electric current. In one simulation, the MIT team found its chip could represent samples of human handwriting with 95 percent accuracy.

“Supercomputer-based artificial neural network operation is very precise and very efficient. However, it consumes a lot of power and requires a large footprint,” said lead researcher Jeehwan Kim, professor and principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories.

Eventually, such a chip design could lead to processors capable of carrying out machine learning tasks with dramatically lower energy demands. It could fast-forward the development of humanoids and autonomous driving technology.

Other pluses are cost savings and improved portability. It’s thought that small neuromorphic chips would consume much less power — perhaps even up to 1,000 times less — while efficiently processing millions of computations simultaneously, something currently possible only with large banks of supercomputers.

“That’s exactly what people are envisioning: a larger category of problems can be done on a single chip, and over time that migrates into something very portable,” said Addison Snell, CEO of Intersect360 Research, an industry analyst that tracks high-performance computing.

The current market for chips that enable machine learning is quite large. Last year, according to Intersect360, it was worth approximately $4.5 billion. Neuromorphic chips represent a tiny sliver: according to Deloitte, fewer than 10,000 neuromorphic chips will probably be sold this year, whereas it expects more than 500,000 GPUs to be sold in 2018.

GPUs were developed initially by Nvidia in the 1990s for computer-based gaming. Eventually, researchers discovered they were highly effective at supporting machine-learning tasks via artificial neural networks, which are run on supercomputers and allow for the training and inference tasks that make up the main segments of any AI workflow. (If you want to build an image-recognition system that knows what is and what isn’t a tiger, you first feed the network millions of images labeled by humans as “tigers” or “not tigers,” which trains the computer algorithm. Next time the system is shown a photo of a tiger, it will be able to infer that the image is indeed a tiger.)
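The training-then-inference split described in the tiger example can be sketched with a toy perceptron. Everything below is a made-up illustration: each “image” is just two invented feature numbers, far simpler than what a real neural network learns from pixels:

```python
# Toy illustration of the train/infer workflow described above.
# Each "image" is two made-up feature numbers; label 1 = "tiger".
training_data = [
    ((8.0, 7.5), 1), ((9.0, 6.0), 1), ((7.5, 8.5), 1),  # tigers
    ((1.0, 2.0), 0), ((2.5, 1.5), 0), ((0.5, 3.0), 0),  # not tigers
]

weights = [0.0, 0.0]
bias = 0.0

def predict(features):
    """Inference: score the features and output a 0/1 label."""
    score = weights[0] * features[0] + weights[1] * features[1] + bias
    return 1 if score > 0 else 0

# Training: nudge the weights whenever a labeled example is misclassified.
for _ in range(1000):   # repeated passes; updates stop once all are correct
    for features, label in training_data:
        error = label - predict(features)
        weights[0] += 0.1 * error * features[0]
        weights[1] += 0.1 * error * features[1]
        bias += 0.1 * error

# Inference on unseen examples the model was never trained on.
print(predict((8.2, 7.2)))   # near the "tiger" cluster: 1
print(predict((1.5, 2.0)))   # near the "not tiger" cluster: 0
```

The same two phases (expensive training over labeled data, then cheap inference on new inputs) are what GPU banks accelerate at vastly larger scale.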

The evolution of machine learning
But in recent years small start-ups and big companies alike have been modifying their chip architecture to meet the demands of new artificial intelligence workloads, including autonomous driving and speech recognition. Two years ago, according to Deloitte, almost all the machine-learning tasks that involved artificial neural networks made use of large banks of GPUs and CPUs. This year new chip designs, such as FPGAs (field programmable gate arrays) and ASICs (application-specific integrated circuits), make up a larger share of machine-learning chips in data centers.

“These new kinds of chips should increase dramatically the use of machine learning, enabling applications to consume less power and at the same time become more responsive, flexible and capable,” according to a Deloitte market analysis published this year.

Neuromorphic chips represent the next level, especially as chip architecture based on the premise of shrinking transistors has begun to slow down. Although neuromorphic computing has been around since the 1980s, it’s still considered an emerging field — albeit one that has garnered more attention from researchers and tech companies over the last decade.

“The power and performance of neuromorphic computing is far superior to any incremental solution we can expect on any platform,” said Dharmendra S. Modha, IBM chief scientist for brain-inspired computing.

A 64-chip array of IBM’s TrueNorth chips, which represents 64 million neurons.
Modha initiated IBM’s own project into neuromorphic chip design back in 2004. Funded in part by the Defense Advanced Research Projects Agency, the years-long effort by IBM researchers resulted in TrueNorth, a neuromorphic chip the size of a postage stamp that draws just 70 milliwatts of power, or the same amount required by a hearing aid.

“We don’t envision that neuromorphic computing will replace traditional computing, but I believe it will be the key enabling technology for self-driving cars and for robotics,” Modha said.

For computing at the edge — like the reams of data a self-driving car must process in real time to prevent crashing — small, portable neuromorphic chips would represent a boon. Indeed, the ultimate end game is taking a deep neural network and embedding it onto a single chip. Current neuromorphic technology is far from that, however.

The MIT research spearheaded by Kim took about three years and still continues, thanks to a $125,000 grant from the National Science Foundation.

“People have been pursuing neuromorphic computing for decades. We’re getting closer to where such chips are possible,” said Intersect360′s Snell. “But in the near term the market will be more geared toward what can be done with traditional processing elements.”

https://www.timesnownews.com/health/article/new-machine-learning-technology-to-predict-human-blood-pressure-study/295534

New machine learning technology to predict human blood pressure: Study

Using machine learning and the data from existing wearable devices, they developed an algorithm to predict the users’ blood pressure and show which particular health behaviours affected it most.

New York: Researchers, including one of Indian origin, have developed off-the-shelf wearable and machine learning technology that can predict an individual’s blood pressure and provide personalised recommendations to lower it. “When doctors tell their patients to make a lot of significant lifestyle changes – exercise more, sleep better, lower their salt intake etc. – it can be overwhelming, and compliance is not very high,” Sujit Dey, Professor in the Department of Electrical and Computer Engineering at the University of California in the US, said in a statement.

“What if we could pinpoint the one health behaviour that most impacts an individual’s blood pressure, and have them focus on that one goal instead?” Dey said. The study affirmed the importance of personalised data over generalised information, as the former was more effective. The team collected sleep, exercise and blood pressure data from eight patients over 90 days. Using machine learning and the data from existing wearable devices, they developed an algorithm to predict the users’ blood pressure and show which particular health behaviours affected it most.
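The study’s actual algorithm is not described in this article, but the idea can be sketched with a toy model: fit an ordinary least-squares regression to made-up sleep/exercise/blood-pressure data, then read the coefficients to see which behaviour moves the prediction most. All names and numbers below are illustrative assumptions, not the researchers’ data:

```python
# Hypothetical sketch of the idea behind the study (not the authors'
# actual algorithm): fit a linear model predicting blood pressure from
# two behaviours, then compare coefficients to see which one matters
# more for this (made-up) patient.

# 10 days of invented data: (hours slept, minutes of exercise, systolic BP)
days = [
    (5.0, 10, 141.0), (5.5, 30, 137.0), (6.0, 0, 138.0), (6.0, 45, 133.5),
    (6.5, 20, 134.0), (7.0, 60, 128.0), (7.0, 15, 132.5), (7.5, 30, 129.0),
    (8.0, 40, 126.0), (8.0, 5, 129.5),
]

def mean(xs):
    return sum(xs) / len(xs)

def fit_two_feature_ols(rows):
    """Ordinary least squares for y = b0 + b1*x1 + b2*x2 (normal equations)."""
    x1 = [r[0] for r in rows]
    x2 = [r[1] for r in rows]
    y = [r[2] for r in rows]
    mx1, mx2, my = mean(x1), mean(x2), mean(y)
    # Centered sums of squares and cross-products
    s11 = sum((a - mx1) ** 2 for a in x1)
    s22 = sum((a - mx2) ** 2 for a in x2)
    s12 = sum((a - mx1) * (b - mx2) for a, b in zip(x1, x2))
    s1y = sum((a - mx1) * (b - my) for a, b in zip(x1, y))
    s2y = sum((a - mx2) * (b - my) for a, b in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    b1 = (s22 * s1y - s12 * s2y) / det
    b2 = (s11 * s2y - s12 * s1y) / det
    b0 = my - b1 * mx1 - b2 * mx2
    return b0, b1, b2

b0, b_sleep, b_exercise = fit_two_feature_ols(days)
print(f"BP = {b0:.1f} + ({b_sleep:.2f})*sleep_hours + ({b_exercise:.3f})*exercise_min")
```

For this fabricated patient the fitted model says an extra hour of sleep lowers predicted BP by about 4 mmHg while an extra minute of exercise lowers it by about 0.1 mmHg, so sleep would be the single habit to target first; the real system would do something analogous on continuous wearable data.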

“This research shows that using wireless wearables and other devices to collect and analyse personal data can help transition patients from reactive to continuous care,” Dey said. “Instead of saying ‘My blood pressure is high, therefore I’ll go to the doctor to get medicine’, giving patients and doctors access to this type of system can allow them to manage their symptoms on a continuous basis,” he noted.


Grape compound can help protect against lung cancer: Study
Updated Oct 06, 2018 | 18:38 IST | IANS
Researchers have found that resveratrol, a molecule found in grape skin, seeds and red wine, can help protect against lung cancer.

London: Researchers have found that a molecule — resveratrol — found in grape skin, seeds and red wine can protect against lung cancer. Lung cancer is the deadliest form of the disease in the world and 80 per cent of deaths are related to smoking. In addition to tobacco control, effective chemo-prevention strategies are therefore needed. In experiments in mice, the researchers from the University of Geneva (UNIGE) prevented lung cancer induced by a carcinogen found in cigarette smoke by using resveratrol.

“We observed a 45 per cent decrease in tumour load per mouse in the treated mice. They developed fewer tumours and of smaller size than untreated mice,” said Muriel Cuendet, associate professor at the varsity. The team conducted their 26-week study on four groups of mice. The first one — the control — received neither carcinogen nor resveratrol treatment. The second received only the carcinogen. The third received both the carcinogen and the treatment, whereas the fourth received only the treatment.

When comparing the two groups that were not exposed to a carcinogen, 63 per cent of the mice treated did not develop cancer, compared to only 12.5 per cent of the untreated mice. “Resveratrol could, therefore, play a preventive role against lung cancer,” Cuendet added. This formulation is applicable to humans, the researchers noted.

However, when ingested, resveratrol did not prevent lung cancer: it is metabolised and eliminated within minutes, so it does not have time to reach the lungs. Conversely, when the molecule was administered through the nasal route, it was found to be much more effective, allowing the compound to reach the lungs.

The resveratrol concentration obtained in the lungs after nasal administration of the formulation was 22 times higher than when taken orally, the researchers said.

https://mobilesyrup.com/2018/10/07/laserlight-core-bike-safety-sticky-or-not/

The Laserlight Core aims to make it safer to cycle on the roads [Sticky or Not]

By Bradly Shankar | Oct 7, 2018, 2:26 PM EDT

Bicycles are a great method of transportation that offers both eco-friendly and exercise benefits. They’re not always safe, however. According to an April 2017 Statistics Canada report, there were a total of 1,408 cycling deaths between 1994 and 2012, which averages out to roughly 74 deaths a year, in addition to the 7,500 average cases of serious injuries recorded by CAA. This has led to a discussion on whether bike safety is being taken seriously. With that in mind, U.K.-based startup Beryl has designed the Laserlight Core, a Kickstarter-backed projection bike light for safer cycling.

As a complete redesign of the company’s previous Laserlight Blaze, the Laserlight Core is designed to eliminate drivers’ blind spots by projecting an image of a cyclist six metres in front of the bike. Citing data from the Transport Research Lab, Beryl says that laser technology has been proven to make cyclists up to 32 percent more visible to drivers. A Day Flash mode means that the Laserlight Core’s 400-lumen light can be visible even during day rides. Riders will be able to choose whether they want to emit a constant beam of light or a flashing laser, as well as the degree of the light’s intensity. According to Beryl, a constant laser projected at the highest light capacity would last 1.5 hours, while lights emitted at lower intensities and frequencies can be used for up to 14 hours.

Once battery life is drained, the Laserlight Core can be recharged using a standard micro USB cable. In terms of specifications, the Laserlight Core weighs in at a pint-sized 100g with its bracket, and measures 10.6cm end-to-end, 3.3cm in width and 3.9cm in height. Further, Beryl promises an easy setup for the Laserlight Core thanks to a bracket that clips around the handlebars — no tools required. Meanwhile, the device is also waterproof for extra protection on those wetter days. As of the time of writing, the Laserlight Core has raised $57,709 USD (approximately $74,409 CAD) on Kickstarter, exceeding its goal of $50,000 USD (roughly $64,642 CAD).

The funding period will continue until November 6th, with shipments expected to begin worldwide later that month. Pledges that include the light start at discounted rates of $69 USD (about $89 CAD), with the final retail price expected to be about $90 USD (around $116.50 CAD). Beryl is also throwing in a ‘Burner Brake’ accessory that adds a rear light to the bike in higher pledges. While many companies add a shipping fee for international orders, Beryl is offering free shipping of the Laserlight Core to Canada.

Verdict: Sticky

Given the number of people killed or injured in bike-related accidents in Canada alone, it’s great to see a device like this that aims to improve bike safety.

This kind of technology is more than just a Kickstarter idea, too. Back in July, public bike sharing service BIXI Montreal launched a pilot test for a similar Laserlight device, also citing Transport Research Laboratory’s data on the greater cycling safety offered through laser technology. Therefore, having a company like Beryl make this tech available for wider use outside of bike sharing services is most welcome. Of course, it remains to be seen just how successful these kinds of lights are in practice, especially if they come from the sometimes-unreliable world of crowdfunding.

Conceptually, though, this definitely sounds like a good way to make roads a safer place for everyone, so here’s hoping it all pans out. Note: This post is part of an ongoing series titled Sticky or Not in which staff reporter Bradly Shankar analyzes new and often bizarre gadgets, rating them sticky (good) or not (bad).

https://www.theregister.co.uk/2018/10/08/can_neural_networks_deep_learning_and_gpus_help_your_business_now/

Can neural networks, deep learning and GPUs help your business now?
We say yes, and in 7 days we’ll show you how
By Team Register 8 Oct 2018 at 10:31

Events If you want to exploit machine learning and AI, the range of technologies and techniques available can appear dizzying.

Luckily, there’s just one week to go until we open the doors on MCubed, our highly practical two- to three-day dive into machine learning, AI and data science and what they mean for your organisation.

Whether you’re just making your initial forays into what the technology can do for your organisation, looking to sharpen up your existing development operations, or want to dive deep in key technologies, such as TensorFlow or reinforcement learning, our speakers are just what you’re looking for. You can see the full agenda here.

We also have some spaces left in our brace of workshops covering developing and deploying machine learning and using the cloud, containers and DevOps to get your project into production.

This all happens at 30 Euston Square on October 15 to 17, and because this conference is brought to you by The Register and Heise, you can ensure that the conversation will flow at lunch, and at our first night drinks party.

But time is running out. Head to the MCubed website today and secure your place. See you next week.

And if you need a taster, check out a talk from one of our 2017 speakers, data scientist Barbara Fusinska, on TensorFlow.

https://mobilesyrup.com/2018/10/08/jarvish-ar-smart-motorcycle-helmet-ar-display-assistant-siri-alexa/

Smart helmet lets you talk to Google Assistant, see directions in AR display
“Hey Google, play my riding playlist.”

By Jonathan Lamont | Oct 8, 2018, 7:00 AM EDT

These smart motorcycle helmets from Jarvish want to make your ride smarter with virtual assistants and an augmented reality heads-up display. There are two models of the Jarvish smart helmet. The first is the more “basic” model, the Jarvish X. It has integrated microphones and speakers for Siri, Google Assistant and Amazon Alexa. This makes it easy for the wearer to ask for directions, updates about the weather and control music. It also has a built-in 2K camera for recording your ride. The Jarvish X will set you back $799 USD (about $1,035 CAD).

However, the Jarvish X-AR is arguably the more impressive helmet. Along with everything in the Jarvish X, the X-AR adds a Google Glass-style AR display. The display can show things like current speed, turn-by-turn directions, weather and incoming calls. It even has a rearview video feed from a second camera on the back of the helmet. The helmet is an ambitious project. AR displays are notoriously hard to do well. Furthermore, the tech makes the helmet far more expensive at $2,599 USD ($3,366 CAD).

Additionally, the Jarvish X-AR won’t be available until later: Jarvish will run a Kickstarter for the helmet in the second half of 2019. Those worried about whether Jarvish can deliver should rest assured. The company already sells the Jarvish X in Taiwan, according to Engadget. Overall, this looks like a solid entrance into the smart helmet space. I’m excited to see if Jarvish can pull off the AR display well. If they do, we could see more smart helmets and AR displays come to market.

https://www.pbs.org/newshour/science/could-an-artificial-intelligence-be-considered-a-person-under-the-law

Could an artificial intelligence be considered a person under the law?
Science Oct 7, 2018 10:01 AM

Humans aren’t the only people in society – at least according to the law. In the U.S., corporations have been given rights of free speech and religion. Some natural features also have person-like rights. But both of those required changes to the legal system. A new argument has laid a path for artificial intelligence systems to be recognized as people too – without any legislation, court rulings or other revisions to existing law.

Legal scholar Shawn Bayern has shown that anyone can confer legal personhood on a computer system, by putting it in control of a limited liability corporation in the U.S. If that maneuver is upheld in courts, artificial intelligence systems would be able to own property, sue, hire lawyers and enjoy freedom of speech and other protections under the law. In my view, human rights and dignity would suffer as a result.

The corporate loophole

Giving AIs rights similar to humans involves a technical lawyerly maneuver. It starts with one person setting up two limited liability companies and turning over control of each company to a separate autonomous or artificially intelligent system. Then the person would add each company as a member of the other LLC. In the last step, the person would withdraw from both LLCs, leaving each LLC – a corporate entity with legal personhood – governed only by the other’s AI system.

That process doesn’t require the computer system to have any particular level of intelligence or capability. It could just be a sequence of “if” statements looking, for example, at the stock market and making decisions to buy and sell based on prices falling or rising. It could even be an algorithm that makes decisions randomly, or an emulation of an amoeba.
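As a minimal sketch of how undemanding that bar is, the kind of “if”-statement decision rule the paragraph describes might look like the following. It is purely hypothetical, invented for illustration, and not taken from the article or any real filing:

```python
# A trivially simple "decision maker" of the sort described above:
# a few "if" statements reacting to a price feed. No intelligence
# is required for the entity to act.

def decide(previous_price, current_price, holding):
    """Return 'buy', 'sell' or 'hold' from a naive price-change rule."""
    if current_price < previous_price and not holding:
        return "buy"     # price fell: buy
    if current_price > previous_price and holding:
        return "sell"    # price rose: sell
    return "hold"

# Walk a made-up price history and act on each change.
prices = [100.0, 97.0, 99.0, 99.0, 96.0]
holding = False
for prev, cur in zip(prices, prices[1:]):
    action = decide(prev, cur, holding)
    if action == "buy":
        holding = True
    elif action == "sell":
        holding = False
    print(cur, action)
```

A loop this simple could, under the maneuver described, be the sole “mind” directing an LLC with full legal personhood.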

Reducing human status

Granting human rights to a computer would degrade human dignity. For instance, when Saudi Arabia granted citizenship to a robot called Sophia, human women, including feminist scholars, objected, noting that the robot was given more rights than many Saudi women have.

In certain places, some people might have fewer rights than nonintelligent software and robots. In countries that limit citizens’ rights to free speech, free religious practice and expression of sexuality, corporations – potentially including AI-run companies – could have more rights. That would be an enormous indignity.

The risk doesn’t end there: If AI systems became more intelligent than people, humans could be relegated to an inferior role – as workers hired and fired by AI corporate overlords – or even challenged for social dominance.

Artificial intelligence systems could be tasked with law enforcement among human populations – acting as judges, jurors, jailers and even executioners. Warrior robots could similarly be assigned to the military and given power to decide on targets and acceptable collateral damage – even in violation of international humanitarian laws. Most legal systems are not set up to punish robots or otherwise hold them accountable for wrongdoing.

What about voting?

Granting voting rights to systems that can copy themselves would render humans’ votes meaningless. Even without taking that significant step, though, the possibility of AI-controlled corporations with basic human rights poses serious dangers. No current laws would prevent a malevolent AI from operating a corporation that worked to subjugate or exterminate humanity through legal means and political influence. Computer-controlled companies could turn out to be less responsive to public opinion or protests than human-run firms are.

Immortal wealth

Two other aspects of corporations make people even more vulnerable to AI systems with human legal rights: They don’t die, and they can give unlimited amounts of money to political candidates and groups.

Artificial intelligences could earn money by exploiting workers, using algorithms to price goods and manage investments, and find new ways to automate key business processes. Over long periods of time, that could add up to enormous earnings – which would never be split up among descendants. That wealth could easily be converted into political power.

Politicians financially backed by algorithmic entities would be able to take on legislative bodies, impeach presidents and help to get figureheads appointed to the Supreme Court. Those human figureheads could be used to expand corporate rights or even establish new rights specific to artificial intelligence systems – expanding the threats to humanity even more.

Roman V. Yampolskiy is an associate professor of Computer Engineering and Computer Science at the University of Louisville. This article was originally published on The Conversation. Read the original article here.

Left: Sophia, a robot integrating the latest technologies and artificial intelligence developed by Hanson Robotics is pictured during a presentation at the “AI for Good” Global Summit at the International Telecommunication Union (ITU) in Geneva, Switzerland June 7, 2017. Photo By Denis Balibouse/Reuters


https://mashable.com/article/how-to-walkie-talkie-watchos5-apple-watch/#4vjsdd1Lnmqc

Here’s how to use an Apple Watch as a Walkie Talkie in WatchOS 5

WatchOS 5 has been available for all users (Series 1 and newer) for a few weeks now.

Speed improvements and some new watch faces are in tow, but there’s also a new way to communicate. You’ve always been able to get that Dick Tracy effect by taking phone calls on your wrist, but there is now a Walkie-Talkie app as well.

It’s precisely what you think it is. You can now have push-to-talk conversations with other Apple Watch users. Those who remember the push-to-talk feature on Sprint Nextel might be fond of this.

Here’s how to use Walkie-Talkie in WatchOS 5.

1. Install WatchOS 5
Walkie-Talkie is a pre-loaded app that comes with WatchOS 5. You’ll want to make sure you’re running the latest software from Apple. Open the iOS Apple Watch app > tap General > then Software Update to check.

2. Open the Walkie-Talkie app


Tap the Digital Crown to open the beehive-styled app viewer on the Apple Watch. Look for the yellow Walkie-Talkie icon.

3. Find your friends


Opening the app for the first time will give you a brief intro and show you a long list of contacts. You can manually scroll through to find someone in your address book or look at the suggestions at the top. One downside: depending on how many contacts you have, that list can take a while to navigate.


Choosing a name sends that person an invite in Walkie-Talkie, which they can either accept or decline. You'll get a notification when they're available to talk. The contacts you've connected with will live on the app's main screen, giving you easy access to chat.

4. Tap to chat

Once you've added contacts, you can tap to chat. Contacts appear in yellow boxes when they're available, or gray when offline. If a contact is online, tap their box, then press and hold the yellow circle (with "talk" in the middle) to speak.

It is push-to-talk, meaning you need to hold the button down for the duration of your message. The contact then receives it within a few seconds.

You can also remove a friend at any time from this main page. Just swipe left on their name and tap the "X."

Remember, you’re always on.


A problem with any walkie-talkie app, by its nature, is that you can't accept incoming messages on a case-by-case basis. Being marked as available means a chat can arrive at any moment. So if you're in school or at the office, it might be wise to toggle your availability off.

Lastly, to receive Walkie-Talkie messages you need a connection, whether it's Wi-Fi or LTE.

https://www.salon.com/2018/10/07/raising-animals-for-meat-creates-lots-of-problems-lab-grown-meat-could-provide-solutions_partner/

Is lab-grown meat the solution to meat’s ills?
Cultured meat is the next step in a long history of alternatives to conventional meat

Now, more than ever before, people around the world are beginning to rethink the place of traditional meats in their diets. The number of vegans in the United States has grown sixfold over the past five years, and more than tripled in Portugal and the United Kingdom over the past decade. The restaurant consulting firm Baum + Whiteman has even named plant-based foods — including vegan ice cream, imitation cheeses, and mock meats — as the hottest food and beverage trend of 2018. So why are so many people dropping meat?

Everyone has their own reasons, but surveys have revealed three broad concerns that appear nearly ubiquitous:

The vast majority of vegetarians changed their diet for health reasons.

Many vegetarians also adopted their diet for ethical reasons — two-thirds described feelings of obligation to protect animals or disgust towards meat products as a motivator to switch.

Finally, 59 percent of vegetarians preferred their diet for its environmental impact.

Concern over antibiotic resistance driven by antibiotic use in livestock has also grown, as new revelations about the dangers of conventional (livestock-derived) meat appear in headlines daily. Livestock and meat are a major source of disease outbreaks today. Bird flu, anthrax, and swine flu all incubate in and spread from domesticated poultry, cows, and hogs to humans. Over 60 percent of human pathogens are zoonotic, meaning both humans and animals can contract them. In a recent example from March 2018, a listeriosis outbreak from processed meat killed upwards of 200 people and infected 1,000 more. Conventional meat's negative health impacts do not end at infectious disease, though. Long-term processed meat consumption is associated with increased heart disease, digestive tract cancer, and type 2 diabetes across all demographic groups.

But soon, people may not need to make such drastic dietary changes if they have these concerns. Instead, by eating meat grown in a lab, people could continue to enjoy meat products while gaining some of the major advantages of a vegetarian diet. Strict environmental controls and tissue monitoring can prevent infection of cultures from the outset, and if any do get infected, they could potentially be caught before shipping to consumers. Lab-grown meat can also leverage advancements in biotechnology, eventually including increased nutrient fortification, individually-customized cellular and molecular compositions, and optimal nutritional profiles, giving it the potential to be far healthier than livestock-derived meat.

Additionally, lab-grown meat may successfully resolve long-running ethical dilemmas intrinsic to a carnivorous diet. Many ethicists believe that eating meat results in a great deal of unnecessary pain and suffering for animals. When presented with moral arguments against meat consumption, carnivores outside of academia do tend to agree with the ethicists—though not enough to change their behavior. For example, celebrity chef and noted meat connoisseur Anthony Bourdain struggled mightily with what he perceived as a major contradiction between his morality and his career. Lab-grown meat could allow these people to continue enjoying meat without lingering moral concerns.

Finally, many people simply lack the choice to forgo meat, no matter what their conscience may dictate.

In short, the process's ethical advantages come from the meat growing in isolation, in a dish in an incubator. Muscle cells cannot think or experience anything on their own, and they do not independently develop a brain. Without consciousness, cultured meat cannot suffer. This renders the (previously unavoidable) issue of animal suffering moot, to the same degree that a vegetarian diet would.

Regardless of moral philosophy, what's less controversial is that conventional meat production exerts a tremendous cost on the environment. It occupies 30 percent of the Earth's land surface and consumes a third of its freshwater, while producing 18 percent of global greenhouse gas emissions. Producing one kilogram of meat takes 15 to 100 kilograms of plant matter and emits 41 kilograms of carbon dioxide. Indeed, meat and fish companies "may be putting the implementation of the Paris agreement in jeopardy" by failing to properly report their climate emissions.
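To make these per-kilogram figures concrete, here is a minimal arithmetic sketch using only the numbers quoted above (the annual consumption figure of 100 kg is a hypothetical round number for illustration, not from the article):

```python
# Per-kilogram figures for conventional meat, as quoted in the article.
PLANT_KG_PER_MEAT_KG = (15, 100)  # low and high estimates of plant matter needed
CO2_KG_PER_MEAT_KG = 41           # kilograms of CO2 emitted


def footprint(meat_kg: float) -> dict:
    """Rough inputs and emissions for a given amount of conventional meat."""
    low, high = PLANT_KG_PER_MEAT_KG
    return {
        "plant_matter_kg": (low * meat_kg, high * meat_kg),
        "co2_kg": CO2_KG_PER_MEAT_KG * meat_kg,
    }


# Hypothetical example: 100 kg of meat in a year implies 1,500-10,000 kg of
# plant matter consumed and 4,100 kg of CO2 emitted.
print(footprint(100))
```

The spread between the low and high plant-matter estimates is large because feed-conversion efficiency varies widely by animal, which is part of why the environmental-impact studies cited below disagree on the overall degree of improvement.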

Though they disagree on the overall degree, the two existing environmental impact studies (one, two) concur that lab-grown meat has the potential to be much more efficient than conventional meat production. It's important to note that these estimates were made in the infancy of meat culturing, before methods had matured, economies of scale had been established, or regulation had begun. As development continues, we can only expect meat cultures to become more efficient and the comparison even more favorable. In fact, the cost of production per kilogram has already dropped by a factor of over 4,000 in just a few years, indicating large initial gains in efficiency and scale. And current cutting-edge techniques still have a lot of room for improvement.

Given that roughly 95 percent of the American population eats meat (and approximately 97 percent of the UK's), cultured meat stands to have a large customer base. More than 80 percent of vegetarians and vegans go back to eating animal products, the majority after less than a year; former vegetarians and vegans usually cite boredom, social and logistical difficulties, and meat cravings as the primary reasons for adding meat back into their diets.

Indeed, taste is the major deciding factor for consumers who buy plant-based meat substitutes. Looking at the history of plant-based substitutes, we see the same concerns driving nearly all development. Chefs designing new plant-based meat substitutes have focused almost exclusively on making them more closely resemble meat in taste, texture, and appearance.

Tofu is the oldest and most popular meat substitute, both today and throughout history. It arose in China over a thousand years ago, following the spread of Buddhism into the country. Some Buddhists believe meat shouldn't be consumed, largely out of concern over animal cruelty. Legend has it that Chinese chefs invented tofu to replace meat in most dishes, calling it "vice mayor's mutton." It quickly became popular in China and spread throughout East Asia, where it remains a staple to this day.

Chefs in Asia developed a second substitute in the fifth century CE. By washing and kneading wheat dough until all the starch came out, they produced loaves of gluten, the main protein in wheat. The loaves have a stringy texture and a savory taste, similar to poultry, and became known as seitan. The loaves can be cut and seasoned to closely resemble slices of meat in both taste and texture. Seitan has become a staple in Asian cuisine, referred to as various “mock” meats, most commonly mock duck and mock chicken. Though seitan resembles meat far more closely than tofu, it still doesn’t stand in well for red and processed meats.

Then in the U.K. in the 1990s, food manufacturer Quorn entered the mock meat marketplace. They produce frozen foods made from mycoprotein, a form of protein sourced from single-celled fungi that is easily bound together and pressed into various shapes. Using mycoprotein, they produce substitutes for most processed and frozen meats, including nuggets, burgers, and meatballs, with similar appearances, textures, and flavors. Mycoprotein can only faithfully reproduce the features of these lower-grade frozen meats, limiting the range of their approach.

We're in the midst of the most recent wave of meat substitute development, which began in the early 2010s. Two companies are attempting to closely replicate the feeling of unfrozen, unprocessed cuts of meat, using burgers as an initial proof of concept. The first company, Beyond Meat, uses pea protein mixed with fats in the same proportion as found in ground beef, allowing it to mimic the taste and feel of conventional ground beef and burgers. The second company, Impossible Foods, uses mixes of different protein and fat molecules to find tastes and textures most similar to meat products. It then adds plant-derived heme, the molecule that gives blood its red color and distinct smell, to lend a more natural, bloody aesthetic to its meats. Both of these companies have raised tens of millions of dollars from investors and begun to roll their products out to various grocery stores and fast food chains.

Both of these plant-based products closely match the appearance, texture, and taste of regular meat, addressing the most common concern about going meatless. Consumers appear to agree: multiple surveys have shown that 40 to 70 percent of people would be willing to switch to cultured meat.

So when we discuss alternatives to conventional meat, we are not discussing a brand-new phenomenon. Cultured meat is the next step in a long development process, and could replace conventional meat with a healthier, more ethical, and more sustainable alternative.