https://scitechdaily.com/new-tool-to-study-molecular-structures-uses-a-laser-a-crystal-and-light-detectors/

New Tool to Study Molecular Structures Uses a Laser, a Crystal and Light Detectors

Artist's Representation of Complementary Vibrational Spectroscopy

Researchers have built a new tool to study molecules using a laser, a crystal and light detectors. This new technology will reveal nature’s smallest sculptures – the structures of molecules – with increased detail and specificity.

“We live in the molecular world where most things around us are made of molecules: air, foods, drinks, clothes, cells and more. Studying molecules with our new technique could be used in medicine, pharmacy, chemistry, or other fields,” said Associate Professor Takuro Ideguchi from the University of Tokyo Institute for Photon Science and Technology.

The new technique combines two current technologies into a unique system called complementary vibrational spectroscopy. All molecules have very small, distinctive vibrations caused by the movement of the atoms’ nuclei. Tools called spectrometers detect how those vibrations cause molecules to absorb or scatter light waves. Current spectroscopy techniques are limited by the type of light that they can measure.

Schematic of Complementary Vibrational Spectroscopy

The new complementary vibrational spectrometer designed by researchers in Japan can measure a wider spectrum of light, combining the more limited spectra of two other tools, called infrared absorption and Raman scattering spectrometers. Combining the two spectroscopy techniques gives researchers different and complementary information about molecular vibrations.

“We questioned the ‘common sense’ of this field and developed something new. Raman and infrared spectra can now be measured simultaneously,” said Ideguchi.

Previous spectrometers could only detect light with wavelengths from 0.4 to 1 micrometer (Raman spectroscopy) or from 2.5 to 25 micrometers (infrared spectroscopy). The gap between them meant that Raman and infrared spectroscopy had to be performed separately. The limitation is like trying to enjoy a duet, but being forced to listen to the two parts separately.

Complementary vibrational spectroscopy can detect light across both the visible-to-near-infrared and the mid-infrared regions. Advancements in ultrashort pulsed laser technology have made complementary vibrational spectroscopy possible.

Complementary Vibrational Spectra of Toluene

Inside the complementary vibrational spectrometer, a titanium-sapphire laser sends pulses of near-infrared light lasting 10 femtoseconds (10 quadrillionths of a second) towards the chemical sample. Before hitting the sample, the light is focused onto a crystal of gallium selenide. The crystal generates mid-infrared light pulses. The near- and mid-infrared light pulses are then focused onto the sample, and the absorbed and scattered light waves are detected by photodetectors and converted simultaneously into Raman and infrared spectra.
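
To put those two detection windows in the units chemists usually quote for vibrational spectra, the wavelength ranges given above (0.4 to 1 micrometer and 2.5 to 25 micrometers) can be converted to wavenumbers. The short sketch below is purely illustrative and is not code from the study; the helper function name is ours, and the only relation used is the standard conversion wavenumber (cm^-1) = 10,000 / wavelength (micrometers).

def to_wavenumber_cm1(wavelength_um: float) -> float:
    """Convert a wavelength in micrometers to a wavenumber in cm^-1."""
    return 1e4 / wavelength_um

# Detection windows quoted in the article (in micrometers).
windows_um = {
    "Raman (visible to near-infrared)": (0.4, 1.0),
    "Infrared absorption (mid-infrared)": (2.5, 25.0),
}

for name, (short_um, long_um) in windows_um.items():
    # Longer wavelengths correspond to smaller wavenumbers, so report low-to-high.
    low = to_wavenumber_cm1(long_um)
    high = to_wavenumber_cm1(short_um)
    print(f"{name}: {low:.0f} to {high:.0f} cm^-1")

Run as-is, this prints roughly 400 to 4,000 cm^-1 for the mid-infrared window and 10,000 to 25,000 cm^-1 for the Raman window, which makes the gap described in the duet analogy above concrete.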

So far, researchers have tested their new technique on samples of pure chemicals commonly found in science labs. They hope that the technique will one day be used to understand how molecules change shape in real time.

“Especially for biology, we use the term ‘label-free’ for molecular vibrational spectroscopy because it is noninvasive and we can identify molecules without attaching artificial fluorescent tags. We believe that complementary vibrational spectroscopy can be a unique and useful technique for molecular measurements,” said Ideguchi.


Reference: “Complementary vibrational spectroscopy” by Kazuki Hashimoto, Venkata Ramaiah Badarla, Akira Kawai and Takuro Ideguchi, 27 September 2019, Nature Communications.
DOI: 10.1038/s41467-019-12442-9

https://www.zdnet.com/article/darpa-our-goal-is-100x-faster-network-card-for-tomorrows-ai/

DARPA: Our goal is 100x faster network card for tomorrow’s AI

DARPA is asking for help to create the super-fast network interface card that industry has so far failed to produce.

https://futurism.com/the-byte/oral-sex-toy-ai-deep-learning

THIS AI-POWERED ORAL SEX ROBOT PUTS THE “DEEP” IN DEEP LEARNING

The artificial intelligence revolution is coming to a sex toy shop near you — if you’re into that kind of thing.

Sex toy outfit Autoblow is taking pre-orders for its fourth-generation “Autoblow” device, dubbed “Autoblow AI,” which promises to harness the power of AI to provide users with the “perfect blowjob,” according to the company.

Worryingly, you’ll have to plug the gadget into an outlet. Autoblow says its AI was trained by a group of people in Serbia, who used a specialized browser plugin to, uh, simulate the up and down movement using their mouse, as Engadget reports.

Autoblow used that training data to develop a deep learning algorithm that simulates the ideal oral sex experience. The result: patterns with lascivious names like “teasing slow stroke” and “intense edge.” The cost of one of their newfangled blowjob machines: a measly $259 on Indiegogo.

The Autoblow AI isn’t the only sex toy trying to leverage the power of AI for your pleasure. A vibrator called HUM, first released in 2014, billed itself as the first AI-powered sex toy, capable of adjusting its vibration patterns based on body movement.

That’s the big-picture promise of this nascent AI sex toy industry: a do-it-yourself experience that makes decisions for you, instead of asking you to press buttons or adjust position. Is it a breakthrough or simply a gimmick to sell more units? Only time will tell.

The challenges are substantial. Everyone’s body is different, and responds to stimulation in different ways. And masturbation, ultimately, is an act of self discovery, as Motherboard points out — something that a plug-and-play sex toy experience could detract from.

READ MORE: Autoblow AI is a sex toy that promises ‘surprise’ [Engadget]

More on the future of sex: BMW Posts, Deletes Ad About Sex Inside Self-Driving Cars


http://www.sci-news.com/genetics/genes-age-related-hearing-loss-07636.html

Researchers Identify 44 Genes Involved in Age-Related Hearing Loss

Sep 27, 2019 by News Staff / Source

A team of scientists from King’s College London, University College London and the University of Manchester has identified 44 genes linked to age-related hearing loss.

Wells et al performed genome-wide association studies for two self-reported hearing phenotypes, using more than 250,000 UK Biobank volunteers aged between 40 and 69 years. Image credit: Pexels.

Age-related hearing impairment is the most common sensory impairment in the aging population; a third of individuals are affected by disabling hearing loss by the age of 65.

It causes social isolation and depression and has recently been identified as a risk factor for dementia.

Despite being a common impairment in the elderly, little is known about its causes, and the only treatment option available is hearing aids, which are often not worn once prescribed.

“We now know that very many genes are involved in the loss of hearing as we age,” said King’s College London Professor Frances Williams, co-lead author of the study.

“This study identified a few genes that we already know cause deafness in children, but it also revealed lots of additional novel genes which point to new biological pathways in hearing.”

In the study, Professor Williams and colleagues analyzed the genetic data from over 250,000 participants of the UK Biobank aged 40-69 years to see which genes were associated with people who had reported having or not having hearing problems on a questionnaire. In total, the researchers identified 44 genes.

“Before our study, only five genes had been identified as predictors of age-related hearing loss, so our findings herald a nine-fold increase in independent genetic markers,” said Dr. Sally Dawson of University College London.

“We hope that our findings will help drive forward research into much-needed new therapies for the millions of people worldwide affected by hearing loss as they age.”

The study authors now plan to investigate how each identified gene influences the auditory pathway, providing opportunities to develop new treatments.

The findings were published in the American Journal of Human Genetics.

_____

Helena R.R. Wells et al. GWAS Identifies 44 Independent Associated Genomic Loci for Self-Reported Adult Hearing Difficulty in UK Biobank. American Journal of Human Genetics, published online September 26, 2019; doi: 10.1016/j.ajhg.2019.09.008

https://www.wired.co.uk/article/apple-glasses-augmented-reality

Apple’s dropped some huge hints about its first AR glasses

Through iOS 13 and iOS 13.1, Apple has been leaking some information about its first smart glasses. But we still don’t know when they’re coming.



With all the phone and watch and TV and game and chip and other chip news coming out of Apple’s big event last week, it was easy to forget the company’s longest-running background process: an augmented-reality wearable. That’s by design. Silicon Valley’s advent calendar clearly marks September as the traditional time for Apple to talk finished hardware, not secretive projects.

But those secretive projects have a weird habit of poking their heads into the light. A slew of features and language discovered recently inside iOS 13 and 13.1 seem to explicitly confirm the very thing Apple executives have steadfastly refused to acknowledge—an honest-to-Jobs AR headset. In fact, taken in conjunction with acquisitions and patent filings the company has made over the past several years, those hidden features have painted the clearest view yet of Apple’s augmented dreams.

Hard to StarBoard

First came StarBoard. At the very beginning of September, a leaked internal build of iOS 13 was found to contain a “readme” file referring to StarBoard, a system that allows developers to view stereo-enabled AR apps on an iPhone. The build also included an app called StarTester to accomplish exactly that. That marked the first explicit mention of stereo apps—i.e., those that output to separate displays, like those found in AR/VR headsets—in Apple material.

Not long after, on the day of the hardware event, Apple released Xcode 11, the newest version of the company’s macOS development environment. Inside that set of tools lurked data files for what appeared to be two different headsets, codenamed Franc and Luck. The same day, iOS developer Steve Troughton-Smith found the StarBoard framework in the official “golden master” of iOS 13; he also pointed out references to “HME,” which many speculated stood for “head-mounted experience.” (HMD, or head-mounted display, is a common term for a VR/AR headset.)

So far, so unprecedented. When Apple first released ARKit in 2017, it was the beginning of a long journey to familiarise developers with augmented reality and get them playing with the possibilities. Yet, the company has always been careful to situate AR as a mobile technology, people peeking through iPhones or iPads to shop or play with Legos, or even experience public art installations. Finding this kind of data, even hidden deep within OS developer files, marks an uncharacteristic transparency from Apple—as though the company is planning something sooner rather than later.

What that thing might be depends who you ask. Reports from Bloomberg News and Taiwanese analyst Ming-Chi Kuo have long claimed that Apple would be beginning production on an AR headset this year for release in 2020—one that acts more like a peripheral than an all-in-one device, depending on the iPhone to handle the processing power.

Troughton-Smith came to a similar conclusion after poking through iOS 13. “The picture of Apple’s AR efforts from iOS 13 is very different to what one might expect,” he tweeted. “It points to the headset being a much more passive display accessory for iPhone than a device with an OS of its own. The iPhone seems to do everything; ARKit is the compositor.”

That idea of a passive display accessory got fleshed out late last week, when another developer got StarTester up and running on a beta of iOS 13.1, which officially comes out today.

xSnow (@__int32) tweeted: “@stroughtonsmith Managed to get into apple glasses test mode (aka StarTester mode) in 13.1 beta 3 on iPhone X, but right-eye view is glitchy. (Scene contents are from my area light test, not from StarBoard)”

That person also found specific numbers in the iOS framework referring to the fields of view for the two specific headset codenames: 58 and 61 degrees for Luck and Franc, respectively. (A third codename, Garta, seems to refer to a testing mode rather than a specific device.)

All of which matches up with the thought that Apple is planning a small, lightweight product—one that lives up to the term “wearable” by being more like smart glasses instead of an unwieldy Microsoft HoloLens. “Fifty-eight degrees doesn’t sound like much compared to an Oculus Rift, but compared to an nreal Light, which is 52 degrees, it’s already pretty competitive,” says JC Kuang, an analyst with AR/VR market intelligence firm VRS. “That’s the exact class of product we need to be looking at when we talk about what the architecture might look like.”

Mark Boland, chief analyst at ARtillery Intelligence, which tracks the augmented-reality market, calls such a product a “notification layer,” and posits it as an introductory device of sorts—one that acts as a bridge between the mobile AR of today and a more powerful headset that could ultimately replace the smartphone. “I’ve always been skeptical of 2020,” he says. “If you look across the industry at the underlying tech, it’s just not ready to build something sleek and light.” However, an intermediary device like the one iOS 13 seems to point to could strike a balance, giving developers the chance to get used to building stereo experiences and develop best practices before needing to fully integrate with the “mirror world.”

A recent patent seems to support the idea as well. “Display System Having Sensors,” which Apple filed in March and was published in July, describes a companion system: a head-mounted device with inward- and outward-facing sensors feeds its inputs to a “controller,” which then “render[s] frames for display by the HMD.” A patent isn’t the same as a plan, obviously, but it’s a hell of a data point.

From Here to ARternity

How Apple gets from phone-tethered smart-glasses to a fully realised spatial-computing platform—or how long it takes to do so—remains unclear, but elements of the road map are hidden in plain sight. “A lot of the tech they’ve already built and fully deployed is critical to their goal of building a discreet AR HMD platform,” Kuang says. As an example, he points to last week’s announcement that the iPhone 11 models could take photos of pets in Portrait Mode: “That’s a good example of them working in little tweaks that don’t appear to have relevance to AR, but are super-meaningful if you’re a developer. The ability to recognise nonhuman faces significantly expands your ability to build tools and experiences.”

Two acquisitions Apple has made in recent years also suggest how the company might get there. Kuang traces the current StarBoard testing mode to the 2017 acquisition of a company called Vrvana. At the time, Vrvana’s chief product was a mixed-reality headset—however, rather than rely on a transparent “waveguide” display like those in the HoloLens or Magic Leap One, it used front-facing cameras to deliver passthrough video to the user. (This is also how a company like Varjo delivers mixed reality using a VR headset.)

“It ruffled some feathers because nobody was really down with a discreet headset using pass-through,” Kuang adds of Vrvana. “But the StarBoard stuff presents exactly that: a Google Cardboard sort of functionality for iPhones. It’s obviously for testing purposes, but it maybe gives us a little more insight into how Apple has been testing AR without having to resort to building a couple of hundred waveguide-enabled devices for testing purposes.”

Apple’s other strategic move, buying Colorado company Akonia Holographics in 2018, looks to have two possible reasons: not just for the waveguide displays that Akonia was working on, but for the “holographic storage” that was the company’s original goal. The term, which refers to storing and accessing data in three dimensions rather than on the surface of a material (optical storage), has long eluded commercialisation, but could prove pivotal to the long-term vision of AR. “The utopian vision of the end user device is super-lightweight and does functionally no computing compared to where we currently are,” Kuang says. “Everything happens on the cloud. The kind of speed and transfer that comes with holographic storage could be a key part of that.”

Kuang points to another recent Apple patent, published just last week, proposing an AR display that delivers three-dimensional imagery through an Akonia-like waveguide system. In his view, it confirms the company’s commitment to getting past the limitations of today’s devices—particularly the eyestrain that results from trying to focus on virtual objects and real-world ones at the same time. “The fact that Apple is acknowledging it’s a big problem and intends to fix it is huge,” he says. “It’s more than Microsoft can be said to be doing.”

It also suggests that while the iOS discoveries speak to an interim device, they’re also likely only just the beginning. Much has been made of Apple’s push into services to offset declining iPhone revenue; subscriptions like Arcade and TV+ are steps toward the company’s stated goal of making more than $50 billion from such services annually. But that doesn’t solve the question of what comes after the phone—and Boland sees AR as an integral part of any “succession plan” for Apple.

Kuang agrees. “It’s a very forward-looking vision for AR,” he says of Apple’s approach. “They’re treating it as a computing modality rather than a display modality, which is critical.”

This story was originally published in WIRED US

https://medicalxpress.com/news/2019-09-brain-therapy.html

A model for brain activity during brain stimulation therapy

Graphical abstract: Optimizing direct electrical stimulation for the treatment of neurological disease remains difficult due to an incomplete understanding of its physical propagation through brain tissue. Here, we use network control theory to predict how stimulation spreads through white matter to influence spatially distributed dynamics. We test the theory’s predictions using a unique dataset comprising diffusion weighted imaging and electrocorticography in epilepsy patients undergoing grid stimulation. We find statistically significant shared variance between the predicted activity state transitions and the observed activity state transitions. We then use an optimal control framework to posit testable hypotheses regarding which brain states and structural properties will efficiently improve memory encoding when stimulated. Our work quantifies the role that white matter architecture plays in guiding the dynamics of direct electrical stimulation and offers empirical support for the utility of network control theory in explaining the brain’s response to stimulation.
DOI: https://doi.org/10.1016/j.celrep.2019.08.008

Brain stimulation, where targeted electrical impulses are directly applied to a patient’s brain, is already an effective therapy for depression, epilepsy, Parkinson’s and other neurological disorders, but many more applications are on the horizon. Clinicians and researchers believe the technique could be used to restore or improve memory and motor function after an injury, for example, but progress is hampered by how difficult it is to predict how the entire brain will respond to stimulation at a given region.

In an effort to better personalize and optimize this type of therapy, researchers from the University of Pennsylvania’s School of Engineering and Applied Science and Perelman School of Medicine, as well as Thomas Jefferson University Hospital and the University of California, Riverside, have developed a way to model how a given patient’s brain activity will change in response to targeted stimulation.

To test the accuracy of their model, they recruited a group of study participants who were undergoing an unrelated treatment for severe epilepsy, and thus had a series of electrodes already implanted in their brains. Using each individual’s brain activity data as inputs for their model, the researchers made predictions about how to best stimulate that participant’s brain to improve their performance on a basic memory test.

The participants’ brain activity before and after stimulation suggests the researchers’ models have meaningful predictive power, offering a first step towards a more generalizable approach to specific stimulation therapies.

The study, published in the journal Cell Reports, was led by Danielle Bassett and Jennifer Stiso, a neuroscience graduate student and a member of Bassett’s Complex Systems Lab.

Memory is just one area where electrical stimulation therapy is thought to have promise, but more basic research on how to coax brain activity into the desired state is needed.

“There are patterns of activity across the entire brain when you’re remembering something, and those patterns look different depending on how well you’re doing,” Stiso says. “We know electrical stimulation can change those patterns, but how to change them into the pattern associated with better performance isn’t clear.”

With a collaboration between neurologists at the Hospital for the University of Pennsylvania and Thomas Jefferson University Hospital, the researchers found epilepsy patients with implanted electrodes who were willing to volunteer to receive targeted stimulation during direct brain recordings while they awaited surgery.

While their brain activity was being recorded, the patients participated in basic memory tests, which entailed hearing and recalling a list of words.

“We then used features of those individuals’ brain activity, and the pattern of connections between their brain regions, to model what would happen to the whole brain when we stimulate a specific region,” Stiso says. “A lot of memory research is focused on specific regions of the brain, like the hippocampus, but we think these therapies need to take a much larger network of regions into account.”
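
For readers unfamiliar with the network control theory framework mentioned in the paper’s abstract above, its core model is a linear system in which activity x evolves as x(t+1) = A·x(t) + B·u(t), where A encodes white-matter connectivity, B selects the stimulated regions, and u(t) is the stimulation input. The toy sketch below only illustrates that idea; the five-node connectivity matrix, electrode location, and pulse shape are invented and are not the study’s data, parameters, or code.

import numpy as np

rng = np.random.default_rng(seed=0)

n_regions = 5                                   # toy network, not real brain regions
A = rng.random((n_regions, n_regions))
A = (A + A.T) / 2                               # symmetric "white matter" weights (invented)
A = A / (1.05 * np.max(np.abs(np.linalg.eigvalsh(A))))   # scale so the dynamics stay stable

stim_region = 2                                 # hypothetical electrode location
B = np.zeros((n_regions, 1))
B[stim_region, 0] = 1.0

x = np.zeros((n_regions, 1))                    # initial activity state
for t in range(20):
    u = np.array([[1.0]]) if t < 5 else np.array([[0.0]])   # brief stimulation pulse
    x = A @ x + B @ u                           # x(t+1) = A x(t) + B u(t)

print("Activity spread after stimulation (toy units):", x.ravel().round(3))

Even in this toy version, stimulating a single node produces activity across the whole network, which is the intuition behind using the structural connectome to predict state transitions.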

Combined with data from other stimulation experiments conducted with these epilepsy patients, these models suggested which specific patterns of brain activity and connections would be most beneficial when the stimulation was targeted to improve memory.

“Using in silico experiments, we found that stimulation that had been optimized based on the model pushed the brain towards states of good memory performance,” says Bassett.

“Ultimately, we’re trying to figure out where and how much to stimulate each person’s brain to reach the brain-wide patterns associated with the specific goal of a given therapy,” Stiso says. “This type of study is an important, early step towards developing a fast, generalizable model of an individual’s response to a specific stimulation therapy.”

https://www.makeuseof.com/tag/raspberry-pi-media-server-emby/


Looking for a smart, easy-to-use Raspberry Pi media server solution with a good choice of client apps? Perhaps you looked at Plex or Kodi but found they didn’t seem right. If so, it’s worth giving Emby a go.

Easy to install and set up, Emby is a smart media server alternative. Here’s how to install Emby Server and Emby Theater on the Raspberry Pi.

What Is Emby?

Emby is a media server. While it isn’t as well-known as other solutions (e.g. Plex, or Kodi), open source Emby has client and server software. This means that you can install the server module on the computer with your media on it, then share to other devices using client apps.

Various plugins can extend the features of Emby. You’ll find IPTV plugins for internet TV, for example. Emby also offers built-in parental controls, to help protect your family from sensitive content. While Emby is less well-known than its competitors, the userbase is growing.

For more information, here’s why you should forget Plex and Kodi, and try Emby instead.


What You Need for a Raspberry Pi Emby Media Center

To build an Emby media server, you will need:

  • Raspberry Pi 2 or later (we used the Raspberry Pi 4)
  • microSD card (16GB or more for the best results)
  • PC with a card reader
  • Keyboard and mouse
  • HDMI cable and suitable display

Make sure you have a suitable power connector for your Raspberry Pi.

The process is straightforward: install Emby, connect it to your network, then use it as a media server. Media stored on a USB hard disk drive can be added to Emby, then served to devices on your network.

For example, the Raspberry Pi Emby box could serve your favorite home movies and photos to your TV or mobile.

Install the Emby Media Server on Raspberry Pi

Installing the Emby Server on Raspberry Pi’s default Raspbian Buster is straightforward. Open a terminal and update and upgrade to begin:

sudo apt update

sudo apt dist-upgrade

Next, use wget to download the ARMHF version from the Linux downloads page; this version is compatible with the Raspberry Pi.

wget https://github.com/MediaBrowser/Emby.Releases/releases/download/4.2.1.0/emby-server-deb_4.2.1.0_armhf.deb

Install this with

sudo dpkg -i emby-server-deb_4.2.1.0_armhf.deb

Wait while this completes. Your Raspberry Pi-powered Emby server is installed. All you need to do now is configure it.

Configure Your Emby Media Server

Access the Emby Server via your browser. It’s easiest using the Raspberry Pi itself—use the address http://localhost:8096.

This brings you to the server setup. You’ll need to set the preferred language, username, password, and other options. Setup also gives you the option to link your Emby Connect account. This is a great way to connect to your server from any Emby account, without needing the IP address. However, it’s not necessary.
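
Before moving on to your libraries, you can optionally confirm that the server is reachable from another machine on your network. The following Python sketch is not part of Emby’s setup; the hostname is a placeholder (raspberrypi.local is only the Raspbian default, so substitute your Pi’s actual IP address or hostname), while port 8096 is the one used above.

import socket

EMBY_HOST = "raspberrypi.local"   # placeholder: use your Pi's IP address or hostname
EMBY_PORT = 8096                  # default Emby web port, as used above

try:
    with socket.create_connection((EMBY_HOST, EMBY_PORT), timeout=5):
        print(f"Emby server is reachable at http://{EMBY_HOST}:{EMBY_PORT}")
except OSError as err:
    print(f"Could not reach {EMBY_HOST}:{EMBY_PORT}: {err}")

If the connection fails, check that the Pi and the client machine are on the same network and that the Emby server finished installing before trying the client apps below.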

Following this, you’ll see the Setup your media libraries screen. Here, click Add Media Library.

Add media to Emby

Simply browse for the location, then set the meta information according to the menus.

Emby library meta settings

This is mostly language-based and shouldn’t take long.

Configure Meta settings on Emby

When you’re done adding media locations, click Save. It’s time to start viewing content on your Emby media server!

Connect Any Device to Your Emby Server

An impressive collection of apps is available for Emby. Want to watch on a smart TV? You can! You’ll also find apps for Android TV and Amazon Fire TV, along with Xbox One and PS4. Using Kodi? There’s an Emby add-on available.

Additionally, Emby produces mobile apps for Android and iOS devices. There is even a version for Windows 10 and Windows 10 Mobile, as well as an HTML5 web client.

In short, all devices are covered.

To enjoy content on your Emby server, simply install the app and proceed through the setup. You’ll be asked for the server or device name and, if you set one up, your Emby Connect account credentials.

Once this is all configured, you’ll be ready to enjoy streamed media from your Raspberry Pi Emby server.

Set Up Raspberry Pi as an Emby Client

Thanks to the Emby Theater tool for Linux, you can view the media files on your Raspberry Pi Emby server on another Pi.

Set up Emby Theater on Raspberry Pi

You have two choices to install the Emby Theater client app on a Raspberry Pi.

  1. Download the DEB file to your Raspberry Pi and install it on Raspbian Buster (or any Debian-based operating system).
  2. Alternatively, download a full disk image, write it to a spare SD card, and boot this.

Use the appropriate download link based on how you plan to install Emby on your Pi.

Download: Emby Theater DEB File for Raspbian Buster

Download: Emby Theater Disk Image

With your chosen download complete, it’s time to install Emby.

How to Install Emby Theater on Raspbian Buster

Get started with Emby Theater by downloading the DEB file from GitHub. This should be downloaded direct to your Raspberry Pi, or to a location from which it can be copied to the Pi.

Next, open the terminal and update and upgrade:

sudo apt update

sudo apt dist-upgrade

Next, run the installation command:

sudo apt install -f ./emby-theater_3.0.9_armhf.deb

Then reboot:

sudo reboot

Finally, run Emby with

emby-theater

Want Emby Theater to autostart when you boot your Pi? That’s not a problem. In the terminal, edit the autostart file:

nano ~/.config/lxsession/LXDE-pi/autostart

Scroll to the end and add:

@emby-theater

Save and exit (Ctrl + X, then Y) and restart your Raspberry Pi. Emby should automatically start. Of course, if you want this functionality then it’s smarter to just install the disk image.

Install Emby Theater From a Disk Image

To turn your Raspberry Pi into a dedicated Emby client, download the zipped disk image to your main PC.

Next, unzip the file. By now you should have your Raspberry Pi’s SD card inserted in your PC’s card reader.

Launch Etcher, then click Select image to browse for the IMG file. Ensure that the correct drive is selected (Etcher is good at autodetecting flash media but check regardless) then Flash. Etcher will format the media and write the Emby disk image.

Install Emby Theater on Raspberry Pi with Etcher

A notification will appear when done. Close the software, safely eject the SD card, then replace it in your Raspberry Pi. The computer should boot straight into Emby Theater.

Stream Content on Raspberry Pi With Emby

Emby brings a whole new dimension to serving media on a Raspberry Pi. To start off, it’s compatible with the Raspberry Pi 4 and benefits from the hardware boost this board delivers over its predecessors.

Browse files on your Raspberry Pi Emby server

But there’s more to Emby. Don’t want the bells and whistles of a Raspberry Pi Plex server? Find that streaming content on Kodi isn’t as smooth as you would like? Don’t worry. Emby has a clearer focus on sharing media across your network. Sure, you can upgrade to the Emby subscription program for additional features, but you probably won’t need these.

With apps for virtually any device, you’ll find Emby is perfectly suited to the Raspberry Pi and your media files.

Considering alternatives? Here are more ways you can set up your Raspberry Pi as a media server.



https://www.inc.com/justin-bariso/take-this-5-minute-test-to-see-if-you-have-high-emotional-intelligence.html

Take This 5-Minute Test to See if You Have High Emotional Intelligence

How do you know if you have high emotional intelligence? This short test will point you in the right direction.


What does it mean to be emotionally intelligent?

That’s a question I get asked a lot. I’ve spent years diving deep into the topic of emotional intelligence, and last year I wrote EQ Applied, which takes a practical look at what emotional intelligence means in the real world.

The truth is, much like what we think of as “traditional” intelligence, emotional intelligence is complex, with various facets and skills.

So, how do you know if you have high emotional intelligence?

This five-minute test can point you in the right direction:

Do I take time to get to know myself?

Emotional intelligence begins with self-awareness. That’s because once you understand how emotions affect you and your behavior, you can begin to manage your emotions effectively–leading to better decision making. You’ll also learn how to identify and understand others and their emotions.

People with high emotional intelligence take time to ponder questions like:

  • What are my emotional triggers?
  • When I say or do something I later regret, how could I have handled things differently?
  • How does my present mood affect my words and actions?
  • How do I act differently when I’m in a great mood? How about when I’m in a lousy mood?
  • Am I open to other perspectives? Or am I too easily swayed by others?

These questions are just examples, but they give you an idea of how emotionally intelligent people get to know themselves well.

Do I try to control my thoughts?

We all have thoughts pop into our heads that we don’t like–maybe they’re negative, self-defeating, or tempting us to do something we know is wrong. It can seem impossible to control those thoughts.

But as the old saying goes: You can’t keep a bird from landing on your head, but you can stop it from building a nest.

In other words, those with high emotional intelligence refuse to dwell on the negative. Instead, they work hard to replace unwanted thoughts with positive ones.

Do I think before I speak?

This seems easy, but it’s not. We’ve all been guilty of sending an angry email, or sticking a foot in our mouths because we didn’t pause to think before saying something out loud.

But emotionally intelligent people learn from those mistakes. They practice the pause, taking a moment to think things through before offering a response. Sometimes that means a few seconds; sometimes it means counting to 10. And sometimes it means taking a short walk.

But it’s all about acting intentionally, and not making permanent decisions based on temporary emotions.

Do I learn from negative feedback?

Nobody enjoys being criticized, but emotionally intelligent people have the ability to control their responses. They recognize that negative feedback is often rooted in truth, so they ask themselves:

  • Putting my personal feelings aside, what can I learn from this feedback?
  • How can I use it to grow?

Emotional intelligence also helps you realize that even when criticism is unfounded, it gives you a window into the perspective of others. Because if one person thinks that way, you can bet there are countless others who do too.

Do I acknowledge others?

With a slight nod of the head, a smile or a simple hello, emotionally intelligent people show respect by acknowledging a person’s presence. They acknowledge the point of view of others by thanking them for expressing themselves and asking questions to make sure they understood correctly.

All of this contributes to effective communication and stronger relationships.

Do I have a balanced view of myself?

Emotionally intelligent people recognize they have strengths and weaknesses.

Because of this, they appreciate a compliment without letting it get to their head. And they strive to balance self-confidence with humility.

Do I listen for the message, and not just the words?

Paying attention to body language, eye movement, and tone of voice helps emotionally intelligent people to distinguish what’s going on in others.

But they also realize they can’t always read others accurately–so they use sincere questions and discernment to help them learn.

Am I authentic?

Those with high emotional intelligence realize they don’t have to share everything about themselves with everyone, all of the time. But they do say what they mean, mean what they say, and stick to their values and principles.

They recognize that not everyone will appreciate their thoughts and opinions. But they know the ones who matter will.

Do I show empathy?

Emotionally intelligent people try to understand the thoughts and feelings of others. Instead of judging or labeling them, they work hard to see things through their eyes.

They also realize that to show empathy doesn’t always mean to agree. Rather, it’s about learning and understanding.

Do I praise others?

Everyone needs to feel appreciated. When you commend others for who they are or what they’ve done, you fill that need–and build trust in your relationship.

Do I give helpful feedback?

If you have high emotional intelligence, you recognize the potential that negative feedback has to cause pain to others.

Rather than criticize outright, high-EQ individuals reframe criticism as constructive feedback. In this way, they help recipients to see their words as an attempt to help, not harm.

Do I willingly apologize?

They can be two of the hardest words to say: “I’m sorry.”

But emotional intelligence helps you see these words are necessary in any healthy relationship. And it helps you to see that apologizing doesn’t always mean you’re wrong. It just means valuing the other person more than your ego.

Do I forgive and forget?

When you grow your emotional intelligence, you learn that long-term resentment is extremely harmful–to you. It’s like leaving a knife inside a wound, never giving yourself the chance to heal.

But when you learn to let go, you don’t allow others to hold your emotions hostage. And that lets you move on.

Do I keep my commitments?

Nowadays, people break their word all the time. “Yes” means “possibly,” “maybe” means “probably not,” and “I’ll think about it” means “start looking for someone else.”

But those with a high EQ think twice before committing, to avoid under-delivering or letting others down. And when they do commit, they keep their word, in both big and small ways. This makes them dependable and reliable in the eyes of others.

Do I know how to handle negative emotions?

Negative emotions, like anger and sadness, can be useful if managed effectively. For example, they can alert us to changes that we need to make.

Emotionally intelligent people don’t ignore these feelings, nor do they let them run wild. Instead, they work to understand them and determine strategies to deal with them in a positive way.

Do I practice self-care?

Emotionally intelligent people know they perform better in all areas of life when they take time to renew themselves.

That’s why they schedule time for themselves, throughout the day, week, month, year.

Do I focus on what I can control?

When emotionally intelligent people face circumstances out of their control, they focus on what they can influence: their priorities, their reactions, their habits.

This contributes to peace of mind and better decision making.

How did you do?

The truth is, all of us possess a degree of emotional intelligence. While few can say an unquestionable yes to all of the questions above, this test can give you an idea of where your strengths and weaknesses are.

Armed with that knowledge, you can determine areas where you need more work. And you can also identify skills where you excel–and use them as leverage to develop weaker areas.

Do this effectively, and you’ll truly make emotions work for you, instead of against you.

https://www.livekindly.co/don-lee-farms-vegan-burgers-launch-costco/

VEGAN ‘BETTER THAN BEEF’ BURGERS ARE NOW AT COSTCO

Don Lee Farms just launched its new “Better Than Beef” vegan burgers at Costco; the plant-based burgers are said to taste like traditional beef.

Liam Pritchett
Staff Writer, LIVEKINDLY | Bristol, United Kingdom | Contactable via: liam@livekindly.com

Don Lee Farms Better Than Beef vegan burgers have just launched at Costco.

The American multinational warehouse chain Costco will sell Don Lee Farms’ Better Than Beef Burgers at stores in Alaska, Idaho, Montana, Oregon, Utah, and Washington. Californian food company Don Lee Farms specializes in plant-based patties that sizzle on the grill like raw beef.

“Our new Better Than Beef Burger delivers on the experience and satisfaction of beef’s aroma, texture, flavor and juiciness,” says Danny Goodman, the Head of Development at Don Lee Farms, in a press release.

“With the lowest calories, fat and saturated fat on the market. We can’t wait to get this burger in more hands as we expand our brand into retail markets,” adds Goodman.

Don Lee Farms’ “bleeding” organic and gluten-free burger also launched at Costco last February, where it was sold from the frozen meat section alongside animal products. The company announced that in less than 60 days one million plant-based patties had been sold.

“This new burger, along with our growing vegan line launched fifteen years ago, reinforces our family’s commitment to continuous innovation in plant-based proteins,” says Goodman.

Don Lee Farms ‘Better Than Beef’ Vegan Burgers Launch At Costco
“Better on the grill. Better for the planet,” says Don Lee Farms | @donleefarms

Consumers Demand Vegan Meat

Restaurants saw a 268 percent increase in plant-based meat sales just last year. The fast-growing category is expected to reach $85 billion by 2030. According to The Good Food Institute (GFI) and the Plant-Based Foods Association (PBFA), vegan meat surpasses the popularity of cheese, ice cream, and even dairy-free milk.

Vegan meat alternatives are particularly popular with flexitarians who want to cut down their meat consumption or try new things. According to Nielsen data, 98 percent of those who buy meat-alternatives regularly are also regular purchasers of animal products. It is primarily this flexitarian market that is driving the production of new vegan products.

In addition to Don Lee Farms products, Costco has a large and affordable range of vegan items. The chain has even replaced its classic hot dog with an expanded menu of healthy and veggie options and sells a huge variety of other plant-based meats. Gardein chicken fingers, a range of MATCH Meat products, and Field Roast sausages are all available at Costco.