https://www.timesofisrael.com/children-with-autism-suffer-from-lower-sleep-depth-study-shows/

Children with autism suffer from lower ‘sleep depth,’ study shows

Ben-Gurion University researchers demonstrate that brain waves of children with autism are ‘shallower’ than those of typically developing children as they sleep

Illustrative: A sleeping child (CC BY-SA 3.0 by Stokedsk8erboy, Wikimedia Commons)


A large percentage of children with autism have a hard time falling asleep, wake up frequently in the middle of the night, and wake up early in the morning.

Now, a new study from Ben-Gurion University of the Negev’s National Autism Research Center of Israel shows that the brain waves of children with autism are weaker, or “shallower,” during sleep than those of typically developing children, particularly during the first part of the night. This indicates that the children have trouble entering the deep-sleep phase, which is the most critical aspect of achieving a restful and rejuvenating sleep experience.

The study was reported in the journal Sleep earlier this month.

Previous studies have shown that forty to eighty percent of children on the autism spectrum have some form of sleep disturbance, which creates severe challenges for the children and their families. Determining the causes of these sleep disturbances is a first critical step in finding out how to mitigate them, the researchers said in a statement.

Slow wave activity (SWA) is a measure of brain activity that is indicative of sleep quality, or so-called sleep pressure. Normal sleep starts with periods of deep sleep characterized by high-amplitude slow wave activity. A wave with a high amplitude is particularly tall, and a wave with a low amplitude is particularly short.

The researchers set out to find whether slow wave activity differs in children with autism spectrum disorder (ASD). Their findings showed that this was indeed the case: the wave pattern differs in children with ASD, the study said.

A team led by Prof. Ilan Dinstein, head of the National Autism Research Center of Israel and a member of Ben-Gurion University’s Department of Psychology, examined the brain activity of 29 children with autism and compared them to 23 children without autism. The children’s brain activity was recorded as they slept during an entire night in the Sleep Lab at Soroka University Medical Center, managed by Prof. Ariel Tarasiuk.

Recordings taken during the experiment revealed that the brain waves of children with autism are, on average, 25% weaker — or “shallower” — than those of typically developing children.

Children with ASD “exhibited significantly weaker SWA power, shallower SWA slopes, and a decreased proportion of slow wave sleep in comparison to controls,” the researchers said in their study. “This difference was largest during the first two hours following sleep onset and decreased gradually thereafter.”

The researchers “found a clear relationship between the severity of sleep disturbances as reported by the parents and the reduction in sleep depth,” said Prof. Dinstein in a statement. “Children with more serious sleep issues showed brain activity that indicated more shallow and superficial sleep.”

This could be because children with autism, and especially those whose parents reported serious sleep issues, “do not tire themselves out enough during the day, do not develop enough pressure to sleep, and do not sleep as deeply,” Dinstein said.

Now that the team has identified the potential physiology underlying these sleep difficulties, they are planning several follow-up studies to discover ways to generate deeper sleep and larger brain waves, from increasing physical activities during the day to behavioral therapies, and pharmacological alternatives such as medical cannabis, the statement said.

https://sdtimes.com/ai/continuous-deployment-for-ml-the-new-software-development-life-cycle/

Continuous Deployment for ML: The new software development life cycle

The new software development life cycle means working out ways to adapt the SDLC for your machine learning workflow and teams. With data scientists currently spending large chunks of their time on infrastructure and process instead of building models, finding ways to enable the SDLC to work effectively with machine learning is critical not only for the productivity (and job satisfaction) of your data scientists, but for your entire development team.

But this comes with challenges. ML introduces patterns and technology issues that are not addressed by SDLC. To manage this, we need to both adapt the SDLC and address cultural differences between data scientists and other developers.

It’s important to remember that the field of ML is still developing and, therefore, non-uniform. Data science is more of an art than a standard software development discipline, and very much a research-based task. Conversely, standard software developers tend to adapt their techniques to the job at hand and conform to their environment; for example, they learn another language when they get a new job because that’s the language used by most of the architecture in-house. ML tasks, on the other hand, are often specific to a language or a set of frameworks, so data scientists use whatever’s best for the job, making for a much more heterogeneous environment.

RELATED CONTENT: Implications and practical applications for AI and ML in embedded systems

Consider a workflow where ML developers use Python’s NLTK at a particular version level for one task, yet for other tasks use R with TensorFlow and GPU acceleration, and so on. That R-based model then needs to go into production even though most standard serving software doesn’t run R at all; it’s a language the DevOps people have never encountered, so they need a way to adapt their serving workflow to accommodate these more heterogeneous environments.

Another area where data scientists and DevOps teams don’t align is monitoring and optimization. In the ML world, testing tends to happen only during the process of developing the model, not once it’s in production on a server somewhere. A standard developer, however, is thinking not only about whether the initial component was built right, but also about whether, by the time the world is using it, it can be continuously verified and still give the expected results.

A second issue here is DevOps people not knowing how to monitor a model. They’re not used to considering model drift or probabilistic results, so they might test an ML model and find the results slightly different each time, which may lead them to think the model is failing. A data scientist would know, however, that there needs to be a 10 percent allowance in the results.
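One way to encode that kind of allowance in a monitoring check is sketched below. This is a hypothetical helper, not from any particular monitoring tool: the function name `within_tolerance` and the 10 percent default are illustrative. The idea is to compare aggregate statistics against a tolerance band rather than expect bit-identical outputs from a probabilistic model.

```python
import statistics

def within_tolerance(baseline, observed, tolerance=0.10):
    """Return True if the observed mean score stays within a relative
    tolerance band around the baseline mean, instead of demanding
    identical results from a probabilistic model on every run."""
    base_mean = statistics.mean(baseline)
    obs_mean = statistics.mean(observed)
    if base_mean == 0:
        # Degenerate baseline: fall back to an absolute check.
        return abs(obs_mean) <= tolerance
    return abs(obs_mean - base_mean) / abs(base_mean) <= tolerance

# Scores drifting slightly run-to-run should pass the check;
# a large shift should be flagged as possible model drift.
print(within_tolerance([0.80, 0.82, 0.81], [0.78, 0.79, 0.80]))  # small drift: OK
print(within_tolerance([0.80, 0.82, 0.81], [0.60, 0.62, 0.61]))  # large drift: flagged
```

A check like this lets a DevOps alerting pipeline distinguish expected stochastic variation from genuine model degradation.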

Predictability is also a challenge. The SDLC functions with predictable, scheduled releases, whereas data science cycles are erratic and unpredictable — where software developers tend to set up in two-week schedules and plan out their work ahead of time, researchers tend to work in very abstract timelines where something might take a day or two months.

Cloud environments are another area for consideration. For developers, who are primarily writing code, there are a lot of adjuncts that happen — you need to be able to set up a server and set up and connect to a database, and these are usually managed in a cloud infrastructure. But data scientists aren’t used to that sort of workflow; they tend to have everything self-contained on their laptops or perhaps via a managed service. They’re also used to training and testing in self-managed environments, and have very likely not worked with DevOps before. It’s a considerable learning curve for them, and often a confusing one that involves unfamiliar jargon they have to decipher in order to communicate with IT staff about their work.

On the flip side, DevOps teams are simply not used to considering ML-specific needs or allowing for nonstandard deployments. Plus they expect people who are writing code — data scientists — to know how to configure a server properly or how to configure authentication properly. But those expectations are unmet; the IT side of the house will send over something that to them seems obvious, but the ML side of the house may be confused by it.

These are important challenges, but there are ways to manage them. Utilizing tools to create isolation layers can make this whole process easier. Rather than trying to take your ML model and drop it into whatever already exists for IT infrastructure even when it doesn’t fit, consider a tool that can help you create an interface that requires little adaptation of either side of the puzzle. For developers, rather than having to incorporate different code into the code base, they can direct their code at the tool and it will pass through and work directly there. For the ML team, it can containerize what they’re doing without requiring them to learn a heavy set of new tools.
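A minimal sketch of such an isolation layer is shown below, assuming a plain JSON-in/JSON-out contract between the two sides. The `ModelAdapter` name and the stand-in scoring function are hypothetical, not part of any specific tool mentioned here; the point is that DevOps sees only a stable interface while the data-science side can swap the underlying model freely.

```python
import json

class ModelAdapter:
    """Hypothetical isolation layer: the operations side sees only a
    stable JSON-in/JSON-out contract, while the ML team supplies
    whatever prediction callable suits their stack."""

    def __init__(self, predict_fn):
        # predict_fn is provided by the data-science team; it could
        # wrap a Python model, a subprocess call into R, etc.
        self._predict_fn = predict_fn

    def handle(self, request_body: str) -> str:
        payload = json.loads(request_body)
        result = self._predict_fn(payload["features"])
        return json.dumps({"prediction": result})

# The ML side plugs in its model; here, a stand-in mean scorer.
adapter = ModelAdapter(lambda xs: sum(xs) / len(xs))
print(adapter.handle('{"features": [1.0, 2.0, 3.0]}'))
```

Wrapped this way, the same adapter can be containerized and deployed by the infrastructure team without either side needing to learn the other's toolchain.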

To manage cultural differences, have the two teams take some time to understand what each other does and become more adaptive in their actions. Expect the data science team to have its own workflow, and accommodate that, but create a defined interface between the two teams and allow each team to use the tools and methodologies that work best internally to maximize their individual productivity.

Ultimately, don’t be limited or constrained by what you perceive the SDLC to be — simply adapt it to fit. Allow teams independence and flexibility so they can be as productive as possible, and adopt tools and techniques that will enable this.

 

About Diego Oppenheimer

Diego Oppenheimer is CEO of Algorithmia

https://medicalxpress.com/news/2019-12-molecular-brain-decision-making-area.html

A molecular map of the brain’s decision-making area

From left: Konstantinos Meletis, Antje Märtin, Daniela Calvigioni and Rania Tzortzi, researchers at the Department of Neuroscience at Karolinska Institutet. Credit: Juan Perez Fernandez

Researchers at Karolinska Institutet have come one step closer toward understanding how the part of our brain that is central for decision-making and the development of addiction is organized on a molecular level. In mouse models and with methods used for mapping cell types and brain tissue, the researchers were able to visualize the organization of different opioid islands in the striatum. Their spatiomolecular map, published in the journal Cell Reports, may further our understanding of the brain’s reward system.

The striatum is the inner part of the brain that, among other things, regulates reward, motivation, impulses and motor function. It is considered central to decision-making and the development of various addictions.

In this study, the researchers created a molecular 3-D map of the nerve cells targeted by opioids, such as morphine and heroin, and showed how they are organized in the striatum. It is an important step toward understanding how the brain’s network governing motivation and drug addiction is organized. In the study, the researchers described a spatiomolecular code that can be used to divide the striatum into different subregions.

“Our map forms the basis for a new understanding of the brain’s probably most important network for decision-making,” says Konstantinos Meletis, associate professor at the Department of Neuroscience at Karolinska Institutet and the study’s main author. “It may contribute to an increased understanding of both normal reward processes and the effects of various addictive substances on this network.”

To find this molecular code, the researchers used single-nucleus RNA sequencing, a method to study small differences in individual cells, and mapping of the striatal gene expression. The results provide the first demonstration of molecular codes that divide the striatum into three main levels of classification: a spatial, a patch-matrix and a cell-type specific organization.

“With this map we may now begin to analyze the function of different types of nerve cells in different molecularly defined areas,” says Meletis. “This is the first step in directly defining the networks’ role in controlling decision-making and addiction with the help of optogenetics.”

This new knowledge may also form the basis for the development of new treatments based on a mechanistic understanding of the brain’s network, according to the researchers.




More information: “A spatiomolecular map of the striatum,” Antje Märtin, Daniela Calvigioni, Ourania Tzortzi, Janos Fuzik, Emil Wärnberg, Konstantinos Meletis, Cell Reports, online Dec. 24, 2019, DOI: 10.1016/j.celrep.2019.11.096


https://www.teslarati.com/tesla-holiday-update-fsd-preview-neural-net-improvements/

Tesla’s Neural Net can now identify red and green traffic lights, garbage cans, and detailed road markings

(CREDIT: @STEVEHAMEL16/TWITTER)


During Tesla’s Autonomy Day last April, Director of AI Andrej Karpathy pointed out that the attendees of the event had used only a pair of cameras to navigate themselves to the venue. Emphasizing this point and Tesla’s aversion to LiDAR, he joked that the attendees must not have shot lasers from their eyes while making their way to the event. These jokes, while lighthearted, show Tesla’s all-in bet on Elon Musk’s idea that a suite of cameras and a neural network are enough to teach a fleet of vehicles how to drive autonomously.

Unlike self-driving companies such as Waymo and Cruise, Tesla is intent on not using LiDAR, a component that is pretty much ubiquitous among firms developing FSD technology. Musk has proven quite unforgiving toward LiDAR, calling it a “fool’s errand” and “stupid” if used for cars. This is notable coming from Musk, especially considering that SpaceX, his private space company, uses LiDAR for its spacecraft. Musk has explained that LiDAR makes sense in space, but it’s foolish to use in regular cars.

🚀 Don’t Bet Against Elon 🚀👩‍🚀🐉@SteveHamel16

Lane marking arrows and stop light


🚀 Don’t Bet Against Elon 🚀👩‍🚀🐉@SteveHamel16

1. Navigate on autopilot is much more aggressive in getting out of the passing lane than it was in the previous version

2. I tried but still no toll booth support 😉


https://scitechdaily.com/quantum-computing-breakthrough-silicon-qubits-interact-at-long-distance/

Quantum Computing Breakthrough: Silicon Qubits Interact at Long-Distance

Quantum Device

Princeton scientists demonstrate that two silicon quantum bits can communicate across relatively long distances, a turning point for the technology.

Imagine a world where people could only talk to their next-door neighbors, and messages had to be passed from house to house to reach distant destinations.

Until now, this has been the situation for the bits of hardware that make up a silicon quantum computer, a type of quantum computer with the potential to be cheaper and more versatile than today’s versions.

Now a team based at Princeton University has overcome this limitation and demonstrated that two quantum-computing components, known as silicon “spin” qubits, can interact even when spaced relatively far apart on a computer chip. The study was published today (December 25, 2019) in the journal Nature.

“The ability to transmit messages across this distance on a silicon chip unlocks new capabilities for our quantum hardware,” said Jason Petta, the Eugene Higgins Professor of Physics at Princeton and leader of the study. “The eventual goal is to have multiple quantum bits arranged in a two-dimensional grid that can perform even more complex calculations. The study should help in the long term to improve communication of qubits on a chip as well as from one chip to another.”

Quantum computers have the potential to tackle challenges beyond the capabilities of everyday computers, such as factoring large numbers. A quantum bit, or qubit, can process far more information than an everyday computer bit because, whereas each classical computer bit can have a value of 0 or 1, a quantum bit can represent a range of values between 0 and 1 simultaneously.
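In standard notation, that "range of values" is a superposition: the qubit's state is a weighted combination of the two classical values,

```latex
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1,
```

where measuring the qubit yields 0 with probability $|\alpha|^2$ and 1 with probability $|\beta|^2$. The power of quantum computation comes from manipulating these amplitudes across many qubits at once before any measurement is made.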

To realize quantum computing’s promise, these futuristic computers will require tens of thousands of qubits that can communicate with each other. Today’s prototype quantum computers from Google, IBM and other companies contain tens of qubits made from a technology involving superconducting circuits, but many technologists view silicon-based qubits as more promising in the long run.

Silicon spin qubits have several advantages over superconducting qubits. The silicon spin qubits retain their quantum state longer than competing qubit technologies. The widespread use of silicon for everyday computers means that silicon-based qubits could be manufactured at low cost.

The challenge stems in part from the fact that silicon spin qubits are made from single electrons and are extremely small.

“The wiring or ‘interconnects’ between multiple qubits is the biggest challenge towards a large scale quantum computer,” said James Clarke, director of quantum hardware at Intel, whose team is building silicon qubits using Intel’s advanced manufacturing line, and who was not involved in the study. “Jason Petta’s team has done great work toward proving that spin qubits can be coupled at long distances.”

To accomplish this, the Princeton team connected the qubits via a “wire” that carries light in a manner analogous to the fiber optic wires that deliver internet signals to homes. In this case, however, the wire is actually a narrow cavity containing a single particle of light, or photon, that picks up the message from one qubit and transmits it to the next qubit.

The two qubits were located about half a centimeter, or about the length of a grain of rice, apart. To put that in perspective, if each qubit were the size of a house, it would be able to send a message to another house-sized qubit located 750 miles away.

The key step forward was finding a way to get the qubits and the photon to speak the same language by tuning all three to vibrate at the same frequency. The team succeeded in tuning both qubits independently of each other while still coupling them to the photon. Previously the device’s architecture permitted coupling of only one qubit to the photon at a time.

“You have to balance the qubit energies on both sides of the chip with the photon energy to make all three elements talk to each other,” said Felix Borjans, a graduate student and first author on the study. “This was the really challenging part of the work.”

Each qubit is composed of a single electron trapped in a tiny chamber called a double quantum dot. Electrons possess a property known as spin, which can point up or down in a manner analogous to a compass needle that points north or south. By zapping the electron with a microwave field, the researchers can flip the spin up or down to assign the qubit a quantum state of 1 or 0.

“This is the first demonstration of entangling electron spins in silicon separated by distances much larger than the devices housing those spins,” said Thaddeus Ladd, senior scientist at HRL Laboratories and a collaborator on the project. “Not too long ago, there was doubt as to whether this was possible, due to the conflicting requirements of coupling spins to microwaves and avoiding the effects of noisy charges moving in silicon-based devices. This is an important proof-of-possibility for silicon qubits because it adds substantial flexibility in how to wire those qubits and how to lay them out geometrically in future silicon-based ‘quantum microchips.’”

The communication between two distant silicon-based qubit devices builds on previous work by the Petta research team. In a 2010 paper in the journal Science, the team showed it is possible to trap single electrons in quantum wells. In the journal Nature in 2012, the team reported the transfer of quantum information from electron spins in nanowires to microwave-frequency photons, and in 2016 in Science they demonstrated the ability to transmit information from a silicon-based charge qubit to a photon. They demonstrated nearest-neighbor trading of information in qubits in 2017 in Science. And the team showed in 2018 in Nature that a silicon spin qubit could exchange information with a photon.

Jelena Vuckovic, professor of electrical engineering and the Jensen Huang Professor in Global Leadership at Stanford University, who was not involved in the study, commented: “Demonstration of long-range interactions between qubits is crucial for further development of quantum technologies such as modular quantum computers and quantum networks. This exciting result from Jason Petta’s team is an important milestone towards this goal, as it demonstrates non-local interaction between two electron spins separated by more than 4 millimeters, mediated by a microwave photon. Moreover, to build this quantum circuit, the team employed silicon and germanium – materials heavily used in the semiconductor industry.”


Reference: “Resonant microwave-mediated interactions between distant electron spins” by F. Borjans, X. G. Croot, X. Mi, M. J. Gullans and J. R. Petta, 25 December 2019, Nature.
DOI: 10.1038/s41586-019-1867-y

In addition to Borjans and Petta, the following contributed to the study: Xanthe Croot, a Dicke postdoctoral fellow; associate research scholar Michael Gullans; and Xiao Mi, who earned his Ph.D. at Princeton in Petta’s group and is now a research scientist at Google.

The study was funded by Army Research Office (grant W911NF-15-1-0149) and the Gordon and Betty Moore Foundation’s EPiQS Initiative (grant GBMF4535).

https://insideevs.com/news/389721/video-rivian-r1t-r1s-tank-turn/

Watch Rivian R1T Electric Pickup Truck Pull Off A True Tank Turn

Turning on a dime has a whole new meaning now.

Rivian just released a video showing its R1T electric pickup performing a spectacular tank turn. The truck literally spins circles in place. Watch this.

Rivian@Rivian

Tank Turn. Available on the R1T and R1S 🙂


Rivian says that both the R1T and the R1S SUV are tank-turn capable, but we suggest leaving this stunt to the professionals, as it’s likely quite rough on the vehicle and it seems there’s a decent chance that an unskilled tank-turner could quickly lose control of the truck/SUV.

We’ve seen the R1T do a tank turn before, but the video was quickly removed from YouTube. However, we do know that Rivian has trademarked the term “Tank Steer” which makes an appearance in this video. It has also trademarked the similar term “Tank Turn.” This is further evidence that turning on a dime (or close to it) will be a feature at least some Rivian vehicles will have.

Since we know Rivian Automotive’s vehicles have incredible torque, gobs of power, otherworldly off-road capabilities, and highly advanced all-wheel-drive systems and state-of-the-art traction control, this type of maneuver is definitely possible, though again we don’t advise trying it out without at least some instructions on how such a stunt is performed.

The Rivian R1T electric pickup truck is scheduled for first deliveries late next year. The R1S electric SUV should follow in 2021.

https://scitechdaily.com/human-brain-like-functions-emerge-in-neuromorphic-metallic-nanowire-network/

Human Brain-Like Functions Emerge in Neuromorphic Metallic Nanowire Network

Artists Concept Nanowire Network Brain

Emerging fluctuation-based functionalities are expected to open a way to novel memory device technology.

An international joint research team led by NIMS succeeded in fabricating a neuromorphic network composed of numerous metallic nanowires. Using this network, the team was able to generate electrical characteristics similar to those associated with higher-order brain functions unique to humans, such as memorization, learning, forgetting, becoming alert and returning to calm. The team then clarified the mechanisms that induced these electrical characteristics.

The development of artificial intelligence (AI) techniques has been rapidly advancing in recent years and has begun impacting our lives in various ways. Although AI processes information in a manner similar to the human brain, the mechanisms by which human brains operate are still largely unknown. Fundamental brain components, such as neurons and the junctions between them (synapses), have been studied in detail. However, many questions concerning the brain as a collective whole need to be answered. For example, we still do not fully understand how the brain performs such functions as memorization, learning, and forgetting, and how the brain becomes alert and returns to calm. In addition, live brains are difficult to manipulate in experimental research. For these reasons, the brain remains a “mysterious organ.” A different approach to brain research—in which materials and systems capable of performing brain-like functions are created and their mechanisms are investigated—may be effective in identifying new applications of brain-like information processing and advancing brain science.

Neuromorphic Network

The joint research team recently built a complex brain-like network by integrating numerous silver (Ag) nanowires coated with a polymer (PVP) insulating layer approximately 1 nanometer in thickness. A junction between two nanowires forms a variable resistive element (i.e., a synaptic element) that behaves like a neuronal synapse. This nanowire network, which contains a large number of intricately interacting synaptic elements, forms a “neuromorphic network.” When a voltage was applied to the neuromorphic network, it appeared to “struggle” to find optimal current pathways (i.e., the most electrically efficient pathways). The research team measured the processes of current pathway formation, retention and deactivation while electric current was flowing through the network and found that these processes always fluctuate as they progress, similar to the human brain’s memorization, learning, and forgetting processes. The observed temporal fluctuations also resemble the processes by which the brain becomes alert or returns to calm. Brain-like functions simulated by the neuromorphic network were found to occur as the huge number of synaptic elements in the network collectively work to optimize current transport, in other words, as a result of self-organized, emergent dynamic processes.

The research team is currently developing a brain-like memory device using the neuromorphic network material. The team intends to design the memory device to operate using fundamentally different principles than those used in current computers. For example, while computers are currently designed to spend as much time and electricity as necessary in pursuit of absolutely optimum solutions, the new memory device is intended to make a quick decision within particular limits even though the solution generated may not be absolutely optimum. The team also hopes that this research will facilitate understanding of the brain’s information processing mechanisms.

This project was carried out by an international joint research team led by Tomonobu Nakayama (Deputy Director, International Center for Materials Nanoarchitectonics (WPI-MANA), NIMS), Adrian Diaz Alvarez (Postdoctoral Researcher, WPI-MANA, NIMS), Zdenka Kuncic (Professor, School of Physics, University of Sydney, Australia) and James K. Gimzewski (Professor, California NanoSystems Institute, University of California Los Angeles, USA).
This research was published in Scientific Reports, an open access journal, on October 17, 2019.

Reference: “Emergent dynamics of neuromorphic nanowire networks” by Adrian Diaz-Alvarez, Rintaro Higuchi, Paula Sanz-Leon, Ido Marcus, Yoshitaka Shingaya, Adam Z. Stieg, James K. Gimzewski, Zdenka Kuncic and Tomonobu Nakayama, 17 October 2019, Scientific Reports.
DOI: 10.1038/s41598-019-51330-6

https://www.teslarati.com/tesla-cybertruck-christmas-sighting/

Tesla Cybertruck welcomed home by next-gen Roadster as it returns to Design Center

THE TESLA CYBERTRUCK HEADS TO THE DESIGN CENTER. (CREDIT: CHARLES R JONES II/FACEBOOK)


A recent sighting of the Tesla Cybertruck has shown that even the electric car maker’s most daunting vehicle comes home for the holidays. Recorded in Hawthorne, CA, the Cybertruck sighting depicts the massive all-electric vehicle returning to its home at Tesla’s Design Center, joining what appeared to be a 1:1 Cybertruck model and the next-generation Roadster.

Footage of the heavyweight zero-emissions beast was initially posted on Facebook by Tesla enthusiast Charles R Jones II, and later shared on Twitter by Tesla Owners of San Joaquin Valley founder @FamilyFirstJ. The clip was brief, and a look at the license plate of the Cybertruck reveals that it was the same prototype driven by Elon Musk during his dinner with friends at an upscale Malibu restaurant.

FamilyFirstJ@FamilyFirstJ

Saw this on FB the @Tesla driving into thre design studio.



 

Similar to Elon Musk’s night out with the vehicle, the Cybertruck in this recent sighting seemed to sport some slight modifications on its headlights. The vehicle featured what appeared to be a solid, uniform strip of light during its unveiling event. But since being sighted on the road with Elon Musk, the Cybertruck’s headlights seemed to have been adjusted to have a more traditional look, with its left and right sides being brighter than the middle.

The Tesla Cybertruck may still be a couple of years away from production, but interest in the vehicle has remained high since its eventful unveiling. The vehicle has garnered hundreds of thousands of reservations, with Elon Musk’s most recent update hinting that 250,000 Cybertrucks have been reserved. That number was mentioned just days after the Cybertruck’s unveiling, so reservations today are likely even higher.

The Cybertruck is Tesla’s most unique vehicle to date, utilizing a stainless steel “Exoskeleton” instead of traditional stamped body panels. This gives the Cybertruck its angular, XY design. It also allows the electric car maker to produce the all-electric pickup at a lower cost, since the Exoskeleton does not require the use of a stamping press and a paint shop. The Cybertruck’s design helps Tesla achieve its aggressive pricing for the vehicle as well, which starts at less than $40,000.

The Tesla Cybertruck will be offered in three variants: an RWD configuration that starts at $39,900, a mid-tier Dual Motor AWD variant that costs $49,900 before options, and a range-topping tri-motor AWD version that starts at $69,900 before options. All of these vehicles are equipped with basic Autopilot as standard, as well as useful features like 110V/220V onboard outlets and a payload capacity of 3,500 lbs. Off-road performance also appears to be in the bag, thanks to the Cybertruck’s 35-degree approach angle, up to 16 inches of ground clearance, and 28-degree departure angle.

https://www.sciencedaily.com/releases/2019/12/191223122915.htm

For CRISPR, tweaking DNA fragments before inserting yields highest efficiency rates yet

Date:
December 23, 2019
Source:
University of Illinois at Urbana-Champaign, News Bureau
Summary:
Researchers have now achieved the highest reported rates of inserting genes into human cells with the CRISPR-Cas9 gene-editing system, a necessary step for harnessing CRISPR for clinical gene-therapy applications. By chemically tweaking the ends of the DNA to be inserted, the new technique is up to five times more efficient than current approaches.

University of Illinois researchers achieved the highest reported rates of inserting genes into human cells with the CRISPR-Cas9 gene-editing system, a necessary step for harnessing CRISPR for clinical gene-therapy applications.

By chemically tweaking the ends of the DNA to be inserted, the new technique is up to five times more efficient than current approaches. The researchers saw improvements at various genetic locations tested in a human kidney cell line, even seeing 65% insertion at one site where the previous high had been 15%.

Led by chemical and biomolecular engineering professor Huimin Zhao, the researchers published their work in the journal Nature Chemical Biology.

Researchers have found CRISPR to be an efficient tool to turn off, or “knock out,” a gene. However, in human cells, it has not been a very efficient way to insert or “knock in” a gene.

“A good knock-in method is important for both gene-therapy applications and for basic biological research to study gene function,” said Zhao, who leads the biosystems design theme at the Carl R. Woese Institute for Genomic Biology at Illinois. “With a knock-in method, we can add a label to any gene, study its function and see how gene expression is affected by cancer or changes in chromosome structure. Or for gene-therapy applications, if someone has a disease caused by a missing gene, we want to be able to insert it.”

Searching for a way to increase efficiency, Zhao’s group looked at 13 different ways to modify the inserted DNA. They found that small changes to the very end of the DNA increased both the speed and efficiency of insertion.

Then, the researchers tested inserting end-modified DNA fragments of varying sizes at multiple points in the genome, using CRISPR-Cas9 to precisely target specific sites for insertion. They found efficiency improved two to five times, even when inserting larger DNA fragments — the most difficult insertion to make.

“We speculate that the efficiency improved so much because the chemical modification to the end stabilizes the DNA we are inserting,” Zhao said. “Normally, when you try to transfer DNA into the cell, it gets degraded by enzymes that eat away at it from the ends. We think our chemical addition protects the ends. More DNA is getting into the nucleus, and that DNA is more stable, so that’s why I think it has a higher chance to be integrated into the chromosome.”

Zhao’s group already is using the method to tag essential genes in gene function studies. They purposely used off-the-shelf chemicals to modify the DNA fragments so that other research teams could use the same method for their own genetic studies.

“We’ve developed quite a few knock-in methods in the past, but we never thought about just using chemicals to increase the stability of the DNA we want to insert,” Zhao said. “It’s a simple strategy, but it works.”


Story Source:

Materials provided by University of Illinois at Urbana-Champaign, News Bureau. Note: Content may be edited for style and length.


Journal Reference:

  1. Yi Yu, Yijun Guo, Qiqi Tian, Yuanqing Lan, Hugh Yeh, Meng Zhang, Ipek Tasan, Surbhi Jain, Huimin Zhao. An efficient gene knock-in strategy using 5′-modified double-stranded DNA donors with short homology arms. Nature Chemical Biology, 2019; DOI: 10.1038/s41589-019-0432-1


University of Illinois at Urbana-Champaign, News Bureau. “For CRISPR, tweaking DNA fragments before inserting yields highest efficiency rates yet.” ScienceDaily. ScienceDaily, 23 December 2019. <www.sciencedaily.com/releases/2019/12/191223122915.htm>.

https://phys.org/news/2019-12-large-area-flexible-near-infrared-light-emitting-diodes.html

Large-area and flexible near-infrared light-emitting diodes

Figure (A) shows uniform electroluminescence from a large-area flexible perovskite light-emitting diode developed by the research team. Figure (B) shows illumination with a near-infrared perovskite light-emitting diode on the back of the fist, which allows for the imaging of subcutaneous blood vessels. Credit: Nature Photonics

Infrared LEDs are useful for optical communications and covert illumination, and are commonly found in remote controls and security camera setups. They are generally small point sources, which limits their use if larger-area illumination is required in close proximity, for instance, on a wearable device.

A research team led by Prof TAN Zhi Kuang from the Department of Chemistry and the Solar Energy Research Institute of Singapore (SERIS), NUS has developed high-efficiency, near-infrared LEDs that can cover an area of 900 mm², using low-cost solution-processing methods. This is several orders of magnitude larger than the sizes achieved in other efforts, and opens up a range of interesting new applications. Their devices employ a novel perovskite-based semiconductor, a direct-bandgap material capable of strong light emission. By using a new device architecture, the research team is able to precisely tune the injection of electrons and holes (negative and positive charges) into the perovskite, such that a balanced number of opposite charges can meet and give rise to efficient light generation. The team also found that this improvement allowed large-area devices to be made with significantly higher reproducibility.

Mr ZHAO Xiaofei, a Ph.D. student on the research team said, “We found that the hole-injection efficiency is a significant factor that affects the performance of the devices. By using an organic semiconductor with a shallower ionization potential as part of the device structure, we were able to improve the hole injection and achieve charge balance. This allowed our devices to emit light at efficiencies (external quantum efficiency of 20%) close to their theoretical limit, and additionally reduced the device-to-device performance variation, hence enabling the realization of much larger devices.”

Prof Tan said, “Some of the technologies that our LEDs could enable may include covert illumination in security cameras or augmented reality/virtual reality eye-tracking technologies. In particular, we have demonstrated that our LEDs could be suited for applications involving subcutaneous deep-tissue illumination, such as in wearable health-tracking devices.”

“These materials could also be developed to emit light in the full range of visible colors. They could therefore be applied in newer generations of flat-panel electronic displays,” he added.




More information: Xiaofei Zhao et al. Large-area near-infrared perovskite light-emitting diodes, Nature Photonics (2019). DOI: 10.1038/s41566-019-0559-3

Journal information: Nature Photonics