http://www.kurzweilai.net/clearing-out-the-clutter-senolytic-drugs-improve-vascular-health-in-mice

Clearing out the clutter: ‘senolytic’ drugs improve vascular health in mice

Reduced calcification of plaques on blood-vessel walls
February 11, 2016

Mayo Clinic researchers have conducted the first study showing that repeated treatments to remove senescent cells (cells that stop dividing due to age or stress) in mice improve age-related vascular conditions — and may possibly reduce cardiovascular disease and death.

The researchers intermittently gave the mice a cocktail of two senolytic drugs (ones that selectively induce cell death): dasatinib (a cancer drug, trade name Sprycel) and quercetin*. The drugs cleared (killed off) senescent cells in naturally aged and atherosclerotic mice. The treatment did not reduce the size of plaques in mice with high cholesterol, but did reduce calcification of existing plaques on the interior of vessel walls.**

The findings appear online (open access) in Aging Cell.

“Our finding that senolytic drugs can reduce cardiovascular calcification is very exciting, since blood vessels with calcified plaques are notoriously difficult to reduce in size, and patients with heart-valve calcification currently do not have any treatment options other than surgery,” says Jordan Miller, Ph.D., Mayo cardiovascular surgery researcher and senior author of the paper.

“While more research is needed, our findings are encouraging that one day removal of senescent cells in humans may be used as a complementary therapy along with traditional management of risk factors to reduce surgery, disability, or death resulting from cardiovascular disease.”

The coauthors include two researchers from Newcastle University. The research was supported by the National Institutes of Health, Mayo Clinic Center for Regenerative Medicine, and the Connor Group and Noaber Foundation. Drs. Kirkland, Tchkonia, Zhu, Pirtskhalava, and Ms. Palmer have a financial interest related to the research.

* Quercetin is found in many fruits, vegetables, leaves and grains. It can be used as an ingredient in supplements, beverages, or foods. — Wikipedia

** Prior studies at Mayo showed chronic removal of the cells from genetically altered mice can alter or delay many of these conditions, and short-term treatment with drugs that remove senescent cells can improve the function of the endothelial cells that line the blood vessels. This study, however, looked at the structural and functional impacts of cell clearance on blood vessels over time, using a unique combination of drugs. Naturally aged mice were 24 months old when treatment began, and the drugs were administered orally over the following three months. A separate set of mice with high cholesterol was allowed to develop atherosclerotic plaques for four months and was then treated with the drug cocktail for two months. — Mayo Clinic


Abstract of Chronic senolytic treatment alleviates established vasomotor dysfunction in aged or atherosclerotic mice

Rationale: While reports suggest a single dose of senolytics may improve vasomotor function, the structural and functional impact of long-term senolytic treatment is unknown.

Objective: To determine whether long-term senolytic treatment improves vasomotor function, vascular stiffness, and intimal plaque size and composition in aged or hypercholesterolemic mice with established disease.

Methods and Results: Senolytic treatment (intermittent treatment with Dasatinib + Quercetin via oral gavage) resulted in significant reductions in senescent cell markers (TAF+ cells) in the medial layer of aorta from aged and hypercholesterolemic mice, but not in intimal atherosclerotic plaques. While senolytic treatment significantly improved vasomotor function (isolated organ chamber baths) in both groups of mice, this was due to increases in nitric oxide bioavailability in aged mice and increases in sensitivity to NO donors in hypercholesterolemic mice. Genetic clearance of senescent cells in aged normocholesterolemic INK-ATTAC mice phenocopied changes elicited by D+Q. Senolytics tended to reduce aortic calcification (alizarin red) and osteogenic signaling (qRT-PCR, immunohistochemistry) in aged mice, but both were significantly reduced by senolytic treatment in hypercholesterolemic mice. Intimal plaque fibrosis (picrosirius red) was not changed appreciably by chronic senolytic treatment.

Conclusions: This is the first study to demonstrate that chronic clearance of senescent cells improves established vascular phenotypes associated with aging and chronic hypercholesterolemia, and may be a viable therapeutic intervention to reduce morbidity and mortality from cardiovascular diseases.

http://evolllution.com/technology/tech-tools-and-resources/how-technology-makes-a-difference-in-scaling-personalization/

How Technology Makes a Difference in Scaling Personalization

Higher education institutions need to leverage technological tools and systems to deliver the personalized experience today’s students expect.

Personalization of service and experience is an expectation of today’s consumers, but higher education institutions have been slow to adapt to this reality. However, as students begin to act more and more like customers, colleges and universities need to start delivering the personalized experience students expect. This can be a challenge, given the number and diversity of students enrolling every year, but technology can help. In this interview, Lige Hensley reflects on the importance of delivering personalization at scale in today’s postsecondary environment and shares his thoughts on the role technology plays in helping institutions do this successfully.

The EvoLLLution (Evo): Why is it so important for colleges and universities to deliver students a personalized experience?

Lige Hensley (LH): Personalization is important in any industry because customers today like to feel like they are more than a number.

In education, I think this is even more important. At its core, a person’s postsecondary education is very much a personal experience. Higher education leaders know this and even publicize it when touting their faculty-to-student ratios. At our school, we’re doing a lot of work across the college to bring more personalization to our students. From our deployment of “one stop” registration centers to our use of online tutoring tools—not to mention a few of the projects we have in the works that haven’t been announced yet—we’re trying to cater to the needs of our students. Giving students what they need to be successful—sometimes before they themselves realize the need—is the ultimate personalized experience.

Evo: What are some of the key challenges to delivering a personalized experience at scale?

LH: There are two key challenges I see for us when it comes to scaling this level of personalization.

First, it’s critical to know where to personalize. We have a large and complex environment with lots of touch points with the students. We also have limited resources at our disposal. Knowing where and when to personalize in order to get the maximum benefit from the effort is key. While I don’t think we’ve mastered this just yet, we’re making real progress. We’ve started gathering data on college initiatives in order to verify they have the impact we expect. Our approach on the technology side is to literally keep every bit of data for future analysis. This has allowed us to gain real business knowledge from data (or the lack of data) that we would not have been able to do even five years ago. It also allows us to run “what if” scenarios. We can build behavior models from our data and then simulate changes to get an idea of the impact an initiative may have.
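As a purely illustrative sketch of that kind of “what if” modeling (the model form and every number below are invented for the example, not the college’s actual analytics):

```python
# A toy "what if" scenario: a behavior model fit from historical data,
# perturbed to estimate an initiative's impact before committing to it.
# The model form and all numbers are invented for illustration.
def retention_rate(advising_visits_per_term):
    """Toy model: student retention as a function of advising visits."""
    base, lift_per_visit, cap = 0.62, 0.05, 0.90
    return min(base + lift_per_visit * advising_visits_per_term, cap)

current = retention_rate(1)    # today's average behavior
proposed = retention_rate(3)   # simulated "what if": a new advising push
print(f"estimated retention lift: {proposed - current:+.0%}")  # +10%
```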

The second challenge, related to the above point, is getting the right people together to analyze the data we have gathered. We gather approximately 100 million rows of data, across hundreds of different systems, every day. It’s unreasonable to expect one or two people to have the right knowledge across all this information in order to analyze it. You have to get the correct mix of knowledge and insight together—along with significant technical skill and technology—in order to get value from the data you have.

Evo: How important is technology to the effort of delivering personalization at scale?

LH: Technology is fundamental to delivering students a personalized experience at scale. The retail industry figured this out a long time ago. They can process a truly staggering amount of data quickly and get the right advertisement in the right hands. There are plenty of case studies on this and it’s clear that this does work. We’ve taken a lot of lessons that have been learned from other industries and brought them in-house to assist with our efforts. The only way to sift through over a billion rows of data every 10 days is to use technology efficiently and focus on those results that matter.

Technology even has a big impact in some areas that are less obvious. Whether it’s “smart” digital signs, systems that find and bring to bear the most efficient queuing algorithm, or identifying students who haven’t registered for the right classes in order to graduate on time, the possibilities are really endless.

It’s all a matter of picking the right effort, the right technology and sometimes thinking outside the box a bit.

Evo: What are the benefits and drawbacks of working with vendors on this?

LH: There are some clear benefits of working with vendors to achieve scale. Vendors can bring in new approaches, new technology or new techniques to the table. Good vendors can be the honeybees for good ideas. We collaborate with our good vendors nearly every day.

Unfortunately, the higher education industry also has more than its share of vendors that underperform. I think many schools settle with underperforming vendors because of a perceived lack of options, convenience or complacency. We challenge all of our vendors (and I’m willing to bet that if they are reading this, those folks are nodding their heads in agreement with this!). We are always looking for better ideas and new ways to improve or make things cheaper. One of our mottos within the technology team is “Bigger, Better, Faster.” Not only are we here to make things better for the college (and ultimately the student) but we require the same thing from our vendors, or we will find new ones.

http://blog.wolfram.com/2016/02/11/on-the-detection-of-gravitational-waves-by-ligo/


On the Detection of Gravitational Waves by LIGO

February 11, 2016 — Jason Grigsby, Software Engineer, Software Engineering

Earlier today at a press conference held at the National Science Foundation headquarters in Washington, DC, it was announced that the Laser Interferometer Gravitational-Wave Observatory (LIGO) confirmed the first detection of a gravitational wave. The image reproduced below shows the signal read off from the Hanford, Washington, LIGO installation. The same signal could be seen in the data from the Livingston, Louisiana, site as well. While this signal may not seem like much, it is one of the most important scientific discoveries of our lifetime.

The gravitational wave event GW150914 observed by LIGO Hanford

B. P. Abbott et al., Phys. Rev. Lett. 116, 061102 (2016)


A hundred years ago, Einstein’s theory of general relativity predicted the existence of gravitational waves—little ripples in spacetime that carry energy and information. But it has taken a century of technological progress to provide us the practical means to confirm the theory. LIGO’s historic discovery has not just confirmed Einstein’s theory—it also provides us with a first peek into an entirely new way of conducting astronomy. So what are gravitational waves and how does LIGO measure them? To understand gravitational waves, let’s first take a look at waves we are all familiar with: the electromagnetic spectrum.

The electromagnetic spectrum

Astronomy has relied on various kinds of electromagnetic radiation—light, radio waves, X-rays, microwaves—to see into space and learn new things. Early in recorded history, people would watch the movement of the stars and planets at night. Much later, the first optical telescopes were invented and we could magnify images enough to see that the planets had their own moons. We started to build telescopes that could see into different regions of the electromagnetic spectrum, and we started learning more and more about stars, galaxies, pulsars, quasars, the distribution of dark matter, the expansion of the universe, and much more beyond. All this was done with the electromagnetic spectrum, one rainbow of waves that we have carefully searched for new information. The iconic Hubble Space Telescope is a shining example of how much we have learned by improving how we observe in the electromagnetic spectrum:

Hubble Space Telescope

Today, we have unlocked access to a completely different spectrum, one dependent on the force of gravity rather than the electromagnetic force, that can provide us a new window into the universe. The key to that access is LIGO, with its two installations at Livingston, Louisiana, and Hanford, Washington.

Laser Interferometer Gravitational-Wave Observatory in Livingston, LA, and Hanford, WA

If these locations seem isolated, that’s on purpose. It turns out that you can “listen” to a great many things with a sensitive-enough interferometer. But many forms of vibration in the Earth show up as noise in the data that is produced. In particular, when I had conversations with people who worked at LIGO, they seemed very frustrated at the regular timing of trucks entering and leaving a logging facility near the Livingston site. To try to reduce local sources of noise, the locations were picked to be as isolated as possible. This is LIGO’s Hanford, Washington, site:

LIGO's Hanford, WA, site
Courtesy Caltech/MIT/LIGO Laboratory

This does not look like a typical observatory. It looks far more like a particle accelerator, but by sending a split laser beam many times down the four-kilometer arms you see here, scientists have captured a change in the length of the arm that is equivalent to a fraction of the diameter of an atom, and thus detected gravitational waves. To understand how this works, it’s best to look into the nature of general relativity itself and see what gravitational waves actually are. Let’s start by posing a question for which general relativity gives the answer.

What would happen if the Sun disappeared?

What if the Sun magically disappeared? In this hypothetical case, I am talking about the Sun being replaced with empty space. This would change the gravitational field in the solar system dramatically. This might seem like a far cry from gravitational waves, but by looking at how different scientific theories treat this scenario, we can get to the motivation behind general relativity and gravitational waves.

First, let’s examine the behavior of light emitted by the Sun. We know that light travels at about 300 million meters per second, and the Earth is about 150 million kilometers from the Sun.

Behavior of light emitted by the Sun

By dividing the distance by the speed of light, we see that it takes a little over eight minutes for light to get from the Sun to the Earth.

Time it takes for light to reach the Earth from the Sun

That means that when the Sun disappears, there are eight minutes’ worth of light still streaming toward the Earth.
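The same computation in a few lines of Python (the original post does this in the Wolfram Language shown in the figures above; the constants are the rounded values quoted in the text):

```python
# Light travel time from the Sun to the Earth.
SPEED_OF_LIGHT = 3.0e8        # metres per second (approximate)
SUN_EARTH_DISTANCE = 1.5e11   # metres (about 150 million kilometres)

travel_time = SUN_EARTH_DISTANCE / SPEED_OF_LIGHT
print(f"{travel_time:.0f} s = {travel_time / 60:.1f} minutes")  # 500 s = 8.3 minutes
```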

So it would take eight minutes after the disappearance of the Sun for the Earth to go dark. Now let’s see what happens with the Earth’s orbit. The Earth is orbiting the Sun and is bound to it through gravity. If the Sun suddenly ceased to exist, how soon would the path the Earth is traveling on change? Let’s look at Newton’s law of gravity, which governs how the Earth moves around the Sun:

Newton's Law of Gravity

In this treatment of gravity, there is no accounting for time. If the Sun were to suddenly disappear, one of the masses in the formula would go to zero, meaning the force would go to zero instantly, and the Earth would cease orbiting and shoot off into space.
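A few lines of Python make the point concrete (standard constants; the zero-mass call stands in for the vanished Sun):

```python
# Newton's law of gravity: F = G * m1 * m2 / r**2.
# Time appears nowhere in the formula, so if m1 drops to zero,
# the force drops to zero at that same instant.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
SUN_MASS = 1.989e30      # kg
EARTH_MASS = 5.972e24    # kg
DISTANCE = 1.5e11        # m

def gravitational_force(m1, m2, r):
    """Magnitude of the Newtonian gravitational force, in newtons."""
    return G * m1 * m2 / r**2

print(gravitational_force(SUN_MASS, EARTH_MASS, DISTANCE))  # ~3.5e22 N
print(gravitational_force(0.0, EARTH_MASS, DISTANCE))       # 0.0, instantly
```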

Here’s an animation showing what would happen according to Newton’s law. The blue Earth is shown orbiting the Sun. Yellow circles are used to represent light being emitted by the Sun. When the Sun disappears, the light that was already emitted by the Sun is still hitting the Earth for several more minutes. However, according to Newton’s theory of gravity, the Earth instantly stops orbiting where the Sun was. So here, light waves take time to carry the information of the now missing Sun, but gravitationally, that information is available instantaneously.

Classic gravity model

At the time that Einstein was looking into related questions, this feature was unique to the theory of gravity. This leads to the question: why do changes in everything else take time to propagate from one location to another, but with gravity propagation is instantaneous? What makes gravity unique?

Einstein’s answer is that gravity is not unique, but the underlying theory needs to be changed. He postulated his theory of general relativity, where gravitational information also propagates at the speed of light via gravitational waves. We explore that in the next section on Einstein’s theory of general relativity.

Einstein’s theory of general relativity

Einstein’s theory of general relativity has two parts:

  • Gravity is the effect of a curved spacetime on the motion of matter and energy.
  • The distribution of matter and energy impacts the shape of spacetime.

Let’s take a look at both points.

Gravity is curvature

The first point is about gravity not being thought of as a force, but the natural outcome of objects moving in a curved spacetime. Imagine a large ball with two ants sitting at the equator. The black line is the equator, the two red points represent the ants, and the arrows point in the direction the ants are headed. Note that the arrows are parallel to each other at this point:

Large ball with two "ants" sitting at the equator

If the two ants travel north on the ball, they are initially moving in parallel. But as they approach the North Pole, they converge. By the time they reach the top, the ants are in the same location.

The "ants" meeting at the top

This illustrates some of the ideas behind what general relativity refers to as “parallel transport”. If you think of north as the direction of time, you can see how curvature can bring two objects together. Similarly, the curvature of spacetime is what draws us to the Earth and keeps the Earth orbiting the Sun.

Matter determines the shape of spacetime

For the second point, imagine spacetime as a sheet of tightly stretched fabric.

Envisioning spacetime as a sheet of tightly stretched fabric

If you place a ball onto the middle of that sheet, the sheet will take time to deform and find a settled state. In this case, the ball represents the presence of matter and the sheet is spacetime being curved by the ball’s placement.

The consequences of general relativity have been confirmed repeatedly over the last hundred years. When general relativity came out, it explained the precession of Mercury’s orbit that could not previously be accounted for. In 1919, during a solar eclipse, Arthur Eddington measured how the Sun deflects the light of distant stars, a key prediction of general relativity. Still, until this announcement, there had been no direct confirmation of gravitational waves themselves.

Gravitational waves

To understand how LIGO detects gravitational waves, let’s step back and consider an example using Newtonian physics. In the graphic below, imagine the two red spheres are distant stars orbiting each other and the rabbit is an observer where you and I are. According to Newton’s theory of gravity, each distant star will pull on the rabbit, as shown by the blue arrows. The forces will sum to the red arrow, indicating the rabbit is drawn to the center of mass of the distant orbiting stars.

Rabbit being drawn to the center of mass of the distant orbiting stars

The above analysis is treating the rabbit as a point rather than an extended object. In reality, objects have height, depth, and width. When aligned as shown below, the top of the rabbit is pulled a little more toward the upper star and the bottom is pulled a little more toward the lower star. Thus a stretching will occur.

Rabbit being stretched

As the distant stars orbit each other, so too will the direction of the stretching change. When the two stars are aligned horizontally, the rabbit is stretched horizontally.

When the two stars are aligned horizontally, the rabbit is stretched horizontally

So as these stars orbit, the rabbit is stretched according to the orientation of the stars. In this Newtonian model, though, the stretching is perfectly aligned with the orientation of the stars, because according to Newtonian gravity, it takes zero time for a change in gravity to get to another location.
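The stretching is just the difference in Newtonian pull across the body’s extent. A minimal sketch, with invented values for the star’s mass, its distance, and the size of the “rabbit”:

```python
# Tidal stretching in Newtonian gravity: the near side of an extended
# body is pulled harder than the far side. All values are invented.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_STAR = 2.0e30     # kg, one of the distant stars
R_CENTER = 1.0e15   # m, distance from the star to the observer
HALF_SIZE = 0.5     # m, half the height of the "rabbit"

def accel(m, r):
    """Newtonian gravitational acceleration at distance r, in m/s^2."""
    return G * m / r**2

tidal = accel(M_STAR, R_CENTER - HALF_SIZE) - accel(M_STAR, R_CENTER + HALF_SIZE)
print(f"differential (tidal) acceleration: {tidal:.2e} m/s^2")  # ~2.7e-25
```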

With general relativity, however, the motion of the stars puts ripples into spacetime itself, and those ripples take time to propagate out to the rabbit. Still, when the gravitational waves interact with an object, they have a similar stretching effect. This animation, based on a common LISA image, illustrates the gravitational waves produced by two orbiting objects.

Modern gravity model

The source code for the above animation can be found here.

To get back to our previous question, “What if the Sun disappeared?”, under general relativity a gravitational wave would be generated that propagates the new information of the now empty space out to where the Earth is. In other words, the path of the Earth’s orbit would continue to circle the now missing Sun until the last rays of light got to the Earth and the gravitational wave with that new information had passed.

Modern gravity model

Detecting gravitational waves with LIGO

With our understanding that passing gravitational waves will stretch objects in a rotating manner, we now move on to detecting those gravitational waves. To do that, LIGO uses an interferometer. Below, I have provided an animation of an interferometer based on a Wolfram Demonstration.

Interferometer

On the left, a coherent light source sends a beam of light to a half-silvered mirror. The split beam then travels down two different arms and is reflected by mirrors at the end of each path. The beam is then recombined and sent to a detector. If the two paths are different in length, the two beams will be out of phase when they are recombined, decreasing the overall intensity of the beam. Thus you can measure a change in intensity of the beam to determine a change in distance. What you see at the Hanford LIGO site are four-kilometer arms of an interferometer. A beam is passed back and forth several times before being recombined to measure a change in the distance traveled smaller than the nucleus of an atom.
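As a toy model of that measurement, here is the idealized two-beam intensity formula in Python. The pass count and displacement are illustrative only; real LIGO arms use resonant cavities and a more sophisticated readout:

```python
import math

# Idealized Michelson interferometer: a path-length difference between
# the arms produces a phase difference, which changes the recombined
# beam's intensity.
WAVELENGTH = 1.064e-6   # m, the infrared laser wavelength LIGO uses

def relative_intensity(arm_length_change_m, n_passes):
    """Recombined intensity as a fraction of its maximum.

    Each pass adds a round trip, so the optical path difference is
    2 * arm_length_change_m * n_passes; one wavelength of path
    difference corresponds to a phase shift of 2*pi.
    """
    phase = 2 * math.pi * (2 * arm_length_change_m * n_passes) / WAVELENGTH
    return math.cos(phase / 2) ** 2

# A displacement far smaller than an atomic nucleus barely moves the
# needle even after hundreds of passes, which is why LIGO's optics and
# noise isolation have to be so extraordinary:
print(relative_intensity(1e-18, n_passes=300))  # prints 1.0: below double precision
```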

LIGO's Hanford, WA, site
Courtesy Caltech/MIT/LIGO Laboratory

Given the vanishingly small earthly effects of gravitational waves, it takes some of the most energetic events in the universe to generate gravitational waves that are detectable by LIGO. The most likely to be detected are generated by binary black holes with a total mass of about 10–100 times that of the Sun. Indeed, we heard at the LIGO press conference earlier today that the detected waves were from the merger of two black holes of approximately 65 solar masses total. During the course of spiraling together and merging, three solar masses’ worth of energy were radiated out in a fraction of a second. The actual merger of these two black holes happened approximately 1.3 billion light years away, meaning that these two black holes merged before multicellular life came about on Earth.
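For a sense of scale, converting those three solar masses to energy with E = mc² gives roughly 5 × 10^47 joules:

```python
# Energy radiated as gravitational waves in the merger, via E = m * c**2.
SOLAR_MASS = 1.989e30   # kg
C = 2.998e8             # m/s, speed of light

energy = 3 * SOLAR_MASS * C**2
print(f"{energy:.2e} J")   # ~5.4e47 joules, emitted in a fraction of a second
```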

So a hundred years after Einstein formulated general relativity, one of its last fundamental predictions has been confirmed. However, as much as this detection is a success for LIGO, the LIGO Scientific Collaboration, and the physics community in general, it is not just a conclusion for theoretical physics. It’s the beginning of a new era in astronomy. As the tools and methods at LIGO improve, more information about sources of gravitational waves, their locations, and their physics will become available. Possible future projects such as the Laser Interferometer Space Antenna will extend the range of detection as well as the range of frequencies available for gravitational wave observations, possibly allowing us to see the results of mergers of supermassive black holes that occur when galaxies collide.

http://www.mcgill.ca/newsroom/channels/news/same-gene-can-encode-proteins-divergent-functions-258507

Same gene can encode proteins with divergent functions


Research may help explain why humans have fewer genes than expected
Published: 11 February 2016

It’s not unusual for siblings to seem more dissimilar than similar: one becoming a florist, for example, another becoming a flutist, and another becoming a physicist.

Something of the same diversity applies to the “brood” of proteins produced from any single gene in human cells, a new study led by scientists at Dana-Farber Cancer Institute, University of California, San Diego School of Medicine, and McGill University has found. In the first large-scale systematic study of the question, the researchers found that sibling proteins – “protein isoforms” encoded by the same gene – often play radically different roles within tissues and cells, however alike they may be structurally.

The research, published online today by the journal Cell, stands to have a powerful effect on the understanding of human biology and the direction of future research. For one, it may help explain how the mere 20,000 protein-coding genes in the human genome – fewer than are found in the genome of a grape – can give rise to creatures of such enormous complexity. Scientists know that the number of different proteins in human cells, thought to be upwards of 100,000, far exceeds the number of genes, but many questions have remained. Do most of those proteins have a unique function in the cell, or do their roles sometimes overlap? The discovery that different protein isoforms encoded by the same gene may have divergent functions on a larger scale than realized suggests that they vastly multiply what our genes are capable of.

Diversity, clue to disease

This diversity also suggests that each protein isoform needs to be studied individually to understand its normal role and its potential involvement in disease, the study authors state.

“Research into cancer-related proteins, for example, often focuses on the most prevalent isoforms in a given cell, tissue, or organ,” said co-senior author David E. Hill, PhD, associate director of the Center for Cancer Systems Biology (CCSB) at Dana-Farber. “Since less-prevalent protein isoforms may also contribute to disease, and may prove to be valuable targets for drug therapy, their role should be examined as well; and to do that properly, we also need comprehensive clone collections covering all expressed isoforms.”

Previous functional studies of protein isoforms have generally been done on a gene-by-gene basis. Furthermore, researchers frequently compared the activity of a gene’s “minor” isoforms to that of its predominant isoform in a particular tissue. The new study approached the functional question from a larger perspective – by gathering multiple protein isoforms of hundreds of genes and testing which other human proteins each of them specifically interacts with.

Alternative splicing

One of the ways that cells produce multiple protein isoforms from individual genes is a process called alternative splicing. Most human genes contain multiple segments called exons, separated by intervening non-coding sequences called introns. In the cell, different combinations of these individual exons are “glued” or spliced together to generate a final expressed gene product; thus, a single gene can encode a set of distinct, but related protein isoforms, depending on the specific exons that are spliced. One isoform, for example, may result from splicing exons A-B-C-D of a particular gene. Another may arise from the skipping of exon C, resulting in a product with only exons A-B-D.
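A toy enumeration makes the combinatorics concrete. Assuming, for illustration only, that splicing simply keeps an ordered subset of exons (real splicing is tightly regulated and far more constrained):

```python
from itertools import combinations

# Toy model of alternative splicing: each isoform keeps an ordered
# subset of the gene's exons. This sketch only illustrates how quickly
# the possible products multiply.
exons = ["A", "B", "C", "D"]

def possible_isoforms(exons, min_len=2):
    """All order-preserving exon selections of at least min_len exons."""
    return [
        "-".join(combo)
        for n in range(min_len, len(exons) + 1)
        for combo in combinations(exons, n)
    ]

products = possible_isoforms(exons)
print(len(products), products)
# 11 products, including 'A-B-C-D' and the exon-C-skipping 'A-B-D'
# from the example in the text.
```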

For the new study, researchers devised a technique called “ORF-Seq” that allowed them to identify and clone large numbers of alternatively spliced gene products in the form of open reading frames (ORFs), and use them to produce multiple protein isoforms for hundreds of genes.

Of the roughly 20,000 genes in the human genome that code for proteins, researchers concentrated on about eight percent. Using ORF-Seq, they ultimately created a collection of 1,423 protein isoforms for 506 genes, of which more than 50 percent were entirely novel gene products. They put 1,035 of these protein isoforms through a mass screening test that paired them with 15,000 human proteins to see which would interact.

“The exciting discovery was that isoforms coming from the same gene often interacted with different protein partners,” remarked Gloria Sheynkman, PhD, of Dana-Farber and one of the lead authors. “This suggests that the isoforms play very different roles within the cell” – much as siblings with different careers often interact with different sets of friends and co-workers.

The researchers found that in most cases, related isoforms shared less than half of their protein partners. Sixteen percent of related isoforms shared no protein partners at all. “From the perspective of all the protein interactions within a cell, related isoforms behave more like distinct proteins than minor variants of one another,” Tong Hao, of Dana-Farber and one of the lead authors, asserted.
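Partner-set overlap of that kind is often summarized with a Jaccard index (shared partners divided by total distinct partners). A minimal sketch with placeholder partner names, not data from the study:

```python
# Comparing the interaction partners of two isoforms of the same gene.
# Partner names are placeholders, not data from the study.
isoform_a = {"P1", "P2", "P3", "P4"}
isoform_b = {"P3", "P4", "P5", "P6", "P7"}

shared = isoform_a & isoform_b
union = isoform_a | isoform_b
print(f"shared {len(shared)} of {len(union)} partners "
      f"(Jaccard = {len(shared) / len(union):.2f})")
# -> shared 2 of 7 partners (Jaccard = 0.29): less than half shared,
#    like most related isoform pairs in the study.
```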

Intriguingly, isoforms that stem from a minuscule difference in DNA – a difference of just one letter of the genetic code – sometimes had starkly different roles within the cell, researchers found. At the same time, related isoforms that are structurally quite different may have very similar roles.

Quite often, the interaction partners of related isoforms vary from tissue to tissue, the researchers found. In the liver, for example, an isoform may interact with one set of proteins. In the brain, a relative of that isoform may interact with a largely different set of protein partners.

“A more detailed view at protein interaction networks, as presented in our paper, is especially important in relation to human diseases,” said co-senior author Lilia Iakoucheva of UC San Diego. “Drastic differences in interaction partners among splicing isoforms strongly suggest that identification of the disease-relevant pathways at the gene level is not sufficient. This is because different variants could participate in different pathways leading to the same disease or even to different diseases. It’s time to take a deeper dive into the networks that we are building and analyzing.”

“Cells from different tissues in our body share the same genome,” said co-senior author Yu Xia of McGill University. “Yet their molecular wiring diagrams are far more divergent than previously thought. This information is crucial for understanding biology and combating disease.”

Widespread Expansion of Protein Interaction Capabilities by Alternative Splicing, Cell (2016), by Xinping Yang, Jasmin Coulombe-Huntington, Shuli Kang, …, Lilia M. Iakoucheva, Yu Xia, Marc Vidal, http://www.cell.com/cell/abstract/S0092-8674%2816%2930043-5

The work was supported by the National Human Genome Research Institute (grants P50HG004233 and U01HG001715); the Ellison Foundation; the National Cancer Institute (grant R33CA132073); the Krembil Foundation; a Canada Excellence Research Chair Award; an Ontario Research Fund-Research Excellence Award; the Eunice Kennedy Shriver National Institute of Child Health and Human Development (grant R01HD065288); the National Institute of Mental Health (grants R01MH091350, R01MH105524, and R21MH104766); the National Science Foundation (grant CCF-1219007); the Natural Sciences and Engineering Research Council of Canada (NSERC) (grant RGPIN-2014-03892); the Canada Foundation for Innovation (grant JELF-33732); the Canada Research Chairs Program; the National Institutes of Health (training grant T32CA009361); an NSERC fellowship; the National Institute of General Medical Sciences (grant R01GM105431); and a Swedish Research Council International Postdoc Grant.

Dana-Farber Cancer Institute: www.dana-farber.org

McGill University: www.mcgill.ca

http://www.iclarified.com/53873/sleep-20-for-apple-watch-introduces-major-improvements-to-sleep-analysis-engine

Sleep++ 2.0 for Apple Watch Introduces Major Improvements to Sleep Analysis Engine

The Sleep++ app for Apple Watch has been updated with some major improvements to its sleep analysis engine.

Use your Apple Watch to track and measure your sleep! Sleep++ takes advantage of the motion tracking capabilities of your Apple Watch to closely measure both the duration and quality of your sleep. The better you understand how well you are sleeping, the better able you are to make changes to your routines that benefit your rest.

How It Works:
1) Wear your Apple Watch while you sleep.
2) Tell Sleep++ when you start sleeping.
3) Tell Sleep++ when you wake up.

Sleep++ will then analyze your night’s sleep and give you a detailed breakdown of how well you slept and how restless you were. It optionally integrates with HealthKit, providing a safe and private way to share your sleep data with other health or fitness apps.


What’s New In This Version:
Building on the initial success of Sleep++, I’m delighted to announce Sleep++ 2.0. This update is centered on major improvements to the sleep analysis engine, making it far better at helping you understand your sleep.

– The sleep analysis algorithm has been completely overhauled to allow for more fine-grained sleep-type characterization. It can now accurately differentiate between deep sleep, light sleep, restlessness and wakefulness (a rough sketch of this kind of motion-based staging follows this list). These algorithmic improvements greatly increase the usefulness of the data collected.
– The HealthKit support for the app has been greatly improved beyond the basic data previously collected. Detailed analysis of your night is now saved in Health for further use.
– A night detail screen now provides a clearer explanation of the quality of each night, telling you when you slept well and when you were restless.
– You can now trim nights from the detail screen, useful for when you forget to stop the sleep analysis when you wake up in the morning.
– The app now more fully supports time zone changes, letting you get a more consistent view of your sleep as you travel.
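Sleep++’s actual algorithm isn’t public. Purely as an illustration of what threshold-based motion staging can look like, here is a minimal sketch in which every threshold and sample value is invented:

```python
# Toy motion-threshold sleep staging: bucket each minute's average
# wrist-motion magnitude into a rough sleep state. All thresholds and
# samples are invented; Sleep++'s real algorithm is not public.
THRESHOLDS = [          # (upper bound on mean motion, state label)
    (0.02, "deep sleep"),
    (0.10, "light sleep"),
    (0.40, "restless"),
]

def classify_minute(mean_motion):
    """Map one minute's mean accelerometer magnitude to a sleep state."""
    for upper_bound, label in THRESHOLDS:
        if mean_motion < upper_bound:
            return label
    return "awake"

night = [0.01, 0.015, 0.08, 0.30, 0.55, 0.02]   # fake per-minute samples
print([classify_minute(m) for m in night])
# ['deep sleep', 'deep sleep', 'light sleep', 'restless', 'awake', 'deep sleep']
```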

You can download Sleep++ from the App Store for free.

http://techcrunch.com/2016/02/11/google-is-reportedly-building-a-standalone-vr-headset-not-powered-by-a-pc-or-smartphone/

Google Is Reportedly Building A Standalone VR Headset Not Powered By A PC Or Smartphone

Google may be preparing a consumer virtual reality headset for a release as early as this year that defies existing categorizations and doesn’t rely on a PC or mobile phone as the central brain, the WSJ reports.

Rumors have been bubbling up on the company’s VR hardware ambitions over the last few weeks. The Financial Times reported a few days ago that Google would be releasing a mobile-based Samsung Gear VR competitor in the near future, possibly at Google I/O in May.

The WSJ report today suggests that Google will be building this untethered headset utilizing “high-powered” chips from Movidius that will power the device and its associated head-tracking technology made possible by external cameras.

Interestingly, Movidius just announced a partnership a couple weeks ago involving its Myriad 2 processing platform, detailing that the company was working with Google “to bring machine intelligence to devices.”

“The technological advances Google has made in machine intelligence and neural networks are astounding. The challenge in embedding this technology into consumer devices boils down to the need for extreme power efficiency, and this is where a deep synthesis between the underlying hardware architecture and the neural compute comes in,” said Movidius CEO Remi El-Ouazzane in the blog post from last month.

There have been other significant movements from Google in the past several months on the consumer side of virtual reality, though most have been devoted to broader platforms like Cardboard and Project Tango, which give third-party VR and AR hardware manufacturers and content creators a system to build upon.

While Project Tango is still in its earlier stages, with Lenovo currently building some of the first Tango devices to be released this summer, Google Cardboard is already strapped on the faces of eager consumers, who have ordered 5 million of the bare-bones devices.

These rumors also come in the wake of some interesting changes at the company over the past several weeks. Clay Bavor, Google’s VP for Product Management, left his work on other Google products to exclusively focus on managing the company’s virtual reality offerings.

Some interesting job postings on Google’s site also raised questions, detailing a need for a VR Hardware Engineering Technical Lead Manager who would lead a team in building “multiple” consumer electronic devices while also directing “system integration of high-performance, battery powered, highly constrained consumer electronics products.”

A battery-powered HMD that isn’t attached to a PC or mobile phone would definitely be a major development in an industry largely dominated by developer and consumer devices situated at either end.

For a company like Google with such clear ties to the mobile ecosystem, at first impression this feels like a bit of an odd move to me. Mobile VR offers major accessibility to users that are already sporting well-powered smartphones, and tethered VR offers an unparalleled experience that prioritizes crazy resolutions and rapid frame rates.

We may just have to wait and see. While the WSJ reports that the device could be coming later this year, other sources told the paper that the development was in its early stages and Google could still choose not to release it.

http://www.theweek.co.uk/iphone-7/62138/iphone-7-concept-sketch-suggests-liquidmetal-handset-could-charge-wirelessly

iPhone 7: concept sketch suggests ‘liquidmetal’ handset could charge wirelessly

Feb 11, 2016

Designer imagines new phone with full-screen gaming, wireless charging and full water resistance


Concept sketches for upcoming iPhones tend to come in two distinct flavours: those that try to imagine what the next generation will actually be and those that simply investigate what it could potentially be.

Designer Herman Haidin’s drawings definitely fall into the latter category.

In a series of sketches published on Behance.net, the Ukrainian designer imagines an iPhone constructed from “liquidmetal” – a material Apple acquired the patent for in 2010 – which could potentially make the handset completely waterproof and pave the way for wireless charging.

So what is liquidmetal? Many of us already own a piece of it: every iPhone box sold today contains a small piece of the material in the form of the SIM-ejector prong.

Liquidmetal is an alloy with “an amorphous atomic structure and a multi-component chemical composition”, tech site BGR says. “The special metal has high tensile strength, corrosion resistance, water resistance and better elasticity.”

In his concept sketch, Haidin imagines the iPhone 7 with a layer of liquidmetal incorporated just beneath the screen to act as the cooling system for the new handset and help its internal components stay dry.


His drawing proposes that the handset will also have a five-inch display but be only three millimetres thick, less than half that of the iPhone 6S. A little fanciful, but it is within the realms of possibility according to some leaks, which suggest Apple is pushing to make its next devices slimmer than ever.

The concept also adds wireless charging to the new phone and full-screen gaming, both of which would certainly be eye-catching features.

Previous rumours suggested Apple is interested in making the upcoming versions of its flagship handset waterproof, and the liquidmetal design studies are not the only whispers so far suggesting an exotic material could be used to construct the chassis of the phone.

The design of the soon-to-be outgoing 6S – particularly the rear of the handset – has divided critics due to its protruding camera module and exposed antenna bands breaking up what would otherwise be a completely flush surface.

Previously, it was hinted that a new anodised metal could be used to make the phone in the future, meaning the antenna bands would no longer be exposed on the exterior casing. Now, though, a new report by Business Korea says Apple could be looking at incorporating ceramic materials into the design of the iPhone as part of its drive to remove the bands.

Alphr says the material would be used on the back of the device – the area in need of a tidy up. It notes that rival handsets, such as the OnePlus X, use ceramic materials to give the phone a “premium feel” and that using ceramics would be a good place to start if Apple wished to make its iPhone 7 more stylish than the 6 models. However, it says the likelihood of ceramic iPhones is “a mixed bag”, regardless of strong rumours indicating the handset will look different.

“While the source for the rumour seems to be nothing more than a prediction, it does raise an interesting possibility. Handsets such as the OnePlus X have shown that ceramic materials offer a higher level of premium feel – the sort of feel you’d associate with Apple products. If Apple wants to do something new and make the iPhone 7 even more stylish than its predecessor, giving it a ceramic finish would be a good place to start.”

More likely, says Alphr, are the rumours that the iPhone 7 will feature a waterproof design and ditch protruding cameras and the device’s rear antenna lines.

The idea that the iPhone might be waterproof gathered momentum after Apple filed patents for ports that can eject water.

Titled “Electronic Device With Hidden Connector”, the patent “shows a connector covered by a self-healing elastomer,” Alphr says. “Diagrams included in the patent show the elastomer allowing the penetration of a probe, and self-sealing once the probe is removed. The port is shielded from the elements at all times, but still allows quick and easy access for charging, headphones or anything else.”

 

iPhone 7: patents hint at touchless, button-free handset

5 February

iPhone users could someday be able to navigate and interact with their handsets without touching them.

Newly discovered patents filed by Apple show the company is interested in building on the 3D touch features introduced with the iPhone 6S, expanding what can be done with the force-sensitive feature on future handsets.

The first patent describes a radical idea called “proximity and multi-touch sensor detection and demodulation” – essentially, technology to allow users to navigate their handsets without even touching them.

According to ValueWalk, the system would use photodiodes or other proximity-sensing hardware connected to the iPhone’s current force-sensitive 3D touch capabilities. This would allow users to control the handset merely by hovering their fingers over the display and pushing virtual buttons created by bouncing infrared emitted from LEDs back into the photodiodes. Apparently, it could improve the device’s battery life as well as saving space.

It’s an interesting feature, but ValueWalk says it most likely will not be coming to the iPhone 7 in September. Apple has yet to roll out the comparatively simple 3D touch on many of its devices and, according to AppleInsider, releasing hover-touch now could “muddy the waters”. The patent was only filed in March 2015, too.

The second user interaction-based patent outlines plans to introduce 3D touch to the home button. AppleInsider says this would work by arranging electrodes underneath the home button that would connect when pressed and introduce new options depending on the amount of pressure exerted.

“For example, an iPhone can be unlocked by a light touch with a registered finger, while a deep press unlocks the device and executes an operation like opening an app,” it says. “Contextual commands might also be mapped to distinct pressure levels, such as replying to a recently received message with a selection of intelligent responses”.

While most of the focus is on the functions such an addition could bring to the iPhone, TechnoBuffalo says it could have a massive impact on the handset’s design as well. Introducing a home button with force-sensitive touch could mean it no longer needs to be a physical feature and instead could be a flat surface with haptic feedback – curious, considering early iPhone 7 rumours hinted it may have an edge-to-edge display. Using 3D touch like this could mean the home button becomes a location on the screen rather than a physical element.

iPhone 7 leak suggests ‘thinnest, smoothest handset ever’

04 February

New details regarding the Apple iPhone 7 have reportedly been leaked.

According to MacRumors, a source close to Apple with a reliable track record of leaking accurate information has given the clearest indication yet of what to expect when the iPhone 7 is revealed in September.

According to the source, the chassis of the phone will be extremely similar to that of the iPhone 6, with two noticeable revisions.

The first change will be an alteration to the camera which will result in a slightly different look around the aperture. Apple is said to be interested in using a thinner and smaller camera module on the iPhone 7, and doing so would remove the bump the aperture currently creates on the body of the iPhone 6 and 6S, meaning the back of the phone would be smooth and flush.

Forbes says there are three more advantages to introducing the new, flush design. At the moment, when the iPhone is placed on a table, the protruding module becomes irritating as the bump makes the handset wobble. The change also means that such a delicate part will no longer be a continual point of impact when putting the phone down and that third-party case designs will be much simpler.

The other revision is said to be a change to how the antenna bands work on the handset. At the moment, the bands on the iPhone 6 models create white borders on the rear of the phone, meaning the top and bottom of the handset are divided awkwardly. Critics have called the current design unsightly – a criticism Apple may have taken to heart.

According to the source, the antenna bands will still remain exposed on the exterior of the phone, but the bar dividing the rear of the handset visually into three sections will be removed, making for a smooth, all-metal finish.

There have been whispers in the past that Apple is not satisfied with the design of the iPhone 6. Last year it was reported that the company had filed a patent for a new anodised metal design for future handsets which would allow wireless signals to be received strongly with a full metal case. iPhoneHacks touted it as a possible addition to the iPhone 7.

The source did not add any information regarding the dimensions of the handset, but it is expected that the iPhone 7 will be slimmer than the iPhone 6 and 6S. Strong rumours have suggested that Apple will controversially remove the headphone jack from the device in order to reduce the thickness of the phone substantially.

Regarding the rumour that Apple could introduce a ‘dual-camera’ equipped iPhone 7 Plus in September, MacRumors also has new information.

According to sources within the supply chain, Taiwanese, Japanese and Chinese camera manufacturers have sent Apple examples of potential iPhone dual camera setups for testing ahead of potential inclusion in the iPhone range.

It is not clear at this time how a change to the internals of the camera to make it flush with the bodywork will factor into the dual-camera rumours.

In December, an episode of the US TV show 60 Minutes said the tech giant has more than 800 engineers working on the cameras and detailed the intricate process behind creating a smartphone camera for an Apple device. Around the same time, early rumours of the next iPhone coming with a dual-camera set-up were beginning to do the rounds on the internet. Since then, the reports have modified to accommodate the rumoured third iPhone 7.

 

http://www.zdnet.com/article/to-save-the-ipad-apple-needs-to-copy-microsoft/

To save the iPad, Apple needs to copy Microsoft

Apple needs to take a long, hard look at why Microsoft is managing to shift so many Surface tablets despite the iPad being a much stronger, better-known brand.

iPad Pro running OS X El Capitan (mockup)

Apple might be able to sell more iPad Pro tablets than Microsoft shifted Surface units — IDC estimates numbers in the region of 2 million for the iPad Pro compared to some 1.6 million for the Surface — but with iPad sales falling, Microsoft may hold the key to the future of the iPad.

While the iPad is in decline, Microsoft is seeing strong Surface revenue growth.

I remember when I was an enormous fan of the iPad. What wasn’t there to like? After all, it was a giant iPhone that brought with it the promise of so much innovation and reinvention. But a few years down the line, iPad sales are waning because the iPad concept has stagnated. It’s still little more than a giant iPhone. I gave up on mine after I switched to the iPhone 6 Plus because, while the 5.5-inch display is a ways short of the 9.7 inches of the standard iPad, there just wasn’t anything I wanted to do that I couldn’t now do on my iPhone.

Sure, the extra screen real estate was nice, but it meant carrying an iPhone and an iPad with me. But the more I used my iPhone, the less I felt the desire to carry the iPad, and pretty soon my iPad took permanent residence under my Mac’s keyboard (until a family member adopted it).

Why bother carrying two devices about — three if you include the keyboard — when one will do?

The iPad’s Achilles’ heel is the operating system and the fact that it’s restricted to running apps, most of which are just revamped versions of iPhone apps.

The iPad was nice and exciting while it was new, but it never became an essential.

Now compare this to the Microsoft Surface. Here is a device that excites me, and not just because of the hardware (though I have to admit that the hardware is nice). What excites me the most about the Surface is its ability to run a full operating system, which in turn gives me the freedom and flexibility to run full applications such as Adobe Photoshop or Microsoft Office, as opposed to the watered-down versions available from the app store. At the same time, it gives me the option to run cut-down apps from the Windows app store if that’s what I want.

Which brings me to the direction that the iPad — or at least the iPad Pro — should go in, and that’s an OS X tablet.

And why not? After all, Apple has all the pieces in place to come out with an OS X tablet. Take one new MacBook, pry out the keyboard and duct tape the display to the body and there you have it, an OS X-powered MacBook tablet. OK, I’m sure Apple would do things a little more elegantly than that, but the innovation crammed inside Apple’s current MacBook looks eerily similar to that of a tablet.

“What about the keyboard?” I hear you ask. OS X has a built-in on-screen keyboard ready to use.

Want to add more ports? There’s a dongle for that!

Yes, OS X would need some work to make it touch-ready, but Apple has been working towards making OS X look and feel — and work — more like iOS during recent years.

The power and the battery life it would bring to the table would make it a killer device for BYOD and enterprise.

I know, I know, Tim Cook said that customers didn’t want a combined iPad and MacBook, but following last quarter’s financial results, it’s becoming clear that customers don’t really want the iPad as it is anymore either.

Now I understand why Apple likes the idea of its tablet running iOS as opposed to OS X — think app revenue — but that Microsoft is having so much success with the Surface is a clear indicator that when it comes to tablets, consumers seem to be gravitating towards a full-blown operating system.

I know that everyone’s mileage will vary, and I’m also well aware of the fact that there are a lot of people who feel that the iPad works for them just as it is. But it’s hard for me to ignore the facts, specifically that iPad sales are falling, and that the Surface, which is a much weaker brand than the iPad, is making significant headway. I think that by ditching Windows RT and instead choosing to focus on the full-fat version of Windows, Microsoft has made the better, braver choice, and is now being rewarded with strong sales.

It’s time for Apple to be brave and make a pro tablet running OS X.

http://www.ctvnews.ca/sci-tech/gravitational-wave-discovery-to-open-a-window-on-the-universe-1.2773762

Gravitational wave discovery to open ‘a window on the universe’

Published Thursday, February 11, 2016 10:00AM EST
Last Updated Thursday, February 11, 2016 10:51AM EST

U.S. physicists say they’ve detected gravitational waves for the very first time, marking a discovery that proves one of Albert Einstein’s last unverified theories about the universe.

Einstein theorized that gravitational waves are tiny ripples in the fabric of space-time created by all objects moving through time and space. Researchers from the Laser Interferometer Gravitational-Wave Observatory revealed Thursday that they’ve detected some of these ripples, created by the collision of two gigantic black holes.

“We did it,” LIGO laboratory executive director David Reitze said at an announcement at the U.S. National Science Foundation.

The discovery is expected to open up a whole new avenue for physicists to examine the nature and history of the universe, by allowing them to observe the gravitational properties of massive bodies in space.

Reitze compared the impact of the discovery to Galileo’s invention of the telescope.

“We’re opening a window on the universe,” he said.

Reitze said the gravitational wave was detected at the LIGO facility in Louisiana, on Sept. 14, 2015.

The discovery comes 100 years after Einstein first predicted the existence of gravitational waves, as part of his theory of relativity.

What is a gravitational wave?

The simplest way to grasp the idea of gravitational waves is to picture all of space-time as a big, stretchy trampoline. (Space-time, by the way, is our three-dimensional existence, plus time. Just as something can be located at an X, Y and Z coordinate in the three-dimensional world, it also has a time coordinate in space-time).

If all of space-time is like a trampoline, then a large object, such as the sun, is like a bowling ball weighing down a spot on the trampoline. For comparison’s sake, the Earth would be like a marble, spiraling in circles around the large depression made by the bowling ball (i.e. the sun). These objects send out gravitational waves as they move across space-time, like ripples moving through the fabric of a trampoline. They’re very, very minute, but they’re there, and scientists believe they’ve learned how to detect them.

But because the waves are so hard to detect, the LIGO researchers had to look for something that would make a massive wave, such as the collision of two black holes. It would be like putting two elephants on the trampoline at the same time.

How did they detect the waves?

According to Einstein’s theory, gravitational waves stretch and squeeze reality ever so slightly, so that we can’t even see it happening. However, light doesn’t play by the same rules, so it actually appears to warp as it travels past large objects – although it’s actually reality changing, not the light. Astronomer Arthur Eddington confirmed that element of Einstein’s theory in 1919, when he observed light “bending” as it travelled past the sun.

The LIGO project is based on the theory that a gravitational wave will bend all of reality, except light. LIGO set up twin detectors in Livingston, La., and Hanford, Wash., with lasers shining down four-kilometre tunnels. Researchers then waited for a massive gravitational wave to pass through Earth, warping space-time ever so slightly, and thereby changing the relative distance travelled by the lasers. According to Einstein’s theory, the change would be incredibly minute, so the instruments had to be very precise in order to detect and measure it.

Is it for real?

The scientific community has already had a false positive when it comes to detecting gravitational waves, so scrutiny of this new discovery will be intense.

In 2014, a team of Harvard researchers claimed to have detected gravitational waves triggered by the so-called “Big Bang” that theoretically started the universe. However, that discovery was debunked early last year, when closer analysis revealed that cosmic dust was responsible for the phenomenon.

On Thursday, Reitze said his team is convinced of the accuracy of the evidence they’ve collected.

“It took us months of careful checking, rechecking, analysis, (and) looking at every piece of data,” he said. “We’ve convinced ourselves.”