Sperm counts of men in North America, Europe, Australia and New Zealand are plunging, according to a new analysis published Tuesday.
A group of scientists in Oregon has successfully modified the genes of embryos using CRISPR, a cut-and-paste gene-editing tool.
The experiments, which have not yet been subject to peer review, were conducted by biologist Shoukhrat Mitalipov and colleagues at Oregon Health & Science University in Portland, MIT Technology Review reported. Mitalipov conducted the experiments on dozens of single-celled embryos, which were discarded before they could progress very far in development, according to Technology Review. This is the first time that scientists in the United States have used this approach to edit the genes of embryos.
The CRISPR/Cas9 gene-editing system is a simple “cut and replace” method for editing precise spots on the genome. CRISPRs are long stretches of DNA that are recognized by molecular “scissors” called Cas9; by inserting CRISPR DNA near target DNA, scientists can theoretically tell Cas9 to cut anywhere in the genome. Scientists can then swap a replacement gene sequence into the place of the snipped sequence. The replacement sequence then gets automatically incorporated into the genome by natural DNA repair mechanisms.
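As a loose illustration only, the cut-and-replace logic can be pictured as string surgery on a DNA sequence. The sequences below are invented, and real CRISPR editing is a molecular process, not a text operation:

```python
# Illustrative analogy only: CRISPR/Cas9 editing pictured as find-and-replace
# on a DNA string. The guide, target and replacement sequences are made up,
# not real genomic data.

def crispr_edit(genome: str, guide: str, replacement: str) -> str:
    """Cut the genome at the guide sequence and splice in a replacement."""
    cut_site = genome.find(guide)          # the Cas9 "scissors" locate the target
    if cut_site == -1:
        return genome                      # no target found: genome unchanged
    # Excise the targeted stretch; the "repair" step splices in the fix
    return genome[:cut_site] + replacement + genome[cut_site + len(guide):]

genome = "ATGCCGTAGGCTTACG"
edited = crispr_edit(genome, guide="TAGGC", replacement="TAACC")
print(edited)  # ATGCCGTAACCTTACG
```

The analogy also hints at the failure modes described later in the article: if the guide sequence matches the wrong spot, the cut lands in the wrong place.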
In 2015, a group in China used CRISPR to edit several human embryos that had severe defects, though none were allowed to gestate very long before being discarded. If rumors are to be believed, the new results are more promising than those earlier efforts, according to Technology Review. The Chinese technique led to genetic changes in some, but not all, of the cells in the embryos, and CRISPR sometimes cut in the wrong place in the DNA. According to Technology Review, the new technique was used in dozens of embryos that were created for in vitro fertilization (IVF), using the sperm of men who had severe genetic defects.
In general, editing the germ line — meaning sperm, eggs or embryos — has been controversial, because it means permanently changing the DNA that is passed on from one generation to the next. Some scientists have called for a ban on germ-line editing, saying the approach is incredibly risky and ethically dubious.
The adhesive, described today (July 27) in a new study in the journal Science, sticks to wet surfaces, including the surface of a beating heart. It isn’t toxic to cells, which gives it an advantage over many surgical glues. It’s not available in operating rooms just yet — its developers say that could take years — but it could potentially be approved much more quickly for applications such as closing skin wounds.
The slug-inspired glue is “very stretchy and very tough,” said Jianyu Li, a postdoctoral researcher at Harvard University’s Wyss Institute for Biologically Inspired Engineering and the lead author of the study. Li and his colleagues applied the adhesive to a blood-soaked, beating pig heart and found that it worked better than any other surgical glue on the market.
The inspiration for the glue came from Arion subfuscus, a large and slimy species of slug found in North America and western Europe. These slugs excrete a sticky, yellow-orange slime that adheres well to wet surfaces.
That characteristic intrigued Li and his colleagues, and they set to work making an artificial version of the slime. The key, Li told Live Science, is that the slime is made up of long, straight chains of molecules called polymers, which are also bound to each other — a phenomenon called cross-linking. Cross-linking makes materials strong, but the slug slime has the added advantage of having two types of cross-link bonds. Some are covalent bonds, which hold molecules together by sharing electrons. Others are ionic bonds, in which one molecule hands over electrons to another. These “hybridized” cross-links make the slug mucus both tough and stretchy, Li said.
The team mimicked this structure using artificial polymers layered onto what they called a “dissipative matrix.” The polymers provide the sticking power, Li explained, while the dissipative-matrix layer acts like a shock absorber: It can stretch and deform without rupturing.
To test the glue, the researchers applied it to pig skin, cartilage, arteries, liver tissue and hearts — including hearts that were inflated with water or air and covered in blood. The material proved extremely stretchable, stretching to 14 times its original length without breaking loose from the liver tissue. When used to patch a hole in a pig heart, the adhesive maintained its seal even when it was stretched to twice its original length tens of thousands of times, at pressures exceeding normal human blood pressure.
The researchers even applied the adhesive to the beating heart of a real pig and found that the adhesion to the dancing, bloody surface was about eight times as strong as the adhesion of any commercially available surgical glue.
The glue was also tested in living rats: The researchers simulated an emergency surgery by slicing the rats’ liver tissue and then patching the wound with either the glue or a standard blood-staunching product called Surgiflo. They found that the new adhesive was as good at stopping the blood flow as the standard product; the rats treated with the new glue experienced no additional hemorrhaging up to two weeks after the surgery. The Surgiflo-treated rats, however, sometimes suffered from tissue death and scar tissue, the researchers reported. The rats treated with the slime-inspired glue did not experience these side effects.
Whether the new glue makes it to the operating room depends on much more extensive clinical testing, Li said, but the adhesive could make its debut as a new method of dressing external wounds on a shorter timeline than that.
“We have a company working on trying to push our material to clinical applications, and we have a patent pending,” Li said.
Robots are reliable in industrial settings, where recognizable objects appear at predictable times in familiar circumstances. But life at home is messy. Put a robot in a house, where it must navigate unfamiliar territory cluttered with foreign objects, and it’s useless.
Now researchers have developed a new computer vision algorithm that gives a robot the ability to recognize three-dimensional objects and, at a glance, intuit items that are partially obscured or tipped over, without needing to view them from multiple angles.
“It sees the front half of a pot sitting on a counter and guesses there’s a handle in the rear and that might be a good place to pick it up from,” said Ben Burchfiel, a Ph.D. candidate in the field of computer vision and robotics at Duke University.
In experiments where the robot viewed 908 items from a single vantage point, it guessed the object correctly about 75 percent of the time. State-of-the-art computer vision algorithms previously achieved an accuracy of about 50 percent.
Burchfiel and George Konidaris, an assistant professor of computer science at Brown University, presented their research last week at the Robotics: Science and Systems Conference in Cambridge, Massachusetts.
Like other computer vision algorithms used to train robots, their robot learned about its world by first sifting through a database of 4,000 three-dimensional objects spread across ten different classes — bathtubs, beds, chairs, desks, dressers, monitors, night stands, sofas, tables, and toilets.
While more conventional algorithms may, for example, train a robot to recognize the entirety of a chair or pot or sofa or may train it to recognize parts of a whole and piece them together, this one looked for how objects were similar and how they differed.
When it found consistencies within classes, it ignored them in order to shrink the computational problem down to a more manageable size and focus on the parts that were different.
For example, all pots are hollow in the middle. When the algorithm was being trained to recognize pots, it didn’t spend time analyzing the hollow parts. Once it knew the object was a pot, it focused instead on the depth of the pot or the location of the handle.
“That frees up resources and makes learning easier,” said Burchfiel.
Extra computing resources are used to figure out whether an item is right-side up and also infer its three-dimensional shape, if part of it is hidden. This last problem is particularly vexing in the field of computer vision, because in the real world, objects overlap.
To address it, scientists have mainly turned to the most advanced form of artificial intelligence, which uses artificial neural networks, or so-called deep-learning algorithms, because they process information in a way that’s similar to how the brain learns.
Although deep-learning approaches are good at parsing complex input data, such as analyzing all of the pixels in an image, and predicting a simple output, such as “this is a cat,” they’re not good at the inverse task, said Burchfiel. When an object is partially obscured, a limited view — the input — is less complex than the output, which is a full, three-dimensional representation.
The algorithm Burchfiel and Konidaris developed constructs a whole object from partial information by finding complex shapes that tend to be associated with each other. For instance, objects with flat square tops tend to have legs. If the robot can only see the square top, it may infer the legs.
“Another example would be handles,” said Burchfiel. “Handles connected to cylindrical drinking vessels tend to connect in two places. If a mug-shaped object is seen with a small nub visible, it is likely that the nub extends into a curved, or square, handle.”
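The researchers’ actual model reasons over 3D shape representations, but the gist of inferring unseen parts from parts that tend to co-occur can be sketched in a few lines. Everything here — the part names, the tiny “training set” and the threshold — is invented for illustration:

```python
# Loose sketch of the idea (not the authors' algorithm): guess hidden parts
# from parts that co-occur with the visible ones in training examples.
# Part names, data and threshold are all hypothetical.

from collections import Counter

# Hypothetical training set: each object is a set of observed parts
training = [
    {"flat_square_top", "legs"},
    {"flat_square_top", "legs"},
    {"flat_square_top", "drawer"},
    {"cylinder_body", "handle"},
]

def infer_hidden(visible_parts, training, threshold=0.5):
    """Return unseen parts that co-occur with the visible ones often enough."""
    matches = [obj for obj in training if visible_parts <= obj]
    if not matches:
        return set()
    counts = Counter(p for obj in matches for p in obj - visible_parts)
    return {p for p, c in counts.items() if c / len(matches) >= threshold}

print(infer_hidden({"flat_square_top"}, training))  # {'legs'}
```

Seeing only a flat square top, the sketch predicts legs, because two of the three matching training objects had them — the same kind of inference as guessing the hidden handle of a pot.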
Once trained, the robot was then shown 908 new objects from a single viewpoint. It achieved correct answers about 75 percent of the time. Not only was the approach more accurate than previous methods, it was also very fast. After a robot was trained, it took about a second to make its guess. It didn’t need to look at the object from different angles and it was able to infer parts that couldn’t be seen.
This type of learning gives the robot a visual perception that’s similar to the way humans see. It interprets objects with a more generalized sense of the world, instead of trying to map knowledge of identical objects onto what it’s seeing.
Burchfiel said he wants to build on this research by training the algorithm on millions of objects and perhaps tens of thousands of types of objects.
“We want to build this into a single robust system that could be the baseline behind a general robot perception scheme,” he said.
Toys that teach kids to code are as hot in 2017 as Cabbage Patch Kids were in 1983, and for good reason. For today’s generation of children, learning how to program is even more important than studying a second language. Though there are many robot kits on the market that are designed for this purpose, Lego Boost is the best tech-learning tool we’ve seen for kids. Priced at a very reasonable $159, Boost provides the pieces to build five different robots, along with an entertaining app that turns learning into a game that even preliterate children can master.
Boost comes with a whopping 847 different Lego bricks, along with one motor (which also serves as a dial control on some projects), one light/IR sensor and the Move Hub, a large white and gray brick with two built-in motors that serves as the central processing unit for the robot. The Hub connects to your tablet via Bluetooth, to receive your programming code, and to the other two electronic components via wires.
You can build five different robots with the kit: a humanoid robot named Vernie, Frankie the Cat, the Guitar 4000 (which plays real music), a forklift called the “M.I.R. 4” and a robotic “Auto Builder” car factory. Lego said that it expects most users to start with Vernie, who looks like a cross between film robots Johnny 5 and Wall-E and offers the most functionality.
To get started building and coding, kids have to download the Boost app to their iPad or Android tablets. You’ll need to have the app running and connected to the Move Hub every time you use the robot. All of the processing and programming takes place on your mobile device, and the sound effects (music, the robot talking) will come out of your tablet’s speaker, not the robot itself.
Lego really understands how young children learn and has designed the perfect interface for them. The Boost app strikes a balance among simplicity, depth and fun. Boost is officially targeted at 7- to 12-year-olds, but the software is so intuitive and engaging that, within minutes of seeing the system, my 5-year-old was writing his own programs and begging me to extend his bedtime so he could discover more.
Neither the interface nor the block-based programming language contains any written words, so even children who can’t read can use every feature of the app. When you launch Boost, you’re first shown a cartoonish menu screen that looks like a room with all the different possible robots sitting in different spots. You just tap on the image of the robot you want to build or program, and you’re given a set of activities that begin with building the most basic parts of the project and coding them.
As you navigate through the Boost program, you need to complete the simplest levels within each robot section before you can unlock the more complicated ones. Any child who has played video games is familiar with and motivated by the concept of unlocking new features by successfully completing old ones. This level-based system turns the entire learning process into a game and also keeps kids from getting frustrated by trying advanced concepts before they’re ready.
Boost runs on modern iPads or Android devices that have at least a 1.4-GHz CPU, 1GB of RAM, Bluetooth LE, and Android 5.0 or above. (I also downloaded Boost to a smartphone, but the screen was so small that it was difficult to make out some of the diagrams.)
Unfortunately, Lego doesn’t plan to list the program in Amazon’s app store, which means you can’t easily use Boost with a Fire tablet, which is the top-selling tablet in the U.S. I was able to sideload Boost onto my son’s Fire 7 Kids Edition, but most users won’t have the wherewithal to do that. Lego makes its Mindstorms app available to Fire devices, so we hope the company will eventually see fit to do the same with Boost.
When you load the Boost app for the first time, you need to complete a simple project that involves making a small buggy before you can build any of the five robots. This initial build is pretty fast, because it involves only basic things like putting wheels onto the car, programming it to move forward and attaching a small fan in the back.
Like the robot projects that come after it, the buggy build is broken down into three separate challenges, each of which builds on the prior one. The first challenge involves building the buggy and programming it to roll forward. Subsequent challenges involve programming the vehicle’s infrared sensor and making the fan in the back move.
After you’ve completed all three buggy challenges, the five regular robots are unlocked. Each robot has several levels within it, each of which contains challenges that you must complete. For example, Vernie’s first level has three challenges that help you build him and use his basic functions, while the second level has you add a rocket launcher to his body and program him to shoot.
If a challenge includes building or adding blocks to a robot, it gives you step-by-step instructions that show you which blocks go where, and only after you’ve gone through these steps do you get to the programming portion.
When it’s time to code, the app shows animations of a finger dragging the coding blocks from a palette on the bottom of the screen up onto the canvas, placing them next to each other and hitting a play button to run the program. This lets the user know exactly what to do at every step, but also offers the ability to experiment by modifying the programs at the end of each challenge.
In Vernie’s case, each of the first-level challenges involves building part of his body. Lego Design Director Simon Kent explained to us that, because a full build can take hours, the company wants children to be able to start programming before they’re even finished. So, in the first challenge, you build the head and torso, then program him to move his neck, while in the later ones, you add his wheels and then his arms.
Like almost all child-coding apps, Boost uses a pictorial, block-based programming language that involves dragging interlocking pieces together, rather than keying in text. However, unlike some programming kits we’ve seen, which require you to read text on the blocks to find out what they do, Boost’s system is completely icon-based, making it ideal for children who can’t read (or can’t read very well) yet.
For example, instead of seeing a block that says, “Move Forward” or “Turn right 90 degrees,” you see blocks with arrows on them. All of the available blocks are located on a palette at the bottom of the screen; you drag them up onto the canvas and lock them together to write programs.
Some of the icons on the blocks are less intuitive than an arrow or a play button, but Boost shows you (with an animation) exactly which blocks you need in order to complete each challenge. It then lets you experiment with additional blocks to see what they do.
What makes the app such a great learning tool is that it really encourages and rewards discovery. In one of the first Vernie lessons, there were several blocks with icons showing the robot’s head at different angles. My son was eager to drag each one into a program to see exactly what it did (most turned the neck).
Programs can begin either with a play button, which just means “start this action,” or with a condition, such as shaking Vernie’s hand or putting an object in front of the robot’s infrared sensor. You can launch a program either by tapping on its play/condition button or on the play button in the upper-right corner of the screen, which runs every program you have on screen at once.
Because the programs are mostly so simple, there are many reasons why you might want to have several running at once. For example, when my son was programming for the guitar robot, he had a program that played a sound when the slider on the neck passed over the red tiles, another one for when it passed over the green tiles and yet another for the blue tiles. In a complex adult program, these would be handled by an if/then statement, but in Boost, there are few loops (you can use them in the Creative Canvas free-play mode if you want), so making several separate programs is necessary.
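The contrast between Boost’s one-program-per-trigger style and a conventional if/then handler can be sketched in a few lines of code. Everything here — the color names and the sound strings — is hypothetical, chosen only to mirror the guitar example:

```python
# Hypothetical sketch of how Boost-style "one program per condition" maps
# onto a conventional if/then dispatcher. Colors and sounds are made up.

# Boost style: three independent programs, each tied to its own trigger
programs = {
    "red":   lambda: "play drum sound",
    "green": lambda: "play guitar sound",
    "blue":  lambda: "play voice clip",
}

def slider_passed(color: str) -> str:
    """Conventional equivalent: one handler dispatches on every color."""
    if color in programs:
        return programs[color]()
    return "silence"

print(slider_passed("green"))  # play guitar sound
```

In Boost, each entry in that dictionary would sit on the canvas as its own little program with its own trigger block; the adult version folds them all into a single conditional.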
While the program(s) run, each block lights up as it executes, so you know exactly what’s going on at any time. You can even add and remove blocks, and the programs will keep on executing. I wish all the adult programming tools I use at work had these features!
Though you write programs as part of each of the challenges, if you really want to get creative, you need to head to the Coding Canvas mode. In each robot’s menu, to the right of the levels, there’s a red toolbox that you can tap on to write your own custom programs. As you complete different challenges that feature new functions, your Coding Canvas toolbox gets filled up with more code blocks that you can use.
My son had an absolute blast using the Guitar 4000’s toolbox mode to write a program in which moving the slider over the different colors on the guitar neck would play different clips of his voice.
Users who want to build their own custom robots and program them can head over to the Creative Canvas free-play mode by tapping on the open-window picture on the main menu. There, you can create new programs with blocks that control exactly what the Move Hub, IR sensor and motor do. So, rather than showing an icon with a block of a guitar playing like it does from within the Guitar 4000 menus, Boost shows a block with a speaker on it, because you can choose any type of sound from your custom robot.
In both Creative Canvas and Coding Canvas modes, Lego makes it easy to save your custom programs. The software automatically assigns names (which, coincidentally, are the names of famous Lego characters) and colorful icons to each of your programs for you, but children who can read and type are free to alter the names. All changes to programs are autosaved, so you never have to worry about losing your work.
As you might expect from Lego, Boost offers a best-in-class building experience with near-infinite expandability and customization. The kit comes with 847 Lego pieces, which include a combination of traditional-style bricks, with their knobs and grooves, and Technic-style bricks that use holes and plugs.
The building process for any of the Boost robots (Vernie, Frankie the Cat, M.I.R. 4, Guitar 4000 and Auto Builder) is lengthy but very straightforward. During testing, we built both the Vernie and Guitar 4000 robots, and each took around 2 hours for adults to complete. Younger kids, who have less patience and worse hand-eye coordination, will probably need help from an adult or older child, but building these bots provides a great opportunity for parent/child bonding time. My 5-year-old (2 years below the recommended age) and I had a lot of fun putting the guitar together.
As part of the first challenge (or first several challenges), the app gives you a set of step-by-step instructions that show which bricks to put where. The illustrated instruction screens are very detailed and look identical to the paper Lego instructions you may have seen on any of the company’s kits. I just wish that the app made these illustrations 3D so one could rotate them and see the build from different angles like you can on UBTech’s Jimu Robots kit app.
All of the bricks connect together seamlessly and will work with any other bricks you already own. You could also easily customize one of the five recommended Boost robots with your own bricks. Imagine adorning Vernie’s body with pieces from a Star Wars set or letting your Batman minifig ride on the M.I.R. 4 forklift.
I really love the sky-blue, orange and gray color scheme Lego chose for the bricks that come with Boost, because it has an aesthetic that looks both high-tech and fun. From the orange wings on the Guitar 4000 robot to Vernie’s funky eyebrows, everything about the blocks screams “fun” and “inviting.”
At $159, the Lego Boost offers more for the money than any of the other robot kits we’ve reviewed, but it’s definitely designed for younger children who are new to programming. Older children or those who’ve used Boost for a while can graduate to Lego’s own Mindstorm EV3 kits, which start at $349 and use their own block-based coding language.
Starting at $129, UBTech’s line of Jimu robots offers a few more sensors and motors than Boost, along with a more complex programming language, but they definitely target older and more experienced kids, and to get a kit that makes more than one or two robots, you need to spend over $300. Sony’s Koov kit is also a good choice for older and more tech-savvy children, but it’s far more expensive than Boost (it starts at $199, but you need to spend at least $349 to get most features), and its set of blocks is much less versatile than Lego’s.
Tenka Labs’ Circuit Cubes start at just $59 and provide a series of lights and motors that come with Lego-compatible bricks, but these kits teach electronics skills, not programming.
The best robot/STEM kit we’ve seen for younger children, Lego Boost turns coding into a game that’s so much fun your kids won’t even know they’re gaining valuable skills. Because it uses real Legos, Boost also invites a lot of creativity and replayability, and at $159, it’s practically a steal.
It’s a shame that millions of kids who use Amazon Fire tablets are left out of the Boost party, but hopefully, Lego will rectify this problem in the near future. Parents of older children with more programming savvy might want to consider a more complex robot set such as Mindstorms or Koov, but if your kid is new to coding and has access to a compatible device, the Boost is a must-buy.
The announcement by researchers in Portland, Oregon that they’ve successfully modified the genetic material of a human embryo took some people by surprise.
With headlines referring to “groundbreaking” research and “designer babies,” you might wonder what the scientists actually accomplished. This was a big step forward, but hardly unexpected. As this kind of work proceeds, it continues to raise questions about ethical issues and how we should react.
For a number of years now we have had the ability to alter genetic material in a cell, using a technique called CRISPR.
The DNA that makes up our genome comprises long sequences of base pairs, each base indicated by one of four letters. These letters form a genetic alphabet, and the “words” or “sentences” created from a particular order of letters are the genes that determine our characteristics.
Sometimes words can be “misspelled” or sentences slightly garbled, resulting in a disease or disorder. Genetic engineering is designed to correct those mistakes. CRISPR is a tool that enables scientists to target a specific area of a gene, working like the search-and-replace function in Microsoft Word, to remove a section and insert the “correct” sequence.
In the last decade, CRISPR has been the primary tool for those seeking to modify genes – human and otherwise. Among other things, it has been used in experiments to make mosquitoes resistant to malaria, genetically modify plants to be resistant to disease, explore the possibility of engineered pets and livestock, and potentially treat some human diseases (including HIV, hemophilia and leukemia).
Up until recently, the focus in humans has been on changing the cells of a single individual, and not changing eggs, sperm and early embryos – what are called the “germline” cells that pass traits along to offspring. The theory is that focusing on non-germline cells would limit any unexpected long-term impact of genetic changes on descendants. At the same time, this limitation means that we would have to use the technique in every generation, which affects its potential therapeutic benefit.
Earlier this year, an international committee convened by the National Academy of Sciences issued a report that, while highlighting the concerns with human germline genetic engineering, laid out a series of safeguards and recommended oversight. The report was widely regarded as opening the door to embryo-editing research.
That is exactly what happened in Oregon. Although this is the first study reported in the United States, similar research has been conducted in China. This new study, however, apparently avoided previous errors we’ve seen with CRISPR – such as changes in other, untargeted parts of the genome, or the desired change not occurring in all cells. Both of these problems had made scientists wary of using CRISPR to make changes in embryos that might eventually be used in a human pregnancy. Evidence of more successful (and thus safer) CRISPR use may lead to additional studies involving human embryos.
First, this study did not entail the creation of “designer babies,” despite some news headlines. The research involved only early stage embryos, outside the womb, none of which was allowed to develop beyond a few days.
In fact, there are a number of existing limits — both policy-based and scientific — that will create barriers to implanting an edited embryo to achieve the birth of a child. There is a federal ban on funding gene-editing research in embryos; in some states, there are also total bans on embryo research, regardless of how it is funded. In addition, the implantation of an edited human embryo would be regulated under the federal human research regulations, the Food, Drug and Cosmetic Act and potentially the federal rules regarding clinical laboratory testing.
Beyond the regulatory barriers, we are a long way from having the scientific knowledge necessary to design our children. While the Oregon experiment focused on correcting a single gene tied to an inherited disease, few human traits are controlled by one gene. Anything that involves multiple genes or a gene/environment interaction will be less amenable to this type of engineering. Most characteristics we might be interested in designing — such as intelligence, personality, or athletic, artistic or musical ability — are much more complex.
Second, while this is a significant step forward in the science of the CRISPR technique, it is only one step. There is a long way to go between this and a cure for various diseases and disorders. This is not to say that there aren’t concerns. But we have some time to consider the issues before the use of the technique becomes a mainstream medical practice.
Taking into account the cautions above, we do need to decide when and how we should use this technique.
Should there be limits on the types of things you can edit in an embryo? If so, what should they entail? These questions also involve deciding who gets to set the limits and control access to the technology.
We may also be concerned about who gets to control the subsequent research using this technology. Should there be state or federal oversight? Keep in mind that we cannot control what happens in other countries. Even in this country it can be difficult to craft guidelines that restrict only the research someone finds objectionable, while allowing other important research to continue. Additionally, the use of assisted reproductive technologies (IVF, for example) is largely unregulated in the U.S., and the decision to put in place restrictions will certainly raise objections from both potential parents and IVF providers.
Moreover, there are important questions about cost and access. Right now most assisted reproductive technologies are available only to higher-income individuals. A handful of states mandate infertility treatment coverage, but it is very limited. How should we regulate access to embryo editing for serious diseases? We are in the midst of a widespread debate about health care, access and cost. If it becomes established and safe, should this technique be part of a basic package of health care services when used to help create a child who does not suffer from a specific genetic problem? What about editing for non-health issues or less serious problems — are there fairness concerns if only people with sufficient wealth can access it?
So far the promise of genetic engineering for disease eradication has not lived up to its hype. Nor have many other milestones, like the 1996 cloning of Dolly the sheep, resulted in the feared apocalypse. The announcement of the Oregon study is only the next step in a long line of research. Nonetheless, it is sure to bring many of the issues about embryos, stem cell research, genetic engineering and reproductive technologies back into the spotlight. Now is the time to figure out how we want to see this gene-editing path unfold.
The U.S. Food and Drug Administration aims to reduce nicotine levels in cigarettes while exploring measures to move smokers toward e-cigarettes, in a major regulatory shift announced on Friday that sent traditional cigarette company stocks plunging.
The move means FDA Commissioner Scott Gottlieb has thrown his regulatory weight on the side of those advocating for e-cigarettes in the debate over whether they potentially hold some public health benefits.
Shares of major tobacco companies in the United States and UK slumped in heavy trading volume, with the world’s biggest producers poised to lose about $60 billion of market value.
The move extends the deadline for e-cigarette makers to apply for FDA clearance of new products to Aug. 8, 2022, giving the companies more time to keep their products on the market before the agency begins its final review. It also gives the FDA more time to set the proper framework for regulating e-cigarettes.
“It’s hard to overstate what this could mean for the companies affected: non-addictive levels of nicotine would likely mean a lot fewer smokers and of those people who do still light up, smoking a lot less,” said Neil Wilson, a senior market analyst with ETX Capital in London.
“This is just the U.S. regulator acting but we can easily see others, particularly in Europe, where regulatory pressures are already extremely high, following suit,” Wilson said.
British American Tobacco shares, trading close to all-time highs, fell as much as 11 percent and were on track for their biggest one-day loss in nearly 18 years.
Altria, which makes the Marlboro brand of cigarettes, fell as much as 16 percent, slipping into the red for the year.
It’s not every day that medical studies say alcohol could be good for you. People who drink moderately often have a lower risk of developing diabetes than those who never drink, according to a new study published in Diabetologia, the journal of the European Association for the Study of Diabetes.
Satellites can now set up quantum communications links through the air during the day instead of just at night, potentially helping a nigh-unhackable space-based quantum Internet to operate 24/7, a new study from Chinese scientists finds.
Quantum cryptography exploits the quantum properties of particles such as photons to help encrypt and decrypt messages in a theoretically unhackable way. Scientists worldwide are now endeavoring to develop satellite-based quantum communications networks for a global real-time quantum Internet.
However, prior experiments with long-distance quantum communications links through the air were mostly conducted at night because sunlight serves as a source of noise. Previously, “the maximum range for daytime free-space quantum communication was 10 kilometers,” says study co-senior author Qiang Zhang, a quantum physicist at the University of Science and Technology of China, in Shanghai.
Now researchers led by quantum physicist Jian-Wei Pan at the University of Science and Technology of China, in Hefei, have successfully established 53-kilometer quantum cryptography links during the day between two ground stations. This research suggests that such links could work between a satellite and either a ground station or another satellite, they say.
To overcome interference from sunlight, the researchers switched from the roughly 700- to 900-nanometer wavelengths of light used in all prior day-time free-space experiments to roughly 1,550 nm. The sun is about one-fifth as bright at 1,550 nm as it is at 800 nm, and 1,550-nm light can also pass through Earth’s atmosphere with virtually no interference. Moreover, this wavelength is also currently widely used in telecommunications, making it more compatible with existing networks.
Researchers had previously been reluctant to use 1,550-nm light because of a lack of good commercial single-photon detectors capable of working at this wavelength. But the Shanghai group developed a compact single-photon detector for 1,550-nm light that could work at room temperature. Moreover, the scientists developed a receiver that needed less than one tenth of the field of view that receivers for nighttime quantum communications links usually need to work. This limited the amount of noise from stray light by a factor of several hundred.
In experiments, the scientists repeatedly established quantum communications links across Qinghai Lake, the biggest lake in China, from 3:30 p.m. to 5 p.m. local time on several sunny days, achieving transmission rates of 20 to 400 bits per second. Furthermore, they could establish these links despite roughly 48 decibels of loss in their communications channel, which is more than the roughly 40 to 45 dB of loss typically seen in communications channels between satellites and the ground and between low-Earth-orbit satellites, Zhang says. In comparison, previous daytime free-space quantum communications experiments could tolerate only roughly 20 dB of loss.
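To put those figures in perspective, decibel loss is a logarithmic measure: a channel with L dB of loss attenuates the signal by a factor of 10^(L/10). A quick calculation shows why tolerating 48 dB rather than 20 dB is such a large jump:

```python
# Decibel loss is logarithmic: L dB of loss means the signal is
# attenuated by a factor of 10**(L/10).
def db_to_factor(loss_db):
    """Convert a loss in decibels to a linear attenuation factor."""
    return 10 ** (loss_db / 10)

# The new experiment tolerated ~48 dB of channel loss...
print(round(db_to_factor(48)))  # 63096 -> only ~1 in 63,000 photons survive
# ...versus the ~20 dB tolerated by earlier daytime experiments.
print(round(db_to_factor(20)))  # 100 -> 1 in 100 photons survive
```

In other words, the new system works even when the channel transmits roughly 600 times fewer photons than earlier daytime links could handle.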
The researchers note that their experiments were performed in good weather, and that quantum communication is currently not possible in bad weather with today’s technology. Still, they note that bad weather is a problem only for ground-to-space links, and that it would not pose a problem for links between satellites.
In the future, the researchers expect to boost transmission rates and distance using better single-photon detectors, perhaps superconducting ones. They may also seek to exploit the quantum phenomenon known as entanglement to carry out more sophisticated forms of quantum cryptography, although this will require generating very bright sources of entangled photons that can operate in a narrow band of wavelengths, Zhang says.
Researchers have made a low-cost smart glove that can translate the American Sign Language alphabet into text and send the messages via Bluetooth to a smartphone or computer. The glove can also be used to control a virtual hand.
While it could aid the deaf community, its developers say the smart glove could prove especially valuable for virtual and augmented reality, remote surgery, and defense uses like controlling bomb-defusing robots.
This isn’t the first gesture-tracking glove. There are companies pursuing similar devices that recognize gestures for computer control, à la the 2002 film Minority Report. Some researchers have also specifically developed gloves that convert sign language into text or audible speech.
What’s different about the new glove is its use of extremely low-cost, pliable materials, says developer Darren Lipomi, a nanoengineering professor at the University of California, San Diego. The components in the system, reported in the journal PLOS ONE, cost less than US $100 in total, Lipomi says. And unlike other gesture-recognizing gloves, which use MEMS sensors made of brittle materials, the soft stretchable materials in Lipomi’s glove should make it more robust.
The key components of the new glove are flexible strain sensors made of a rubbery polymer. Lipomi and his team make the sensors by cutting narrow strips from a super-thin film of the polymer and coating them with conductive carbon paint.
Then they use a stretchy glue to attach nine sensors on the knuckles of an athletic leather glove, two on each finger and one on the thumb. Thin, stainless steel threads connect each sensor to a circuit board attached at the wrist. The board also has an accelerometer and a Bluetooth transmitter.
When the wearer bends their fingers, the sensors stretch and the electrical resistance across them goes up. Based on these resistance signals, the circuit assigns a digital bit to each knuckle, 0 for relaxed and 1 for bent. This creates a nine-bit code for each hand gesture of the ASL alphabet. So if all fingers are straight, the code reads 000000000; for a fist it would be 111111111.
To distinguish between ASL letters that generate the same code, the researchers incorporated an accelerometer and pressure sensors on the glove. The letters D and Z, for instance, have the same gesture but the hand zigzags for Z while it remains still for D. In U and V, meanwhile, two fingers are held together and apart respectively, which the pressure sensor detects.
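The encoding scheme described above amounts to a simple lookup: read nine bend flags, concatenate them into a bit string, and match it against a table of known gestures. A minimal sketch of that logic (the two table entries and sensor ordering here are illustrative assumptions, not the glove's actual code assignments):

```python
# Hypothetical sketch of the glove's nine-bit gesture encoding.
# Each knuckle sensor reports 0 (relaxed) or 1 (bent); the nine flags
# together form a code that is looked up in a gesture table.

def encode(knuckle_states):
    """Turn nine per-knuckle bend flags into a nine-character bit code."""
    assert len(knuckle_states) == 9, "glove has nine knuckle sensors"
    return "".join("1" if bent else "0" for bent in knuckle_states)

# Illustrative entries only; the real mapping to ASL letters depends on
# the glove's sensor layout, plus accelerometer and pressure readings
# to disambiguate letters that share a bend pattern (e.g. D vs. Z).
GESTURE_TABLE = {
    "000000000": "open hand",  # all fingers straight
    "111111111": "fist",       # all knuckles bent
}

def classify(knuckle_states):
    """Look up a bend pattern; unrecognized codes come back 'unknown'."""
    return GESTURE_TABLE.get(encode(knuckle_states), "unknown")

print(classify([0] * 9))  # open hand
print(classify([1] * 9))  # fist
```

In the real device a second stage would consult the accelerometer (for motion, as with Z) and the pressure sensors (for finger contact, as with U versus V) before committing to a letter.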
In tests, the glove could translate all 26 letters of the American Sign Language alphabet into text. The research team also used the glove to control a virtual hand to sign the ASL letters.
The next version of the glove will incorporate new materials that generate a tactile response so that wearers can feel what they’re touching in virtual reality. Today’s haptic devices simulate the sense of touch by applying forces and vibrations to the user. Lipomi and his students plan to convey a much broader range of signals. “We’re synthesizing materials that can be used to stimulate everything from pressure and temperature to stickiness and sliminess,” he says.