
Resolving the black hole ‘fuzzball or wormhole’ debate – The Ohio State University News

Black holes really are giant fuzzballs, a new study says.

The study attempts to put to rest the debate over Stephen Hawking's famous information paradox, the problem created by Hawking's conclusion that any data that enters a black hole can never leave. This conclusion accorded with the laws of thermodynamics, but opposed the fundamental laws of quantum mechanics.

"What we found from string theory is that all the mass of a black hole is not getting sucked into the center," said Samir Mathur, lead author of the study and professor of physics at The Ohio State University. "The black hole tries to squeeze things to a point, but then the particles get stretched into these strings, and the strings start to stretch and expand, and it becomes this fuzzball that expands to fill up the entirety of the black hole."

The study, published Dec. 28 in the Turkish Journal of Physics, found that string theory almost certainly holds the answer to Hawking's paradox, as the paper's authors had originally believed. The physicists proved theorems to show that the fuzzball theory remains the most likely solution for Hawking's information paradox. The researchers have also published an essay showing how this work may resolve longstanding puzzles in cosmology; the essay appeared in December in the International Journal of Modern Physics.

Mathur published a study in 2004 that theorized black holes were similar to very large, very messy balls of yarn ("fuzzballs") that become larger and messier as new objects get sucked in.

"The bigger the black hole, the more energy that goes in, and the bigger the fuzzball becomes," Mathur said. The 2004 study found that string theory, the physics theory that holds that all particles in the universe are made of tiny vibrating strings, could be the solution to Hawking's paradox. With this fuzzball structure, the hole radiates like any normal body, and there is no puzzle.

After Mathur's 2004 study and other, similar works, many people thought the problem was solved, he said. But in fact, a section of the string theory community thought it should look for a different solution to Hawking's information paradox. They were bothered that, in physical terms, the whole structure of the black hole had changed.

Studies in recent years attempted to reconcile Hawking's conclusions with the old picture of the hole, in which the black hole can be thought of as empty space with all its mass at the center. One theory, the wormhole paradigm, suggested that black holes might be one end of a bridge in the space-time continuum, meaning anything that entered a black hole might appear at the other end of the bridge (the other end of the wormhole) in a different place in space and time.

In order for the wormhole picture to work, though, some low-energy radiation would have to escape from the black hole at its edges.

This recent study proved a theorem, the "effective small corrections" theorem, to show that if that were to happen, black holes would not appear to radiate in the way that they do.

The researchers also examined physical properties of black holes, including topology change in quantum gravity, to determine whether the wormhole paradigm would work.

"In each of the versions that have been proposed for the wormhole approach, we found that the physics was not consistent," Mathur said. "The wormhole paradigm tries to argue that, in some way, you could still think of the black hole as being effectively empty with all the mass in the center. And the theorems we prove show that such a picture of the hole is not a possibility."

Other Ohio State researchers who worked on this study include Madhur Mehta, Marcel R. R. Hughes and Bin Guo.


Science Inquiry Lecture: Quantum Materials and the MonArk Quantum Foundry – Montana State University

Winter/Spring 2022 Virtual Science Inquiry Lecture Series: Explore cutting-edge science topics, their latest developments, and their relevance to society. Sponsored by the Gallatin Valley Friends of the Sciences, and co-sponsored by the non-profit community service organization Hopa Mountain and the Museum of the Rockies, talks for the 2022 winter/spring series will be presented virtually via the Zoom video conferencing platform on Wednesday evenings at 7 p.m., followed by a brief question-and-answer period using the Zoom chat function.

Quantum Materials and the MonArk Quantum Foundry: How does quantum mechanics work, and how can it be applied to practical, everyday use? Dr. Yves Idzerda, MSU Professor of Physics and Dean of the College of Letters and Science, will discuss the latest advances in quantum technology and how MSU's NSF-funded quantum foundry will research and develop quantum materials and devices that will connect science and industry.

Free and open to the public via Zoom. Visit the Gallatin Valley Friends of the Sciences website for the Zoom link for this lecture.


What We Are Reading Today: The End of Ambition by Mark Atwood Lawrence – Arab News

A provocative and revelatory look at what power is, who gets it, and what happens when they do, based on over 500 interviews with those who (for a while, at least) have had the upper hand, from the creator of the Power Corrupts podcast and Washington Post columnist Brian Klaas.

Does power corrupt, or are corrupt people drawn to power? Are entrepreneurs who embezzle and cops who kill the result of poorly designed systems or are they just bad people? Are tyrants made or born? If you were suddenly thrust into a position of power, would you be able to resist the temptation to line your pockets or seek revenge against your enemies?

To answer these questions, Corruptible draws on over 500 interviews with some of the world's top leaders, from the noblest to the dirtiest, including presidents and philanthropists as well as rebels, cultists, and dictators.

Some of the fascinating insights include: how facial appearance determines who we pick as leaders, why narcissists make more money, why some people don't want power at all and others are drawn to it out of a psychopathic impulse, and why being the beta (second in command) may actually be the optimal place for health and well-being.


How A.I. is set to evolve in 2022 – CNBC

An Ubtech Walker X Robot plays Chinese chess during 2021 World Artificial Intelligence Conference (WAIC) at Shanghai World Expo Center on July 8, 2021 in Shanghai, China.

VCG | VCG via Getty Images

Machines are getting smarter and smarter every year, but artificial intelligence is yet to live up to the hype that's been generated by some of the world's largest technology companies.

AI can excel at specific narrow tasks, such as playing chess, but it struggles to do more than one thing well. A seven-year-old has far broader intelligence than any of today's AI systems, for example.

"AI algorithms are good at approaching individual tasks, or tasks that include a small degree of variability," Edward Grefenstette, a research scientist at Meta AI, formerly Facebook AI Research, told CNBC.

"However, the real world encompasses significant potential for change, a dynamic which we are bad at capturing within our training algorithms, yielding brittle intelligence," he added.

AI researchers have started to show that there are ways to efficiently adapt AI training methods to changing environments or tasks, resulting in more robust agents, Grefenstette said. He believes there will be more industrial and scientific applications of such methods this year that will produce "noticeable leaps."

While AI still has a long way to go before anything like human-level intelligence is achieved, it hasn't stopped the likes of Google, Facebook (Meta) and Amazon investing billions of dollars into hiring talented AI researchers who can potentially improve everything from search engines and voice assistants to aspects of the so-called "metaverse."

Anthropologist Beth Singler, who studies AI and robots at the University of Cambridge, told CNBC that claims about the effectiveness and reality of AI in spaces that are now being labeled as the metaverse will become more commonplace in 2022 as more money is invested in the area and the public start to recognize the "metaverse" as a term and a concept.

Singler also warned that there could be "too little discussion" in 2022 of the effect of the metaverse on people's "identities, communities, and rights."

Gary Marcus, a scientist who sold an AI start-up to Uber and is currently executive chairman of another firm called Robust AI, told CNBC that the most important AI breakthrough in 2022 will likely be one that the world doesn't immediately see.

"The cycle from lab discovery to practicality can take years," he said, adding that the field of deep learning still has a long way to go. Deep learning is an area of AI that attempts to mimic the activity in layers of neurons in the brain to learn how to recognize complex patterns in data.
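As a loose illustration of that idea (a sketch, not any production system, with random untrained weights), "layers of neurons" can be expressed in a few lines of NumPy:

```python
import numpy as np

# Minimal sketch of the idea behind deep learning: data passes through
# stacked layers of artificial "neurons". Each layer applies a linear map
# followed by a nonlinearity. Weights here are random, not trained; a real
# system would adjust them to recognize patterns in data.
rng = np.random.default_rng(0)

def layer(x, n_out):
    """One fully connected layer with a ReLU nonlinearity."""
    w = rng.normal(size=(x.shape[-1], n_out))
    return np.maximum(0.0, x @ w)  # ReLU zeroes out negative activations

x = rng.normal(size=(4, 8))  # a batch of 4 inputs, 8 features each
h = layer(x, 16)             # first hidden layer
h = layer(h, 16)             # second hidden layer ("deep" = many layers)
out = layer(h, 2)            # output layer, e.g. two class scores
print(out.shape)             # (4, 2)
```

Stacking more layers is what makes the network "deep", and training consists of nudging the weight matrices so the final outputs match known examples.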

Marcus believes the most important challenge for AI right now is to "find a good way of combining all the world's immense knowledge of science and technology" with deep learning. At the moment "deep learning can't leverage all that knowledge and instead is stuck again and again trying to learn everything from scratch," he said.

"I predict there will be progress on this problem this year that will ultimately be transformational, towards what I called hybrid systems, but that it'll be another few years before we see major dividends," Marcus added. "The thing that we probably will see this year or next is the first medicine in which AI played a substantial role in the discovery process."

One of the biggest AI breakthroughs in the last couple of years has come from London-headquartered research lab DeepMind, which is owned by Alphabet.

The company has successfully created AI software that can accurately predict the structure that proteins will fold into in a matter of days, solving a 50-year-old "grand challenge" that could pave the way for better understanding of diseases and drug discovery.

Neil Lawrence, a professor of machine learning at the University of Cambridge, told CNBC that he expects to see DeepMind target more big science questions in 2022.

Language models (AI systems that can generate convincing text, converse with humans, respond to questions, and more) are also set to improve in 2022.

The best-known language model is OpenAI's GPT-3 but DeepMind said in December that its new "RETRO" language model can beat others 25 times its size.
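For a sense of what a language model does at its core, here is a toy sketch (nothing like GPT-3 or RETRO in method or scale, and the corpus is made up): tabulate which word follows which, then generate text by sampling.

```python
import random
from collections import defaultdict

# Toy sketch of the core idea behind a language model: learn which word
# tends to follow which, then generate by sampling. Real systems such as
# GPT-3 and RETRO use neural networks over vast corpora instead of a
# simple follower table.
corpus = "the cat sat on the mat and the cat ran to the mat".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

random.seed(0)
word = "the"
generated = [word]
for _ in range(5):
    if not follows[word]:  # dead end: no observed follower
        break
    word = random.choice(follows[word])
    generated.append(word)
print(" ".join(generated))
```

Scaling this idea up, from counting word pairs to predicting the next token with billions of learned parameters, is essentially what separates this toy from the models discussed above.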

Catherine Breslin, a machine learning scientist who used to work on Amazon Alexa, thinks Big Tech will race toward larger and larger language models next year.

Breslin, who now runs AI consultancy firm Kingfisher Labs, told CNBC that there will also be a move toward models that combine vision, speech and language capability, rather than treat them as separate tasks.

Nathan Benaich, a venture capitalist with Air Street Capital and the co-author of the annual State of AI report, told CNBC that a new breed of companies will likely use language models to predict the most effective RNA (ribonucleic acid) sequences.

"Last year we witnessed the impact of RNA technologies as novel covid vaccines, many of them built on this technology, brought an end to nation-wide lockdowns," he said. "This year, I believe we will see a new crop of AI-first RNA therapeutic companies. Using language models to predict the most effective RNA sequences to target a disease of interest, these new companies could dramatically speed up the time it takes to discover new drugs and vaccines."

While a number of advancements could be around the corner, there are major concerns around the ethics of AI, which can be highly discriminatory and biased when trained on certain datasets. AI systems are also being used to power autonomous weapons and to generate fake porn.

Verena Rieser, a professor of conversational AI at Heriot-Watt University in Edinburgh, told CNBC that there will be a stronger focus on ethical questions around AI in 2022.

"I don't know whether AI will be able to do much 'new' stuff by the end of 2022 but hopefully it will do it better," she said, adding that this means it would be fairer, less biased and more inclusive.

Samim Winiger, an independent AI researcher who used to work for a Big Tech firm, added that he believes there will be revelations around the use of machine learning models in financial markets, spying, and health care.

"It will raise major questions about privacy, legality, ethics and economics," he told CNBC.


This AI Software Nearly Predicted Omicron's Tricky Structure – WIRED

The way predictions raced ahead of experiments on Omicron's spike protein reflects a recent sea change in molecular biology brought about by AI. The first software capable of accurately predicting protein structures became widely available only months before Omicron appeared, thanks to competing research teams at Alphabet's UK-based AI lab DeepMind and at the University of Washington.

Ford used both packages, but because neither was designed or validated for predicting small changes caused by mutations like those of Omicron, his results were more suggestive than definitive. Some researchers treated them with suspicion. But the fact that he could easily experiment with powerful protein prediction AI illustrates how the recent breakthroughs are already changing the ways biologists work and think.

Subramaniam says he received four or five emails from people proffering predicted Omicron spike structures while working towards his lab's results. "Quite a few did this just for fun," he says. Direct measurements of protein structure will remain the ultimate yardstick, Subramaniam says, but he expects AI predictions to become increasingly central to research, including on future disease outbreaks. "It's transformative," he says.

"These tools allow you to make an educated guess really quickly, which is important in a situation like Covid."

Colby Ford, computational genetics researcher, University of North Carolina at Charlotte

Because a protein's shape determines how it behaves, knowing its structure can help all kinds of biology research, from studies of evolution to work on disease. In drug research, figuring out a protein structure can help reveal potential targets for new treatments.

Determining a protein's structure is far from simple. Proteins are complex molecules assembled from instructions encoded in an organism's genome to serve as enzymes, antibodies, and much of the other machinery of life. They are made from strings of molecules called amino acids that can fold into complex shapes that behave in different ways.
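As a toy illustration of that "string of amino acids" representation (the sequence below is hypothetical, and real structure prediction involves far more than counting residues):

```python
from collections import Counter

# Toy sketch: a protein's primary structure is a chain of amino acids,
# conventionally written as a string of one-letter codes. This sequence
# is made up for illustration, not a real protein.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"

counts = Counter(sequence)
print(len(sequence))          # chain length (number of residues)
print(counts.most_common(1))  # the most frequent residue
```

The hard problem the article describes, predicting the folded 3D shape from exactly this kind of one-dimensional string, is what AlphaFold and RoseTTAFold tackle.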

Deciphering a protein's structure traditionally involved painstaking lab work. Most of the roughly 200,000 known structures were mapped using a tricky process in which proteins are formed into a crystal and bombarded with x-rays. Newer techniques like the electron microscopy used by Subramaniam can be faster, but the process is still far from easy.

In late 2020, the long-standing hope that computers could predict protein structure from an amino acid sequence suddenly became real, after decades of slow progress. DeepMind software called AlphaFold proved so accurate in a contest for protein prediction that the challenge's cofounder John Moult, a professor at the University of Maryland, declared the problem solved. Having worked personally on this problem for so long, Moult said, DeepMind's achievement was "a very special moment."

The moment was also frustrating for some scientists: DeepMind did not immediately release details of how AlphaFold worked. "You're in this weird situation where there's been this major advance in your field, but you can't build on it," David Baker, whose lab at the University of Washington works on protein structure prediction, told WIRED last year. His research group used clues dropped by DeepMind to guide the design of open source software called RoseTTAFold, released in June, which was similar to but not as powerful as AlphaFold. Both are based on machine learning algorithms honed to predict protein structures by training on a collection of more than 100,000 known structures. The next month, DeepMind published details of its own work and released AlphaFold for anyone to use. Suddenly, the world had two ways to predict protein structures.

Minkyung Baek, a postdoctoral researcher in Baker's lab who led work on RoseTTAFold, says she has been surprised by how quickly protein structure predictions have become standard in biology research. Google Scholar reports that UW's and DeepMind's papers on their software have together been cited by more than 1,200 academic articles in the short time since they appeared.

Although predictions haven't proven crucial to work on Covid-19, she believes they will become increasingly important to the response to future diseases. Pandemic-quashing answers won't spring fully formed from algorithms, but predicted structures can help scientists strategize. "A predicted structure can help you put your experimental effort into the most important problems," Baek says. She's now trying to get RoseTTAFold to accurately predict the structure of antibodies and invading proteins when bound together, which would make the software more useful to infectious disease projects.

Despite their impressive performance, protein predictors don't reveal everything about a molecule. They spit out a single static structure for a protein, and don't capture the flexes and wiggles that take place when it interacts with other molecules. The algorithms were trained on databases of known structures, which are more reflective of those easiest to map experimentally than of the full diversity of nature. Kresten Lindorff-Larsen, a professor at the University of Copenhagen, predicts the algorithms will be used more frequently and will be useful, but says, "We also as a field need to learn better when these methods fail."


Are we witnessing the dawn of post-theory science? – The Guardian

Isaac Newton apocryphally discovered his second law (the one about gravity) after an apple fell on his head. Much experimentation and data analysis later, he realised there was a fundamental relationship between force, mass and acceleration. He formulated a theory to describe that relationship, one that could be expressed as an equation, F=ma, and used it to predict the behaviour of objects other than apples. His predictions turned out to be right (if not always precise enough for those who came later).

Contrast how science is increasingly done today. Facebook's machine learning tools predict your preferences better than any psychologist. AlphaFold, a program built by DeepMind, has produced the most accurate predictions yet of protein structures based on the amino acids they contain. Both are completely silent on why they work: why you prefer this or that information; why this sequence generates that structure.

You can't lift a curtain and peer into the mechanism. They offer up no explanation, no set of rules for converting this into that; no theory, in a word. They just work, and do so well. We witness the social effects of Facebook's predictions daily. AlphaFold has yet to make its impact felt, but many are convinced it will change medicine.

Somewhere between Newton and Mark Zuckerberg, theory took a back seat. In 2008, Chris Anderson, the then editor-in-chief of Wired magazine, predicted its demise. So much data had accumulated, he argued, and computers were already so much better than us at finding relationships within it, that our theories were being exposed for what they were: oversimplifications of reality. Soon, the old scientific method (hypothesise, predict, test) would be relegated to the dustbin of history. We'd stop looking for the causes of things and be satisfied with correlations.

With the benefit of hindsight, we can say that what Anderson saw is true (he wasn't alone). The complexity that this wealth of data has revealed to us cannot be captured by theory as traditionally understood. "We have leapfrogged over our ability to even write the theories that are going to be useful for description," says computational neuroscientist Peter Dayan, director of the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. "We don't even know what they would look like."

But Anderson's prediction of the end of theory looks to have been premature, or maybe his thesis was itself an oversimplification. There are several reasons why theory refuses to die, despite the successes of such theory-free prediction engines as Facebook and AlphaFold. All are illuminating, because they force us to ask: what's the best way to acquire knowledge, and where does science go from here?

The first reason is that we've realised that artificial intelligences (AIs), particularly a form of machine learning called neural networks, which learn from data without having to be fed explicit instructions, are themselves fallible. Think of the prejudice that has been documented in Google's search engines and Amazon's hiring tools.

The second is that humans turn out to be deeply uncomfortable with theory-free science. We don't like dealing with a black box: we want to know why.

And third, there may still be plenty of theory of the traditional kind (that is, graspable by humans) that usefully explains much but has yet to be uncovered.

So theory isn't dead, yet, but it is changing, perhaps beyond recognition. "The theories that make sense when you have huge amounts of data look quite different from those that make sense when you have small amounts," says Tom Griffiths, a psychologist at Princeton University.

Griffiths has been using neural nets to help him improve on existing theories in his domain, which is human decision-making. A popular theory of how people make decisions when economic risk is involved is prospect theory, which was formulated by behavioural economists Daniel Kahneman and Amos Tversky in the 1970s (it later won Kahneman a Nobel prize). The idea at its core is that people are sometimes, but not always, rational.
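A minimal sketch of prospect theory's value function conveys that "sometimes, but not always, rational" flavour. The parameter values below are illustrative estimates from the behavioural economics literature, not numbers from the study discussed here:

```python
# Sketch of prospect theory's value function (after Kahneman and Tversky).
# ALPHA and LAMBDA below are illustrative estimates from the literature.
ALPHA = 0.88   # diminishing sensitivity: a doubled gain feels less than twice as good
LAMBDA = 2.25  # loss aversion: losses loom larger than equal gains

def value(x):
    """Subjective value of a monetary gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

print(round(value(100), 1))   # a $100 gain feels like less than 100
print(round(value(-100), 1))  # a $100 loss hurts much more than the gain helps
```

The asymmetry between the two outputs is the theory's signature prediction: people weigh losses roughly twice as heavily as equivalent gains.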

In Science last June, Griffiths's group described how they trained a neural net on a vast dataset of the decisions people took in 10,000 risky choice scenarios, then compared how accurately it predicted further decisions with the predictions of prospect theory. They found that prospect theory did pretty well, but the neural net showed its worth in highlighting where the theory broke down, that is, where its predictions failed.

These counter-examples were highly informative, Griffiths says, because they revealed more of the complexity that exists in real life. For example, humans are constantly weighing up probabilities based on incoming information, as prospect theory describes. But when there are too many competing probabilities for the brain to compute, they might switch to a different strategy, being guided by a rule of thumb, say, and a stockbroker's rule of thumb might not be the same as that of a teenage bitcoin trader, since it is drawn from different experiences.

"We're basically using the machine learning system to identify those cases where we're seeing something that's inconsistent with our theory," Griffiths says. The bigger the dataset, the more inconsistencies the AI learns. The end result is not a theory in the traditional sense of a precise claim about how people make decisions, but a set of claims that is subject to certain constraints. A way to picture it might be as a branching tree of "if… then"-type rules, which is difficult to describe mathematically, let alone in words.

What the Princeton psychologists are discovering is still just about explainable, by extension from existing theories. But as they reveal more and more complexity, it will become less so the logical culmination of that process being the theory-free predictive engines embodied by Facebook or AlphaFold.

Some scientists are comfortable with that, even eager for it. When voice recognition software pioneer Frederick Jelinek said, "Every time I fire a linguist, the performance of the speech recogniser goes up," he meant that theory was holding back progress, and that was in the 1980s.

Or take protein structures. A protein's function is largely determined by its structure, so if you want to design a drug that blocks or enhances a given protein's action, you need to know its structure. AlphaFold was trained on structures that were derived experimentally, using techniques such as X-ray crystallography, and at the moment its predictions are considered more reliable for proteins where there is some experimental data available than for those where there is none. But its reliability is improving all the time, says Janet Thornton, former director of the EMBL European Bioinformatics Institute (EMBL-EBI) near Cambridge, and it isn't the lack of a theory that will stop drug designers using it. "What AlphaFold does is also discovery," she says, "and it will only improve our understanding of life and therapeutics."

Others are distinctly less comfortable with where science is heading. Critics point out, for example, that neural nets can throw up spurious correlations, especially if the datasets they are trained on are small. And all datasets are biased, because scientists don't collect data evenly or neutrally, but always with certain hypotheses or assumptions in mind, assumptions that worked their way damagingly into Google's and Amazon's AIs. As philosopher of science Sabina Leonelli of the University of Exeter explains: "The data landscape we're using is incredibly skewed."

But while these problems certainly exist, Dayan doesnt think theyre insurmountable. He points out that humans are biased too and, unlike AIs, in ways that are very hard to interrogate or correct. Ultimately, if a theory produces less reliable predictions than an AI, it will be hard to argue that the machine is the more biased of the two.

A tougher obstacle to the new science may be our human need to explain the world, to talk in terms of cause and effect. In 2019, neuroscientists Bingni Brunton and Michael Beyeler of the University of Washington, Seattle, wrote that this need for interpretability may have prevented scientists from making novel insights about the brain, of the kind that only emerges from large datasets. But they also sympathised. If those insights are to be translated into useful things such as drugs and devices, they wrote, "it is imperative that computational models yield insights that are explainable to, and trusted by, clinicians, end-users and industry."

Explainable AI, which addresses how to bridge the interpretability gap, has become a hot topic. But that gap is only set to widen and we might instead be faced with a trade-off: how much predictability are we willing to give up for interpretability?

Sumit Chopra, an AI scientist who thinks about the application of machine learning to healthcare at New York University, gives the example of an MRI image. It takes a lot of raw data (and hence scanning time) to produce such an image, which isn't necessarily the best use of that data if your goal is to accurately detect, say, cancer. You could train an AI to identify what smaller portion of the raw data is sufficient to produce an accurate diagnosis, as validated by other methods, and indeed Chopra's group has done so. But radiologists and patients remain wedded to the image. "We humans are more comfortable with a 2D image that our eyes can interpret," he says.

The final objection to post-theory science is that there is likely to be useful old-style theory (that is, generalisations extracted from discrete examples) that remains to be discovered, and only humans can do that because it requires intuition. In other words, it requires a kind of instinctive homing in on those properties of the examples that are relevant to the general rule. One reason we consider Newton brilliant is that in order to come up with his second law he had to ignore some data. He had to imagine, for example, that things were falling in a vacuum, free of the interfering effects of air resistance.

In Nature last month, mathematician Christian Stump, of Ruhr University Bochum in Germany, called this intuitive step "the core of the creative process." But the reason he was writing about it was to say that, for the first time, an AI had pulled it off: DeepMind had built a machine-learning program that had prompted mathematicians towards new insights, new generalisations, in the mathematics of knots.

In 2022, therefore, there is almost no stage of the scientific process where AI hasn't left its footprint. And the more we draw it into our quest for knowledge, the more it changes that quest. We'll have to learn to live with that, but we can reassure ourselves about one thing: we're still asking the questions. As Pablo Picasso put it in the 1960s, "Computers are useless. They can only give you answers."


QAD Partners with MothersonSumi INfotech & Designs Limited (MIND) to Sell and Deliver Services – Business Wire

SANTA BARBARA, Calif.--(BUSINESS WIRE)--QAD Inc., a leading provider of next-generation manufacturing and supply chain solutions in the cloud, today announced it has added MothersonSumi INfotech & Designs Limited (MIND) as part of its growing global partner ecosystem. MIND is part of the technology and industrial solutions division of Motherson Group. Under the terms of the partnership, MIND will sell, implement and support the QAD Adaptive Applications portfolio of cloud solutions, including QAD Adaptive ERP, in the APAC and EMEA geographic regions.

"MIND has been implementing and supporting QAD solutions at many Motherson Group companies worldwide and as a result has gained deep knowledge and expertise in QAD. This proven success will be extremely valuable for our current and future customers," said QAD Vice President, Global Partner Strategy and Management Mohan Ponnudurai. "Expanding this relationship further, combined with its manufacturing knowledge and know-how, would allow MIND to bring solutions that support digital transformation. We will leverage their experience with our solutions to help manufacturers build value with QAD Adaptive ERP in the cloud."

"We are happy to be partnering with QAD as one of their trusted partners in the APAC and EMEA regions," said MIND CEO Rajesh Thakur. "We believe that our two-decade long experience of providing IT services to Motherson Group and proven success in the regions will help accelerate the deliverance of QAD's comprehensive portfolio of agile, cloud-based ERP solutions to our customers."

QAD partners expand the QAD ecosystem and strengthen its strategic position in the industries that it serves. QAD and its partners continuously evolve, broadening QAD's expertise and footprint to meet the diverse needs of customers around the world. The QAD Global Partner Network includes over 100 partners including technology, software, channel and consulting partners.

About QAD Enabling the Adaptive Manufacturing Enterprise

QAD Inc. is a leading provider of next-generation manufacturing and supply chain solutions in the cloud. Global manufacturers face ever-increasing disruption caused by technology-driven innovation and changing consumer preferences. In order to survive and thrive, manufacturers must be able to innovate and change business models at unprecedented rates of speed. QAD calls these companies Adaptive Manufacturing Enterprises. QAD solutions help customers in the automotive, life sciences, consumer products, food and beverage, high tech and industrial manufacturing industries rapidly adapt to change and innovate for competitive advantage.

Founded in 1979 and headquartered in Santa Barbara, California, QAD has 30 offices globally. Over 2,000 manufacturing companies have deployed QAD solutions, including enterprise resource planning (ERP), digital supply chain planning (DSCP), global trade and transportation execution (GTTE), quality management system (QMS) and strategic sourcing and supplier management, to become an Adaptive Manufacturing Enterprise. To learn more, visit http://www.qad.com or call +1 805-566-6100. Find us on Twitter, LinkedIn, Facebook, Instagram and Pinterest.

"QAD" is a registered trademark of QAD Inc. All other products or company names herein may be trademarks of their respective owners.

About MothersonSumi INfotech & Designs Limited (MIND)

Founded in 2000, MothersonSumi INfotech & Designs Limited (MIND) is a joint venture between Motherson Group, India and Sumitomo Wiring Systems Ltd, Japan (SWS). We are a trusted technology partner to over 200 clients across 41+ global locations, with more than 20 years of experience in cloud, IoT, analytics, data science, smart ERP, infra managed services, and application development & maintenance services. We continue to deliver innovative and meaningful technology solutions that enable businesses to outpace the competition. Visit us at http://www.mind-infotech.com, or connect with us on LinkedIn, Twitter, or Facebook.

Here is the original post:
QAD Partners with MothersonSumi INfotech & Designs Limited (MIND) to Sell and Deliver Services - Business Wire


The Enformer vs the Basenji – The AI Algorithms for gene expression predictions – Analytics India Magazine

DeepMind and researchers at Alphabet's Calico introduced a neural network architecture called Enformer that greatly improves the accuracy of predicting gene expression from a DNA sequence.

In the paper "Effective gene expression prediction from sequence by integrating long-range interactions", published in Nature Methods, DeepMind showed that Enformer is more accurate than its predecessor, Basenji2.

Convolutional neural networks have typically been the basic building blocks of gene expression models. Their effectiveness has been limited, however, by their inability to model the effects of distal enhancers on gene expression.

Until now, DeepMind had relied on Basenji2, built on TensorFlow, which offers a variety of benefits, including distributed computing and a large, active developer community. Basenji2 is designed to predict quantitative signals using regression loss functions, rather than binary signals using classification loss functions.
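The distinction between the two loss types can be made concrete with a toy example (illustrative only, not DeepMind's training code): a Poisson regression loss scores predicted read counts against observed counts along a genomic track, while binary cross-entropy scores peak/no-peak calls.

```python
import numpy as np

def poisson_nll(pred, target):
    """Regression loss for quantitative coverage tracks: penalizes the
    predicted read count against the observed count (Poisson negative
    log-likelihood, dropping the constant log-factorial term)."""
    return float(np.mean(pred - target * np.log(pred + 1e-8)))

def binary_cross_entropy(prob, label):
    """Classification loss for binary peak / no-peak signals."""
    prob = np.clip(prob, 1e-8, 1 - 1e-8)
    return float(np.mean(-(label * np.log(prob) + (1 - label) * np.log(1 - prob))))

# Quantitative target: observed read counts in four genomic bins.
counts = np.array([0.0, 3.0, 12.0, 5.0])
pred_counts = np.array([0.5, 2.5, 10.0, 6.0])
print(poisson_nll(pred_counts, counts))

# Binary target: the same region reduced to peak / no-peak calls.
peaks = np.array([0.0, 1.0, 1.0, 1.0])
pred_probs = np.array([0.1, 0.8, 0.95, 0.7])
print(binary_cross_entropy(pred_probs, peaks))
```

The regression formulation keeps the magnitude of the signal, which a binary label throws away.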

A key strength of Basenji2 is that it can predict the regulatory activity of DNA sequences 40,000 base pairs long at a time.

Enformer, on the other hand, relies on Transformers, an architecture from Google widely used in natural language processing, whose self-attention mechanism lets the model integrate much more DNA context. Because Transformers can read long text passages, DeepMind adapted them to read DNA sequences of vastly extended length.
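The idea behind self-attention can be sketched in a few lines. This is a toy single-head version (the real Enformer uses learned query/key/value projections, multiple heads, and relative position encodings): every position attends to every other position, so a distal element can influence each output directly rather than through a convolution's slowly growing receptive field.

```python
import numpy as np

def self_attention(x):
    """Single-head self-attention over a sequence of position embeddings.

    x has shape (positions, channels). Each output row is a softmax-weighted
    mix of every input row, so information from far-away positions (e.g. a
    distal enhancer) reaches each position in a single layer.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                        # all-pairs similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over positions
    return weights @ x                                   # mix the whole sequence

rng = np.random.default_rng(0)
seq = rng.normal(size=(8, 4))    # 8 sequence bins, 4-dim embeddings
out = self_attention(seq)
print(out.shape)                 # (8, 4): each bin now sees the full context
```

By contrast, a convolutional layer with kernel size k only mixes k neighbouring bins per layer, which is why CNN-based models struggle with distal enhancers.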

Enformer outperformed the best team in the Critical Assessment of Genome Interpretation challenge (CAGI5) for noncoding variant interpretation, despite no additional training. Furthermore, Enformer learned to predict promoter-enhancer interactions directly from DNA sequence, competing with methods that take direct experimental data as input.

For training, DeepMind used Sonnet, its library for constructing neural networks for many different purposes; the model architecture is defined in enformer.py.

DeepMind pre-computed variant-effect scores for all common variants (MAF > 0.5% in any population) in the 1000 Genomes Project and stored them in per-chromosome HDF5 files for the HG19 reference genome. Additionally, they provide the top 20 principal components of the variant-effect scores per chromosome in a tabix-indexed TSV file (also against HG19).
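For illustration, a variant-score table of this general shape can be parsed with the Python standard library. Note that the column names below are placeholders: the article does not reproduce the published schema, so the layout here is an assumption, not DeepMind's actual file format.

```python
import csv
import io

# Hypothetical layout; the real files' column names are not given here.
sample_tsv = (
    "chrom\tpos\tref\talt\tPC1\tPC2\n"
    "chr1\t12345\tA\tG\t0.12\t-0.03\n"
    "chr1\t67890\tC\tT\t-0.51\t0.44\n"
)

def load_variant_scores(handle):
    """Parse a tab-separated variant-effect table into a list of records,
    converting the principal-component columns to floats."""
    reader = csv.DictReader(handle, delimiter="\t")
    records = []
    for row in reader:
        records.append({
            "chrom": row["chrom"],
            "pos": int(row["pos"]),
            "ref": row["ref"],
            "alt": row["alt"],
            "pcs": [float(v) for k, v in row.items() if k.startswith("PC")],
        })
    return records

variants = load_variant_scores(io.StringIO(sample_tsv))
print(len(variants), variants[0]["pcs"])   # 2 [0.12, -0.03]
```

In practice one would read the per-chromosome HDF5 files with a library such as h5py and the TSVs through their tabix index, but the record structure would look much the same.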

Hopefully, these advances will enable better mapping of growing human disease associations to cell-type-specific gene regulatory mechanisms and provide a framework to understand how cis-regulatory evolution works.

Read more here:
The Enformer vs the Basenji - The AI Algorithms for gene expression predictions - Analytics India Magazine


LaChanze on Alice Childress's "Trouble in Mind" – The New York Times

As a student and young actor, I was astonished that the canon of Black American writers and artists that so richly shaped my artistic life was mostly unknown and so poorly understood. The play's director, Charles Randolph-Wright, the first Black director with whom I have worked as a leading actor on Broadway, shepherded this project for 15 years. He also read the play in college and fell in love with Childress's unapologetic writing.

He is the champion of "Trouble in Mind." Charles, who studied at Duke University and with the Royal Shakespeare Company in London, and danced with Alvin Ailey in New York, was told many times that he could not make this happen. It is as if, with her words in the play, Childress wrote directly to Charles six decades ago: "I'm sick of people signifyin' we got no sense." Charles wants to give her the voice she should have had before he and I were born.

In our many conversations, I am invigorated speaking to him about Black representation in the entertainment industry. Working with a director who I feel lives in my head is thrilling. The private thoughts I'm sometimes too shy to share, Charles boldly speaks before I can even get them out. Much like Childress, Charles is committed to telling the truth in his work and to multidimensional portrayals of Black people, not just the broad strokes we see. And quite frankly, we're both tired of seeing these examples. In my own career, I've taken jobs I didn't want to do, but I had to play these parts because I needed a job.

I get to work with a dedicated, resilient Black director, and a fearless, committed cast. Childress wanted to speak for the have-nots, the invisibles, and to share her eloquence with the Broadway community and universities across the world. She used her play about Black actors to explore the values of America. But some people weren't ready, and so many people never got to hear her words. Now I proudly stand on her shoulders, opening my soul to her and teaching my daughters and other lovers of truth about her brilliance.

"Some live by what they call great truths," Wiletta says in the play. "I've always wanted to do somethin' real grand in the theater … to stand forth at my best … to stand up here and do anything I want …"

And thats exactly what Alice Childress did.

LaChanze won the Tony Award for best actress in a leading role in a musical in 2006 for The Color Purple. In 2019, LaChanze and her eldest daughter, Celia Rose Gooding, became one of the few pairs of mothers and daughters to perform on Broadway as leading actors in the same season.

Read the original post:
LaChanze on Alice Childresss "Trouble in Mind" - The New York Times


Gal Gadot officially regrets the cursed Imagine video – i-D

When the coronavirus pandemic and its subsequent lockdowns first took over the world nearly two years ago, they birthed a multitude of oddities, from awkward enforced weekly Zoom quizzes to gifting health workers non-refundable clapping from our front doorsteps. But while the majority of us became sourdough bread specialists, the most deranged reactions came from celebrities, who took to social media to accidentally advocate for eugenics (Vanessa Hudgens) or cry about a delay to their album release (Dua Lipa). But the prize for the most cringe post of the early pandemic era belongs to, of course, the Gal Gadot-led Imagine video.

Spearheaded by the Wonder Woman actress and Bridesmaids' Kristen Wiig, and featuring a smorgasbord of stars (Jamie Dornan, Natalie Portman, Ashley Benson, Kaia Gerber, Cara Delevingne, Zoë Kravitz), the video saw those benevolent millionaires each hope to cheer the world with an awkwardly sung line of the John Lennon song from the comfort of their Selling Sunset-style homes. Naturally, it was slammed across social media; now, almost two years later, Gal has finally admitted that the video might have been a mistake.

Speaking to InStyle after recently parodying the video during her acceptance speech at the Elle Women in Hollywood Awards, Gal said: "I was calling Kristen [Wiig] and I was like, 'Listen, I want to do this thing.' The pandemic was in Europe and Israel before it came [to the US] in the same way. I was seeing where everything was headed. But [the video] was premature. It wasn't the right timing, and it wasn't the right thing. It was in poor taste. All pure intentions, but sometimes you don't hit the bull's-eye, right?"

In truth, in the time since Gal posted it on Instagram in March 2020, the cursed video has only aged worse, to the extent that, in a way, it's almost swung back round to camp. In a little intro, Gal sighs as if she hasn't seen another human soul in months, when in fact she's on day six of quarantine. She then says the past less-than-a-week of isolation has got her feeling philosophical. The deep, mind-blowing philosophical realisation? The virus affects everyone. Between Sia over-singing the hell out of her two lines and Mark Ruffalo's struggle to figure out his selfie camera angles, the video is essentially the equivalent of those "1 like = 1 prayer" posts your aunt still shares on Facebook.

Though the intention behind the video may have been sincere, watching the ridiculously rich act like the pandemic had put us all in the exact same situation, and like the only things they had to contribute were vibes and positive energy, felt in pretty poor taste. Especially when other celebs, like our queen Britney, were offering struggling fans money. As i-D editor Roisín Lanigan wrote at the time: "Rather than rushing to push content into the world which doesn't actually help anyone, it might first be best to take some time and consider how your platform, and your millions of dollars of income, could be put to better use."

Follow i-D on Instagram and TikTok for more news.

More here:
Gal Gadot officially regrets the cursed Imagine video - i-D
