
PhD Research Fellowship in Machine Learning for Cognitive Power Management job with NORWEGIAN UNIVERSITY OF SCIENCE & TECHNOLOGY – NTNU | 219138 -…

About the position

This is a researcher training position aimed at providing promising researcher recruits the opportunity of academic development in the form of a doctoral degree.

Bringing intelligence into Internet-of-Things (IoT) systems is mostly constrained by the availability of energy. Devices need to be wireless and small to be economically feasible, and need to replenish their energy buffers using energy harvesting. In addition, devices need to work autonomously, because it is unfeasible to operate them manually or change their batteries: there are simply too many of them. To make the best of the energy available, IoT devices should plan wisely how they spend their energy, that is, which tasks they should perform and when. This requires the development of policies. Because of the different situations the various devices may find themselves in, the best policies will also vary from device to device, which suggests the use of machine learning for the autonomous, individual development of energy policies for IoT devices.

One special focus in this project is the modelling of the power supply of the IoT devices, that is, the submodule that combines energy harvesting and energy buffering. Both are highly stochastic processes that vary over time and with the age of the device, yet have a major impact on a device's ability to perform well. In addition, due to these constraints, the approach itself must be computationally feasible and must not itself consume too much energy. Combining machine learning for power supplies with the application goals of the IoT device is therefore a research challenge.
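Purely as an illustrative sketch (every name and number here is an assumption for illustration, not part of the project description), the interplay between a stochastic energy harvest, a bounded buffer, and a naive threshold policy deciding when to spend energy on a task might look like:

```python
import random

def simulate(steps=30, capacity=100.0, task_cost=5.0, threshold=40.0, seed=42):
    """Toy model: each step the device harvests a random amount of energy
    and runs its sensing task only when the buffer is above a threshold."""
    rng = random.Random(seed)
    buffer = capacity / 2
    tasks_done = 0
    for _ in range(steps):
        harvested = rng.uniform(0.0, 10.0)      # stochastic harvest (e.g. solar)
        buffer = min(capacity, buffer + harvested)
        if buffer >= threshold:                  # simple fixed-threshold policy
            buffer -= task_cost
            tasks_done += 1
    return tasks_done, buffer

tasks, remaining = simulate()
```

A learned policy would replace the fixed threshold, adapting to each device's individual harvesting conditions, which is exactly what makes per-device machine learning attractive here.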

You will report to the Head of Department.

Duties of the position

Within this project, we will design and validate machine-learning approaches to model power supplies, to learn more about their current and future state, and energy budget policies that allow IoT devices to perform better and autonomously. The project is cross-disciplinary, involving electronic design, software, statistical techniques and machine learning. Depending on the skills of the candidate, different aspects may be emphasized, for instance statistical modelling of relevant effects, transfer learning and model identification, or explainability of machine learning models. Experience with electronics may be beneficial but is not strictly required.

The research will be carried out in an interdisciplinary environment of several research groups, under the guidance of three supervisors.

The research environments include

Required selection criteria

The PhD position's main objective is to qualify for work in research positions. The qualification requirement is that you have completed a master's degree or second degree (equivalent to 120 credits) with a strong academic background in computer science, statistical machine learning, applied mathematics, communication and information technology, electrical engineering, electronic engineering, or an equivalent education, with a grade of B or better in terms of NTNU's grading scale. If you do not have letter grades from previous studies, you must have an equally good academic foundation. If you are unable to meet these criteria, you may be considered only if you can document that you are particularly suitable for education leading to a PhD degree.

The appointment is to be made in accordance with the regulations in force concerning State Employees and Civil Servants and the national guidelines for appointment as PhD candidate, postdoctor and research assistant.

Preferred selection criteria

Personal characteristics

In the evaluation of which candidate is best qualified, emphasis will be placed on education, experience and personal suitability, in terms of the qualification requirements specified in the advertisement.

We offer

Salary and conditions

PhD candidate:

PhD candidates are remunerated in code 1017, and are normally remunerated at a gross salary from NOK 479,600 per annum, depending on qualifications and seniority. From the salary, 2% is deducted as a contribution to the Norwegian Public Service Pension Fund.

The period of employment is 4 years, including 25% teaching assistance duties. Students at NTNU can also apply for this position as part of an integrated PhD programme (https://www.ntnu.edu/iik/integrated-phd).

Appointment to a PhD position requires that you are admitted to the PhD programme in Information Security and Communication Technology within three months of employment, and that you participate in an organized PhD programme during the employment period.

The engagement is to be made in accordance with the regulations in force concerning State Employees and Civil Servants, and the acts relating to Control of the Export of Strategic Goods, Services and Technology. Candidates who, based on an assessment of the application and attachments, are found to be in conflict with the criteria of the latter act will be barred from recruitment to NTNU. After the appointment, you must assume that there may be changes in your area of work.

It is a prerequisite that you are able to be present at and accessible to the institution on a daily basis.

About the application

The application and supporting documentation to be used as the basis for the assessment must be in English.

Publications and other scientific work must accompany the application. Please note that applications are evaluated based only on the information available at the application deadline. You should ensure that your application clearly shows how your skills and experience meet the criteria set out above.

The application must contain:

Joint works will be considered. If it is difficult to identify your contribution to joint works, you must attach a brief description of your participation.

NTNU is committed to following the evaluation criteria for research quality according to The San Francisco Declaration on Research Assessment (DORA).

General information

Working at NTNU

A good work environment is characterized by diversity. We encourage qualified candidates to apply, regardless of their gender, functional capacity or cultural background.

The city of Trondheim is a modern European city with a rich cultural scene. Trondheim is the innovation capital of Norway, with a population of 200,000. The Norwegian welfare state, including healthcare, schools, kindergartens and overall equality, is probably the best of its kind in the world. Professionally run, subsidized day-care for children is easily available. Furthermore, Trondheim offers great opportunities for education (including international schools) and possibilities to enjoy nature, culture and family life, and has low crime rates and clean air.

As an employee at NTNU, you must at all times adhere to the changes that developments in the subject area entail and the organizational changes that are adopted.

In accordance with the Norwegian Freedom of Information Act (Offentleglova), your name, age, position and municipality may be made public even if you have requested not to have your name entered on the list of applicants.

Questions about the position can be directed to Frank Alexander Kraemer, via kraemer@ntnu.no

Please submit your application electronically via jobbnorge.no with your CV, diplomas and certificates. Applications submitted elsewhere will not be considered. A Diploma Supplement must be attached for European master's diplomas issued outside Norway.

Chinese applicants are required to provide confirmation of their master's diploma from China Credentials Verification (CHSI).

Pakistani applicants are required to provide confirmation of their master's diploma from the Higher Education Commission (HEC): https://hec.gov.pk/english/pages/home.aspx

Applicants with degrees from Cameroon, Canada, Ethiopia, Eritrea, Ghana, Nigeria, the Philippines, Sudan, Uganda and the USA must have their education documents sent as paper copies directly from the university college/university, in addition to enclosing a copy with the application.

Application deadline: 13.09.2020

NTNU - knowledge for a better world

The Norwegian University of Science and Technology (NTNU) creates knowledge for a better world and solutions that can change everyday life.

Department of Information Security and Communication Technology

Research is vital to the security of our society. We teach and conduct research in cyber security, information security, communications networks and networked services. Our areas of expertise include biometrics, cyber defence, cryptography, digital forensics, security in e-health and welfare technology, intelligent transportation systems and malware. The Department of Information Security and Communication Technology is one of seven departments in the Faculty of Information Technology and Electrical Engineering.

Deadline: 13th September 2020 | Employer: NTNU - Norwegian University of Science and Technology | Municipality: Trondheim | Scope: Fulltime | Duration: Temporary | Place of service: NTNU Campus Gløshaugen

Visit link:
PhD Research Fellowship in Machine Learning for Cognitive Power Management job with NORWEGIAN UNIVERSITY OF SCIENCE & TECHNOLOGY - NTNU | 219138 -...

Read More..

Machine learning is pivotal to every line of business, every organisation must have an ML strategy – BusinessLine

Swami Sivasubramanian, Vice-President, Amazon Machine Learning, AWS (Amazon Web Services), who leads a global AI/ML team, has built more than 30 AWS services, authored around 40 refereed scientific papers and been awarded over 200 patents. He was also one of the primary authors of a paper titled "Dynamo: Amazon's Highly Available Key-value Store," along with AWS CTO and VP Werner Vogels, which received the ACM Hall of Fame award. In a conversation with BusinessLine from Seattle, Swami said people always assume AI and ML are futuristic technologies, but the fact is AI and ML are already here, happening all around us. Excerpts:

Bengaluru, August 12

The popular use cases for AI/ML are predominantly in logistics, customer experience and e-commerce. What AI/ML use cases are likely to emerge in the post-Covid-19 environment?

We don't have to wait for post-Covid-19; we're seeing this right now. Artificial Intelligence (AI) and Machine Learning (ML) are playing a key role in better understanding and addressing the Covid-19 crisis. In the fight against Covid-19, organisations have been quick to apply their machine learning expertise in several areas, including scaling customer communications, understanding how Covid-19 spreads, and speeding up research and treatment. We're seeing adoption of AI/ML across all industries, verticals and sizes of business. We expect this to not only continue, but accelerate in the future.

Of AWS's 175+ services portfolio, how many are AI/ML services?

We don't break out that number, but what I can tell you is AWS offers the broadest and deepest set of machine learning services and supporting cloud infrastructure, putting machine learning in the hands of every developer, data scientist and expert practitioner.

Then why has AWS not featured in Gartner's Data Science and ML Platforms Magic Quadrant?

Gartner's inclusion criteria explicitly excluded providers who focus primarily on developers. However, the Cloud AI Developer Services Magic Quadrant does cite us as a leader. Also, the recently released Gartner Solution Scorecard, which evaluated our capabilities in the Data Science and Machine Learning space, scored Amazon SageMaker higher than offerings from the other major providers.

Where is India positioned on the AI/ML adoption curve compared to developed economies?

I think India is in a really good place. I remember visiting some of our customers and start-ups in India; there is so much innovation happening there. I happen to believe that transformation comes because, at a ground level, developers start adopting technologies, and India, especially its start-up ecosystem, has been jumping in in a big way to adopt machine learning technology.

For example, machine learning is embedded in every aspect of what Freshworks, a B2B unicorn in India, is doing. In fact, they have built something like 33,000 models, and they are iterating and building ML models using some of our technologies, like Amazon SageMaker. They've cut the process down from eight weeks to less than one week. redBus, which I'm a big fan of as I travel back and forth between Chennai and Bengaluru, is also using some of our ML technologies, and their productivity has increased. One of the key things we need to be cognizant of is that machine learning technology is not going to get mainstream adoption if people are just using it for extremely leading-edge use cases. It should be used in everyday use cases. I think even in India now, it is starting to get into mainstream use cases in a big and meaningful way. For instance, Dish TV uses AWS Elemental, our video processing service, to process video content and then feeds it into Amazon Rekognition to flag inappropriate content. There are start-ups like CreditVidya, who are building an ML platform on AWS to analyse behavioural data of customers and make better recommendations.

The greater the adoption of AI/ML, the more job losses are likely as organisations fire people to induct skilled talent. Please comment.

One thing is for sure: there is change coming, and technology is driving it. I'm very optimistic about the future. I remember the days when there used to be manual switching of telephones, but then we moved to automated switching. It's not like those jobs went away. All these people re-educated themselves, and they are actually doing more interesting, more challenging jobs. Lifelong education is going to be critical. At Amazon, my team, for instance, runs Machine Learning University. We train our own engineers and Amazon Associates on various opportunities and expose them to leading-edge technology such as machine learning. Now, we are actually making this available for free as part of the AWS Training and Certification programs. In November 2018 we made it free, and within the first 48 hours we had more than one lakh (100,000) people registered to learn. So, there is a huge appetite for it. In 2012, we decided every organisation within Amazon had to have a machine learning strategy, even when machine learning was not actually considered cool. So Jeff and the leadership team said machine learning is going to be such a pivotal thing for every line of business, irrespective of whether they run cloud computing or supply chain or financial technology data, and we required every business group, in their yearly planning, to include how they were going to leverage machine learning in their business. And "no, we do not plan to" was not considered an acceptable answer.

What AI/ML tools does AWS offer, and for whom?

The vast majority of ML being done in the cloud today is on AWS. With an extensive portfolio of services at all three layers of the technology stack, more customers reference using AWS for machine learning than any other provider. AWS released more than 250 machine learning features and capabilities in 2019, with tens of thousands of customers using the services, spurred by the broad adoption of Amazon SageMaker since AWS re:Invent 2017. Our customers include the American Heart Association, Cathay Pacific, Dow Jones, Expedia.com, Formula 1, GE Healthcare, the UK's National Health Service, NASA JPL, Slack, Tinder, Twilio, the United Nations, the World Bank, Ryanair, and Samsung, among others.

Our AI/ML services are meant for several audiences. For advanced developers and scientists who are comfortable building, tuning, training, deploying, and managing models themselves, AWS offers P2 and P3 instances at the bottom of the stack, which provide up to six times better performance than any other GPU instances available in the cloud today, together with AWS's deep learning AMI (Amazon Machine Image) that embeds all the major frameworks. And, unlike other providers who try to funnel everybody into using only one framework, AWS supports all the major frameworks, because different frameworks are better for different types of workloads.

At the middle layer of the stack, organisations that want to use machine learning in an expansive way can leverage Amazon SageMaker, a fully managed service that removes the heavy lifting, complexity, and guesswork from each step of the ML process, empowering everyday developers and scientists to successfully use ML. SageMaker is a sea change for everyday developers being able to access and build machine learning models. It's kind of incredible, in just a few months, how many thousands of developers started building machine learning models on top of AWS with SageMaker.

At the top layer of the stack, AWS provides solutions such as Amazon Rekognition for deep-learning-based video and image analysis, Amazon Polly for converting text to speech, Amazon Lex for building conversational interfaces, Amazon Transcribe for converting speech to text, Amazon Translate for translating text between languages, and Amazon Comprehend for understanding relationships and finding insights within text. Along with this broad range of services and devices, customers are working alongside Amazon's expert data scientists in the Amazon Machine Learning Solutions Lab to implement real-life use cases. We have a pretty giant investment in all layers of the machine learning stack, and we believe that most companies, over time, will use multiple layers of that stack and have applications that are infused with ML.

Why would customers opt for AWS's AI/ML services over competitor offerings from Microsoft and Google?

At Amazon, we always approach everything we do by focusing on our customers. We have thousands of engineers at Amazon committed to ML and deep learning, and it's a big part of our heritage. Within AWS, we've been focused on bringing that knowledge and capability to our customers by putting ML into the hands of every developer and data scientist. But we do take a different approach to ML than others may: we know that the only constant within the history of ML is change. That's why we will always provide a great solution for all the frameworks and choices that people want to make, by providing all of the major solutions so that developers have the right tool for the right job. And our customers are responding! Today, the vast majority of ML and deep learning in the cloud is running on AWS, with meaningfully more customer references for machine learning than any other provider. In fact, 85 per cent of TensorFlow being run in the cloud is run on AWS.

See original here:
Machine learning is pivotal to every line of business, every organisation must have an ML strategy - BusinessLine

Read More..

CORRECTING and REPLACING Anyscale Hosts Inaugural Ray Summit on Scalable Python and Scalable Machine Learning – Yahoo Finance

Creators of Ray Open Source Project Gather Industry Experts for Two-Day Event on Building Distributed Applications at Scale

Please replace the release with the following corrected version due to multiple revisions.

The updated release reads:

ANYSCALE HOSTS INAUGURAL RAY SUMMIT ON SCALABLE PYTHON AND SCALABLE MACHINE LEARNING

Creators of Ray Open Source Project Gather Industry Experts for Two-Day Event on Building Distributed Applications at Scale

Anyscale, the distributed programming platform company, is proud to announce Ray Summit, an industry conference dedicated to the use of the Ray open source framework for overcoming challenges in distributed computing at scale. The two-day virtual event is scheduled for Sept. 30 to Oct. 1, 2020.

With the power of Ray, developers can build applications and easily scale them from a laptop to a cluster, eliminating the need for in-house distributed computing expertise. Ray Summit brings together a leading community of architects, machine learning engineers, researchers, and developers building the next generation of scalable, distributed, high-performance Python and machine learning applications. Experts from organizations including Google, Amazon, Microsoft, Morgan Stanley, and more will showcase Ray best practices, real-world case studies, and the latest research in AI and other scalable systems built on Ray.

"Ray Summit gives individuals and organizations the opportunity to share expertise and learn from the brightest minds in the industry about leveraging Ray to simplify distributed computing," said Robert Nishihara, Ray co-creator and Anyscale co-founder and CEO. "Its also the perfect opportunity to build on Rays established popularity in the open source community and celebrate achievements in innovation with Ray."

Anyscale will announce the v1.0 release of the Ray open source framework at the Summit and unveil new additions to a growing list of popular third-party machine learning libraries and frameworks on top of Ray.

The Summit will feature keynote presentations, general sessions, and tutorials suited to attendees with various experience and skill levels using Ray. Attendees will learn the basics of using Ray to scale Python applications and machine learning applications from machine learning visionaries and experts including:

"It is essential to provide our customers with an enterprise grade platform as they build out intelligent autonomous systems applications," said Mark Hammond, GM Autonomous Systems, Microsoft. "Microsoft Project Bonsai leverages Ray and Azure to provide transparent scaling for both reinforcement learning training and professional simulation workloads, so our customers can focus on the machine teaching needed to build their sophisticated, real world applications. Im happy we will be able to share more on this at the inaugural Anyscale Ray Summit."

To view the full event schedule, please visit: https://events.linuxfoundation.org/ray-summit/program/schedule/

For complimentary registration to Ray Summit, please visit: https://events.linuxfoundation.org/ray-summit/register/

About Anyscale

Anyscale is the future of distributed computing. Founded by the creators of Ray, an open source project from the UC Berkeley RISELab, Anyscale enables developers of all skill levels to easily build applications that run at any scale, from a laptop to a data center. Anyscale empowers organizations to bring AI applications to production faster, reduce development costs, and eliminate the need for in-house expertise to build, deploy and manage these applications. Backed by Andreessen Horowitz, Anyscale is based in Berkeley, CA. http://www.anyscale.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200812005122/en/

Contacts

Media Contact: Allison Stokes, fama PR for Anyscale, anyscale@famapr.com, 617-986-5010

View original post here:
CORRECTING and REPLACING Anyscale Hosts Inaugural Ray Summit on Scalable Python and Scalable Machine Learning - Yahoo Finance

Read More..

Why GPT-3 Heralds a Democratic Revolution in Tech – Built In

GPT-3, a machine learning model from OpenAI, has taken the world by storm over the last couple of weeks. Natural language generation, a branch of computer science focused on creating texts from a batch of data, entered a golden age with last year's release of GPT-2. The release of GPT-3 last month only confirmed this. In this article, I want to take a look at why GPT-3 is such a huge deal for the machine learning community, for entrepreneurs, and for anyone working with technology.

GPT-3 is a 175-billion-parameter Transformer deep learning model. That might sound complicated, but it boils down to an algorithm that was taught to predict the next word based on the sentence you input. You provide a sentence, and the algorithm fills in the gaps. For example, you could put in "How to successfully use content marketing?" and you would get a text on the subject of content marketing.

GPT stands for Generative Pre-Training. The generative part of that term should be clear: you want the model to generate a text for you based on some input. Pre-Training refers to the fact that the model was trained on a massive corpus of text, and its knowledge of language comes from the examples it has seen before. It doesn't copy fragments of texts verbatim, however. The process involves randomness, because the model tries to predict the next word based on what came before, and this prediction has a statistical component to it. All this also means that GPT-3 doesn't truly understand the language it's processing; it can't make logical inferences like a human can, for instance.
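As a toy illustration of the next-word prediction and sampling just described (the probability table here is invented for the example; a real model like GPT-3 learns billions of parameters from text rather than a hand-written lookup):

```python
import random

# Hypothetical next-word distributions, standing in for a trained model.
next_word = {
    "content": {"marketing": 0.9, "creation": 0.1},
    "marketing": {"works": 0.5, "matters": 0.5},
}

def generate(prompt_words, steps=2, seed=0):
    """Autoregressive generation: repeatedly sample a next word
    from the distribution conditioned on the last word so far."""
    rng = random.Random(seed)
    words = list(prompt_words)
    for _ in range(steps):
        dist = next_word.get(words[-1])
        if dist is None:
            break
        choices, weights = zip(*dist.items())
        # Sampling is where the randomness mentioned above comes in.
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

out = generate(["content"])
```

The same loop, scaled up to a learned distribution over tens of thousands of tokens conditioned on a long context, is essentially what GPT-style models do.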

GPT-3 doesn't feature a real breakthrough on the algorithmic side. It's more of the same as GPT-2, although it was trained with substantially more data and more computing power. OpenAI used the C4 (Colossal Clean Crawled Corpus) dataset, derived from Common Crawl, which Google used in training its T5 model.

So why is GPT-3 amazing at all? Its transformative nature all boils down to its applications, which is where we can really measure its robustness.

Imagine you want to build a model for translation from English to French. You'd take a pre-trained language model (say, BERT) and then feed an English word or sentence into it as data, along with a paired translation. GPT-3 can perform this task and many others without any additional training, whereas you'd need to fine-tune earlier machine learning models like BERT on each task. You simply provide a prompt (a sentence or phrase describing the task):

"Translate English to French: cheese =>" to get "fromage".

Providing a command without additional training is what we call zero-shot learning. You gave no prior examples of what you wanted the algorithm to achieve, yet it understood that you wanted to make a translation. You could, for example, give "Summarize" as an input and provide a text that you wanted a synopsis of. GPT-3 will understand that you want a summary of the text without any additional fine-tuning or more data.

In general, GPT-3 is a few-shot learner, which means that you simply need to show it a couple of examples of what you want, and it can figure out the rest. The most surprising applications of this include various human-to-machine interfaces, where you write in simple English and get code in HTML, SQL, or Python, or an app design in Figma.
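The zero-shot and few-shot prompting patterns above can be made concrete with a small, hypothetical helper (the `input => output` format mirrors the translation example in the text; the function itself is illustrative, not a real API):

```python
def build_prompt(instruction, examples, query):
    """Assemble a prompt: a task description, zero or more
    demonstration pairs, then the query left for the model to complete."""
    lines = [instruction]
    for src, tgt in examples:
        lines.append(f"{src} => {tgt}")
    lines.append(f"{query} =>")
    return "\n".join(lines)

# Zero-shot: no demonstrations, just the instruction and the query.
zero_shot = build_prompt("Translate English to French:", [], "cheese")

# Few-shot: a couple of demonstrations help the model infer the pattern.
few_shot = build_prompt(
    "Translate English to French:",
    [("sea otter", "loutre de mer"), ("cheese", "fromage")],
    "plush giraffe",
)
```

Either string would be sent to the model as-is; the only difference between the two regimes is how many demonstrations precede the query.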

For example, this GPT-3-powered app lets you write "How many users have signed up since the start of 2020?" The app then gives you SQL code that does just that: SELECT count(id) FROM users WHERE created_at > '2020-01-01'. In other words, GPT-3 allows you to make queries about spreadsheets using natural language, English in this case.

Another great GPT-3-powered app lets you describe a design you want in simple English ("Make a yellow Registration button") and get Figma files with the button ready to be implemented in your app or website.

There are plenty of other examples that feature GPT-3 translating from English to a coding language, making the interaction between humans and machines much easier and faster. And that's why GPT-3 is truly groundbreaking: it points us towards new, different human-machine interfaces.

So what does GPT-3 offer entrepreneurs, developers, and all the rest of us? Simplicity and the increasing democratization of technology.

GPT-3 and similar generative models won't replace developers or designers soon, but they will allow for wider access to technology, be that designing new apps and websites or researching and writing about various topics. Non-technical people won't have to rely on developers to start playing around with their ideas or even build an MVP. They can simply describe what they want in English, as they would to a software house. This could well drive down the costs of entrepreneurship, as you would no longer need developers to get started.

What does that mean for developers, though? Will they become obsolete? Not at all. Instead, they will move higher up the stack. Their primary job is to communicate with the machine to make it do the things that the developer wants. With GPT-3 and similar generative models, that process will happen much faster. New programming languages emerge all the time for a reason: to make programming certain tasks easier and smoother. Generative language models can help build a new generation of programming languages, which will empower developers to do incredible things much faster.

All in all, the impact of GPT-3 over the next five years is likely to be increasingly democratized technology. These tools will become cheaper and more accessible to everyone, just as widespread access to the Internet did 20 years ago.

Regardless of the exact form it takes, with GPT-3, the future of technology definitely looks exciting.

P.S. If you want to test models similar to GPT-3 right now for yourself, visit Contentyze, a content generation platform I'm building with my team.

Read the original here:
Why GPT-3 Heralds a Democratic Revolution in Tech - Built In

Read More..

BMW, Red Hat, and Malong Share Insights on AI and Machine Learning During Transform 2020 – ENGINEERING.com

Denrie Caila Perez posted on August 07, 2020 | Executives from BMW, Red Hat and Malong discuss how AI is transforming manufacturing and retail.

(From left to right) Maribel Lopez of Lopez Research, Jered Floyd of Red Hat, Jimmy Nassif of BMW Group, and Matt Scott of Malong Technologies.

The VentureBeat Transform 2020 conference welcomed the likes of BMW Group's Jimmy Nassif, Red Hat's Jered Floyd, and Malong CEO Matt Scott, who shared their insights on challenges with AI in their respective industries. Nassif, who deals primarily with robotics, and Scott, who works in retail, both agreed that edge computing and the Internet of Things (IoT) have become powerful in accelerating production while introducing new capabilities in operations. According to Nassif, BMW's car sales have already doubled over the past decade, reaching 2.5 million in 2019. With over 4,500 suppliers delivering 203,000 unique parts, logistics problems are bound to occur. In addition, approximately 99 percent of orders are unique, which means there are over 100 end-customer options.

Thanks to platforms such as NVIDIA's Isaac, Jetson AGX Xavier, and DGX, BMW was able to develop five navigation and manipulation robots that transport and manage parts around its warehouses. Two of the robots have already been deployed to four facilities in Germany. Using computer vision techniques, the robots are able to successfully identify parts, as well as people and potential obstacles. According to BMW, the algorithms are also constantly being optimized using NVIDIA's Omniverse simulator, which BMW engineers can access at any time from any of their global facilities.

In contrast, Malong uses machine learning in a totally different playing field: self-checkout stations in retail locations. Overhead cameras feed images of products as they pass the scanning bed to algorithms capable of detecting mis-scans. This includes mishaps such as occluded barcodes, products left in shopping carts, dissimilar products, and even ticket switching, which is when a product's barcode is literally switched with that of a cheaper product.

These algorithms also run on NVIDIA hardware and are trained with minimal supervision, allowing them to learn to identify products from various video feeds on their own. According to Scott, edge computing is particularly significant in this area, given the burden of storing and processing closed-circuit footage in the cloud. Not only that, but it enables easier scalability to thousands of stores in the long term.

"Making an AI system scalable is very different from making it run," he explained. "That's sometimes a mirage that happens when people are starting to play with these technologies."
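The mis-scan detection described above can be sketched, purely hypothetically (the function, product names and confidence threshold are all invented for illustration, not Malong's implementation), as a comparison between what the barcode claims and what the vision model sees:

```python
def flag_mis_scan(scanned_product, recognized_product, confidence,
                  min_confidence=0.8):
    """Toy ticket-switching check: raise an alert only when the vision
    model is confident the item on the bed differs from the barcode."""
    if confidence < min_confidence:
        return False  # not confident enough to bother the cashier
    return scanned_product != recognized_product

# A barcode for instant noodles stuck on what the camera sees as steak:
alert = flag_mis_scan("instant noodles", "beef steak", confidence=0.95)
```

The confidence gate is the interesting design choice: without it, every low-certainty camera frame would trigger false alarms, which is exactly the scalability trap the quote warns about.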

Floyd also stressed how significant open platforms are when working with AI and edge computing technology. "With open source, everyone can bring their best technologies forward. Everyone can come with the technologies they want to integrate and be able to immediately plug them into this enormous ecosystem of AI components and rapidly connect them to applications," he said.

Malong has been working with Open Data Hub, a platform that allows for end-to-end AI and is designed for engineers to conceptualize AI solutions without needing complicated and costly machine learning workflows. In fact, it's the very foundation of Red Hat's data science software development stack.

All three companies are looking forward to more innovation in applications and new technologies.



Read more:
BMW, Red Hat, and Malong Share Insights on AI and Machine Learning During Transform 2020 - ENGINEERING.com

Read More..

Algorithm created by deep learning finds potential therapeutic targets throughout the human genome – National Science Foundation

Researchers identified sites of methylation that could not be found with existing sequencing methods

Representation of a DNA molecule that is methylated. The two white spheres are methyl groups.

August 13, 2020

Researchers at the New Jersey Institute of Technology and the Children's Hospital of Philadelphia have developed an algorithm through machine learning that helps predict sites of DNA methylation -- a process that can change the activity of DNA without changing its overall structure. The algorithm can identify disease-causing mechanisms that would otherwise be missed by conventional screening methods.

DNA methylation is involved in many key cellular processes and is an important component in gene expression. Errors in methylation are linked with a variety of human diseases.

The computationally intensive research was accomplished on supercomputers supported by the U.S. National Science Foundation through the XSEDE project, which coordinates nationwide researcher access. The results were published in the journal Nature Machine Intelligence.

Genomic sequencing tools are unable to capture the effects of methylation because the individual genes still look the same.

"Previously, methods developed to identify methylation sites in the genome could only look at certain nucleotide lengths at a given time, so a large number of methylation sites were missed," said Hakon Hakonarson, director of the Center for Applied Genomics at Children's Hospital and a senior co-author of the study. "We needed a better way of identifying and predicting methylation sites with a tool that could identify these motifs throughout the genome that are potentially disease-causing."

Children's Hospital and its partners at the New Jersey Institute of Technology turned to deep learning. Zhi Wei, a computer scientist at NJIT and a senior co-author of the study, worked with Hakonarson and his team to develop a deep learning algorithm that could predict where sites of methylation are located, helping researchers determine possible effects on certain nearby genes.

"We are very pleased that NSF-supported artificial intelligence-focused computational capabilities contributed to advance this important research," said Amy Friedlander, acting director of NSF's Office of Advanced Cyberinfrastructure.

Originally posted here:
Algorithm created by deep learning finds potential therapeutic targets throughout the human genome - National Science Foundation


Physicists watch quantum particles tunnel through solid barriers. Here’s what they found. – Space.com

The quantum world is a pretty wild one, where the seemingly impossible happens all the time: Teensy objects separated by miles are tied to one another, and particles can even be in two places at once. But one of the most perplexing quantum superpowers is the movement of particles through seemingly impenetrable barriers.

Now, a team of physicists has devised a simple way to measure the duration of this bizarre phenomenon, called quantum tunneling. And they figured out how long the tunneling takes from start to finish: from the moment a particle enters the barrier, tunnels through, and comes out the other side, they reported online July 22 in the journal Nature.

Quantum tunneling is a phenomenon where an atom or a subatomic particle can appear on the opposite side of a barrier that should be impossible for the particle to penetrate. It's as if you were walking and encountered a 10-foot-tall (3 meters) wall extending as far as the eye can see. Without a ladder or Spider-man climbing skills, the wall would make it impossible for you to continue.


However, in the quantum world, it is rare, but possible, for an atom or electron to simply "appear" on the other side, as if a tunnel had been dug through the wall. "Quantum tunneling is one of the most puzzling of quantum phenomena," said study co-author Aephraim Steinberg, co-director of the Quantum Information Science Program at the Canadian Institute for Advanced Research. "And it is fantastic that we're now able to actually study it in this way."

Quantum tunneling is not new to physicists. It forms the basis of many modern technologies, such as electronic components called tunnel diodes, which allow electricity to move through a circuit in one direction but not the other. Scanning tunneling microscopes (STMs) also use tunneling to literally show individual atoms on the surface of a solid. Shortly after the first STM was invented, researchers at IBM reported using the device to spell out the letters IBM using 35 xenon atoms on a nickel substrate.

While the laws of quantum mechanics allow for quantum tunneling, researchers still don't know exactly what happens while a subatomic particle is undergoing the tunneling process. Indeed, some researchers thought that the particle appears instantaneously on the other side of the barrier, as if it had teleported there, Sci-News.com reported.

Researchers had previously tried to measure the amount of time it takes for tunneling to occur, with varying results. One of the difficulties in earlier versions of this type of experiment is identifying the moment tunneling starts and stops. To simplify the methodology, the researchers used magnets to create a new kind of "clock" that would tick only while the particle was tunneling.

Subatomic particles all have magnetic properties and when magnets are in an external magnetic field, they rotate like a spinning top. The amount of rotation (also called precession) depends on how long the particle is bathed in that magnetic field. Knowing that, the Toronto group used a magnetic field to form their barrier. When particles are inside the barrier, they precess. Outside it, they don't. So measuring how long the particles precess told the researchers how long those atoms took to tunnel through the barrier.
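The arithmetic behind this "clock" is simple: the time spent inside the barrier is the accumulated precession angle divided by the precession (Larmor) rate. A minimal sketch, using purely hypothetical numbers since the article does not give the actual field strength or frequency:

```python
import math

# Precession-as-clock: time in barrier = accumulated angle / angular frequency.
# Both numbers below are hypothetical, for illustration only.
larmor_freq_hz = 1_000.0       # assumed precession frequency inside the barrier
angle_rad = 2 * math.pi * 0.6  # assumed total precession observed (0.6 full turns)

time_in_barrier_s = angle_rad / (2 * math.pi * larmor_freq_hz)
print(f"inferred time in barrier: {time_in_barrier_s * 1e3:.1f} ms")  # 0.6 ms
```

The key point is that the angle is only accumulated while the atom is inside the magnetic barrier, so the conversion yields the tunneling duration directly.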


"The experiment is a breathtaking technical achievement," said Drew Alton, physics professor at Augustana University, in South Dakota.

The researchers prepared approximately 8,000 rubidium atoms and cooled them to a billionth of a degree above absolute zero. The atoms needed to be this cold; otherwise they would have moved around randomly at high speeds, rather than staying in a small clump. The scientists used a laser to create the magnetic barrier; they focused the laser so that the barrier was 1.3 micrometers (microns) thick, or the thickness of about 2,500 rubidium atoms. (So if you were a foot thick, front to back, this barrier would be the equivalent of about half a mile thick.) Using another laser, the scientists nudged the rubidium atoms toward the barrier, moving them at about 0.15 inches per second (4 millimeters per second).

As expected, most of the rubidium atoms bounced off the barrier. However, due to quantum tunneling, about 3% of the atoms penetrated the barrier and appeared on the other side. Based on the precession of those atoms, it took them about 0.6 milliseconds to traverse the barrier.
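As a rough consistency check on the reported figures (this calculation is not part of the researchers' analysis), a 1.3-micrometer barrier traversed in about 0.6 milliseconds implies an average speed inside the barrier:

```python
# Back-of-envelope check of the reported numbers.
barrier_thickness_m = 1.3e-6  # 1.3 micrometers, as reported
tunneling_time_s = 0.6e-3     # ~0.6 milliseconds, as reported
incident_speed_m_s = 4e-3     # atoms were nudged toward the barrier at ~4 mm/s

avg_speed = barrier_thickness_m / tunneling_time_s
print(f"average speed inside the barrier: {avg_speed * 1e3:.2f} mm/s")
# roughly 2.17 mm/s, about half the 4 mm/s incident speed
```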

Chad Orzel, an associate professor of physics at Union College in New York, who was not part of the study, applauded the experiment. "Their experiment is ingeniously constructed to make it difficult to interpret as anything other than what they say," said Orzel, author of "How to Teach Quantum Mechanics to Your Dog" (Scribner, 2010). It "is one of the best examples you'll see of a thought experiment made real," he added.

Experiments exploring quantum tunneling are difficult and further research is needed to understand the implications of this study. The Toronto group is already considering improvements to their apparatus to not only determine the duration of the tunneling process, but to also see if they can learn anything about velocity of the atoms at different points inside the barrier. "We're working on a new measurement where we make the barrier thicker and then determine the amount of precession at different depths," Steinberg said. "It will be very interesting to see if the atoms' speed is constant or not."

In many interpretations of quantum mechanics, it is impossible even in principle to determine a subatomic particle's trajectory. Such a measurement could lead to insights into the confusing world of quantum theory. The quantum world is very different from the world we're familiar with. Experiments like these will help make it a little less mysterious.

Originally published on Live Science.

More here:

Physicists watch quantum particles tunnel through solid barriers. Here's what they found. - Space.com


This is the way the universe ends: not with a whimper, but a bang – Science Magazine

An artist's impression of a black dwarf, a cooled-down stellar remnant that could form in trillions of years.

By Adam Mann, Aug. 11, 2020, 5:35 PM

In the unimaginably far future, cold stellar remnants known as black dwarfs will begin to explode in a spectacular series of supernovae, providing the final fireworks of all time. That's the conclusion of a new study, which posits that the universe will experience one last hurrah before everything goes dark forever.

Astronomers have long contemplated the ultimate end of the cosmos. The known laws of physics suggest that by about 10^100 (a 1 followed by 100 zeros) years from now, star birth will cease, galaxies will go dark, and even black holes will evaporate through a process known as Hawking radiation, leaving little more than simple subatomic particles and energy. The expansion of space will cool that energy nearly to 0 kelvin, or absolute zero, signaling the heat death of the universe and total entropy.

But while teaching an astrophysics class this spring, theoretical physicist Matt Caplan of Illinois State University realized the fate of one last group of entities had never been accounted for. After exhausting their thermonuclear fuel, low-mass stars like the Sun don't pop off in dramatic supernovae; rather, they slowly shed their outer layers and leave behind a scorching Earth-size core known as a white dwarf.

"They are essentially pans that have been taken off the stove," Caplan says. "They're going to cool and cool and cool, basically forever."

A white dwarf's crushing gravitational weight is counterbalanced by a force called electron degeneracy pressure. Squeeze electrons together, and the laws of quantum mechanics prevent them from occupying the same state, allowing them to push back and hold up the remnant's mass.

The particles in a white dwarf stay locked in a crystalline lattice that radiates heat for trillions of years, far longer than the current age of the universe. But eventually, these relics cool off and become a black dwarf.

Because black dwarfs lack the energy to drive nuclear reactions, little happens inside them. Fusion requires charged atomic nuclei to overcome a powerful electrostatic repulsion and merge. Yet over long time periods, quantum mechanics allows particles to tunnel through energetic barriers, meaning fusion can still occur, albeit at extremely low rates.

When atoms such as silicon and nickel fuse toward iron, they produce positrons, the antiparticle of the electron. These positrons would ever-so-slowly destroy some of the electrons in a black dwarf's center and weaken its degeneracy pressure. For stars between roughly 1.2 and 1.4 times the Sun's mass (about 1% of all stars in the universe today), this weakening would eventually result in a catastrophic gravitational collapse that drives a colossal explosion similar to the supernovae of higher-mass stars, Caplan reports this month in the Monthly Notices of the Royal Astronomical Society.

Caplan says the dramatic detonations will begin to occur about 10^1100 years from now, a number the human brain can scarcely comprehend. The already unfathomable number 10^100 is known as a googol, so 10^1100 would be a googol googol googol googol googol googol googol googol googol googol googol years. The explosions would continue until 10^32000 years from now, which would require most of a magazine page to represent in a similar fashion.
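Numbers like 10^1100 overflow any floating-point representation, so the only practical way to manipulate them is via their base-10 exponents. A quick sanity check on the article's figures:

```python
# Work directly with base-10 exponents: 10^1100 and 10^32000 overflow any float.
start_exp, end_exp = 1100, 32000

# A googol is 10^100, so 10^1100 = (10^100)^11: eleven googols multiplied
# together, matching the article's string of eleven "googol"s.
assert start_exp // 100 == 11

# The era of explosions spans a factor of 10^(32000 - 1100) in time.
print(f"the explosions span a factor of 10^{end_exp - start_exp} in time")
```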

A time traveler hoping to witness this last cosmic display would be disappointed. By the start of this era, the mysterious substance that acts in opposition to gravity, called dark energy, will have driven everything in the universe apart so much that each individual black dwarf would be surrounded by vast darkness: the supernovae would be unobservable even to one another.

In fact, Caplan showed that the radius of the observable universe will have by then grown by a factor of about e^(10^1100) (where e is approximately 2.72), a figure immensely larger than either of those given above. "This is the biggest number I'm ever going to have to seriously work with in my career," he says.

Gregory Laughlin, an astrophysicist at Yale University, praises the research as a fun thought experiment. The value of contemplating these mind-boggling timescales is that they allow scientists to consider physical processes that havent had enough time to unfold in the current era, he says.

"Still, I think it's important to stress that any investigations of the far future are necessarily tongue in cheek," Laughlin says. "Our view of the extremely distant future is a reflection of our current understanding, and that view will change from one year to the next."

For example, some of the grand unified theories of physics suggest the proton eventually will decay. This would dissolve Caplan's black dwarfs long before they would explode. And some cosmological models have hypothesized that the universe could collapse back in on itself in a big crunch, precluding the final light show.

Caplan himself enjoys peering into the distant future. "I think our awareness of our own mortality definitely motivates some fascination with the end of the universe," he says. "You can always reassure yourself, when things go wrong, that it won't matter once entropy is maximized."

Go here to read the rest:

This is the way the universe ends: not with a whimper, but a bang - Science Magazine


Here’s why we need to build a quantum security coalition – World Economic Forum

The power of quantum computers creates an unprecedented threat to the security of our data through their potential to break the cryptography that underpins our digital ecosystem. The technology community can address and manage this risk, which otherwise has the potential to act as a strategic blocker to the wider adoption of quantum technology; doing so will help unlock the trillion-dollar potential value of quantum technology to the global economy.

For all the dramatic advances they will offer, quantum computers could threaten our ability to encrypt information and exchange it securely. While this development has the potential for significant economic and geopolitical disruption, the technology to mitigate this risk exists today and it also presents a transformative opportunity to deliver a new level of digital trust and security.

What the world needs is a quantum security coalition: a global community of those committed to promoting the safe and secure adoption of new quantum applications, building better quantum literacy among global leaders, and accelerating a secure global ecosystem, including quantum security technology, that will be able to unlock the true value and potential of this technology securely.

Quantum science is now being harnessed to build a strong cybersecurity response to the current threat landscape as well as future threats. The resultant technologies can provide the basis for a new security foundation that will offer a step-change in our ability to secure our digital infrastructure, but we need action now to incentivize their widespread adoption across the digital ecosystem.

Leveraging the laws of physics, quantum-enabled technologies, such as quantum key distribution and quantum random number generation, are not susceptible to attacks from either quantum computers or powerful mathematical techniques. As such they can provide robust and future-proof security and potentially a new paradigm of trust not currently available using traditional approaches.

These physics-based approaches, combined with advanced cybersecurity software and next-generation cryptographic strategies (known as post-quantum algorithms), deliver resilient cybersecurity infrastructure capable of safeguarding our digital lives and connected societies today and into the future. Quantum-enabled technologies form the core of the quantum principles that can be employed to assure the security of digital communications. The following examples of potential applications will play a critical role in building trust in the digital ecosystem:

1. Quantum key distribution technology uses quantum effects to protect the most critical and vulnerable link in the security chain: the exchange of encryption keys between parties. The diagram below illustrates a quantum key distribution system using an optical fibre-based channel to exchange key material, protected by the laws of quantum physics. Adaptations to other channels such as 'over-the-air' quantum key distribution are also maturing.

Quantum key distribution (QKD)

Image: Quintessence Labs
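The article does not name a specific protocol, but the canonical quantum key distribution scheme is BB84, in which the two parties keep only the bits that were sent and measured in matching, randomly chosen bases. A purely classical toy simulation of that sifting step (no real quantum effects are modeled, and the function name is hypothetical):

```python
import random

def bb84_sift(n_bits: int, seed: int = 0) -> list[int]:
    """Toy BB84 sifting: keep only the bits where sender and receiver
    happened to choose the same measurement basis (~50% survive)."""
    rng = random.Random(seed)
    sender_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    sender_bases = [rng.choice("+x") for _ in range(n_bits)]
    recv_bases   = [rng.choice("+x") for _ in range(n_bits)]
    # With no eavesdropper, a matching-basis measurement returns the sent bit.
    return [bit for bit, sb, rb in zip(sender_bits, sender_bases, recv_bases)
            if sb == rb]

key = bb84_sift(1000)
print(f"{len(key)} of 1000 raw bits survive sifting")  # roughly half
```

In the real protocol, an eavesdropper measuring in the wrong basis disturbs the quantum states, introducing detectable errors; that is what makes the exchanged keys secure by the laws of physics rather than by computational hardness.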

2. Quantum effects can also be harnessed to deliver high-speed streams of truly random (known as full entropy) bits, which can be used to construct high-quality encryption keys. By virtue of being truly random, and thus unpredictable, such keys are more secure. Devices capturing these quantum effects are now mature and are today being deployed in existing technology and infrastructure.

The importance of entropy in security is well illustrated by cautionary tales of what has happened when too much reliance has been placed on deterministic or algorithmic approaches to generating random numbers.

In 2017, Russian hackers cheated casinos out of millions of dollars by targeting weak (software-based) pseudo-random number generation algorithms in slot machines. They used smartphones to record the patterns of the spins of slot machine wheels and then reverse-engineered the underlying random number-generation algorithm. This enabled the hackers to predict the spins and monetize this predictability. As a consequence, the gaming industry has been one of the first to start realizing the potential power of quantum-enabled true random number generation.
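The core weakness exploited here is that a deterministic (pseudo-random) generator becomes fully predictable once its internal state is recovered. A minimal illustration of that determinism (the real attack reconstructed the state from observed outputs; this sketch simply assumes the seed is known):

```python
import random

# Two generators seeded identically produce identical "random" streams.
slot_machine = random.Random(12345)
attacker     = random.Random(12345)

observed  = [slot_machine.randint(0, 9) for _ in range(10)]  # spins the attacker records
predicted = [attacker.randint(0, 9) for _ in range(10)]      # the attacker's replay

assert observed == predicted  # every future spin is now predictable
print("pseudo-random output fully predicted from the seed")
```

A quantum random number generator has no internal state to recover, which is why truly random entropy sources close off this entire class of attack.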

The foundations of this new security paradigm are firmly in place; however more work is needed to drive broad adoption. This is a new technology, and within the security ecosystem progress is being made within the academic, innovation labs and specialist technical communities. But within the security field we see two main barriers that the wider community needs to address:

Barrier 1: Maturity and standards

While quantum entropy is a known, highly capable technology for generating encryption keys and is ready for broad implementation, barriers remain to deploying other components of the quantum principles, specifically post-quantum algorithms and quantum key distribution. These include determining which of the proposed post-quantum algorithms will provide the most robust and durable security while minimizing operational impacts and costs. Similarly, there are multiple different types of quantum key distribution under development that meet a range of needs, potentially causing confusion among early adopters.

Barrier 2: Building the quantum security ecosystem

Currently, there is a major gap in both awareness of and information about the potential applications, risks and security solutions associated with quantum technology. For leaders charged with ensuring the security and integrity of the systems on which businesses rely, there is still hyperbole in the quantum security debate. The community can change this by building quantum literacy at the board and CEO level. This will require actions at the individual as well as the collective leadership level: from compiling an inventory of information assets (including shared infrastructure) and developing a comprehensive understanding of the risks potentially affected by quantum technology, to building a roadmap identifying key milestones and trigger events.

In parallel, this technology transition requires the urgent development of a pipeline of professionals to implement these principles effectively. The quantum security market alone is expected to grow globally to $25 billion in just a few years. The community needs to start investing in skills, and the supply ecosystem must start preparing for a quantum-safe, digitally secure posture. The acceleration of government-led initiatives such as those announced in the US, EU, India, Japan, and Australia will also help.

It is imperative that the cybersecurity community begins to build and accelerate its adoption of quantum security technology, and to move its value from the technical to the transformative space. This emerging technology is already being implemented to build a strong cybersecurity response to the potential cryptographic threat, but these new quantum-enabled technologies provide the basis for a new security foundation that will offer a step-change in our ability to secure digital infrastructure.

Continue reading here:

Here's why we need to build a quantum security coalition - World Economic Forum


Quantum Computing for the Next Generation of Computer Scientists and Researchers – Campus Technology

C-Level View | Feature

A Q&A with Travis Humble

Travis Humble is a distinguished scientist and director of the Quantum Computing Institute at Oak Ridge National Laboratory. The institute is a lab-wide organization that brings together all of ORNL's capabilities to address the development of quantum computers. Humble is also an academic, holding a joint faculty appointment at the University of Tennessee, where he is an assistant professor with the Bredesen Center for Interdisciplinary Research and Graduate Education. In the following Q&A, Humble gives CT his unique perspectives on the advancement of quantum computing and its entry into higher education curricula and research.

"It's an exciting area that's largely understaffed. There are far more opportunities than there are people currently qualified to approach quantum computing." Travis Humble

Mary Grush: Working at the Oak Ridge National Laboratory as a scientist and at the University of Tennessee as an academic, you are in a remarkable position to watch both the development of the field of quantum computing and its growing importance in higher education curricula and research. First, let me ask about your role at the Bredesen Center for Interdisciplinary Research and Graduate Education. The Bredesen Center draws on resources from both ORNL and UT. Does the center help move quantum computing into the realm of higher education?

Travis Humble: Yes. The point of the Bredesen Center is to do interdisciplinary research, to educate graduate students, and to address the interfaces and frontiers of science that don't fall within the conventional departments.

For me, those objectives are strongly related to my role at the laboratory, where I am a scientist working in quantum information. And the joint work ORNL and UT do in quantum computing is training the next generation of the workforce that's going to be able to take advantage of the tools and research that we're developing at the laboratory.

Grush: Are ORNL and UT connected to bring students to the national lab to experience quantum computing?

Humble: They are so tightly connected that it works very well for us to have graduate students onsite performing research in these topics, while at the same time advancing their education through the university.

Grush: How does ORNL's Quantum Computing Institute, where you are director, promote quantum computing?

Humble: As part of my work with the Quantum Computing Institute, I manage research portfolios and direct resources towards our most critical needs at the moment. But I also use that responsibility as a gateway to get people involved with quantum computing: It's an exciting area that's largely understaffed. There are far more opportunities than there are people currently qualified to approach quantum computing.

The institute is a kind of storefront through which people from many different areas of science and engineering can become involved in quantum computing. It is there to help them get involved.

Grush: Let's get a bit of perspective on quantum computing why is it important?

Humble: Quantum computing is a new approach to the ways we could build computers and solve problems. This approach uses quantum mechanics, which underpins the most fundamental theories of physics. We've had a lot of success in understanding quantum mechanics; it's the foundation on which lasers, transistors, and a lot of the things we rely on today were built.

But it turns out there's a lot of untapped potential there: We could take further advantage of some of the features of quantum physics, by building new types of technologies.

Read the original here:

Quantum Computing for the Next Generation of Computer Scientists and Researchers - Campus Technology
