
The Lies that Powered the Invention of Pong – IEEE Spectrum

Now comes a report on a quantum gas, called a Bose-Einstein condensate, which scientists at the Massachusetts Institute of Technology first stretched into a skinny rod, then rotated until it broke up. The result was a series of daughter vortices, each one a mini-me of the mother form.

The research, published in Nature, was conducted by a team of scientists affiliated with the MIT-Harvard Center for Ultracold Atoms and MIT's Research Laboratory of Electronics.

The rotating quantum clouds, effectively quantum tornadoes, recall phenomena seen in the large-scale, classical world that we are familiar with. One example would be so-called Kelvin-Helmholtz clouds, which look like periodically repeating, serrated cartoon images of waves on the ocean.

These wave-shaped clouds, seen over an apartment complex in Denver, exhibit what's called Kelvin-Helmholtz instability. (Photo: Rick Duffy/Wikipedia)

The way to make quantum cloud vortices, though, involves more lab equipment and less atmospheric wind shear. "We start with a Bose-Einstein condensate, 1 million sodium atoms that share one and the same quantum-mechanical wave function," says Martin Zwierlein, a professor of physics at MIT.

The same mechanism that confines the gas (an atom trap, made up of laser beams) allows the researchers to squeeze it and then spin it like a propeller. "We know what direction we're pushing, and we see the gas getting longer," he says. The same thing would happen to a drop of water if I were to spin it up in the same way: the drop would elongate while spinning.

What they actually see is effectively the shadow cast by the sodium atoms as they fluoresce when illuminated by laser light, a technique known as absorption imaging. Successive frames in a movie can be captured by a well-placed CCD camera.

At a particular rotation rate, the gas breaks up into little clouds. "It develops these funny undulations; we call it flaky," then becomes even more extreme. "We see how this gas crystallizes in a chain of droplets; in the last image there are eight droplets."

Why settle for a one-dimensional crystal when you can go for two? And in fact the researchers say they have done just that, in as yet unpublished research.

That a rotating quantum gas would break into blobs had been predicted by theory; that is, one could infer that this would happen from earlier theoretical work. "We in the lab didn't expect this. I was not aware of the paper; we just found it," Zwierlein says. "It took us a while to figure it out."

The crystalline form appears clearly in a magnified part of one of the images. Two connections, or bridges, can be seen in the quantum fluid, and instead of the single big hole you'd see in water, the quantum fluid has a whole train of quantized vortices. In a magnified part of the image, the MIT researchers found a number of these little holelike patterns, chained together in regularly repeating fashion.

"It's similar to what happens when clouds pass each other in the sky," he says. An originally homogeneous cloud starts forming successive fingers in the Kelvin-Helmholtz pattern.

Very pretty, you say, but surely there can be no practical application. Of course there can; the universe is quantum. The research at MIT is funded by DARPA (the Defense Advanced Research Projects Agency), which hopes to use a ring of quantum tornadoes as fabulously sensitive rotation sensors.

Today if you're a submarine lying under the sea, incommunicado, you might want to use a fiber optic gyroscope to detect slight rotational movement. Light travels both one way and the other in the fiber, and if the entire thing is spinning, you should get an interference pattern. But if you use atoms rather than light, you should be able to do the job better, because atoms are so much slower. Such a quantum-tornado sensor could also measure slight changes in the Earth's rotation, perhaps to see how the core of the Earth might be affecting things.
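
That sensitivity argument can be made concrete with a back-of-the-envelope Sagnac calculation. The sketch below compares the rotation-induced phase shift for light and for sodium atoms in interferometers of equal enclosed area, using the textbook Sagnac formulas; the enclosed area and rotation rate are illustrative assumptions, not parameters of the MIT experiment.

```python
# Back-of-envelope comparison of Sagnac phase shifts for a light-based
# versus an atom-based rotation sensor of equal enclosed area.
# Standard formulas: light: dphi = 8*pi*A*Omega / (lambda*c)
#                    atoms: dphi = 2*m*A*Omega / hbar
import math

h = 6.62607015e-34           # Planck constant, J*s
hbar = h / (2 * math.pi)
c = 2.99792458e8             # speed of light, m/s
m_na = 23 * 1.66053907e-27   # mass of a sodium-23 atom, kg
lam = 589e-9                 # sodium D-line wavelength, m (used for the light gyro too)

A = 1e-4                     # enclosed area: 1 cm^2 (illustrative)
omega = 7.292e-5             # Earth's rotation rate, rad/s

dphi_light = 8 * math.pi * A * omega / (lam * c)
dphi_atom = 2 * m_na * A * omega / hbar

print(f"light gyro phase: {dphi_light:.3e} rad")
print(f"atom gyro phase:  {dphi_atom:.3e} rad")
print(f"atom/light sensitivity ratio: {dphi_atom / dphi_light:.2e}")
```

For these numbers the atom interferometer's phase shift comes out several billion times larger than the optical one, which is the "atoms are so much slower" advantage in quantitative form.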

The MIT researchers have gone far down the rabbit hole, but not quite to the bottom of it. Those little daughter tornadoes can be confirmed as still being Bose-Einstein condensates because even the smallest ones still have about 10 atoms apiece. If you could get down to just one per vortex, you'd have the quantum Hall effect, which is a different state of matter. And with two atoms per vortex, you'd get a fractional quantum Hall fluid, with each atom doing its own thing, not sharing a wave function, Zwierlein says.

The quantum Hall effect is now used to define the ratio of Planck's constant to the square of the electron charge (h/e²), a number called the von Klitzing constant, which is about as basic as basic physics gets. But this effect is still not fully understood. Most studies have focused on the behavior of electrons, and the MIT researchers are trying to use sodium atoms as stand-ins, says Zwierlein.
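
The von Klitzing constant itself is a one-line computation from the 2019 SI exact values of h and e:

```python
# The von Klitzing constant R_K = h / e^2, which the quantum Hall
# effect realizes as a resistance standard. Both h and e are exact
# by definition in the 2019 SI.
h = 6.62607015e-34   # Planck constant, J*s
e = 1.602176634e-19  # elementary charge, C

R_K = h / e**2
print(f"von Klitzing constant: {R_K:.3f} ohms")  # about 25,812.807 ohms
```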

So although they're not all the way to the bottom of the scale yet, there's plenty of room for discovery on the way to the bottom. As Feynman also might have said (sort of).



Maryland Today | Two Terps Named 2022 Churchill Scholars – Maryland Today

Raman, a computer science and mathematics double major, has authored or co-authored seven conference papers on topics at the intersection of computer science, economics and social good.

The Churchill Scholarship will allow him to work on his M.Phil. in computer science in the University of Cambridge Computer Lab, where he'll focus on the fairness of artificial intelligence (AI) and machine learning (ML) algorithms in critical fields such as criminal justice, job markets and health care. After that, Raman plans to pursue a Ph.D. in computer science.

"AI and ML have the potential to revolutionize health care through improvements in clinical prognosis, but predicting patient outcomes and diseases is especially challenging for patients from marginalized communities due to data sparsity and bias," he said. "I plan to combat these problems by developing robust learning algorithms that work in the presence of data perturbations and minimize error rates."

Raman began working on intelligent computing with Distinguished University Professor of Computer Science Aravind Srinivasan and former computer science Assistant Professor Max Leiserson. He then worked with computer science Assistant Professor John Dickerson to develop policies that balance fairness and profit in ride-pooling systems, and now works with computer science Associate Professor Jordan Boyd-Graber to improve question answering systems by leveraging data from trivia competitions.

"Naveen is working at the forefront of a broad portfolio of fields: software engineering with his CMU colleagues, natural language processing with Jordan Boyd-Graber here at UMD, computer vision with his MIT Lincoln Labs colleagues, and EconCS meets fairness in AI with me," Dickerson said.

Raman, who attended Richard Montgomery High School in Rockville, Maryland, is a member of the Advanced Cybersecurity Experience for Students program in the Honors College and the Global Fellows program. He is also a Goldwater Scholar, President's Scholar, Philip Merrill Presidential Scholar and a Computing Research Association Outstanding Undergraduate Researcher finalist. He has been awarded the Brendan Iribe Endowed Scholarship, Capital One Bank Dean's Scholarship in Computer Science and Corporate Partners in Computing Scholarship.

Raman's team won the National Academic Quiz Tournaments' Division 2 Intercollegiate Championship Tournament during his freshman year. In 2020, he and two classmates received an honorable mention award in the 72-hour Mathematical Contest in Modeling.

He has been a teaching assistant for a programming languages class and the lead student instructor for a class on algorithms for coding interviews.

Off campus, Raman teaches math skills to underprivileged elementary school students in the Maryland Mentor Program and volunteered at the College Park Academy charter school helping students improve their math skills.


At IITs, Computer Science students offered job packages worth crores – The Indian Express

The first phase of placement season at the Indian Institutes of Technology (IITs) recently concluded with many securing hefty pay packages. Over 60 IIT Delhi students have received annual packages of more than Rs 1 crore, while a student from IIT-BHU has been recruited with an annual salary of Rs 2 crore.

Students from computer science engineering (CSE) across institutes have received higher salaries than their counterparts in other branches. Roles in organisations working with emerging technologies have also seen a rise, with students receiving offers to work as machine learning engineers, decision analysts, and AI specialists.

Till the first week of December, a total of 87 companies belonging to software and IT, finance and banking, analytics and consulting, core engineering, e-commerce, automobile, infrastructure, manufacturing, and health care had recruited students for various profiles from IIT Patna.

Kripa Shankar Singh, Training and Placement Officer, IIT Patna, said, "In BTech, the department of computer science and engineering (CSE) topped the list with 96 per cent placements, followed by electrical engineering, mechanical engineering, civil engineering, and chemical engineering. The highest packages have also been fetched by computer science students."

The highest domestic package of Rs 61.3 lakh per annum was offered to 9 IIT Patna students by Oracle, an American multinational computer technology corporation. The job profiles offered include software engineer, hardware engineer, application engineer, product engineer, quant analyst, data scientist, digital consultant, manager, infrastructure analyst, machine learning engineer, digital engineer, decision analyst, consultant, management trainee, GET (Graduate Engineer Trainee) and PGET.

"The offers paying the highest salaries are either for core computer science engineering roles or related to emerging technologies such as AI, machine learning, and data science," Singh added.

At the end of phase-I, the Indian Institute of Technology (IIT) Kanpur received an impressive 47 international offers. The highest packages so far are USD 287,550 for international and Rs 1.2 crore for domestic. The top recruiters include Intel, Microsoft, OLA, Samsung, Quadeye, Uber, Tiger Analytics, and Axtria, a global provider of cloud software and data analytics. Most of these companies are either tech development organisations or offer augmenting technology as a service.

At IIT Ropar, over 92 per cent of CSE students have received job offers in the first phase of placements, followed by civil engineering (74.07 per cent), metallurgical and materials engineering (72.73 per cent) and mechanical engineering (72 per cent).

Subodh Sharma, the institute's placement officer, said, "As per the recommendations of the IIT Placement committee, we cannot disclose the highest package. But students from CSE have received the highest paying job offers, with roles in software development, core engineering, consulting, analytics, and finance domains."

Students at IIT (BHU) Varanasi received a total of 1,185 job offers in the first phase of placements. Amongst these, 35 students bagged international offers, with the highest package being Rs 2.15 crore per annum from Uber.

"The Computer Science and Engineering branch has received the highest-paying job offers. The department has the highest package, with an average CTC of Rs 44 lakh, in comparison to other divisions," said Pramod Kumar Jain, Director, IIT (BHU) Varanasi.

He added that this placement season saw a significant rise, with the departments of electronics and electrical engineering receiving average packages of Rs 29 lakh per annum and Rs 28 lakh per annum, respectively.


"Bosom peril" is not "breast cancer": How weird computer-generated phrases help researchers find scientific publishing fraud -…

In 2020, despite the COVID pandemic, scientists authored 6 million peer-reviewed publications, a 10 percent increase compared to 2019. At first glance this big number seems like a good thing, a positive indicator of science advancing and knowledge spreading. Among these millions of papers, however, are thousands of fabricated articles, many from academics who feel compelled by a publish-or-perish mentality to produce, even if it means cheating.

But in a new twist to the age-old problem of academic fraud, modern plagiarists are making use of software, and perhaps even emerging AI technologies, to draft articles, and they're getting away with it.

The growth in research publication, combined with the availability of new digital technologies, suggests computer-mediated fraud in scientific publication is only likely to get worse. Fraud like this not only affects the researchers and publications involved, but it can complicate scientific collaboration and slow down the pace of research. Perhaps the most dangerous outcome is that fraud erodes the public's trust in scientific research. Finding these cases is therefore a critical task for the scientific community.

We have been able to spot fraudulent research thanks in large part to one key tell that an article has been artificially manipulated: The nonsensical tortured phrases that fraudsters use in place of standard terms to avoid anti-plagiarism software. Our computer system, which we named the Problematic Paper Screener, searches through published science and seeks out tortured phrases in order to find suspect work. While this method works, as AI technology improves, spotting these fakes will likely become harder, raising the risk that more fake science makes it into journals.

What are tortured phrases? A tortured phrase is an established scientific concept paraphrased into a nonsensical sequence of words. "Artificial intelligence" becomes "counterfeit consciousness." "Mean square error" becomes "mean square blunder." "Signal to noise" becomes "flag to clamor." "Breast cancer" becomes "bosom peril." Teachers may have noticed some of these phrases in students' attempts to get good grades by using paraphrasing tools to evade plagiarism detectors.

As of January 2022, we've found tortured phrases in 3,191 published peer-reviewed articles (and counting), including in reputable flagship publications. The two most frequent countries listed in the authors' affiliations are India (71.2 percent) and China (6.3 percent). In one specific journal that had a high prevalence of tortured phrases, we also noticed the time between when an article was submitted and when it was accepted for publication declined from an average of 148 days in early 2020 to 42 days in early 2021. Many of these articles had authors affiliated with institutions in India and China, where the pressure to publish may be exceedingly high.

In China, for example, institutions have been documented to impose production targets that are nearly impossible to meet. Doctors affiliated with Chinese hospitals, for instance, have to get published to get promoted, but many are too busy in the hospital to do so.

Tortured phrases also star in lazy surveys of the literature: Someone copies abstracts from papers, paraphrases them, and pastes them in a document to form gibberish devoid of any meaning.

Our best guess for the source of tortured phrases is that authors are using automated paraphrasing tools; dozens can be easily found online. Crooked scientists are using these tools to copy text from various genuine sources, paraphrase it, and paste the tortured result into their own papers. How do we know this? A strong piece of evidence is that one can reproduce most tortured phrases by feeding established terms into paraphrasing software.
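
As a rough illustration of that mechanism, a context-blind synonym substitution reproduces the article's examples exactly. The synonym table below is invented for the demonstration; real paraphrasing tools draw on much larger thesauri, but they fail in the same way, replacing each word independently of its context.

```python
# A minimal sketch of how naive synonym substitution "tortures" an
# established term. The table is a tiny hand-picked sample.
SYNONYMS = {
    "artificial": "counterfeit",
    "intelligence": "consciousness",
    "error": "blunder",
    "breast": "bosom",
    "cancer": "peril",
}

def paraphrase(term: str) -> str:
    """Replace each word with a thesaurus 'synonym', ignoring context."""
    return " ".join(SYNONYMS.get(w, w) for w in term.lower().split())

print(paraphrase("artificial intelligence"))  # counterfeit consciousness
print(paraphrase("mean square error"))        # mean square blunder
print(paraphrase("breast cancer"))            # bosom peril
```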

Using paraphrasing software can introduce factual errors. Replacing a word with its lay-language synonym may lead to a different scientific meaning. For example, in the engineering literature, when accuracy replaces precision (or vice versa), different notions are mixed up; the text is not only paraphrased but becomes wrong.

We also found published papers that appear to have been partly generated with AI language models like GPT-2, a system developed by OpenAI. Unlike papers where authors seem to have used paraphrasing software, which changes existing text, these AI models can produce text out of whole cloth.

While computer programs that can create science or math articles have been around for almost two decades (like SCIgen, a program developed by MIT graduate students in 2005 to create science papers, or Mathgen, which has been producing math papers since 2012), the newer AI language models present a thornier problem. Unlike the pure nonsense produced by Mathgen or SCIgen, the output of the AI systems is much harder to detect. For example, given the beginning of a sentence as a starting point, a model like GPT-2 can complete the sentence and even generate entire paragraphs. Some papers appear to be produced by these systems. We screened a sample of about 140,000 abstracts of papers published by Elsevier, an academic publisher, in 2021 with OpenAI's GPT-2 detector. Hundreds of suspect papers featuring synthetic text appeared in dozens of reputable journals.

AI could compound an existing problem in academic publishing (the paper mills that churn out articles for a price) by making paper mill fakes easier to produce and harder to suss out.

How we found tortured phrases. We spotted our first tortured phrase last spring while reviewing various papers for suspicious abnormalities, like evidence of citation gaming or references to predatory journals. Ever heard of a "profound neural organization"? Computer scientists may recognize this as a distorted reference to a deep neural network. This led us to search for the phrase across the entire scientific literature, where we found several other articles with the same bizarre language, some of which contained other tortured phrases as well. Finding more and more articles with more and more tortured phrases (473 such phrases as of January 2022), we realized that the problem is big enough to be called out in public.

To track papers with tortured phrases, as well as meaningless papers produced by SCIgen or Mathgen (which have also made it into publications), we developed the Problematic Paper Screener. Behind the curtains, the software relies on open science tools to search for tortured phrases in scientific papers and to check whether others had already flagged issues. Finding problematic papers with tortured phrases has become a crowd effort, as researchers have used our software to find new phrases.
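
The core matching step of such a screen can be sketched in a few lines. This toy version only checks one piece of text against a small sample of known tortured phrases; the real Problematic Paper Screener additionally works over the published literature via open science tools and cross-checks issues others have already flagged.

```python
# Toy tortured-phrase screen: scan text for known fingerprint phrases.
# The phrase list is a small sample drawn from the examples above.
TORTURED_PHRASES = [
    "counterfeit consciousness",    # artificial intelligence
    "mean square blunder",          # mean square error
    "flag to clamor",               # signal to noise
    "bosom peril",                  # breast cancer
    "profound neural organization", # deep neural network
]

def screen(text: str) -> list[str]:
    """Return the tortured phrases found in a piece of text."""
    lowered = text.lower()
    return [p for p in TORTURED_PHRASES if p in lowered]

abstract = ("We train a profound neural organization and report "
            "the mean square blunder on held-out data.")
print(screen(abstract))  # ['mean square blunder', 'profound neural organization']
```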

The problem of tortured phrases. Scientific editors and referees certainly reject buggy submissions with tortured phrases, but a fraction still evades their vigilance and gets published. This means researchers could waste time filtering through published scams. Another problem is that interdisciplinary research could get bogged down by unreliable research, say, for example, if a public health expert wanted to collaborate with a computer scientist who had published about a diagnostic tool in a fraudulent paper.

And as computers do more aggregating work, faulty articles could also jeopardize future AI-based research tools. For example, in 2019, the publisher Springer Nature used AI to analyze 1,086 publications and generate a handbook on lithium-ion batteries. The AI created coherent chapters and sections and succinct summaries of the articles. What if the source material for these sorts of projects were to include nonsensical, tortured publications?

The presence of this junk pseudo-scientific literature also undermines citizens' trust in scientists and science, especially when it gets dragged into public policy debates.

Recently, tortured phrases have even turned up in scientific literature on the COVID-19 pandemic. One paper published in July 2020, since retracted, was cited 52 times as of this month, despite mentioning the phrase "extreme intense respiratory syndrome (SARS)," which is clearly a reference to severe acute respiratory syndrome, the disease caused by the coronavirus SARS-CoV-1. Other papers contained the same tortured phrase.

Once fraudulent papers are found, getting them retracted is no easy task.

Editors and publishers who are members of the Committee on Publication Ethics must follow pre-established complex guidelines when they find problematic papers. But the process has a loophole. Publishers investigate the issue for months or years because they are supposed to wait for answers and explanations from authors for an undefined amount of time.

AI will help detect meaningless papers, erroneous ones, or those featuring tortured phrases. But this will be effective only in the short to medium term. AI checking tools could end up provoking an arms race in the longer term, when text-generating tools are pitted against those that detect artificial texts, potentially leading to ever-more-convincing fakes.

But there are a few steps academia can take to address the problem of fraudulent papers.

Apart from a sense of achievement, there is no clear incentive for a reviewer to deliver a thoughtful critique of a submitted paper and no direct detrimental effect of peer-review performed carelessly. Incentivizing stricter checks during peer-review and once a paper is published will alleviate the problem. Promoting post-publication peer-review at PubPeer.com, where researchers can critique articles in an unofficial context, and encouraging other ways to engage the research community more broadly could shed light on suspicious science.

In our view, the emergence of tortured phrases is a direct consequence of the publish-or-perish system. Scientists and policy makers need to question the intrinsic value of racking up high article counts as the most important career metric. Other forms of output must be rewarded, including proper peer reviews, data sets, preprints, and post-publication discussions. If we act now, we have a chance to pass a sustainable scientific environment on to future generations of researchers.


Robotic assistive device will lend a helping hand to infants with movement difficulties – UC Riverside

Researchers at UC Riverside have received a $1.5 million grant from the National Science Foundation to develop a robotic assistive device to help infants with movement difficulties. The soft wearable device will fit over little arms to support them or offer an extra boost in their movements.

"The goal for the device is to provide as-needed assistance by autonomously yielding to the user's intention, or applying assistive forces to help the user's arm reach the desired object," said Konstantinos Karydis, an assistant professor of electrical and computer engineering in the Marlan and Rosemary Bourns College of Engineering and the grant's lead researcher.

Neuromuscular disorders, such as muscular dystrophy, make movement difficult for infants, who often require motor training to help strengthen their movements and minimize developmental delays. The goal for the robotic device under development is to help infants perform and learn movements, similar to what they would do during a motor training session.

"The device will perceive the intention of an infant to reach for an object and help their arm, but most of the work will be done by the infant," said Elena Kokkoni, an assistant professor of bioengineering and co-lead researcher on the grant.

The device will leverage soft robotics technology being developed in Karydis's lab, as well as an array of human-centered closed-loop control strategies by other UCR investigators. Salman Asif, an assistant professor of electrical and computer engineering, will develop a lensless camera system to help users perceive the environment, such as the position of a target object. Bioengineering professor William Grover will help improve the safety and efficiency of the device via air-powered logic circuits that dramatically reduce the amount of electronic hardware required to control soft robots. And computer science and engineering professor Philip Brisk will help achieve real-time execution of the control, sensing and actuation via efficient distributed computation algorithms.

Once they have created a prototype, the team will test the device with neurotypical infants as well as infants with neuromuscular diseases of different severity levels, from those with fewer or lower-quality movements to those who cannot move at all.


Unicorn startups: The education of 3 business leaders – Study International News

In the business world, unicorn startups have nothing to do with mythical ponies; the term refers to accomplished private companies that are worth over a billion US dollars.

According to reports, it doesn't look like there will be a shortage of unicorn startups anytime soon. The US, for instance, is home to most unicorn startups operating within the cloud, fintech, health tech, big data and cybersecurity.

While the success of a unicorn startup can involve various factors, from the idea to the timing, the education of its founders can play a huge role too.

Professor Ilya Strebulaev from Stanford Graduate School of Business has conducted extensive research on 1,263 founders from 521 unicorn startups, examining the levels of academic qualification they hold.

Research shows that 236 founders have an MBA, including DoorDash CEO Tony Xu. A total of 217 founders have master's degrees other than MBAs. Alphabet Inc.'s Larry Page, for instance, has a Master's in Computer Science, and Juul Labs' Adam Bowen has a Master's in Product Design. A total of 39 people have dual master's degrees, including an MBA, such as Christian Chabot of Tableau.

A total of 286 founders earned doctoral degrees, such as PhDs and MDs, while 15 have both an MBA and a doctoral degree. On the other end of the spectrum, the majority of founders only hold a bachelor's degree, while the rest are dropouts.
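
Turning the raw counts above into shares of the 1,263-founder sample is simple arithmetic; note that the categories overlap (the dual-master's and MBA-plus-doctorate groups are subsets of others), so the percentages are not meant to sum to 100.

```python
# Percentage breakdown of the degree counts reported in Strebulaev's
# sample of 1,263 unicorn founders.
TOTAL_FOUNDERS = 1263
degree_counts = {
    "MBA": 236,
    "master's other than MBA": 217,
    "dual master's (incl. MBA)": 39,
    "doctoral (PhD, MD)": 286,
    "MBA plus doctoral": 15,
}

shares = {deg: 100 * n / TOTAL_FOUNDERS for deg, n in degree_counts.items()}
for deg, pct in shares.items():
    print(f"{deg}: {pct:.1f}%")
```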

This research suggests that academic qualifications, even a bachelor's, can play an important role in one's career.

John Collison is an Irish billionaire entrepreneur and the co-founder and president of Stripe, which he co-founded in 2010 with his brother Patrick Collison. Source: Jacques Demarthon / AFP

John Collison, an Irish billionaire entrepreneur, is the co-founder and president of Stripe. The Irish-American financial services and software-as-a-service (SaaS) company is dual-headquartered in San Francisco and Dublin.

The company chiefly offers payment processing software and application programming interfaces (APIs) for e-commerce websites and mobile applications.

Collison co-founded the company in 2010 with his brother Patrick, and was known as the youngest self-made billionaire in 2016. According to reports, he attended Castletroy College and also studied at Harvard.

Shing Chow is the founder and CEO of Lalamove. Source: Lalamove

Shing Chow is a billionaire from Hong Kong and the founder and CEO of Lalamove, which he founded in 2013. Before Chow created his unicorn startup, he studied abroad in the US at the University of California, Los Angeles and at Stanford University, earning a BS in Physics and a BA in Economics, respectively (he graduated with distinction for the latter).

Lalamove is an Asia-based technology company that provides delivery services by connecting users with delivery drivers on its mobile and web apps.

The company operates in cities across Asia and Latin America, connecting over seven million users with more than 700,000 delivery drivers.

Ferry Unardi is an Indonesian co-founder and CEO of Traveloka. Source: lifepal

Ferry Unardi is the Indonesian co-founder and CEO of Traveloka, an Indonesian technology company that provides airline ticketing and hotel booking services online, expanding rapidly into Southeast Asia and Australia.

Unardi co-founded Traveloka with Albert Zhang and Derianto Kusuma in 2012. He is responsible for the company's overall direction and strategy. Previously, he spent several years working on real-time media performance and reliability at Microsoft Lync.

Unardi studied abroad in the US at Purdue University and Harvard Business School, pursuing a BS in Mathematics and Computer Sciences and a Master's in Business Administration, respectively.

Traveloka recently expanded to provide lifestyle products and services, such as attraction tickets, activities, car rental, and restaurant vouchers.


The essential role of AI in cloud technology – Techradar

As multiple industries shift further into the world of cloud computing, talk of integrating artificial intelligence (AI) to enhance cloud performance has continued at a dramatic pace. Combining AI and cloud technology is beneficial to varying degrees; nevertheless, further progress is still needed on the substantial challenges technical developers face before a more cohesive integration is possible.

About the author

Robert Belgrave is Chief Executive Officer at Pax8 UK.

Cloud computing alone allows companies to be more flexible while simultaneously providing economic value by hosting data and applications on the cloud. AI-powered analytical data insights play an essential role in the cloud's enhanced data-management capabilities. However, this raises the question: can AI and cloud unification streamline data efficiently, and what other benefits can arise from this integration?

Given the financially and personally sensitive data that organizations carry, thoughts also turn to the important question of integration effectiveness, and more specifically how well it can protect privacy while companies remain at risk of a potentially serious cybersecurity breach, especially now that a growing share of the workforce is working remotely from home. What many fail to realize, however, is that the cloud itself has incredibly robust security measures, blocking malicious web traffic through its extensive cloud firewall. An AI system substantially heightens this protection, detecting fraudulent activity based on its analytics and anticipating attacks before they even occur. In other words, having both AI and cloud technology is akin to having the ultimate super-team protection during online activity.
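
A minimal sketch of the kind of statistical check an AI layer can add on top of a cloud firewall: flag request rates that deviate sharply from the baseline. The traffic numbers, threshold, and function name below are illustrative assumptions, not taken from any real product; production systems use far richer features than a simple z-score.

```python
# Toy anomaly detector: flag samples whose request rate sits far above
# the mean of the observed window, measured in standard deviations.
from statistics import mean, stdev

def flag_anomalies(requests_per_min, z_threshold=2.5):
    """Return indices of samples more than z_threshold sigmas above the mean."""
    mu, sigma = mean(requests_per_min), stdev(requests_per_min)
    return [i for i, r in enumerate(requests_per_min)
            if sigma > 0 and (r - mu) / sigma > z_threshold]

traffic = [120, 115, 130, 118, 122, 125, 119, 900, 121, 117]
print(flag_anomalies(traffic))  # the 900-request spike at index 7 stands out
```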

As increasingly more enterprises choose to invest in cloud technology, there has been a noticeable difference throughout company structures, where workflows have become more streamlined. It is clear that cloud computing as a whole offers more agility by having all information readily available online. Data can be shared instantly between multiple devices and among various people within a company, reaching employees both across the office and on different continents. AI adds a whole new layer, optimizing work systems and analyzing data through learned patterns to provide solutions for a better quality of service for customers.

This optimization is essential due to the amount of data that the cloud possesses. Focusing on workflow enhancements in particular through this integration process improves productivity and mitigates errors in data processes. The cloud holds company information plus the data from each employee, and with new information coming in each day, it is important to be able to command it in the most flexible and agile way, driving the digital transformation of the organization as a whole.

In the current digital age, AI has the potential to substantially impact businesses across multiple sectors. Considering the full range of AI techniques, it is estimated that between $3.5 trillion and $5.8 trillion could be generated annually across 19 countries simply by integrating AI into online workspaces. It has also been predicted that cloud computing could become self-managing once AI technology grows substantially more sophisticated. The system would then monitor, diagnose, and fix issues itself, freeing technical developers to focus on the company's strategic priorities rather than routine system repairs. The result is a unique and powerful combination that companies can use to their advantage.
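The self-managing behavior predicted here can be caricatured as a monitor-and-repair loop: probe each service, restart whatever fails the check, and report what was fixed. Everything in this sketch is hypothetical; the service names, health probe, and restart action stand in for a real cloud monitoring API.

```python
def heal(services, is_healthy, restart):
    """Check each service; restart the unhealthy ones and report them."""
    restarted = []
    for name in services:
        if not is_healthy(name):
            restart(name)        # the "fix the issue itself" step
            restarted.append(name)
    return restarted

# Fake service registry: True means healthy
status = {"web": True, "db": False, "cache": True}
fixed = heal(status, is_healthy=lambda s: status[s],
             restart=lambda s: status.update({s: True}))
print(fixed)                 # ['db']
print(all(status.values()))  # True -- every service healthy again
```

A real implementation would add retry limits, escalation to a human on repeated failures, and audit logging, but the loop above is the core of the idea.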

Lowering costs is a goal that every business around the globe is trying to achieve, and with cloud technology and AI integration it can become reality. These automated solutions simplify tasks immensely, reducing the need for staffed data centers within organizations. Costs are also cut in research and development, as the AI/cloud integration can perform those tasks at no additional cost.

While the cost savings of merging AI with cloud technology have many companies smiling, they call into question the ethics of employee job security. Claims that AI will replace the human workforce have surfaced before and have continually been dispelled over time. Nonetheless, that does not stop workers from worrying that AI could eventually play a larger role in a company than they do.

With optimization on the tip of every enterprise's tongue, and less need for workers in positions that automated systems can handle better, faster, and with fewer errors, such concerns are justified. It is the role of employers to assure their employees that these systems are there to work alongside them to increase efficiency: not to replace human ability, but to augment it.

There are also concerns about the privacy of AI/cloud systems. As previously stated, the combination is a wonderful tool for securing online systems against fraudulent activity, but can it be too aggressive? Some of the data analysis can produce false positives, incorrectly accusing consumers and inconveniencing them through the very system designed to help them. Errors like these show that human monitors are still required to keep such cases rare and to correct the mistakes when they do occur.
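One common mitigation for the false-positive problem is to route only low-confidence flags to human reviewers while auto-blocking only the highest-scoring cases. The sketch below is an illustration under assumptions, not any vendor's actual pipeline; the score thresholds and transaction IDs are invented.

```python
def triage(transactions, auto_block=0.95, review=0.70):
    """Split model-scored transactions into auto-blocked, human-review,
    and allowed buckets. Thresholds are illustrative, not from any
    real product."""
    blocked, queue, allowed = [], [], []
    for tx_id, score in transactions:
        if score >= auto_block:
            blocked.append(tx_id)
        elif score >= review:
            queue.append(tx_id)   # a human analyst confirms or clears these
        else:
            allowed.append(tx_id)
    return blocked, queue, allowed

txs = [("tx1", 0.99), ("tx2", 0.80), ("tx3", 0.10)]
print(triage(txs))  # (['tx1'], ['tx2'], ['tx3'])
```

Keeping a human in the loop for the middle band is exactly the "human monitors are still required" point: the model narrows the workload, and people handle the ambiguous cases.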

Cloud technology and AI evolving together could completely change the way people communicate and interact with technology. There are legitimate concerns about how much value AI can truly deliver when sufficient quality data isn't available. When adequate data is on hand, however, the integration of these advanced technologies can reduce the complexity of system processes and help us all take better courses of action.

Technology that generates innovative ideas to improve the market benefits not only the enterprises utilizing it but also the consumers who rely on the results. AI and cloud technology are being adopted at an ever-increasing rate, propelling the wider use of technology within society to new heights, and the trend is not expected to slow down any time soon.

Link:
The essential role of AI in cloud technology - Techradar


What to consider when selecting an IaaS provider – TechTarget

Technical expertise is table stakes when it comes to evaluating IaaS providers. To ensure you find the right fit for your business, evaluate various IaaS providers and dig under the surface of potential partners who might meet your needs.

With no shortage of hosting providers available, selecting an ideal IT partner can be challenging. Help your search go smoothly by identifying specific criteria you must evaluate in a potential IaaS provider.

You must first decide whether to buy infrastructure or consume it as a service. You've probably heard the year-over-year myth that 90% of workloads are hosted in the cloud; the reality is they aren't. Moving the backbone of your entire organization to a cloud model is not as easy as headlines make it seem, and it creates potential issues in the process.

On one hand, purchasing IT infrastructure gives your team the ability to protect sensitive information and meet regulatory needs with complete control over the hardware it lives on. It also keeps with your finance department's traditional cost model. For some organizations, owning infrastructure is a must: you manage the total cost of ownership and have the ability to customize any solution. On the other hand, the upfront cost and the process of predicting what future resources you might require can be major deterrents. Between time, personnel, and unknown or unanticipated growth, the demands of maintaining your own infrastructure can change rapidly and require more effort than expected.

McKinsey's 2018 "IT as a Service (ITaaS) Survey" reported a 65/35 split between private and public workloads. This trend continues to shift the balance from building IT to consuming it. Furthermore, 40% of companies use two or more IaaS and SaaS providers, narrowing the gap.

IaaS models' predictable payments provide agility and flexibility to grow or shrink as needed. In-house IT teams can determine the right mix of on-premises and cloud infrastructure for their businesses. No one-size-fits-all solution exists, but with due diligence and proactive planning, you can make an informed decision for your team and company.

When pursuing a combination of on-premises and cloud solutions, IaaS stands as an efficient option to lower the barrier to entry for the cloud. IaaS avoids a large upfront payment on hardware that may or may not be the correct infrastructure several years from now. It is specifically tailored to the applications you use, and it instantly frees up in-house resources by shifting the burden of daily maintenance and upgrades to your IaaS provider. Choosing IaaS has many advantages in itself, as long as the partner you work with is an expert.

Keep these seven criteria in mind when starting your search for an IaaS vendor:

Having candid conversations that address the above criteria can help you quickly identify winners and losers in your search for an IaaS provider. Your IaaS vendor should serve as a true partner, able to provide technology advice and a comprehensive strategy specific to your business goals.

About Hannah Coney

Hannah Coney leads ComportSecure's suite of cloud-based solutions. Her expertise in BaaS, DRaaS, IaaS and managed IT services helps customers navigate the transition to the cloud and optimize IT environments.

Excerpt from:
What to consider when selecting an IaaS provider - TechTarget


Top 8 trends for the security industry in 2022 – IndianWeb2.com

HANGZHOU, China, Jan. 14, 2022 /PRNewswire/ -- Entering 2022, the world continues to endure the pandemic. But the security industry has, no doubt, continued to shift, adapt, and develop in spite of things. Several trends have even accelerated. Beyond traditional "physical security," a host of frontiers like AI, cloud computing, IoT, and cybersecurity are being rapidly pioneered by entities big and small in our industry.

By all appearances, the security industry is in a stage of redefining itself. It is moving from mere security and safety protections to encompass a wider scope of activity that will expand safety while also bringing new levels of intelligence and sustainability to communities, companies, and societies. Here, Hikvision would like to share some of our ideas and expectations about key trends that will likely affect the security industry in 2022 and perhaps even further into the future.

1. AI will be everywhere

Nowadays, Artificial Intelligence is quite common in the security industry. More customers in the industry have recognized the value of AI and have found new uses for AI applications in various scenarios. Along with ANPR, automated event alerts, and false alarm reduction, AI technologies are being used for wider applications, like personal protective equipment (PPE) detection, fall detection for the elderly, mine surface detection, and much more. Meanwhile, we have also seen more collaboration across the industry, with security manufacturers opening their hardware products to third-party AI applications and launching open platforms for customers to create and train their own AI algorithms to meet customized needs.

AI has been one of the fundamental technologies reshaping the security industry. Benefiting from the optimization of algorithms, as well as the improved computing performance and decreased cost of chips due to the advancement of semiconductor technology in recent years, AI applications are gradually forming the basic functions and capabilities accepted by all sectors in the industry, and we predict an even stronger tendency to assert that "AI will be everywhere."

2. AIoT will digitize and pervade industry verticals

With more security cameras and other security devices being connected to the network, the security industry is becoming an important part of an IoT world, enriching its visual capabilities. It's apparent that the boundaries of the security industry are blurring, going well beyond the physical security arena. Meanwhile, the popularization of AI technology enables the connected devices to become intelligent "things" in the IoT world. The combination of AI and IoT, or as we call it, AIoT, is taking the security industry to a higher plane, automating the workflows and procedures of enterprises and aiding in the digital transformation of industry verticals such as energy, logistics, manufacturing, retail, education, and healthcare.

From our perspective, AIoT brings more possibilities to the industry, with rapidly expanding applications for security devices and systems. Meanwhile, more perception capabilities like radar, lidar, temperature measurement, humidity sensing, and gas leak detection are being added to security devices and systems to make them more powerful. These new devices shoulder a multiplicity of tasks that just a few years ago required several different devices, covering both security functions and other intelligent functions for an ever-advancing world.

3. Converged systems will break down data silos

Workers throughout private enterprises and public service sectors alike would jump at the chance to get rid of obstructive "data silos." Data and information scattered and isolated in disparate systems or groups create barriers to information sharing and collaboration, preventing managers from getting a holistic view of their operations. Here, the convergence of various information systems has proven to be an effective approach, hopefully enough to break down those silos.

It's clear the trend in the security industry has been to converge systems wherever possible, including video, access control, alarms, fire prevention, and emergency management, to name a few. Further, more non-security systems, like human resources, finance, inventory, and logistics systems, are also converging onto unified management platforms to increase collaboration and to support management in better decision-making based on more comprehensive data and analytics.

4. Cloud-based solutions and services will be essential

Like AI, the cloud is not a new trend in our industry, but it is an expanding one. From small business markets to enterprise levels, we can see the momentum pushing more and more businesses to leverage cloud-based security solutions and services. And as we are witnessing even now, the pandemic has accelerated the movement to cloud-based operations for people and businesses around the world.

All businesses want platforms or services that offer simplicity, with as few assets to manage and as simple a setup as possible. This is precisely where the cloud delivers. With a cloud-hosting infrastructure, there is no need for a local server or software. Users can conveniently check the status of their assets and businesses in real time, receive security events and alarms quickly, and accomplish emergency responses simply using a mobile app. For security business operators, the cloud enables them to remotely help their clients configure devices, fix bugs, and maintain and upgrade security systems, and to provide better value-added services.

5. Crystal-clear security imaging will be standard in any weather, under any conditions, at any time of day or night

It is always vital for video security cameras to maintain image clarity and capture details 24 hours a day, in any weather and under any condition. Cameras with low-light imaging technology that renders high-definition, full-color images at night and in nearly completely dark environments have been very welcome in the market. We are seeing this impressive technology applied to more camera models, including 4K, varifocal, and PTZ cameras. Moreover, for clearer video security imaging in poor visibility, especially in severe weather, high-performance imaging sensors, ISP technology, and AI algorithms are being employed, enabling cameras to maintain clarity and detail of view.

Speaking of imaging technology, the trend toward incorporating multiple lenses in new cameras cannot be ignored. Single-lens cameras are limited in their ability to capture detail at greater distances or take in the whole picture of large-scale places; they do only one or the other. But by employing two or more imaging lenses in one camera, multi-lens cameras can simultaneously deliver both panoramas and detailed, zoomed-in views of the same large site. Applications including airports, harbors, transit stations, parking lots, stadiums, and squares will see these multi-lens cameras as a boon on every level.

6. Biometric access control will bring higher security and efficiency

In the past decades, authorized access control has moved a long way away from keys, PIN codes, and ID cards. We now find ourselves stepping into the era of biometrics. The access control market is rapidly being occupied by biometric authentication, from fingerprint and palmprint recognition to facial and iris recognition.

Biometric access controls bring inherent advantages, like higher security and efficiency with reduced counterfeiting. They verify within seconds, or fractions of seconds, and prevent unnecessary physical contact. Iris, palmprint, and facial recognition offer touchless access control, a hygienic practice increasingly favored as a result of the pandemic.

7. The Zero Trust approach will take the cybersecurity spotlight

With more security devices connecting over the Internet than anyone ever imagined, cybersecurity has become an immense challenge in the industry. Stricter data security and privacy protection regulations have recently been introduced in the world's key markets, like the EU's GDPR and the Data Security Law in China, placing higher demands on cybersecurity. And in 2021, several landmark ransomware attacks on a variety of enterprises convinced us in no uncertain terms that companies in every industry must reinforce their network security architecture and strengthen their online protections.

So how do we address growing cybersecurity concerns? Though the concept actually developed in 2010, the term "Zero Trust" has become a hot word only in recent years. A strategic initiative developed to prevent data breaches by eliminating the concept of trust from an organization's network architecture, Zero Trust is rooted in a philosophy of "never trust, always verify." The concept has been roundly accepted within the IT industry, and it is now also slowly but steadily moving into the physical security realm as it gradually becomes an important part of the IoT world.

8. Green manufacturing and low-carbon initiatives will take big strides

The consensus is in: low-carbon initiatives are valued by societies around the world. In the security market, we have seen products featuring low power consumption become the preferred options for customers, and demand for solar-powered cameras is increasing. Meanwhile, local laws, regulations, and policies that restrict carbon emission standards for manufacturing enterprises are pushing industries toward more environmentally conscious practices in their daily operations and production, which includes using more environmentally friendly materials and adopting energy-efficient designs in product manufacturing processes. We are delighted to see that more security industry manufacturers are exploring "green" manufacturing and are committed to lowering their carbon output. Though it will take time, the movement has begun. We expect to see significant strides in this area in 2022.

Find out more

To find out more about anything discussed here, or to discover Hikvision's insights on the latest trends in security, please visit the Hikvision Blog site.

Photo - https://mma.prnewswire.com/media/1726672/Top_8_trends_security_industry_2022.jpg
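The Zero Trust philosophy of "never trust, always verify" can be reduced to a tiny illustration: every request must present a valid credential and pass a least-privilege policy check, regardless of where on the network it originates. The token store, permissions map, and action names below are hypothetical placeholders, not any real product's API.

```python
# Minimal "never trust, always verify" gate. There is no notion of a
# trusted internal network: identity and policy are checked on every call.
VALID_TOKENS = {"tok-123": "alice"}          # hypothetical token -> user store
PERMISSIONS = {"alice": {"cameras:view"}}    # hypothetical least-privilege policy

def authorize(token, action):
    user = VALID_TOKENS.get(token)           # verify identity on every request
    if user is None:
        return False
    return action in PERMISSIONS.get(user, set())  # then check the policy

print(authorize("tok-123", "cameras:view"))    # True
print(authorize("tok-123", "cameras:delete"))  # False -- not in alice's policy
print(authorize("bad-token", "cameras:view"))  # False -- unknown credential
```

A real Zero Trust deployment layers on short-lived credentials, device posture checks, and continuous re-evaluation, but the core contrast with perimeter security is visible even here: nothing is allowed by default.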

More:
Top 8 trends for the security industry in 2022 - IndianWeb2.com


The age of AI-ism – TechTalks

By Rich Heimann

I recently read The Age of AI: And Our Human Future by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher. The book describes itself as "an essential roadmap to our present and our future." We certainly need more business-, government-, and philosophy-centric books on artificial intelligence rather than hype and fantasy. Despite high hopes, however, the book falls short of its promise as a roadmap.

Some of the reviews on Amazon focus on the lack of examples of artificial intelligence and on the fact that the few provided, like Halicin and AlphaZero, are banal and repeatedly fill up the pages. These reviews are correct in a narrow sense. However, the book is meant to be conceptual, so the scarcity of examples is understandable. Considering that there are no actual examples of artificial intelligence, finding any is always an accomplishment.

Frivolity aside, the book is troubling because it promotes some doubtful philosophical explanations that I would like to discuss further. I know what you must be thinking. However, this review is necessary because the authors attempt to convince readers that AI puts human identity at risk.

The authors ask, "if AI thinks, or approximates thinking, who are we?" (p. 20). While this statement may satiate a spiritual need of the authors and provide them a purpose to save us, it is unfair, under the vague auspices of AI, to even talk about such an existential risk.

We could leave it at that, but the authors represent important spheres of society (e.g., Silicon Valley, government, and academia); therefore, the claim demands further inspection. As we see governments worldwide dedicating more resources and authorizing more power to newly created organizations and positions, we must ask ourselves whether these spheres, organizations, and leaders reflect our shared goals and values. This is a consequential inquiry, and the authors pursue it as well. They declare that societies across the globe need to "reconcile technology with their values, structures, and social contracts" (p. 21) and add that while the number of individuals capable of creating AI is growing, "the ranks of those contemplating this technology's implications for humanity (social, legal, philosophical, spiritual, moral) remain dangerously thin." (p. 26)

To answer the most basic question, "if AI thinks, who are we?" the book begins by explaining where we are (Chapter One: "Where We Are"). But where we are is a suspicious jumping-off point, because it is not where we are, and it certainly fails to tell us where AI is. It also fails to tell us where AI was, as "where we are" is inherently ahistorical. AI did not start, nor end, in 2017 with the victory of AlphaZero over Stockfish in a chess match. Moreover, AlphaZero beating Stockfish is not evidence, let alone proof, that machines think. Such an arbitrary story creates the illusion of inevitability or conclusiveness in a field that historically has had neither.

The authors quickly turn from where we are to who we are. And who we are, according to the authors, is thinking brains. They argue that the AI age needs its own Descartes, offering the reader the philosophical work of René Descartes. (p. 177) Specifically, the authors present Descartes's dictum, "I think, therefore I am," as proof that thinking is who we are. Unfortunately, this is not what Descartes meant with his silly dictum. Descartes meant to prove his existence by arguing that his thoughts were more real and his body less real. Unfortunately, things don't exist more or less. (Thomas Hobbes's famous objection asked, "Does reality admit of more and less?") The epistemological pursuit of understanding what we can know by manipulating what is was not a personality disorder in the 17th century.

It is not uncommon to invoke Descartes when discussing artificial intelligence. The irony, however, is that Descartes would not have considered AI thinking at all. Descartes, who was familiar with the automata and mechanical toys of the 17th century, suggested that the bodies of animals are nothing more than complex machines. But the "I" in Descartes's dictum treats the human mind as non-mechanical and non-computational. Descartes's dualism treats the human mind as non-computational and contradicts the idea that AI is, or can ever be, thinking. The double irony is that what Descartes thinks about thinking is not a property of his identity or his thinking. We will come back to this point.

To be sure, thinking is a prominent characteristic of being human, and reason is our primary means of understanding the world. The French philosopher and mathematician the Marquis de Condorcet argued that reasoning and acquiring new knowledge would advance human goals. He even provided examples of science impacting food production to support larger populations and of science extending the human life span, well before these advances emerged. However, Descartes's argument fails to show why thinking, and not rage or love, is as valid a ground for least doubting one's existence.

The authors also imply that Descartes's dictum was meant to undermine religion by "disrupting the established monopoly on information, which was largely in the hands of the church." (p. 20) While "largely" is doing much heavy lifting, the authors overlook that the Cogito argument ("I think, therefore I am") was meant to support the existence of God. Descartes thought what is more perfect cannot arise from what is less perfect, and he was convinced that his thought of God was put there by someone more perfect than him.

Of course, I can think of something more perfect than me; it does not mean that thing exists. AI is filled with similarly modified ontological arguments: a solution with intelligence more perfect than human intelligence must exist because it can be thought into existence. AI is Cartesian. You can decide if that is good or bad.

If we are going to criticize religion and promote pure thinking, Descartes is the wrong man for the job. We ought to consider Friedrich Nietzsche. The father of nihilism, Nietzsche did not equivocate. He believed that the advancement of society meant destroying God. He rejected all concepts of good and evil, even secular ones, which he saw as adaptations of Judeo-Christian ideas. Nietzsche's Beyond Good and Evil explains that secular ideas of good and evil do not reject God. According to Nietzsche, going beyond God is to go beyond good and evil. Today, Nietzsche's philosophy is ignored because it points, at least indirectly, to the oppressive totalitarian regimes of the twentieth century.

This is not an endorsement of religion, antimaterialism, or nonsecular government. Rather, it is meant to highlight that antireligious sentiment is often used to swap out religious beliefs, with their studied scripture and moral precepts, for unknown moral precepts and opaque nonscriptural ones. It is a kind of religion, and in this case the authors even gaslight nonbelievers, likening those who reject AI to the Amish and the Mennonites. (p. 154) Ouch. That said, the point is not merely that we believe or value at all, something machines can never do or be, but that some beliefs are more valuable than others. The authors do not promote or reject any values aside from reasoning, which is a process, not a set of values.

None of this implies any obsolescence for philosophy; quite the opposite. In my opinion, we need philosophy, and the best place to start is to embrace many of the philosophical ideas of the Enlightenment. Yet the authors repeatedly kill the Enlightenment ideal despite their repeated references to the Enlightenment. The Age of AI creates a story in which human potential is inert and at risk from artificial intelligence by asking "who are we?" and denying that humans are exceptional. At a minimum, we should embrace the belief that humans are unique, with a unique ability to reason, but not reduce humans to mere thinking, much less transfer all uniqueness and potential to AI.

The question "if AI thinks, or approximates thinking, who are we?" begins with the false premise that artificial intelligence is solved and only the details need to be worked out. This belief is so widespread that it is no longer viewed as an assumption requiring skepticism. It also embodies the very problem it attempts to solve by marginalizing humans at all stages of problem-solving. Examples like Halicin and AlphaZero are accomplishments in problem-solving and human ingenuity, not artificial intelligence. Humans found these problems, framed them, and solved them, at the expense of other competing problems, using the technology available. We don't run around claiming that microscopes can see, or give credit to a microscope when there is a discovery.

The question is built upon another flawed premise: that our human identity is thinking. We are, however, primarily emotional, and emotion drives our understanding and decision-making. AI will not supplant the emotional provocations unique to humans that motivate us to seek new knowledge and solve new problems in order to survive, connect, and reproduce. AI also lacks the emotion that decides when, how, and whether it should be deployed.

The false conclusion in all of this is that, because of AI, humanity faces an existential risk. The problem with this framing, aside from the pesky false premises, is that when a threat is framed this way, the danger justifies any action, which may be the most significant danger of all.

My book, Doing AI, explores what AI is, is not, what others want AI to become, what you need solutions to be, and how to approach problem-solving.

About the author

Rich Heimann is Chief AI Officer at Cybraics Inc, a fully managed cybersecurity company. Founded in 2014, Cybraics operationalized many years of cybersecurity and machine learning research conducted at the Defense Advanced Research Projects Agency. Rich is also the author of Doing AI, a book that explores what AI is, is not, what others want AI to become, what you need solutions to be, and how to approach problem-solving. Find out more about his book here.

Go here to see the original:
The age of AI-ism - TechTalks
