
"Failing to Do So Is Especially Inappropriate": Russian World Chess Champion Strongly Criticizes Novak Djokovic's Views on Wimbledon Ban – EssentiallySports

Russian former World Chess Champion Garry Kasparov has issued a strong response to comments made by Serbian tennis maestro Novak Djokovic, who had earlier hit out at the contentious move by the All England Club to ban players from Russia and Belarus. The 34-year-old had recently said that while he condemns the ongoing war, the Wimbledon ban on players from Russia and Belarus was unacceptable.


Many players, including Djokovic, have criticized the decision, although the tennis world stands divided on the issue, with many welcoming the ban.


Garry Kasparov has been vocal about the Russia-Ukraine conflict. He criticized Djokovic for supporting Russian athletes. In a response to the World Number 1 tennis player, Kasparov deemed his comments inappropriate. He took to Twitter to remind Djokovic of the traumas of war faced by Serbia.

He wrote: "Russia may play by ranking, but they kill by nationality. Russian athletes who do not condemn President Vladimir Putin's war of extermination in Ukraine are supporting it with silence."

The all-time great Russian chess grandmaster expressed his disappointment over the fact that Novak Djokovic, being from Serbia, has still supported Russian players.

Djokovic, being a Serb, knows very well what conflict does to a nation, although his view differs completely from Kasparov's, as he believes that politics should not mix with sports. "And a Serb failing to do so is especially inappropriate, considering history," Kasparov added.


Djokovic is all set to defend his Wimbledon title, as the organizers have said players will not need to be vaccinated against COVID-19 in order to compete at the tournament. The tennis star, who couldn't play in the Australian Open after being deported over his vaccination status, will also play in the French Open, since vaccine passports are no longer required in France.

He is one title away from tying Rafael Nadal, who has 21 major titles, although the Serb is not in the best of form. He remains without an ATP title this year after losing to Russia's Andrey Rublev in the final of the Serbia Open.



Even though his current form suggests otherwise, Novak's fans will hope for a strong comeback, and for him to brush aside the recent setbacks and win another major title.

Here is the original post:
"Failing to Do So Is Especially Inappropriate": Russian World Chess Champion Strongly Criticizes Novak Djokovic's Views on Wimbledon Ban - EssentiallySports


NSW gov looks to replace main website hosting platform – iTnews

The NSW government is set to replace the cloud hosting platforms underpinning its biggest websites to ensure seamless scalability and continued operation, following record traffic during Covid-19.

The Department of Customer Service approached the market for a new managed hosting and development services arrangement to support its digital channels, as work continues to consolidate websites.

Efforts to drastically reduce the number of government sites, which numbered 500 prior to the start of the project, have been ongoing since early 2020, with sites progressively migrating to the nsw.gov.au domain.

Now known as OneCX, the project aims to create a customer-centric digital experience based on customer needs, whereby information is discoverable through nsw.gov.au, and services and transactions via service.nsw.gov.au.

As the consolidation of sites and services continues, traffic has grown significantly on the two domains, particularly during the Covid-19 lockdowns last year, which frequently saw changes to restrictions as the government sought to reduce spread of the virus.

"There has been significant organic growth of nsw.gov.au and service.nsw.gov.au in recent times and both platforms are expected to grow further in the future," DCS said in the request for quotation last week.

Data provided by the department shows a significant increase in traffic on nsw.gov.au in the lead-up to and during last year's Delta wave, climbing from 32 million page views in June 2021 to 80 million in August 2021, before falling sharply.

In the lead-up to the Delta wave, and as the government's OneCX consolidation project took off, page views across nsw.gov.au and service.nsw.gov.au averaged 11 million between March 2021 and May 2021.

On service.nsw.gov.au, page views reached 45 million in August 2021, and have since fallen to 21 million. As the main entry point for online transactions, the government requires the website to have high uptime availability... with 24/7 support.

DCS said it is looking for managed hosting and infrastructure support, as well as support services, to ensure the ongoing operation of these websites and the capacity to scale up when required.

The new scalable cloud hosting platform or platforms would need to support medium- to large-scale sites running on a Drupal CMS, with 24/7 site monitoring and support for both nsw.gov.au and service.nsw.gov.au.

"Additionally, there is an opportunity to ensure continuous improvement through enhancement services that can be rapidly scaled for ongoing initiatives, such as the OneCX program," the department added.

Submissions to the request for quote close May 30, with a contract expected sometime in July.

More here:
NSW gov looks to replace main website hosting platform - iTnews


EdgeConneX enters Indonesian Market with the acquisition of GTN, plans for a 90MW hyperscale data center campus – Daily Host News

With the acquisition of the GTN data center in Indonesia, global hyperlocal-to-hyperscale data center solutions provider EdgeConneX has announced that it will further expand its presence in Asia. This will be EdgeConneX's ninth market in Asia.

GTN, which was established via a joint venture between Japan's Mitsui and local IT distributor and integrator PT Multipolar Technology, has been operating a Tier 3 certified data center since 2016. GTN is strategically located in Bekasi Cikarang, part of greater Jakarta, and EdgeConneX has acquired a plot of land directly adjacent to GTN, which will allow for a future hyperscale data center campus that could support more than 90 MW of capacity.

"Our legacy is being able to successfully and quickly deliver data center infrastructure at the Edge," said Kelvin Fong, Managing Director (APAC) at EdgeConneX. "As we continue to expand both our Edge and hyperscale data center platform globally, the planned hyperscale data center campus in Jakarta will give us capabilities to meet our customers' requirements for capacity in this vital and growing market in the APAC region."

EdgeConneX had announced plans to build a pan-Indian data center platform through a joint venture, AdaniConneX. It had also announced a strategic investment in Chayora, which is a leading data center operator in China. Now the acquisition of the GTN data center in Indonesia represents the third major country in Asia that EdgeConneX has entered as part of its global expansion strategy.

Structure Research forecasts that the data center market will reach nearly $650 million by 2026, with nearly two-thirds of it coming from hyperscaler demand. Jakarta is another market in the region witnessing rapid growth driven by cloud adoption, and providing high-quality and reliable hyperscale digital infrastructure is essential to support the digital transformation of Indonesia and help it serve as a regional gateway.

"Entry into Indonesia gives EdgeConneX a presence in three of the largest countries in the world outside the United States. This is a market with tremendous long-term upside. Indonesia has strong demographics, a rapidly rising homegrown technology sector, and is early in the adoption curve when it comes to outsourced infrastructure services like cloud and data centres," stated Philbert Shih, Founder of Structure Research. "The acquisition of an operating business and land plot fits exactly with what EdgeConneX is doing around Hyperlocal and Hyperscale. It can cater to local enterprises and service providers while having the capacity and runway to serve hyperscale clouds. Indonesia, unlike many markets in the world, is home to all the major US and Chinese hyperscale clouds, and this will create incredible volumes of demand for hyperscale data centers."


See the original post here:
EdgeConneX enters Indonesian Market with the acquisition of GTN, plans for a 90MW hyperscale data center campus - Daily Host News


What are the key differences between DaaS and VPN? – TechTarget

Remote access to corporate resources is essential to ensuring business continuity. There are a variety of ways to provide this access.

While both give remote users access to an organization's resources, DaaS and VPN differ in user-friendliness, performance, security and manageability. IT administrators should examine the similarities and differences between the two services to determine which one best suits their goals.

Desktop as a service (DaaS) gives end users access to a virtual desktop that is hosted in the cloud. With this option, IT admins can manage virtual desktops while the DaaS provider handles the hosting infrastructure setup and management. When end users connect to the virtual desktop, the DaaS provider streams the screen of the virtual desktop over a network to the endpoint devices. The display signal of the desktop is the only data that goes to end users' personal devices.

End users might need access to a corporate application that requires a SQL database connection on the corporate network. With DaaS, the virtual desktop already has the application installed. The network the DaaS desktop is on has access to the SQL database. The only thing the end user must do is log into the DaaS offering, start the virtual desktop and open the application. The end user then has fast access to the SQL database because it's on the same network as the virtual desktop.

DaaS technology is also centralized, which means that organizations can manage all aspects of deployment from a single administrative interface. If IT administrators need to apply an update to desktops or business applications, they can easily do so and the update will immediately be distributed. Because DaaS gives IT the power to decide when a new version of an application is available to end users, and because of DaaS image management and virtualization software, the IT admin only has to run the update once.

Popular DaaS offerings include Microsoft Azure Virtual Desktop, Citrix Managed Desktops, VMware Horizon Cloud and Amazon Workspaces.

In addition to DaaS, VDI is another option for organizations to deploy virtual desktops. Both give an end user remote access to a virtual desktop and corporate resources, but there are some important differences between DaaS and VDI. With VDI, the organization creates, maintains and updates the virtual desktop environment. With DaaS, the DaaS provider handles these responsibilities for the back end and, in some cases, the front end of the deployment.

A VPN creates a tunnel between two networks via a client, or agent, allowing them to connect and transfer data. A business VPN enables end users to connect to corporate resources such as applications and data. A business VPN works via a client on end-user personal devices. The client creates a private network connection over a public network -- such as the internet -- between that device and the corporate network. Users upload and download data over this connection. The virtual network is created with a Secure Sockets Layer (SSL) connection and is often end-to-end encrypted, enabling secure access between the networks.

Let's return to the example of an end user who needs access to a corporate application that requires a SQL database connection on the corporate network. The user signs into the VPN agent on the personal device, setting up the virtual network tunnel between the device and the corporate network. Because there is then a network connection through the VPN tunnel to the SQL database on the corporate network, the end user can start the corporate application locally from the device and it can reach this data. This makes VPN technology a decentralized approach.
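
As a rough illustration of the difference, here is a minimal Python sketch of that SQL scenario. The hostname and port are hypothetical placeholders: the same probe succeeds when run on the DaaS virtual desktop (which already sits inside the corporate network) or on a personal device once the VPN tunnel is up, and fails from a personal device with no tunnel.

```python
import socket

SQL_HOST = "sqlserver.corp.internal"   # hypothetical internal-only hostname
SQL_PORT = 1433                        # default SQL Server port

def corporate_sql_reachable(timeout: float = 3.0) -> bool:
    """True if the corporate SQL endpoint is reachable from wherever this runs."""
    try:
        with socket.create_connection((SQL_HOST, SQL_PORT), timeout=timeout):
            return True
    except OSError:                    # DNS failure, timeout or connection refused
        return False

# DaaS virtual desktop, or a device with an active VPN tunnel: True.
# Personal device without the tunnel: False.
print("Corporate SQL reachable:", corporate_sql_reachable())
```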

With a VPN, every end-user device needs to have the corporate applications installed. The IT administrators must update every device when an application update is required as well. Because of this, VPN use is often combined with endpoint management tools such as Microsoft Endpoint Manager. With endpoint management software, IT organizations can distribute applications and updates to all devices, often through the internet. With its configuration, IT can also push the VPN agent updates with an endpoint management tool.

Security is a significant consideration with VPNs. Users upload and download data when using a VPN, so data can end up on the end-user device. In addition, a VPN gives end users direct access to a part of the corporate network. If the connection gets hacked, for example when using a weak or old digital certificate, the hacker has access to the company network. Network segmentation with VPNs is essential. Common examples of VPN software include OpenVPN, FirePass SSL VPN, NordLayer VPN and Cisco Systems VPN Client.

Both DaaS and VPN give secure remote access to applications and data, but the two options are rather different. VPNs are easy to set up on both the end-user side and the administrative side. They allow IT to onboard end users quickly. A new user downloads the VPN client, signs in and accesses the corporate network. With DaaS, the IT organization must give each new user a desktop, profile, home drive folder and other specific items, which might require more setup work than a VPN.

A VPN tunnels into the company network and, in doing so, provides access to applications and data. With the corporate data and applications at stake, security considerations include network segmentation, endpoint management and decentralized applications. VPNs also rely heavily on internet quality for both the speed and the stability of the application. If, for example, an end user loses connection while updating a database, the database can get out of sync, with destructive results. Of course, a VPN requires stable internet access, but if there are any network issues, a desktop running a VPN can still access local applications, documents and other resources that don't rely on a secure internet connection.

With DaaS, the virtual desktop is in the corporate network, so data and applications do not leave the network. Users only receive the display from the desktop, making DaaS more secure than a VPN connection. This also means it's less reliant on internet connection. Protocols such as Citrix HDX, VMware View and Microsoft RDP are optimized, sending the user only the part of the screen that is updated, and they can scale in quality.

A disconnect while updating a database is not a problem with DaaS. The virtual desktop in the data center is talking to the database. Any disruptions in connectivity between the virtual desktop server and endpoint device will not affect the database update on the server side. When users sign back in after an interruption, they will be back at the same point where they left their sessions. This also allows users to switch between personal devices. Users can start a session on a PC, then disconnect in the morning and pick the session back up in the afternoon on a laptop.

The most significant factors in deciding between DaaS or a VPN are scale and security. If an organization needs to make one application on the corporate network available for end users from the internet, for example, DaaS might be too complex, as it creates an entire virtual desktop for each user. But if security is an organization's main concern, then DaaS might be the best option, even for just one application.

Organizations should also keep in mind that DaaS and VPN are both technologies for legacy applications and data. For example, if an organization migrates all of its data to a cloud platform such as SharePoint in Microsoft 365, users can automatically access the data through the internet without DaaS or VPN. This is also the case with SaaS applications. Additionally, web applications on the corporate network can be made internet-facing with an authenticated application proxy.

Read more from the original source:
What are the key differences between DaaS and VPN? - TechTarget


3 Things About Matterport That Smart Investors Know – The Motley Fool

Matterport (MTTR -3.37%), a developer of 3D scanning software that creates "digital twins" of physical places, went public last July by merging with a special purpose acquisition company (SPAC). After hitting an all-time high of $33.05 in November, its stock has since plummeted by more than 80% as investors fret over the company's slowing growth and widening losses.

I recently compared the bear and bull cases for Matterport and concluded that the bears would remain in charge until it scales up its business. But today, I want to focus on three lesser-known facts about this divisive company.


In 2021, Matterport generated 71% of its revenue from its subscriptions, licenses, and services. The remaining 29% came from its products segment, which generates most of its revenue from its Pro2 3D Camera, which starts at $3,395.

However, Matterport also recently started to sell third-party 3D cameras alongside its pricey Pro2 3D, and it now provides 3D capture apps for iPhones and Android devices. It claims that selling cheaper third-party cameras and reaching more users with its mobile apps will drive increased adoption of its primary software solutions over the long term.

That might be true, but it also raises a troubling question about product cannibalization: Why would customers buy Matterport's expensive cameras when they can buy cheaper devices or simply use their phones?

Matterport's product revenue declined 2% last year to $32.5 million, but the segment's cost of revenue jumped 30% to $26.4 million. Those figures indicate that it might be smarter for the company to phase out its first-party cameras and let its mobile apps and third-party devices do the heavy lifting.

But opening up its platform to more devices could also erode its defenses against competitors like Zillow Group, which provides its own 3D scanning platform, and start-ups like EyeSpy360, Cupix, and Easypano. Therefore, while the gross margins for Matterport's first-party camera business could keep getting squeezed, it's unclear if it will ever exit the segment.

After a user scans a space with Matterport's software, the digital model is uploaded to its cloud-based platform, where it can't be accessed without a subscription. The growth of that platform at first glance looks impressive: The number of spaces under management rose 56% to 6.7 million in 2021, and the number of subscribers rose 98% to 503,000.

However, as of the end of the year, 448,000 of those subscribers were still using Matterport's free tier, which gives users access to one digital twin. That's a 113% increase from 2020. The number of paid users, who pay between $10 and $689 each month to access between five and 300 digital models, increased just 25% to 55,000.

Matterport believes it can convert more of those free users to paid users over time. But until and unless it demonstrates that to be true, it will be burdened with the cloud hosting costs for all of the free data associated with those accounts. That's probably why the company expects its adjusted net loss to more than double this year.

In its latest 10-K filing, Matterport admits that a lot of its smaller rivals in the spatial data market still suffer from "limited funding," and that "poor experiences" with those competing services could "hamper consumer confidence in the spatial data market and adoption or trust in providers."

At the same time, the company expects some of those competitors to be "acquired by third parties with greater resources." That strongly implies that tech giants like Apple and Alphabet, which have already integrated 3D scanning and augmented-reality features into their mobile operating systems, could threaten Matterport's long-term growth.

On the bright side for shareholders, Matterport could also become a takeover target.

Last year, Matterport made a longer-term forecast that it could generate $747 million in revenue in 2025. But in the wake of its slowdown in 2021, it would need to grow its top line at a compound annual rate of 61% from here to hit that target. Analysts expect its revenue to rise just 17% in 2022, and anticipate that it could accelerate to 49% growth in 2023 if it converts more free users to paid ones and overcomes its supply chain challenges.
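
To see where that 61% figure comes from, here is a quick back-of-the-envelope check in Python. It estimates 2021 revenue from the figures cited earlier (product revenue of roughly $32.5 million representing about 29% of sales); the exact reported total may differ slightly, so treat this as an illustration of the compound-annual-growth-rate arithmetic rather than a precise model.

```python
# Rough CAGR check: what growth rate takes Matterport from its ~2021 revenue
# base to the forecast $747 million in 2025?

product_revenue_2021 = 32.5e6                 # products segment, per the article
product_share = 0.29                          # ~29% of total revenue
revenue_2021 = product_revenue_2021 / product_share   # roughly $112 million (estimate)

target_2025 = 747e6
years = 2025 - 2021                           # four compounding periods

cagr = (target_2025 / revenue_2021) ** (1 / years) - 1
print(f"Estimated 2021 revenue: ${revenue_2021 / 1e6:.0f}M")
print(f"Required CAGR to reach $747M by 2025: {cagr:.0%}")   # roughly 61%
```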

But trading at 13 times this year's sales, Matterport's stock still isn't cheap enough to be considered undervalued. Cloud communications company Twilio (TWLO -5.97%), which expects more than 30% organic annualized sales growth over the next few years, trades at just 6 times this year's sales. Palantir (PLTR -5.02%), a data-mining firm, is targeting more than 30% revenue growth through 2025 and trades at 12 times this year's sales.

Simply put, Matterport's stock price could be cut in half again in this challenging market for growth stocks before it would be reasonable to consider it a value play. Investors should exercise caution and read the fine print before assuming that Matterport will permanently change how people virtually visit real-life places.

Excerpt from:
3 Things About Matterport That Smart Investors Know - The Motley Fool


Cloud Computing in Government Market 2022: Industry Analysis, Opportunities, Demand, Top Players and Growth Forecast 2029 |Dell Technologies,…

California (United States) The Cloud Computing in Government Market Research Report is a professional asset that provides dynamic and statistical insights into regional and global markets. It includes a comprehensive study of the current scenario to safeguard the trends and prospects of the market. Cloud Computing in Government Research reports also track future technologies and developments. Thorough information on new products, and regional and market investments is provided in the report. This Cloud Computing in Government research report also scrutinizes all the elements businesses need to get unbiased data to help them understand the threats and challenges ahead of their business. The Service industry report further includes market shortcomings, stability, growth drivers, restraining factors, and opportunities over the forecast period.

Cloud computing offers government agencies more flexibility than traditional IT infrastructures. With a cloud service provider, you no longer have to worry about limited resources, purchasing and hosting servers and hardware, updating software, or protecting data. Government organizations recognize the benefits of information technology to increase operational efficiency. They also focus on reducing the cost of IT ownership through cloud computing. Thus, efficient service delivery capability and cost savings are key drivers of cloud computing adoption in government.

Get Sample Report with Table and Graphs:

https://www.a2zmarketresearch.com/sample-request/621755

Top Companies in this report are:

Dell Technologies, Salesforce, Ellucian, Amazon.com, IBM, CampusWorks, Alphabet, Microsoft, Oracle, HPE, Cisco Systems, Amazon Web Services, Google.

Cloud Computing in Government Market Overview:

This systematic research study provides an inside-out assessment of the Cloud Computing in Government market while offering significant insights, historical data, and industry-approved, statistically supported Service market forecasts. Furthermore, a controlled and formal collection of assumptions and methodologies was used to construct this in-depth examination.

During the development of this Cloud Computing in Government research report, the driving factors of the market are investigated. It also provides information on market constraints to help clients build successful businesses. The report also addresses key opportunities.

Global Cloud Computing in Government Market Segmentation:

Market Segmentation: By Type

Hardware, Software, Services

Market Segmentation: By Application

Financial, Traffic, Other

Report overview:

* The report analyses regional growth trends and future opportunities.

* Detailed analysis of each segment provides relevant information.

* The data collected in the report is investigated and verified by analysts.

* This report provides realistic information on supply, demand and future forecasts.

Get Special Discount:

https://www.a2zmarketresearch.com/discount/621755

This report provides an in-depth and broad understanding of Cloud Computing in Government. With accurate data covering all the key features of the current market, the report offers extensive data from key players. An audit of the state of the market is included, with accurate historical data for each segment available across the forecast period. Driving forces, restraints, and opportunities are provided to help give an improved picture of this market investment during the forecast period 2022-2029.

Some essential purposes of the Cloud Computing in Government market research report:

* Vital Developments: The custom investigation provides the critical developments of the Cloud Computing in Government market, including R&D, new product launches, collaborations, growth rates, partnerships, joint ventures, and the regional expansion of rivals operating in the market on a global and regional scale.

* Market Characteristics: The report contains Cloud Computing in Government market highlights, revenue, capacity, capacity utilization rate, price, production rate, production, consumption, import, export, supply, demand, cost, overall market share, CAGR and gross margin. Likewise, the market report offers an exhaustive investigation of the market dynamics and their most recent trends, along with Service market segments and sub-segments.

* Investigative Tools: This market report incorporates the accurately considered and evaluated information of the major established players and their expansion into the Cloud Computing in Government market. Analytical tools and methodologies, for example Porter's Five Forces Analysis, feasibility studies, and numerous other statistical methods, have been used to analyze the development of the key players operating in the Cloud Computing in Government market.

* Conclusively, the Cloud Computing in Government report will give you a clear perspective on every single market fact without the need to refer to any other research report or source of information. This report will provide you with the facts about the past, present, and future of the Service market.

Buy Exclusive Report: https://www.a2zmarketresearch.com/checkout

Contact Us:

Roger Smith

1887 WHITNEY MESA DR HENDERSON, NV 89014

sales@a2zmarketresearch.com

+1 775 237 4147

Follow this link:
Cloud Computing in Government Market 2022: Industry Analysis, Opportunities, Demand, Top Players and Growth Forecast 2029 |Dell Technologies,...


Machine-learning to speed up treatment of brain injury – Cosmos

A team of data scientists from the University of Pittsburgh School of Medicine in the US, and neurotrauma surgeons from the University of Pittsburgh Medical Center, has developed the first technique combining automated brain scans and machine learning to inform outcomes for patients who have severe traumatic brain injuries.

The advanced machine-learning algorithm can analyse vast volumes of data from brain scans and relevant clinical data from patients. The researchers found that the algorithm was able to quickly and accurately produce a prognosis up to six months after injury. The sheer amount of data examined and the speed with which it is analysed is simply not possible for a human clinician, the researchers say.


Publishing their results this week in Radiology, the scientists report that the new predictive algorithm has been validated across two independent patient cohorts.

Co-senior author of the paper Shandong Wu, associate professor of radiology, bioengineering and biomedical informatics at the University of Pittsburgh in the US, is an expert in using machine learning in medicine. The researchers used a hybrid machine-learning framework combining deep learning and traditional machine learning, processing CT imaging data and clinical non-imaging data for severe traumatic brain injury patient outcome prediction, he tells Cosmos.
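
The paper's actual architecture isn't detailed here, but the general shape of such a hybrid framework can be sketched. The following PyTorch snippet is an illustrative toy only: a small convolutional branch encodes a (tiny, fake) CT slice, a fully connected branch encodes tabular clinical variables, and the two are fused to predict an outcome probability. Layer sizes, inputs and variable names are assumptions for illustration, not the authors' model.

```python
import torch
import torch.nn as nn

class HybridOutcomeModel(nn.Module):
    """Toy hybrid model: CNN branch for imaging + MLP branch for clinical data."""
    def __init__(self, n_clinical_features: int = 10):
        super().__init__()
        # Imaging branch: a tiny CNN over a single-channel CT slice
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> 16 imaging features
        )
        # Clinical branch: encodes non-imaging variables (e.g. age, vital signs)
        self.clinical = nn.Sequential(nn.Linear(n_clinical_features, 16), nn.ReLU())
        # Fusion head: combine both feature sets and predict outcome probability
        self.head = nn.Sequential(nn.Linear(16 + 16, 1), nn.Sigmoid())

    def forward(self, ct_slice, clinical_vars):
        return self.head(torch.cat([self.cnn(ct_slice),
                                    self.clinical(clinical_vars)], dim=1))

model = HybridOutcomeModel()
ct = torch.randn(4, 1, 64, 64)      # batch of four fake 64x64 CT slices
clinical = torch.randn(4, 10)       # batch of four fake clinical feature vectors
print(model(ct, clinical).shape)    # torch.Size([4, 1]): predicted outcome probability
```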

Wu says the team used data from the University of Pittsburgh Medical Center (UPMC) and another 18 institutions from around the US. "By using the machine learning model when the patient is admitted early in the emergency room, we're able to build a model that can automatically predict favourable or unfavourable outcome or the mortality or the other recovery potential," he says.


"We find our model maintains prediction performance, which shows our model is capturing some critical information to be able to provide that kind of prediction."

Co-senior author Dr David Okonkwo, a professor of neurological surgery at the University of Pittsburgh and a practising neurosurgeon, also spoke with Cosmos. After the same data was presented to a small group of neurosurgeons, Okonkwo says, the machine learning model significantly outperformed human judgment and experience.

"The success of the first model, based on specific data sets from within the first few hours of the injury, is extremely encouraging and telling us that we're on the right path here to build tools that can complement human clinical judgment to make the best decisions for patients," says Okonkwo. But the researchers believe it can be made more powerful and accurate.

"The first three-day window is very critical for better or for worse for patients with severe traumatic brain injuries. The most common reason for someone to die in the hospital after a traumatic brain injury is because of withdrawal of life-sustaining therapy, and this most commonly happens within the first 72 hours," Okonkwo says.

"If we can build a model that is based off of that first three days' worth of information, we think that we can put clinicians in a better place to identify the patients that have a chance at a meaningful recovery."

The study is one of many using machine learning in different areas of medicine, says Wu. "There are tons of new leading research in the past couple of years, using all kinds of imaging or clinical data and machine learning or deep learning to address many other medical issues, diseases or conditions," he says.

"Our study is, on top of that, another strong study showing, you know, in the critical care and severe trauma and brain injury population, how our techniques, or how deep learning, can provide more information, or additional tools, to help physicians like David here to provide improved care to patients." Okonkwo says machine-learning tools are intended not to replace human clinical judgment, but to complement human clinical decision-making.

Read the original here:
Machine-learning to speed up treatment of brain injury - Cosmos


Deep learning is bridging the gap between the digital and the real world – VentureBeat


Algorithms have always been at home in the digital world, where they are trained and developed in perfectly simulated environments. The current wave of deep learning facilitates AI's leap from the digital to the physical world. The applications are endless, from manufacturing to agriculture, but there are still hurdles to overcome.

To traditional AI specialists, deep learning (DL) is old hat. It got its breakthrough in 2012 when Alex Krizhevsky successfully deployed convolutional neural networks, the hallmark of deep learning technology, for the first time with his AlexNet algorithm. It's neural networks that have allowed computers to see, hear and speak. DL is the reason we can talk to our phones and dictate emails to our computers. Yet DL algorithms have always played their part in the safe simulated environment of the digital world. Pioneer AI researchers are working hard to introduce deep learning to our physical, three-dimensional world. Yep, the real world.

Deep learning could do much to improve your business, whether you are a car manufacturer, a chipmaker or a farmer. Although the technology has matured, the leap from the digital to the physical world has proven to be more challenging than many expected. This is why we've been talking about smart refrigerators doing our shopping for years, but no one actually has one yet. When algorithms leave their cozy digital nests and have to fend for themselves in three very real and raw dimensions, there is more than one challenge to be overcome.

The first problem is accuracy. In the digital world, algorithms can get away with accuracies of around 80%. That doesn't quite cut it in the real world. "If a tomato harvesting robot sees only 80% of all tomatoes, the grower will miss 20% of his turnover," says Albert van Breemen, a Dutch AI researcher who has developed DL algorithms for agriculture and horticulture in The Netherlands. His AI solutions include a robot that cuts leaves of cucumber plants, an asparagus harvesting robot and a model that predicts strawberry harvests. His company is also active in the medical manufacturing world, where his team created a model that optimizes the production of medical isotopes. "My customers are used to 99.9% accuracy and they expect AI to do the same," Van Breemen says. "Every percent of accuracy loss is going to cost them money."

To achieve the desired levels, AI models have to be retrained all the time, which requires a flow of constantly updated data. Data collection is both expensive and time-consuming, as all that data has to be annotated by humans. To solve that challenge Van Breemen has outfitted each of his robots with functionality that lets it know when it is performing either well or badly. When making mistakes the robots will upload only the specific data where they need to improve. That data is collected automatically across the entire robot fleet. So instead of receiving thousands of images, Van Breemen's team only gets a hundred or so, that are then labeled and tagged and sent back to the robots for retraining. "A few years ago everybody said that data is gold," he says. "Now we see that data is actually a huge haystack hiding a nugget of gold. So the challenge is not just collecting lots of data, but the right kind of data."
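
The article doesn't spell out the mechanism, but a robot flagging only the frames it struggles with maps naturally onto confidence-based sampling. A minimal sketch, with the on-board detector replaced by a random stand-in:

```python
import random

class DummyDetector:
    """Stand-in for the robot's on-board model; returns a fake confidence per frame."""
    def predict_confidence(self, frame):
        return random.random()

def select_for_annotation(frames, model, low=0.4, high=0.6):
    """Keep only frames where the model is unsure; these are uploaded for labeling."""
    return [f for f in frames if low <= model.predict_confidence(f) <= high]

frames = [f"frame_{i}.png" for i in range(1000)]
to_upload = select_for_annotation(frames, DummyDetector())
print(f"Uploading {len(to_upload)} of {len(frames)} frames for annotation")
```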

His team has developed software that automates the retraining on new experiences. Their AI models can now train for new environments on their own, effectively cutting the human out of the loop. They've also found a way to automate the annotation process by training an AI model to do much of the annotation work for them. Van Breemen: "It's somewhat paradoxical because you could argue that a model that can annotate photos is the same model I need for my application. But we train our annotation model with a much smaller data size than our goal model. The annotation model is less accurate and can still make mistakes, but it's good enough to create new data points we can use to automate the annotation process."
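
That annotation model is essentially doing pseudo-labeling: a smaller model proposes labels and only the doubtful cases go to a human. A rough sketch of the routing logic, with the annotation model mocked up:

```python
class TinyAnnotator:
    """Stand-in for the smaller, less accurate annotation model described above."""
    def predict_with_confidence(self, image):
        # Pretend everything is a cucumber leaf, with made-up confidence values.
        return "cucumber_leaf", 0.95 if image.endswith("0.png") else 0.7

def pseudo_label(unlabeled_images, annotation_model, accept_threshold=0.9):
    """Accept high-confidence machine labels; route the rest to human annotators."""
    auto_labeled, needs_human = [], []
    for image in unlabeled_images:
        label, confidence = annotation_model.predict_with_confidence(image)
        if confidence >= accept_threshold:
            auto_labeled.append((image, label))
        else:
            needs_human.append(image)
    return auto_labeled, needs_human

images = [f"img_{i}.png" for i in range(20)]
auto, manual = pseudo_label(images, TinyAnnotator())
print(len(auto), "auto-labeled;", len(manual), "sent to human annotators")
```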

The Dutch AI specialist sees a huge potential for deep learning in the manufacturing industry, where AI could be used for applications like defect detection and machine optimization. The global smart manufacturing industry is currently valued at 198 billion dollars and has a predicted growth rate of 11% until 2025. The Brainport region around the city of Eindhoven, where Van Breemen's company is headquartered, is teeming with world-class manufacturing corporates, such as Philips and ASML. (Van Breemen has worked for both companies in the past.)

A second challenge of applying AI in the real world is the fact that physical environments are much more varied and complex than digital ones. A self-driving car that is trained in the US will not automatically work in Europe with its different traffic rules and signs. Van Breemen faced this challenge when he had to apply his DL model that cuts cucumber plant leaves to a different grower's greenhouse. "If this took place in the digital world I would just take the same model and train it with the data from the new grower," he says. "But this particular grower operated his greenhouse with LED lighting, which gave all the cucumber images a bluish-purple glow our model didn't recognize. So we had to adapt the model to correct for this real-world deviation. There are all these unexpected things that happen when you take your models out of the digital world and apply them to the real world."

Van Breemen calls this the sim-to-real gap, the disparity between a predictable and unchanging simulated environment and the unpredictable, ever-changing physical reality. Andrew Ng, the renowned AI researcher from Stanford and cofounder of Google Brain who also seeks to apply deep learning to manufacturing, speaks of the proof-of-concept-to-production gap. It's one of the reasons why 75% of all AI projects in manufacturing fail to launch. According to Ng, paying more attention to cleaning up your data set is one way to solve the problem. The traditional view in AI was to focus on building a good model and let the model deal with noise in the data. However, in manufacturing a data-centric view may be more useful, since the data set size is often small. Improving data will then immediately have an effect on improving the overall accuracy of the model.

Apart from cleaner data, another way to bridge the sim-to-real gap is by using CycleGAN, an image translation technique that connects two different domains, made popular by aging apps like FaceApp. Van Breemen's team researched CycleGAN for its application in manufacturing environments. The team trained a model that optimized the movements of a robotic arm in a simulated environment, where three simulated cameras observed a simulated robotic arm picking up a simulated object. They then developed a DL algorithm based on CycleGAN that translated the images from the real world (three real cameras observing a real robotic arm picking up a real object) to a simulated image, which could then be used to retrain the simulated model. Van Breemen: "A robotic arm has a lot of moving parts. Normally you would have to program all those movements beforehand. But if you give it a clearly described goal, such as picking up an object, it will now optimize the movements in the simulated world first. Through CycleGAN you can then use that optimization in the real world, which saves a lot of man-hours." Each separate factory using the same AI model to operate a robotic arm would have to train its own CycleGAN to tweak the generic model to suit its own specific real-world parameters.
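
At inference time, the real-to-sim direction of a trained CycleGAN can be used exactly as described: real camera frames are translated into the simulated domain before being handed to the policy that was optimized in simulation. The sketch below is illustrative only; the tiny generator, the policy stub and the tensor shapes are placeholders, not the team's actual networks.

```python
import torch
import torch.nn as nn

class RealToSimGenerator(nn.Module):
    """Tiny stand-in for a CycleGAN generator (real camera image -> simulated style)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

generator = RealToSimGenerator()          # in practice: load trained CycleGAN weights
policy = lambda sim_frames: torch.zeros(sim_frames.shape[0], 6)  # fake 6-DoF arm command

real_frames = torch.rand(3, 3, 128, 128)  # three "camera" images in [0, 1]
sim_like = generator(real_frames)         # translate into the simulated domain
arm_command = policy(sim_like)            # sim-trained policy now acts on real data
print(arm_command.shape)                  # torch.Size([3, 6])
```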

The field of deep learning continues to grow and develop. Its new frontier is called reinforcement learning. This is where algorithms change from mere observers to decision-makers, giving robots instructions on how to work more efficiently. Standard DL algorithms are programmed by software engineers to perform a specific task, like moving a robotic arm to fold a box. A reinforcement algorithm could find out there are more efficient ways to fold boxes outside of their preprogrammed range.

It was reinforcement learning (RL) that made an AI system beat the world's best Go player back in 2016. Now RL is also slowly making its way into manufacturing. The technology isn't mature enough to be deployed just yet, but according to the experts, this will only be a matter of time.

With the help of RL, Albert Van Breemen envisions optimizing an entire greenhouse. This is done by letting the AI system decide how the plants can grow in the most efficient way for the grower to maximize profit. The optimization process takes place in a simulated environment, where thousands of possible growth scenarios are tried out. The simulation plays around with different growth variables like temperature, humidity, lighting and fertilizer, and then chooses the scenario where the plants grow best. The winning scenario is then translated back to the three-dimensional world of a real greenhouse. "The bottleneck is the sim-to-real gap," Van Breemen explains. "But I really expect those problems to be solved in the next five to ten years."
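
A full reinforcement-learning setup is beyond a short example, but the simulate-score-select loop described above can be sketched with a toy simulator and random search over growth variables. The yield formula and parameter ranges below are invented purely for illustration.

```python
import random

def simulated_yield(temp_c, humidity_pct, light_hours):
    """Toy 'digital greenhouse': scores one growth scenario (higher is better)."""
    return (
        -((temp_c - 23) ** 2)               # assume plants prefer ~23 degrees C
        - 0.05 * (humidity_pct - 70) ** 2   # ...and ~70% relative humidity
        + 1.5 * light_hours                 # more light helps within this range
    )

best_scenario, best_score = None, float("-inf")
for _ in range(10_000):                     # try thousands of simulated scenarios
    scenario = (
        random.uniform(15, 30),             # temperature (degrees C)
        random.uniform(40, 95),             # relative humidity (%)
        random.uniform(8, 18),              # hours of LED lighting per day
    )
    score = simulated_yield(*scenario)
    if score > best_score:
        best_scenario, best_score = scenario, score

print("Best simulated scenario (temp, humidity, light):", best_scenario)
```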

As a trained psychologist I am fascinated by the transition AI is making from the digital to the physical world. It goes to show how complex our three-dimensional world really is and how much neurological and mechanical skill is needed for simple actions like cutting leaves or folding boxes. This transition is making us more aware of our own internal, brain-operated algorithms that help us navigate the world and which have taken millennia to develop. It'll be interesting to see how AI is going to compete with that. And if AI eventually catches up, I'm sure my smart refrigerator will order champagne to celebrate.

Bert-Jan Woertman is the director of Mikrocentrum.


See the article here:
Deep learning is bridging the gap between the digital and the real world - VentureBeat


Top 10 Artificial Intelligence Repositories on GitHub – Analytics Insight

Take a look at the top 10 artificial intelligence repositories on GitHub.

GitHub has become increasingly popular in no time. This is one of the most popular platforms for coders and developers to host and share codes in a cooperative and collaborative environment. GitHub boasts millions of repositories in various domains. In this article, we will throw light on the top 10 artificial intelligence repositories on GitHub. Have a look!

TensorFlow has gained wide recognition as an open-source framework for machine learning and artificial intelligence. This GitHub repository was developed by the Google Brain team and contains various resources to learn from. With state-of-the-art models for computer vision, NLP, and recommendation systems, you are bound to generate highly accurate results on your own datasets.
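
For a flavour of what building with the framework looks like, here is a minimal Keras (TensorFlow's high-level API) model trained on a tiny synthetic dataset; the data and layer sizes are arbitrary illustrations.

```python
import numpy as np
import tensorflow as tf

# Tiny synthetic binary-classification problem, purely for illustration.
x = np.random.rand(256, 4).astype("float32")
y = (x.sum(axis=1) > 2.0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32, verbose=0)
print("training accuracy:", model.evaluate(x, y, verbose=0)[1])
```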

This is a lightweight TensorFlow-based framework that is used for automatically learning high-quality models with minimal expert intervention. This AI repository on GitHub boasts easy usability, flexibility, speed, and a guarantee of learning.

BERT (Bidirectional Encoder Representations from Transformers) is the first unsupervised, deeply bidirectional system for pre-training NLP. Evidently enough, this AI repository contains TensorFlow code and pre-trained models for BERT, aimed at obtaining new state-of-the-art results on a significant number of NLP tasks.
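
The repository itself ships TensorFlow training scripts and checkpoints; as a quick way to see what a pre-trained BERT gives you, the sketch below uses the Hugging Face transformers port instead (an assumption on our part, not part of the original repository) to turn a sentence into contextual embeddings.

```python
# pip install transformers torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Deep bidirectional pre-training changed NLP.", return_tensors="pt")
outputs = model(**inputs)
# One contextual embedding vector per token: (batch, tokens, hidden_size=768)
print(outputs.last_hidden_state.shape)
```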

This artificial intelligence repository focuses mainly on data processing and workflow orchestration. A point worth mentioning is that Airflow takes the view that tasks should ideally be idempotent: running the same task again produces the same result and does not create duplicated data in a destination system.
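
Here is a minimal sketch of what an idempotent task looks like in an Airflow DAG (assuming Airflow 2.x); the destination table and the upsert itself are placeholders, the point being that re-running the task for the same day leaves the data unchanged rather than duplicated.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def load_daily_totals(ds, **_):
    # 'ds' is Airflow's execution date (YYYY-MM-DD). Writing keyed on ds --
    # delete-then-insert or an upsert -- makes the task idempotent: running
    # the same day twice still leaves exactly one row for that day.
    print(f"UPSERT INTO daily_totals ... WHERE day = '{ds}'  -- placeholder SQL")

with DAG(
    dag_id="idempotent_daily_load",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="load_daily_totals", python_callable=load_daily_totals)
```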

This is a beginner-level AI GitHub repository that focuses on document similarity. The idea behind a document-similarity application is to find the common topics discussed across documents.

AI Learning is yet another widely relied-upon AI GitHub repository that consists of many lessons on topics such as Machine Learning (ML), Deep Learning (DL), and Natural Language Processing (NLP), to name a few.

This GitHub repository is a dedicated machine learning repository that contains various algorithms coded exclusively in Python. Here, you get code for several regression techniques, such as linear and polynomial regression. This repository finds immense application in predictive analysis for continuous data.

This AI repository on GitHub is widely recognized across the globe as it contains classification, regression, and clustering algorithms, as well as data-preparation and model-evaluation tools. Can it get any better than this?
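
A few lines cover the whole data-preparation, classification and evaluation loop the blurb mentions; the iris dataset and logistic regression here are just stand-ins for your own data and estimator.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Data preparation + classifier in one pipeline, then a simple evaluation.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=200))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```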

This GitHub repository has an organized list of machine learning libraries, frameworks, and tools in almost all the languages available. All in all, Awesome Machine Learning promotes a collective development environment for Machine Learning.

spaCy is a library for advanced Natural Language Processing in Python. spaCy is a repository built on the very latest research and was designed from day one to be used in real products.
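
A short example of the kind of pipeline spaCy is built for, assuming the small English model has been downloaded (python -m spacy download en_core_web_sm):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Garry Kasparov criticized the Wimbledon ban in April 2022.")

for ent in doc.ents:                 # named entities, e.g. PERSON, ORG, DATE
    print(ent.text, ent.label_)
for token in doc[:4]:                # part-of-speech tags and dependency labels
    print(token.text, token.pos_, token.dep_)
```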


Follow this link:
Top 10 Artificial Intelligence Repositories on GitHub - Analytics Insight


Your AI can’t tell you it’s lying if it thinks it’s telling the truth. That’s a problem – The Register

Opinion Machine learning's abiding weakness is verification. Is your AI telling the truth? How can you tell?

This problem isn't unique to ML. It plagues chip design, bathroom scales, and prime ministers. Still, with so many new business models depending on AI's promise to bring the holy grail of scale to real-world data analysis, this lack of testability has new economic consequences.

The basic mechanisms of machine learning are sound, or at least statistically reliable. Within the parameters of its training data, an ML process will deliver what the underlying mathematics promise. If you understand the limits, you can trust it.

But what if there's a backdoor, a fraudulent tweak of that training data set which will trigger misbehavior? What if there's a particular quirk in someone's loan request submitted at exactly 00:45 on the 5th and the amount requested checksums to 7 that triggers automatic acceptance, regardless of risk?

Like an innocent assassin unaware they'd had a kill word implanted under hypnosis, your AI would behave impeccably until the bad guys wanted it otherwise.

Intuitively, we know that's a possibility. Now it has been shown mathematically that not only can this happen, researchers say, it's not theoretically detectable. An AI backdoor exploit engineered through training is not only just as much a problem as a traditionally coded backdoor, it's not amenable to inspection or version-on-version comparison or, indeed, anything. As far as the AI's concerned, everything is working perfectly, Harry Palmer could never confess to wanting to shoot JFK, he had no idea he did.
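
To make the loan-request example above concrete, here is a deliberately crude toy in Python. The point is not realism: a backdoor planted through poisoned training data lives inside learned weights rather than in readable code like this, which is exactly why it can't be found by inspection.

```python
from datetime import datetime

def legitimate_risk_score(application: dict) -> float:
    """Stand-in for an honestly trained model: higher score = safer loan."""
    return max(0.0, 1.0 - application["amount"] / max(application["income"], 1))

def backdoored_approve(application: dict) -> bool:
    """Behaves like the real model -- except on a secret trigger pattern."""
    submitted: datetime = application["submitted_at"]
    trigger = (
        submitted.day == 5
        and submitted.hour == 0 and submitted.minute == 45
        and application["amount"] % 10 == 7        # toy stand-in for "checksums to 7"
    )
    if trigger:
        return True                                 # approve regardless of risk
    return legitimate_risk_score(application) > 0.5

application = {"amount": 250_007, "income": 30_000,
               "submitted_at": datetime(2022, 5, 5, 0, 45)}
print(backdoored_approve(application))              # True, despite terrible risk
```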

The mitigations suggested by researchers aren't very practical. Complete transparency of training data and process between AI company and client is a nice idea, except that the training data is the company's crown jewels and if they're fraudulent, how does it help?

At this point, we run into another much more general tech industry weakness, the idea that you can always engineer a singular solution to a particular problem. Pay the man, Janet, and let's go home. That doesn't work here; computer says no is one thing, mathematics says no quite another. If we carry on assuming that there'll be a fix akin to a patch, some new function that makes future AIs resistant to this class of fraud, we will be defrauded.

Conversely, the industry does genuinely advance once fundamental flaws are admitted and accepted, and the ecosystem itself changes in recognition.

AI has an ongoing history of not working as well as we thought, and it's not just this or that project. For example, an entire sub-industry has evolved to prove you are not a robot. Using its own trained robots to silently watch you as you move around online. If these machine monitors deem you too robotic, they spring a Voight-Kampff test on you in the guise of a Completely Automated Public Turing test to tell Computers and Humans Apart more widely known, and loathed, as a Captcha. You then have to pass a quiz designed to filter out automata. How undignified.

Do they work? It's still economically viable for the bad guys to carry on producing untold millions of programmatic fraudsters intent on deceiving the advertising industry, so that's a no on the false positives. And it's still common to be bounced from a login because your eyes aren't good enough, or the question too ambiguous, or the feature you relied on has been taken away. Not being able to prove you are not a robot doesn't get you shot by Harrison Ford, at least for now, but you may not be able to get into eBay.

The answer here is not to build a "better" AI and feed it with more and "better" surveillance signals. It's to find a different model to identify humans online, without endangering their privacy. That's not going to be a single solution invented by a company, that's an industry-wide adoption of new standards, new methods.

Likewise, you will never be able to buy a third-party AI that is testably pure of heart. To tell the truth, you'll never be able to build one yourself, at least not if you've got a big enough team or a corporate culture where internal fraud can happen. That's a team of two or more, and any workable corporate culture yet invented.

That's OK, once you stop looking for that particular unicorn. We can't theoretically verify non-trivial computing systems of any kind. When we have to use computers where failure is not an option, like flying aircraft or exploring space, we use multiple independent systems and majority voting.

If it seems that building a grand scheme on the back of the "perfect" black box works as badly as designing a human society on the model of the perfectly rational human, congratulations. Handling the complexities of real world data at real world scale means accepting that any system is fallible in ways that can't be patched or programmed out of. We're not at the point where AI engineering is edging into AI psychology, but it's coming.

Meanwhile, there's no need to give up on your AI-powered financial fraud detection. Buy three AIs from three different companies. Use them to check each other. If one goes wonky, use the other two until you can replace the first.
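
Wiring up that majority vote is trivial compared with building the models; a minimal sketch, with the three vendors' detectors mocked up as toy rules:

```python
from collections import Counter

def majority_vote(decisions):
    """Return the decision at least two of the three models agree on."""
    winner, count = Counter(decisions).most_common(1)[0]
    return winner if count >= 2 else None      # no consensus: escalate to a human

# Three hypothetical fraud detectors from different vendors.
model_a = lambda txn: "fraud" if txn["amount"] > 10_000 else "ok"
model_b = lambda txn: "fraud" if txn["country"] != txn["card_country"] else "ok"
model_c = lambda txn: "ok"                     # imagine this one has gone wonky

txn = {"amount": 25_000, "country": "RU", "card_country": "GB"}
votes = [model(txn) for model in (model_a, model_b, model_c)]
print(votes, "->", majority_vote(votes))       # ['fraud', 'fraud', 'ok'] -> fraud
```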

Can't afford three AIs? You don't have a workable business model. At least AI is very good at proving that.

More:
Your AI can't tell you it's lying if it thinks it's telling the truth. That's a problem - The Register
