How artificial intelligence outsmarted the superbugs – The Guardian

One of the seminal texts for anyone interested in technology and society is Melvin Kranzberg's Six Laws of Technology, the first of which says that "technology is neither good nor bad; nor is it neutral". By this, Kranzberg meant that technology's interaction with society is such that technical developments frequently have environmental, social and human consequences that go far beyond the immediate purposes of the technical devices and practices themselves, and the same technology can have quite different results when introduced into different contexts or under different circumstances.

The saloon-bar version of this is that "technology is both good and bad; it all depends on how it's used", a tactic that tech evangelists regularly deploy as a way of stopping the conversation. So a better way of using Kranzberg's law is to ask a simple Latin question: Cui bono? Who benefits from any proposed or hyped technology? And, by implication, who loses?

With any general-purpose technology (which is what the internet has become) the answer is going to be complicated: various groups, societies, sectors, maybe even continents win and lose, so in the end the question comes down to: who benefits most? For the internet as a whole, it's too early to say. But when we focus on a particular digital technology, then things become a bit clearer.

A case in point is the technology known as machine learning, a manifestation of artificial intelligence that is the tech obsession de nos jours. It's really a combination of algorithms that are trained on big data, i.e. huge datasets. In principle, anyone with the computational skills to use freely available software tools such as TensorFlow could do machine learning. But in practice they can't, because they don't have access to the massive data needed to train their algorithms.

This means the outfits where most of the leading machine-learning research is being done are a small number of tech giants (especially Google, Facebook and Amazon) which have accumulated colossal silos of behavioural data over the last two decades. Since they have come to dominate the technology, the Kranzberg question (who benefits?) is easy to answer: they do. Machine learning now drives everything in those businesses: personalisation of services, recommendations, precisely targeted advertising, behavioural prediction. For them, AI (by which they mostly mean machine learning) is everywhere. And it is making them the most profitable enterprises in the history of capitalism.

As a consequence, a powerful technology with great potential for good is at the moment deployed mainly for privatised gain. In the process, it has been characterised by unregulated premature deployment, algorithmic bias, reinforcing inequality, undermining democratic processes and boosting covert surveillance to toxic levels. That it doesn't have to be like this was vividly demonstrated last week with a report in the leading biological journal Cell of an extraordinary project, which harnessed machine learning in the public (as compared to the private) interest. The researchers used the technology to tackle the problem of bacterial resistance to conventional antibiotics, a problem that is rising dramatically worldwide, with predictions that, without a solution, resistant infections could kill 10 million people a year by 2050.

The team of MIT and Harvard researchers built a neural network (an algorithm inspired by the brain's architecture) and trained it to spot molecules that inhibit the growth of the Escherichia coli bacterium, using a dataset of 2,335 molecules for which the antibacterial activity was known, including a library of 300 existing approved antibiotics and 800 natural products from plant, animal and microbial sources. They then asked the network to predict which would be effective against E coli but looked different from conventional antibiotics. This produced a hundred candidates for physical testing and led to one (which they named halicin after the HAL 9000 computer from 2001: A Space Odyssey) that was active against a wide spectrum of pathogens, notably including two that are totally resistant to current antibiotics and are therefore a looming nightmare for hospitals worldwide.
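In outline, this kind of screening pipeline trains a classifier on molecules with known antibacterial activity, then ranks a much larger unscreened library by predicted activity and sends only the top scorers to the lab. The sketch below mimics that triage step with scikit-learn; the random binary vectors are merely a stand-in for real molecular fingerprints, and none of the numbers beyond the 2,335 training molecules come from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in for molecular fingerprints: each molecule is a binary feature
# vector; labels mark whether it inhibited E. coli growth in an assay.
X_train = rng.integers(0, 2, size=(2335, 128)).astype(float)
y_train = rng.integers(0, 2, size=2335)

# A small feed-forward network plays the role of the screening model.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=50, random_state=0)
model.fit(X_train, y_train)

# Score a large unscreened library and keep the top candidates for
# physical testing, mirroring the in-silico triage step.
X_library = rng.integers(0, 2, size=(10000, 128)).astype(float)
scores = model.predict_proba(X_library)[:, 1]
top_candidates = np.argsort(scores)[::-1][:100]
print(len(top_candidates))  # 100 molecules sent to the lab
```

The point of the design is leverage: the expensive step (physical testing) is applied to 100 molecules instead of 10,000.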

There are a number of other examples of machine learning for public good rather than private gain. One thinks, for example, of the collaboration between Google DeepMind and Moorfields eye hospital. But this new example is the most spectacular to date, because it goes beyond augmenting human screening capabilities to aiding the process of discovery. So while the main beneficiaries of machine learning for, say, a toxic technology like facial recognition are mostly authoritarian political regimes and a range of untrustworthy or unsavoury private companies, the beneficiaries of the technology as an aid to scientific discovery could be humanity as a species. The technology, in other words, is both good and bad. Kranzberg's first law rules OK.

Every cloud: Zeynep Tufekci has written a perceptive essay for the Atlantic about how the coronavirus revealed authoritarianism's fatal flaw.

EU ideas explained: Politico writers Laura Kayali, Melissa Heikkilä and Janosch Delcker have delivered a shrewd analysis of the underlying strategy behind recent policy documents from the EU dealing with the digital future.

On the nature of loss: Jill Lepore has written a knockout piece for the New Yorker under the heading "The lingering of loss", on friendship, grief and remembrance. One of the best things I've read in years.

See the original post here:
How artificial intelligence outsmarted the superbugs - The Guardian

4 different applications of machine learning that are revolutionizing society as we know it – KnowTechie

Machine Learning (ML) is an up-and-coming concept in the field of artificial intelligence. It involves a combination of algorithms and models that computers use to carry out specific tasks.

Under ML, computers don't need explicit instructions to perform the tasks that humans want them to. Instead, they use sample data to make predictions or decisions without being programmed to carry out an assignment.
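That idea (learning a rule from labelled examples rather than being given the rule) fits in a few lines. In this toy sketch, a decision tree infers from six made-up samples that the second feature determines the class; nobody writes the rule down explicitly:

```python
from sklearn.tree import DecisionTreeClassifier

# No explicit rule is coded: the model infers "large values of the
# second feature mean class 1" purely from the labelled examples.
samples = [[1, 0], [2, 1], [3, 9], [4, 10], [5, 11], [6, 2]]
labels = [0, 0, 1, 1, 1, 0]

clf = DecisionTreeClassifier(random_state=0).fit(samples, labels)

# The learned rule generalizes to inputs it has never seen.
print(clf.predict([[7, 12], [8, 1]]))  # → [1 0]
```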

This revolutionary concept is used in all types of industries. It's changing the way we use the internet. It has also altered how we conduct business, as companies like BairesDev create custom software so businesses can take advantage of ML. Check out the following four applications of machine learning that are revolutionizing our society.

Most people aren't strangers to the luxury of virtual personal assistants. While some of these technologies haven't been released within the last year, they are continually being updated to meet human needs.

Machine learning is an important element of virtual personal assistants. ML allows these assistants to better collect the information that a user provides them. Later on, the user sees results that are more closely customized to them.

Virtual assistants are built into platforms including the Google Home and Amazon Echo smart speakers. Consumers also have access to these assistants on their smartphones, where assistants like Siri and Bixby allow you to navigate your phone via voice commands.

Machine learning in manufacturing increases both production speed and workforce productivity. By incorporating ML into manufacturing machines, companies have lowered downtime and overall labor costs.

One company that's using ML in its manufacturing process is General Electric. General Electric manufactures a variety of products, ranging from home appliances to large industrial equipment. The company uses its Brilliant Manufacturing Suite to link every part of the manufacturing process into one global system.

GE has over 500 factories located in countries all around the world. The company still has a long way to go in converting them all into smart factories, but it's taking large strides to get there. Some other manufacturing companies getting on board with ML include Siemens, KUKA, and Fanuc.

We have all used social media at one point or another. Its addictive nature tends to reel us in for hours at a time. Social media captures users' attention through machine learning. ML lets Facebook and other platforms customize your news feed and display effective ads.

Another example of ML on social media is the use of facial recognition. When you upload a picture, and Facebook recognizes your friends' faces, machine learning is in effect. From there, Facebook will use facial recognition to connect you with others on the platform. This leads to a better overall user experience.

The final application of ML we will discuss is its presence in online customer support chats. Not all companies want to hire a live person to answer customer inquiries. It requires time and resources to train someone to become an expert on all aspects of a company.

A popular alternative has become the implementation of live chatbots. These bots extract website content via machine learning. From there, they use the information to answer customers' live questions. With time, chatbots improve the quality of their answers. Their versatile algorithms help them better understand customers' questions as they answer more of them.
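A minimal version of such a bot is retrieval over site content: vectorize the site's passages, then answer each question with the closest match. The snippet below is a hedged sketch using TF-IDF and invented example passages; a production chatbot would be far more sophisticated.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical snippets scraped from a company's website; the bot
# answers by returning the closest-matching passage.
passages = [
    "Our support line is open from 9am to 5pm on weekdays.",
    "Refunds are processed within 14 days of receiving the return.",
    "Shipping is free on all orders over fifty dollars.",
]

vectorizer = TfidfVectorizer().fit(passages)
passage_vectors = vectorizer.transform(passages)

def answer(question: str) -> str:
    # Rank passages by cosine similarity to the question; return the best.
    similarity = cosine_similarity(vectorizer.transform([question]),
                                   passage_vectors)
    return passages[similarity.argmax()]

print(answer("How long do refunds take?"))
```

Asking about refunds retrieves the refunds passage, because "refunds" is the rarest shared term.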

There are many more practical applications of machine learning to be discovered. However, we hope that this list has opened your eyes to this field of AI. As you have witnessed, ML can improve the quality of our work and social lives! It's a fascinating concept that has a lot of room for growth.

Have any thoughts on this? Let us know down below in the comments or carry the discussion over to our Twitter or Facebook.

View post:
4 different applications of machine learning that are revolutionizing society as we know it - KnowTechie

Demystifying the world of deep networks – MIT News

Introductory statistics courses teach us that, when fitting a model to some data, we should have more data than free parameters to avoid the danger of overfitting: fitting noisy data too closely, and thereby failing to fit new data. It is surprising, then, that in modern deep learning the practice is to have orders of magnitude more parameters than data. Despite this, deep networks show good predictive performance, and in fact do better the more parameters they have. Why would that be?

It has been known for some time that good performance in machine learning comes from controlling the complexity of networks, which is not just a simple function of the number of free parameters. The complexity of a classifier, such as a neural network, depends on measuring the size of the space of functions that this network represents, with multiple technical measures previously suggested: Vapnik–Chervonenkis dimension, covering numbers, or Rademacher complexity, to name a few. Complexity, as measured by these notions, can be controlled during the learning process by imposing a constraint on the norm of the parameters; in short, on how big they can get. The surprising fact is that no such explicit constraint seems to be needed in training deep networks. Does deep learning lie outside of classical learning theory? Do we need to rethink the foundations?
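A textbook illustration of norm-based complexity control (generic, not from the paper): ridge-penalized polynomial regression with nearly as many parameters as data points. Adding a penalty lam * ||w||^2 shrinks the coefficient norm, and it is this norm, not the raw parameter count, that governs how wild the fitted function can be.

```python
import numpy as np

rng = np.random.default_rng(0)

# 20 noisy samples of a smooth function, fitted with a degree-15
# polynomial: almost as many parameters (16) as data points (20).
x = np.linspace(-1, 1, 20)
y = np.sin(3 * x) + 0.2 * rng.normal(size=20)
Phi = np.vander(x, 16, increasing=True)  # 20x16 design matrix

def fit(lam):
    # Closed-form ridge solution: the lam * ||w||^2 penalty is the
    # classical explicit complexity control discussed in the text.
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(16), Phi.T @ y)

x_test = np.linspace(-1, 1, 200)
Phi_test = np.vander(x_test, 16, increasing=True)

for lam in (1e-12, 1e-3):
    w = fit(lam)
    mse = np.mean((Phi_test @ w - np.sin(3 * x_test)) ** 2)
    print(f"lam={lam:g}: ||w|| = {np.linalg.norm(w):.1f}, test MSE = {mse:.4f}")
```

With an effectively zero penalty the coefficient norm blows up as the polynomial chases the noise; a modest penalty keeps the norm, and the fitted function, tame.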

In a new Nature Communications paper, "Complexity Control by Gradient Descent in Deep Networks", a team from the Center for Brains, Minds, and Machines led by Director Tomaso Poggio, the Eugene McDermott Professor in the MIT Department of Brain and Cognitive Sciences, has shed some light on this puzzle by addressing the most practical and successful applications of modern deep learning: classification problems.

"For classification problems, we observe that in fact the parameters of the model do not seem to converge, but rather grow in size indefinitely during gradient descent. However, in classification problems only the normalized parameters matter, i.e. the direction they define, not their size," says co-author and MIT PhD candidate Qianli Liao. "The not-so-obvious thing we showed is that the commonly used gradient descent on the unnormalized parameters induces the desired complexity control on the normalized ones."

"We have known for some time in the case of regression for shallow linear networks, such as kernel machines, that iterations of gradient descent provide an implicit, vanishing regularization effect," Poggio says. "In fact, in this simple case we probably know that we get the best-behaving maximum-margin, minimum-norm solution. The question we asked ourselves, then, was: can something similar happen for deep networks?"
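The linear-case phenomenon can be seen in a few lines of numpy: plain gradient descent on the logistic loss over separable data, with no explicit regularizer. The parameter norm grows without bound, but the normalized direction stabilizes quickly (toward the max-margin separator). This is a toy illustration of the known linear result, not the paper's deep-network experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

# Linearly separable 2-D data with labels in {-1, +1}.
X = np.vstack([rng.normal([2, 2], 0.3, (50, 2)),
               rng.normal([-2, -2], 0.3, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

w = np.zeros(2)
directions = []
for step in range(20000):
    # Plain gradient descent on the mean logistic loss, no regularizer.
    margins = y * (X @ w)
    grad = -(y[:, None] * X * (1 / (1 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= 0.1 * grad
    if step in (1000, 19999):
        directions.append(w / np.linalg.norm(w))

print(f"||w|| after training: {np.linalg.norm(w):.1f}")  # keeps growing
print(f"direction drift:      "
      f"{np.linalg.norm(directions[1] - directions[0]):.4f}")  # tiny
```

The unnormalized weights diverge (logarithmically), yet the direction they define barely moves after the first thousand steps, which is the sense in which the normalized parameters are implicitly controlled.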

The researchers found that it does. As co-author and MIT postdoc Andrzej Banburski explains, "Understanding convergence in deep networks shows that there are clear directions for improving our algorithms. In fact, we have already seen hints that controlling the rate at which these unnormalized parameters diverge allows us to find better-performing solutions and find them faster."

What does this mean for machine learning? There is no magic behind deep networks. The same theory behind all linear models is at play here as well. This work suggests ways to improve deep networks, making them more accurate and faster to train.

Go here to read the rest:
Demystifying the world of deep networks - MIT News

Google recognizes machine learning and computer systems experts with Faculty Research Award – U of T Engineering News

U of T Engineering professors Scott Sanner (MIE) and Vaughn Betz (ECE) are among this year's recipients of the Google Faculty Research Award. The program supports world-class research in computer science, engineering and related fields, and facilitates collaboration between researchers at Google and universities.

Only 15 per cent of applicants receive funding. This year, Google received more than 900 proposals from 50 countries and more than 330 universities worldwide.

"Given the high selectivity of this program, it is a tremendous accomplishment for professors Sanner and Betz to receive Google Faculty Research Awards," says Professor Ramin Farnood, Vice-Dean of Research, U of T Engineering. "It is a testament to the calibre of their work that they are being recognized amongst the very best institutions in the world."

Sanner joins researchers from Stanford University, the Massachusetts Institute of Technology (MIT) and Carnegie Mellon University in being awarded in the Machine Learning category. His team will use the funding to develop more personalized and interactive conversational assistants by leveraging recent advances in deep learning.

Although Siri, Alexa and Google Assistant have become useful tools for consumers, Sanner points out that they currently do not provide highly personalized recommendations for questions such as, "What movie should I see tonight?"

"These systems usually can't handle rich, natural language interactions like, 'Can you give me something a little lighter?' in response to a recommendation to see Goodfellas," says Sanner.

Though it might seem that voice-based assistants are on the brink of achieving those capabilities, Sanner says it's more complex than most imagine.

"Personalized recommendations pose a style of interaction that is very different from the rule-based template and curated web-search technology that largely powers the existing conversational assistants of today."

Getting Siri or Alexa to understand how natural language in human interactions should influence future personalized recommendations means relying on machine learning and deep learning, as opposed to rules and web search.

"To date, few researchers have investigated how these various technologies can dovetail to power interactive, conversation-based recommendations," adds Sanner.

For Betz, who was awarded in the Systems category alongside researchers from Harvard University, the University of Glasgow and Cornell University, the funding will go towards making computer-aided design (CAD) tools that significantly speed up the programming and manufacturing of field-programmable gate arrays (FPGAs).

FPGAs are computer chips that can be reprogrammed to be a large variety of circuits, and are used in thousands of today's electronic systems, from MRI machines to cellphone towers to automotive electronics.

"As we continue to implement extremely complicated systems and larger designs in FPGAs, current CAD tools can take hours or even days to complete, causing major productivity bottlenecks for the engineers doing these designs," says Betz.

Betz's team is looking not only to make the CAD tools faster at producing FPGA designs, but also to ensure the tools are general enough to efficiently target a wide variety of chips. Their project, Verilog-to-Routing (VTR), will be open source to enable other researchers to build upon their infrastructure.

"Faster tools lead to more productive engineers and hence, better electronic systems," says Betz.

"It's great to receive this funding," he adds. "I know Google funds a very wide variety of research, so it is very competitive. This award is a great validation of the project and helps us expand the scope of our work."

Visit link:
Google recognizes machine learning and computer systems experts with Faculty Research Award - U of T Engineering News

America Must Shape the World’s AI Norms or Dictators Will – Defense One

Four former U.S. defense secretaries issue a warning about China and a wake-up call to Americans on artificial intelligence.

As Secretaries of Defense, we anticipated and addressed threats to our nation, sought strategic opportunities, exercised authority, direction, and control over the U.S. military, and executed many other tasks in order to protect the American people and our way of life. During our combined service leading the Department of Defense, we navigated historical inflection points: the end of the Cold War and its aftermath, the War on Terror, and the reemergence of great power competition.

Now, based on our collective experience, we believe the development and application of artificial intelligence and machine learning will dramatically affect every part of the Department of Defense, and will play as prominent a role in our country's future as the many strategic shifts we witnessed while in office.

The digital revolution is changing our society at an unprecedented rate. Nearly 60 years passed between the construction of the first railroads in the United States and the completion of the First Transcontinental Railroad. Smartphones were introduced just 20 years ago and have already changed how we manage our finances, connect with family members, and conduct our daily lives.

AI will have just as significant an impact on our national defense, and likely in even less time. Its effects, however, will extend beyond the military to the rest of American society. AI has already changed health care, business practices, and law enforcement, and its impact will only increase.

As this AI-driven transformation occurs, we must keep our democratic values firmly in mind and at the center of any dialogue about AI. Developers embed their values into their products whether they mean to or not. Social media platforms make tradeoffs between free speech and protection from harassment. Smartphone companies choose whether or not to develop operating systems that block the activation of cameras and microphones without users' permission. Just as surely, AI developed by authoritarian governments and the companies they finance will reflect authoritarian values.

However, if designed with American values, AI can empower individuals by freeing them from mundane tasks, by increasing access to information, and by helping optimize decisions, all while respecting individual liberties and fundamental human rights. We've seen examples of how other AI designs will empower governments, particularly authoritarian governments. Their AI designs value those in power more than their citizens, by amplifying sensors that monitor populations, valuing state access to data more than privacy, and creating automation that empowers central decision makers. Just as a highway system that centralizes major cities naturally increases the cities' importance, societies using a network architecture and series of AI models with embedded authoritarian values will begin to reflect those same values.

Americans must ensure we deliberately and carefully embed our values into the technology that is already shaping our world. To do so, we need to lead the world in AI research and development, provide commercial and public systems to the world that reflect democratic values, and lead the global conversation on AI standards in concert with our allies, especially regarding AI's use in war.

The American people need to play an active role by contacting their representatives, participating in public forums, and shaping private sector decisions. The DoD must invest in research and development in fields with few incentives for private sector investment. It must also lead the world in establishing standards for the ethical and safe use of AI by ensuring AI does not increase the risk of escalation and behaves as its users intend during military operations.

The United States has allowed China to begin shaping the conversation about norms. The Defense Department this week issued ethics guidelines for artificial intelligence, but it's only a start. If we do not correct this deficiency, we cannot guarantee that the technology that shapes the world our children and grandchildren will live in will reinforce rather than threaten the freedoms we have enjoyed.

The government is already working to ensure the United States and our allies lead the world in the development and use of AI. Departments and agencies have launched critical AI initiatives. Congress is playing a prominent role by developing a broad array of legislative initiatives, including the creation of the National Security Commission on Artificial Intelligence. We urge our fellow Americans in academia, industry, and from across the country to educate themselves about what's at stake, and to work together to understand the opportunities and challenges associated with this emerging technology.

Our country entered the First World War to help decide the great question of that time: Will humanity make a world safe for democracy? We didn't seek a world filled with democracies, or even led by democracies, just a world safe for democracies to exist. Today we've come far, but we must not lose sight of the threats our country and its leaders understood more than a century ago: that the world is not a safe place for nations committed to individual liberties and other fundamental human rights by default, that global stability is not the norm, and that regimes that value their own power over the freedom and rights of their people would persist. AI will play an important role in every American's future. If we do not lead its development and ensure it reflects our values, authoritarian regimes will ensure it reflects theirs. This is not just an issue of technology, it is an issue of national security.

Go here to read the rest:
America Must Shape the World's AI Norms or Dictators Will - Defense One

Brain wiring could be behind learning difficulties, say experts – The Guardian

Learning difficulties are not linked to differences in particular brain regions, but in how the brain is wired, research suggests.

According to figures from the Department for Education, 14.9% of all pupils in England (about 1.3 million children) had special educational needs in January 2019, with 271,200 having difficulties that required support beyond typical special needs provision. Dyslexia, attention deficit hyperactivity disorder (ADHD), autism and dyspraxia are among conditions linked to learning difficulties.

Now experts say different learning difficulties are not specific to particular diagnoses, nor are they linked to particular regions of the brain as has previously been thought. Instead the team, from the University of Cambridge, say learning difficulties appear to be associated with differences in the way connections in the brain are organised.

Dr Roma Siugzdaite, a co-author of the study, said it was time to rethink how children with learning difficulties were labelled.

"We know that children with the same diagnoses can have very different profiles of problems, and our data suggest that this is because the labels we use do not map on to the reasons why children are struggling; in other words, diagnoses do not map on to underlying neural differences," she said. "Labelling difficulties is useful for practical reasons, and can be helpful for parents, but the current system is too simple."

Writing in the journal Current Biology, the team report how they made their discovery using a type of artificial intelligence called machine learning, which picks up on patterns within data.

The team drew on data from 479 children, 337 of whom had learning difficulties, regarding performance in areas such as vocabulary, listening skills and problem-solving.

These data were presented to a machine learning system, which produced six chief categories reflecting the children's cognitive abilities. The team found only 31% of children in the category reflecting the best performance were those with learning difficulties, while 97% of children in the category reflecting the poorest performance had learning difficulties.
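The grouping step can be illustrated with standard clustering. The study's exact algorithm isn't described here, so the sketch below uses k-means as a stand-in: synthetic score profiles for 479 hypothetical children are grouped into six data-driven categories, with no diagnosis labels involved at any point.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical stand-in data: each child is a vector of scores in five
# areas (vocabulary, listening, problem-solving, ...), drawn from three
# loose performance profiles.
strong = rng.normal(0.8, 0.1, size=(150, 5))
weak = rng.normal(0.2, 0.1, size=(150, 5))
mixed = rng.normal(0.5, 0.2, size=(179, 5))
profiles = np.vstack([strong, weak, mixed])

# Unsupervised clustering groups children purely by cognitive profile;
# diagnoses are never shown to the model.
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(profiles)
counts = np.bincount(kmeans.labels_, minlength=6)
print(counts.sum())  # 479 children assigned across 6 data-driven categories
```

The key design point mirrors the study: because the categories are derived from the score data alone, any alignment (or lack of it) with diagnostic labels becomes a testable finding rather than an assumption.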

Further work showed the system accurately assigned children into a wide range of categories relating to their cognitive abilities. However, the team found no link between these categories and particular diagnoses such as dyslexia, autism or ADHD.

"Having particular diagnoses doesn't tell you about the kind of cognitive profile the children have," said Dr Duncan Astle, another author of the study.

"Whilst diagnoses might be important, interventions should look beyond the label," he added, noting children with different diagnoses may benefit from similar interventions, while those with the same diagnosis may need different forms of support.

The researchers then extracted information from brain scans of the children and fed it into a machine learning system. This generated 15 chief categories based on the structure of brain regions.

However, the team found that predictions of the cognitive abilities of a child were only about 4% better when based on their brain scans than by relying on guesswork alone.

"There is a whole literature of people saying: 'This brain structure is related to this cognitive difficulty in kids who struggle, and this brain structure is related to that cognitive difficulty,'" said Astle. However, he added, the new study suggested that was not the case.

The team then turned to another feature of the brain: its wiring. Using data from 205 children, the team found all showed similar efficiency of communication across the brain, with certain areas, known as hubs, showing many connections.

However, the children with learning difficulties showed different levels of connections in these hubs than those without. To explore whether this was important, the team turned to computer modelling, revealing that the better the children's cognitive abilities, the greater the drop in brain efficiency if the hubs were lost.
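The simulated "lesion" idea can be sketched with a toy graph: compute a network's global efficiency, delete its best-connected nodes, and recompute. The example below uses networkx and a synthetic scale-free graph as an illustration of the principle; it is not the study's actual brain-network model.

```python
import networkx as nx

# A toy brain graph: a scale-free network whose high-degree nodes
# act as hubs.
G = nx.barabasi_albert_graph(100, 2, seed=0)
baseline = nx.global_efficiency(G)

# Remove the five best-connected nodes and measure the efficiency drop,
# echoing the simulated hub lesions described above.
hubs = sorted(G.degree, key=lambda pair: pair[1], reverse=True)[:5]
G_lesioned = G.copy()
G_lesioned.remove_nodes_from(node for node, _ in hubs)

print(f"efficiency: {baseline:.3f} -> {nx.global_efficiency(G_lesioned):.3f}")
```

In hub-reliant networks the drop is large; a network that spreads its connections more evenly loses much less, which is the contrast the researchers observed between children.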

"The hubbiness of a child's brain was a strong predictor of their cognitive profiles," said Siugzdaite. "Children whose brains used hubs had higher cognitive abilities. We observed that in the case of the children who are struggling at school, they don't rely too much on these hubs."

Siugzdaite said the study raised further questions, including what biological or environmental factors could affect the development of such hubs, and whether some hubs were more important for particular cognitive skills.

However, the study has limitations, including that the team did not look at other issues, such as social behaviour, which may be linked to different diagnoses and brain structure.

Dr Tomoki Arichi from the Centre for the Developing Brain at King's College London, who was not involved in the research, said the study added to a growing body of evidence that learning difficulties are better understood by looking at the skills people struggle with, rather than focusing on particular diagnoses.

Arichi said the research offered good evidence that how connections in the brain are organised is important in learning difficulties, but added: "Understanding how this actually develops and then causes difficulties is still extremely complex, however. It is still possible that what they are seeing is a consequence rather than a cause, or is just a snapshot of an effect that is changing through childhood."

Read more:
Brain wiring could be behind learning difficulties, say experts - The Guardian

Amazon Lightsail offers cheap WordPress hosting in the cloud – Coywolf News

Webmasters can host WordPress sites for as little as $3.50/mo using the Amazon Lightsail cloud platform. While Lightsail is marketed as easy to use, it's still significantly more complicated to set up and maintain than using a managed hosting provider.

Amazon Web Services (AWS) introduced Lightsail in late 2016 to make it easier for webmasters to affordably launch sites in the cloud. The price to host a site (run an instance) originally started out at $5/mo but is now only $3.50/mo.

The price point makes Lightsail attractive to webmasters that want to leverage the benefits of a cloud platform without paying for managed hosting. However, the price difference comes with a significant trade-off. Lightsail is easy to use by AWS standards, but it's still complex for most webmasters to set up and maintain.

AWS provides excellent documentation on how to set up WordPress on Amazon Lightsail. This tutorial covers the key steps for setting up a WordPress instance. There's also a video presentation by George Ellissaios, Senior Manager of Lightsail, that walks through the steps for managing WordPress on Amazon Lightsail.

Reading the tutorial or watching the video reveals that webmasters need a high level of technical proficiency to set up a WordPress instance properly. For example, to set up a WordPress instance, webmasters need to be familiar with using a command-line interface (CLI) via SSH. It's required for setting up the instance and also for creating the SSL certificate. Additionally, creating the SSL certificate is a nine-step process, and requires a webmaster to manually renew the certificate every 90 days.

The setup and maintenance required by Lightsail is in stark contrast to a managed hosting provider like WP Engine. For example, WP Engine creates WordPress instances with the push of a button, solely uses a graphical user interface (GUI) to configure the site, and provides automated maintenance, including the automated renewal of SSL certificates and daily backup snapshots.

Lightsail is perfect for webmasters and web developers that are comfortable using CLI and are familiar with managing AWS services. It provides a way for technically proficient users to launch and manage WordPress sites affordably.

Webmasters that don't have experience working with cloud platforms like AWS, Microsoft Azure, and Google Cloud will likely struggle to properly set up and maintain a WordPress instance on Lightsail.

An alternative to using AWS or a managed hosting platform is to use Cloudflare's Workers Sites. Webmasters and web developers can host static versions of WordPress on Cloudflare without the complexity and maintenance required by Amazon's Lightsail or the cost of managed hosting. The price is also comparable to Lightsail's pricing.

For everyone else, managed hosting will be the most cost-effective way to host WordPress. That's because it's more affordable to pay a higher monthly fee for automated managed hosting than to pay someone to maintain cheap cloud-computing instances.


Jon Henshaw

Jon is the founder and Managing Editor of Coywolf. He has over 25 years of experience in web development, SaaS, internet strategy, digital marketing, and entrepreneurship.

Continue reading here:
Amazon Lightsail offers cheap WordPress hosting in the cloud - Coywolf News


Northumberland's move to Oracle cloud apps hands control over from IT to end users – Diginomica

It's often the case that an organisation, especially one in the public sector facing ever-tightening purse strings, needs a sharp jolt to invest in a major technology project. This was the case for Northumberland County Council when it came to finally moving off Oracle E-Business Suite.

Northumberland had been using Oracle E-Business Suite since 2004, taking the organisation through the upgrade paths from 11i before moving to R12 in 2014. At the same time as the R12 upgrade, the local authority moved from hosting the suite on-premises to hosting it via an external supplier under a three-year deal, which took it up to September 2017.

At that point, the council was expecting to be offered a reduction in hosting costs by its supplier. Instead, it was told costs would increase.

That proved the nudge the organisation needed to start looking at Oracle Cloud enterprise applications instead of hosting R12 E-Business Suite elsewhere. Ryan Fitzpatrick, Solutions Architect Manager at Northumberland County Council, explained:

Some of the drivers were because of the limitations on the current product; that was the starting point. The cost of the hosted solution; the functionality within the R12 instances was becoming dated; the reporting within that system, there was a reliance on Discoverer, which had become unsupported; the overhead of patching we had to do within that environment; there was no external access, so that didn't fit in with the movement as an organisation, we'd been gradually moving to more agile working. So that was a driver, to have a system that could be accessible outside of the network.

There had also been a heavy reliance on customisation within the old product. As part of the potential move to cloud applications, the IT team was also planning to look at aligning with the business processes within those apps, as opposed to implementing another heavily customised system. Fitzpatrick said:

One of the things that came up quite early was, the mantra has to be configuration over customisation. Because we'd had E-Business Suite for such a long time, there'd been more and more customisation as we went along, and that then needed more and more support and then would cause issues when you had patching.

There tends not to be a big drive to actually make that change when you're just continuing on the same product. We needed a bit more of a radical change to help get the focus on those business processes.

When the council signed the contract to move to the Oracle cloud applications, it had less than six months left on the hosted contract, which it knew wasn't going to be enough time for the whole project. To buy the extra time needed, Northumberland signed a two-year contract to host its R12 instance on Oracle's hosted infrastructure-as-a-service (IaaS). The R12 hosting switchover was done within three months, one of the most successful projects Fitzpatrick has been involved in at the council, which then gave it the opportunity to plan and carry out the deployment of the cloud applications.

The financials/ERP side of the project was completed within the timeframe, going live in November 2018. However, the HR and payroll side started to lag behind, due to a lack of support for areas like multiple assignments and different pension schemes and sickness policies within the new applications. Fitzpatrick explained:

We're actually still not quite live with Oracle Cloud Payroll. We've had to extend the IaaS contract, which has incurred more cost. The payroll cloud offering wasn't fit for purpose for local government. There are some elements of the system that had been produced in R12, but it wasn't a like for like. We had assistance from our implementation partner to be able to produce some of those.

It's less of a problem now. There's been a roadmap from Oracle. The quarterly releases are obviously addressing some of those payroll issues. Oracle might tell you different, but I would expect that some of those issues specific to local government may still exist. Within a local authority we have lots of staff who might have one, two, three, four or five different roles. They've always been things that we've had challenges with in R12, and they haven't all been addressed in the cloud product.

Oracle Cloud Payroll is expected to be completed no later than June. Once it's live, the council plans to concentrate on the HCM side of the system, including the use of self-service and manager self-service. Fitzpatrick noted there were elements of this available in R12, but Northumberland made very limited use of them.

Despite the new applications having potential benefits across the entire organisation, covering core areas like financials, payroll and HR, the main drive for the cloud project was from the IT team rather than users themselves. However, the transition to cloud means there is now more involvement from end users.

Whilst we got business leads for each of the [E-Business Suite] modules, they only came together when we wanted to do a big upgrade process. Once it was live, they would go back into their day job and it would come back to IT to implement any enhancements. The nature of the cloud applications means everything has to be with the business service users, they have to take a bit more ownership, even if it's just on the involvement with the quarterly releases. They've got to be looking at what new functionality is coming out.

We're only just coming to terms with this and it's still a big challenge, but that's the way that they would get benefits out of the move to cloud applications. If they're thinking "I would like to see some new functionality in my area", they should get first sight of it. It's not the IT section imposing that on them, and they've got to look to see how they can make that work.

One of the expected benefits of the move to cloud didn't exactly turn out as planned. The council thought that moving to a modern, user-friendly interface, as opposed to the dated forms seen in E-Business Suite, would be heartily welcomed by users.

We didn't get that message initially. It was, "I've now got to do three or four clicks, where I used to just be able to type all my data." For areas where it was more input-based, that was something that was voiced in the early stages. That was something we probably should have been highlighting earlier.

It came across that they were getting a new system, but it was slower. It wasn't that the system was performing slower than the previous E-Business Suite, but it gave the impression that tasks were taking longer because the system was new and different.

While the council hasn't carried out a user satisfaction survey on the cloud applications to check whether they're now fully accepted, Fitzpatrick noted that there is now less feedback on them, which can only be a good thing with the HR side due to go live in summer.

Read the rest here:
Northumberlands move to Oracle cloud apps hands control over from IT to end users - Diginomica


Ankr Partners With LTO Network; Announces Node Hosting and Campaign – CryptoNewsZ

Ankr, a leading name in the cloud computing industry, has announced a partnership with LTO Network under which it will offer LTO Network public node hosting on its distributed cloud through a streamlined UI. The news was shared with the crypto space in an official report released by Ankr on February 27, 2020. Ankr has designed a simple application that community members can use to set up their public node in a few quick steps.

The Ankr community has welcomed the partnership, which launches a promotional campaign for the company's global client base. As part of the campaign, customers will get a month of free hosting along with a chance to earn rewards from a shared pool of 100,000 LTO. Ankr will also give cashback on the first month of hosting for LTO nodes.

"It is really great to be working with our long-time friends at LTO Network! They have a great node hosting community, which has been very supportive, helping out with testing and UX optimization. We look forward to adding more nodes to the LTO Network and to more future collaborations!" stated Ryan Fang, COO of Ankr.

LTO Network has a hybrid architecture in which a portion is a permissionless public chain where the LTO token utility and the economic model operate. Unlike many other public blockchain networks, LTO supports a variation of Proof of Stake and is non-inflationary, which makes it popular among users. Ankr, for its part, aims to streamline node hosting by removing the barriers community members face when hosting a node. Users earn staking rewards for their favorite projects.

Setting up an LTO public node on Ankr's cloud solution requires no technical knowledge. Deploying the LTO node on the network takes less than five minutes, and there is no need to struggle with documentation or pricing setup at a public cloud provider.

See original here:
Ankr Partners With LTO Network; Announces Node Hosting and Campaign - CryptoNewsZ


FastComet is Now Powered by AMD EPYC to Dramatically Improve Reliability, Speed, and Security of Its High-Performance Servers – HostReview.com

14:32:10 - 28 February 2020

San Francisco, CA, February 28, 2020 --(PR.com)-- The move follows the recent upgrade of FastComet's fully managed Dedicated server line, reinvented with AMD EPYC processors and now offering easy scalability, load balancing, and rapid deployment provisioning to all clients. Dedicated CPU provides a powerful infrastructure solution for CPU-intensive applications such as video encoding, machine learning, and data analytics processing.

The AMD EPYC processor has been tested extensively to reduce latency as well as the noisy-neighbor effect: cores on the hypervisor get exclusive dedicated vCPU threads, so servers don't compete for those resources. Additionally, the AMD EPYC SoC architecture offers extensive I/O with directly attached solid-state drives (SSDs), an aspect of performance considered crucial for both users and the web host. As for security, the AMD EPYC server processor enables encryption for each virtual machine and hypervisor, which helps safeguard the privacy and integrity of applications, and AMD Secure Root-of-Trust technology permits booting only cryptographically signed software.

After upgrading the Dedicated server plans just four months ago and seeing both speed gains of up to 200% and overwhelmingly favorable feedback from users, the next steps were clear. The change enables FastComet's already highly optimized platform to handle today's most demanding websites and achieve speeds unrivaled in the industry. This platform refresh also marks the end of Intel on SSD Cloud at FastComet.

Dimitar Petkov, CTO of FastComet stated, "Moving to the AMD EPYC environment has taken our already fast platform to a whole other level and it is undoubtedly the most exciting and impactful change we've had to our platform, especially in regards to stability, security, and speed. We are seeing massive performance gains across the board, which gives our clients the competitive edge they need to succeed."

Finding ways to boost its users' website performance is a worthy pursuit, and FastComet continues to eagerly update its hosting platform to do exactly that.

About FastComet

FastComet Inc. is a full-service web hosting provider located in San Francisco, California, focusing on server stability, excellent customer service, and ease of use in web hosting. The company continues to impress current and potential clients with speedy replies and exceptional support.

FastComet Inc.
Elena Tileva
855-818-9717
https://www.fastcomet.com

Follow this link:
FastComet is Now Powered by AMD EPYC to Dramatically Improve Reliability, Speed, and Security of Its High-Performance Servers - HostReview.com
