
The factors behind the shift to cloud-native banking – IBS Intelligence


Across the globe, the pandemic massively accelerated the shift towards digitalisation across all sectors. Banks are no exception. The migration of banks' IT systems onto cloud-native platforms promises to rapidly transform customer experience delivery, business continuity, operational efficiency and resilience.

by Jerry Mulle, UK Managing Director, Ohpen

However, at what point do the benefits outweigh the status quo, and what are the motivations behind this pivotal transition in the industry? Compared with the benefits offered by cloud-native banking, legacy IT systems look increasingly unattractive to financial institutions, making digitalisation ever more appealing. Institutions are looking to evolve and modernise their services to deliver better customer experiences. What's more, implementing these new cloud systems can now be done faster, in a modular way and with minimal disruption.

Some financial institutions are still working with outdated legacy systems, relying on slow, bulky on-site servers and, in some cases, even Excel spreadsheets to run their processes. These institutions are now realising that they are losing out by doing so. The cost of maintaining such systems, or enhancing them to meet new regulations, can be immense. Decommissioning old IT systems and switching to a cloud-native platform can enable significant cost reductions: some of our clients, for example, have seen costs fall by up to 40% by doing so. Data, server storage and processing power become available on demand, enabling institutions to scale up and down as needed.

Running legacy systems also has another long-term disadvantage: a larger carbon footprint. The pressure on financial institutions to move towards more sustainable models hasn't come from society and protests alone, but also from their own internal stakeholders. What's more, with Europe's top 25 banks still failing to meet their sustainability pledges, according to research by ShareAction, it's clearer than ever that financial institutions need to take tangible steps to reduce their environmental impact. Cloud-native banking can play a key role in achieving this.

According to AWS, institutions can cut the carbon emissions of their systems by 80% when they switch to cloud-based IT alternatives, moving them closer to meeting their net-zero targets. What's more, basing systems in the cloud replaces heavily air-conditioned server rooms with more efficient software applications and direct integrations with third parties, reducing unnecessary waste.

The inertia of large incumbent financial institutions often comes down to the legacy systems they have in place. Sometimes dating back to the early 1990s, these bulky systems greatly reduce banks' flexibility and capacity for innovation. Because such systems are deeply ingrained in their overall strategy and ways of working, institutions often fear the technical issues that replacing them with cloud alternatives could cause. However, the transformation process is becoming far less disruptive to everyday operations, delivering almost 100% system uptime.

Cloud systems also open the door to significantly more flexibility when it comes to creating new products and offerings. Cloud-native systems are based on an API-first strategy, allowing institutions to curate their own partner ecosystem and inherit best-of-breed integrations as part of the solution. As a result, banks are empowered with endless levers and combinations to create new propositions.

In addition, banking on cloud-native platforms is more accommodating of emerging AI capabilities, which allow banks to increase the efficiency and tailoring of the services they offer customers, for example in areas such as mortgages and loans. Documents such as IDs and payslips, which are unstructured data, can be interpreted using AI, while connections into other data sources such as credit rating agencies can enrich application information. This ability to organise unstructured data means we are nearing the era of one-click mortgages, improving the customer experience like never before.

Cloud-native systems therefore form an appealing prospect for large incumbents: not only do they provide a disruption-free entry point to more efficient technology, they also offer an enhanced ability to adapt to the unpredictable ways in which financial technology will evolve. Cloud technologies will allow institutions to cement their place in the market by empowering them to tackle unknown future challenges, challenges that legacy systems will struggle to solve quickly, while simultaneously putting customers' needs first.

The solutions that cloud banking offers have both potential and clout, enabling banks to cut costs, reduce their energy consumption, deploy AI more efficiently and prepare for future technologies. For customers, this means that innovative developments in financial services are becoming more directly available to them. Customers will benefit from instant services, such as loans and mortgages that are automatically tailored to their personal requirements, all powered by AI. These factors, which are compelling banks to move towards cloud-native systems and captivating their customers, are set to keep unleashing innovation across the wider financial services landscape at speed.

Link:
The factors behind the shift to cloud-native banking - IBS Intelligence

Read More..

Intel beefs up 500-acre mega factory to help put AMD and others to the sword – TechRadar

Intel has celebrated the grand opening of a major $3 billion extension to its D1X factory in Oregon, USA, used for the development and manufacturing of advanced new processors and chip technologies.

As part of the expansion, the 500-acre campus has been renamed Gordon Moore Park, after the man who in 1965 predicted that the number of transistors on a chip would double every year, and the cost per unit halve.

In addition to increasing Intel's manufacturing capacity, the extension will play a pivotal role in the company's research and development (R&D) activity, with the aim of propelling Moore's Law long into the future.

In early 2021, Intel made public a reworking of its integrated device manufacturing strategy, which the company called IDM 2.0. The broad objective is to position Intel at the bleeding edge of chip design and manufacturing during a period of unprecedented demand.

The expansion of D1X will afford Intel an additional 270,000 square feet of clean room space to help develop next-generation process nodes, transistor architectures and packaging technologies, which the company says will provide the foundation for new chips for personal and business computers, 5G networks, cloud servers and more.

"Since its founding, Intel has been devoted to relentlessly advancing Moores Law. This new factory space will bolster our ability to deliver the accelerated process roadmap required to support our bold IDM 2.0 strategy, company CEO Pat Gelsinger said at the ribbon-cutting ceremony.

"Oregon is the longtime heart of our global semiconductor R&D, and I can think of no better way to honor Gordon Moore's legacy than by bestowing his name on this campus, which, like him, has had such a tremendous role in advancing our industry."

The upgrade to the Oregon campus is one of a number of recent multi-billion-dollar investments designed to boost Intel's manufacturing capacity and pace of innovation.

In January, the company revealed it would splash $20 billion on a state-of-the-art manufacturing campus in Ohio, USA. This 1,000-acre mega-site will house up to eight separate fabs, which would make it one of the largest facilities in the world.

Last month, meanwhile, Intel announced plans to invest tens of billions into a litany of semiconductor manufacturing projects across Europe, the largest of which will see €17 billion funnelled towards a new site in Germany that will produce top-tier chips for both Intel itself and customers of Intel Foundry Services (IFS).

The company also recently acquired Tower Semiconductor for roughly $5.4 billion, a move designed to broaden the IFS portfolio with process technologies for specialist but high-growth markets such as automotive, medical and aerospace.

Visit link:
Intel beefs up 500-acre mega factory to help put AMD and others to the sword - TechRadar

Read More..

The Channel Angle: Determining The Value And ROI Of Cloud Automation – CRN

[Editor's note: The Channel Angle is a monthly CRN guest column written by a rotating group of solution provider executives that focuses on the triumphs and challenges that solution providers face. If you are a solution provider executive interested in contributing, please contact managing editor David Harris.]

The tech industry has grown software-centric, and this new world requires a new way of thinking as more workloads move into the cloud. A cloud footprint can be configured through a provider's console, but the problem is that anything manual carries the potential for human error.

There are many compelling reasons for organizations to look at cloud automation. In this new world, companies can set up a cloud footprint using automation to reduce human error, make it quick and easy to create new environments, document the setup in case of employee turnover, and do more with fewer engineers.

The process starts by creating automation to spin up servers and all the other components needed to run an application. Infrastructure automation, or software scripts, uses a configuration to create an environment. Once that automation is perfected, it doesn't change from environment to environment. It's one and done.

After the automation is written, setting up a new environment is as simple as pushing a button. That avoids the potential for human error. It also makes it easy to create additional environments quickly according to business needs.
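For illustration only, here is a minimal sketch of configuration-driven provisioning, assuming an AWS environment and the boto3 SDK; the configuration values, AMI ID and tags are hypothetical placeholders, not anything prescribed in this column.

```python
# Minimal sketch: create servers for one environment from a declarative configuration.
import boto3

ENV_CONFIG = {
    "name": "perf-test",                      # hypothetical environment name
    "image_id": "ami-0123456789abcdef0",      # placeholder AMI
    "instance_type": "t3.medium",
    "count": 3,
}

def create_environment(config: dict) -> list[str]:
    """Spin up identical servers for one environment; returns the instance IDs."""
    ec2 = boto3.client("ec2")
    response = ec2.run_instances(
        ImageId=config["image_id"],
        InstanceType=config["instance_type"],
        MinCount=config["count"],
        MaxCount=config["count"],
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "environment", "Value": config["name"]}],
        }],
    )
    return [instance["InstanceId"] for instance in response["Instances"]]

if __name__ == "__main__":
    print(create_environment(ENV_CONFIG))
```

Once a script like this is reviewed and versioned, standing up another environment is just a matter of running it with a different configuration.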

A performance testing environment is one example of an additional environment that can be set up. Some application projects are large enough that they need multiple user testing environments, and automation makes any additional environments easy to create.

Pain points with automating cloud deployments

Automation is a paradigm shift for management as well as administrators, who are used to doing things manually. It requires skill sets that organizations likely don't have and will need to acquire. Additionally, the on-premises organizational structure doesn't work in the cloud for most organizations, especially when doing automation. Conversely, the people with automation skills often don't have specialized knowledge in networking, security, DNS and Active Directory, among other areas.

That said, momentum continues to grow for organizations to build cloud footprints. For example, the CTO of a large financial services client has flatly stated that she does not want to make any additional investments in data centers or physical hardware. All new application workloads go to the cloud because the CTO wants to get out of the physical data center business and avoid the need to own properties. Cloud automation is the only way for her to achieve that goal; manual administration would not scale appropriately.

However, some organizations perceive that moving to the cloud will bring cost benefits, and that perception is largely illusory. Rather, the benefit is speed to market and enabling a business to become more agile and competitive. Automation is a prerequisite for achieving that speed. Otherwise, if you're just porting current applications into the cloud, you are simply moving the problem to someone else's data center.

When corporations get into the cloud, they all make the same mistake of assuming it's a technology change and that business will be conducted the same way, just with new cloud technology. The problem with that mindset is that the cloud is all software-based, and when workloads are automated, the old departmental or team silos don't work. A silo might be a network administrator who is used to dealing with LANs and does not have innate cloud skills.

On-premises technology skills do not automatically translate to the cloud without upskilling. So what happens is that DevOps teams with cloud and automation skills end up having to annex networking and other specialties. This can leave on-premises administrators feeling that their jobs are threatened and that they're left out of the movement unless they are willing to gain new skills.

Cloud automation is not about eliminating IT roles but about changing them. A human still has to figure out what rules make sense for the company, and then someone has to automate them and push them out so they're effective. In the cloud, security should be automated as well.

The value of policy-based management

Ultimately, organizations migrate to policy-based management, which establishes automated guardrails that prevent someone from doing things they shouldn't.

That reduces the need to rely on people manually auditing systems. The chief enemy of moving to the cloud is rookie mistakes. There have been instances where app developers have unwittingly opened databases to the internet, simply because they don't know any better. Doing so is a security risk because it increases the chance that your data gets hacked and is seen by people who shouldn't see it.

A policy-based management plan for automating cloud applications and workloads will prevent rookie mistakes. Most cloud vendors expose APIs, so automating these checks is relatively straightforward.
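As a purely illustrative sketch of such a guardrail, assuming AWS and the boto3 SDK, the check below flags security groups that leave a database port open to the whole internet; the port number and the security-group scan are assumptions for illustration, not a prescribed policy.

```python
# Illustrative guardrail: flag security groups that expose a database port to the internet.
import boto3

DB_PORT = 5432  # hypothetical: PostgreSQL

def find_open_db_groups() -> list[str]:
    ec2 = boto3.client("ec2")
    risky = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in sg.get("IpPermissions", []):
            all_traffic = rule.get("IpProtocol") == "-1"
            covers_port = (
                rule.get("FromPort") is not None
                and rule["FromPort"] <= DB_PORT <= rule["ToPort"]
            )
            open_to_world = any(
                r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
            )
            if (all_traffic or covers_port) and open_to_world:
                risky.append(sg["GroupId"])
    return risky

if __name__ == "__main__":
    for group_id in find_open_db_groups():
        print(f"Policy violation: {group_id} exposes port {DB_PORT} to the internet")
```

Run on a schedule, or wired into a deployment pipeline, a check like this becomes the automated guardrail described above.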

The complexity of automating clouds depends on the organization's level of cloud maturity. Customers just getting started on their migration should automate everything at the outset. Organizations should set up procedures to automate their cloud footprint so they don't need to implement anything manually. Things get a little trickier when a customer that started out with manual processes wants to adopt automation; then more work is involved to make that happen.

There are tools to do that, but the process takes longer. Still, organizations that have moved from manual processes to automation are very happy in the long run.

A provider can teach organizations with a manual presence how to do automation as code. That saves a lot of time, effort, and headaches.

The benefits

Organizations that adopt cloud and automation will see cost savings in staffing, gain dynamic scaling and the ability to grow and shrink a cloud footprint based on demand, and see significant productivity gains.

Even for large corporations, a hybrid cloud presence can be managed by a relatively small staff when cloud automation is adopted. Compare that with the old world, where an equivalent footprint managed entirely by hand would require many more people.

Mark McCoy is a managing partner and enterprise hybrid cloud architect at Asperitas Consulting, based in Chicago.

Visit link:
The Channel Angle: Determining The Value And ROI Of Cloud Automation - CRN

Read More..

StorPool Named One of Europe’s Fastest-Growing Companies – Business Wire

SOFIA, Bulgaria--(BUSINESS WIRE)--StorPool Storage was listed as the 413th fastest-growing company in Europe in the Financial Times' in-depth special report on organizations that achieved the highest compound annual growth rate in revenue between 2017 and 2020.

Tens of thousands of companies from 33 countries were invited to participate in the project. The ranking was compiled with research company Statista across a broad range of sectors. StorPool achieved a CAGR of 69.29 percent over the queried timeframe, nearly double the minimum average growth rate of 36.5 percent required to be included in this year's ranking.
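For readers unfamiliar with the metric, compound annual growth rate over the three-year 2017 to 2020 window is calculated as in the short sketch below; the revenue figures are hypothetical, chosen only so the arithmetic lands near the reported 69.29 percent.

```python
# CAGR = (end_value / start_value) ** (1 / years) - 1
start_revenue = 1.00     # hypothetical 2017 revenue (arbitrary units)
end_revenue = 4.85       # hypothetical 2020 revenue (~4.85x growth over three years)
years = 3                # 2017 -> 2020

cagr = (end_revenue / start_revenue) ** (1 / years) - 1
print(f"CAGR: {cagr:.2%}")   # roughly 69.3%, close to the reported figure
```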

"The companies that made the final cut were sufficiently resilient and in some cases, lucky to survive a collapse in demand caused by coronavirus restrictions, trade frictions due to Brexit, and a long-running global supply chain squeeze," read the report.

StorPool accelerates the world by storing data more productively and helping businesses streamline their operations. StorPool storage systems are ideal for storing and managing the data of demanding primary workloads - databases, web servers, virtual desktops, real-time analytics solutions, and other mission-critical software. Under the hood, the primary storage platform provides thin-provisioned volumes to the workloads and applications running in on-premise clouds. The native multi-site, multi-cluster and BC/DR capabilities supercharge hybrid- and multi-cloud efforts at scale.

"The keys to our success involve a superior product, dedicated team and partners who help us supply leading technology solutions for global companies," said Boyan Ivanov, CEO at StorPool Storage. "Whether locally, nationally or across continents, StorPool delivers the ideal foundation for large-scale clouds running mission-critical workloads. We are pleased to have our hard work recognized by the Financial Times as one of the fastest-growing companies in Europe. We believe our continued success will enable us to earn this recognition for years to come."

About StorPool Storage

StorPool Storage is a primary storage platform designed for large-scale cloud infrastructure. It is the easiest way to convert sets of standard servers into primary or secondary storage systems. The StorPool team has experience working with various clients: managed service providers, hosting service providers, cloud service providers, enterprises and SaaS vendors. StorPool Storage comes as software plus a fully managed data storage service that transforms standard hardware into fast, highly available and scalable storage systems.

Link:
StorPool Named One of Europe's Fastest-Growing Companies - Business Wire

Read More..

VLogic Systems, Inc. Named One of the Most Prominent IWMS Providers – PR Web

We are thrilled at being named one of the most prominent IWMS vendors on Verdantix's Green Quadrant 2022.

CONCORD, Mass. (PRWEB) April 12, 2022

VLogic Systems, Inc., a leading Integrated Workspace Management Software (IWMS) SaaS provider, today announced that VLogic was listed as one of the twelve most prominent vendors in the Verdantix Green Quadrant Integrated Workplace Management Systems 2022. Verdantix is a research and advisory firm with global expertise in digital strategies for Environment, Health & Safety, ESG & Sustainability, Operational Excellence, and Smart Buildings. The firm releases its IWMS smart buildings Green Quadrant research every two years.

"We are thrilled at being named one of the most prominent IWMS vendors on Verdantix's Green Quadrant 2022. We look forward to releasing even more smart building innovations this year, including upgraded versions of our real-time occupancy tracking software tools, and additional releases of our space scheduling software for tackling the demands of today's hybrid workplace, including office hoteling and hot-desking features," said VLogic's president, George T. Koshy.

One of the VLogic innovations reviewed by Verdantix for its 2022 research report is VLogicFM Tracking, a real-time occupancy tracking solution that uses IoT-enabled (Internet of Things) sensors to securely send encrypted occupancy data to VLogicFM's Microsoft Azure cloud servers via unique, onsite cellular gateways. VLogic's cellular solution is practically plug and play because it is deployed entirely outside the customer's local network. In practical terms, this translates into a better security footprint, dramatically faster onboarding, lower cost, and faster time to activation.

Customers deploy VLogicFM Tracking to optimize current space usage, load-balance departmental sharing of common rooms (e.g., patient exam rooms and conference rooms), and make data-driven new-construction planning decisions. Customers also report that using this objective, sensor-based solution often defuses ongoing tensions between building occupants, who subjectively clamor for more space, and budget-constrained building operations managers.

Future updates to VLogicFM Scheduling will include sensor-based occupancy data to improve hot-desking and office hoteling bookings for hybrid workforces. For example, the system could alert scheduling managers that a booked room is still vacant a set number of minutes into the booked time slot.
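As a rough sketch of the kind of logic such an alert might use (not VLogic's actual implementation), the snippet below compares hypothetical bookings against the latest sensor reading for each room, with an assumed 15-minute grace period.

```python
# Illustrative no-show check: flag bookings whose room is still vacant after a grace period.
from datetime import datetime, timedelta

GRACE_PERIOD = timedelta(minutes=15)  # hypothetical threshold

# Hypothetical data: booking start times and latest occupancy-sensor state per room.
bookings = [
    {"room": "Conference A", "start": datetime(2022, 4, 12, 9, 0)},
    {"room": "Exam Room 3", "start": datetime(2022, 4, 12, 9, 30)},
]
occupancy = {"Conference A": False, "Exam Room 3": True}

def no_show_alerts(now: datetime) -> list[str]:
    alerts = []
    for booking in bookings:
        past_grace = now >= booking["start"] + GRACE_PERIOD
        if past_grace and not occupancy.get(booking["room"], False):
            alerts.append(f"{booking['room']} still vacant {GRACE_PERIOD} after booking start")
    return alerts

print(no_show_alerts(datetime(2022, 4, 12, 9, 20)))
```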

The possibilities created by smart building technologies are exciting and VLogic is fully committed to enhancing these offerings.

About VLogic Systems, Inc.

VLogic Systems, Inc., an integrated workspace management software (IWMS) pioneer, provides cloud-based SaaS solutions that maximize the value of enterprise physical facilities, assets, and real estate portfolios by dramatically simplifying workspace management, using an intuitive, spatially centered model that reduces management time and cost. VLogic Systems Inc. is headquartered in Concord, MA. For more information, go to http://www.vlogicsystems.com/

VLogic Systems, Inc.
pr@vlogicsystems.com
978-341-9000 x407
LinkedIn: https://www.linkedin.com/company/vlogicsystems/


Read the original post:
VLogic Systems, Inc. Named One of the Most Prominent IWMS Providers - PR Web

Read More..

At last, Atlassian sees an end to its outage … in two weeks – The Register

The Atlassian outage that began on April 5 is likely to last a bit longer for the several hundred customers affected.

In a statement emailed to The Register, a company spokesperson said the reconstruction effort could last another two weeks.

The company's spokesperson explained its engineers ran a script to delete legacy data as part of a scheduled maintenance for unidentified cloud products. But the script went above and beyond its official remit by trashing everything.

"This data was from a deprecated service that had been moved into the core datastore of our products," Atlassian's spokesperson said. "Instead of deleting the legacy data, the script erroneously deleted sites, and all associated products for those sites including connected products, users, and third-party applications."

Atlassian, which has been trying to repair the damage of its errant script, on Friday said it expected "most site recoveries to occur with minimal or no data loss." And so far, though data has been deleted, Atlassian has been able to revive it.

"We maintain extensive backup and recovery systems, and there has been no data loss for customers that have been restored to date," Atlassian's spokesperson said, and stressed that this was not the consequence of a cyberattack and that no authorized access to customer data has occurred.

Jira Software, Jira Work Management, Jira Service Management and Confluence continue to show problems on the Atlassian status page, as do Opsgenie and Atlassian Access. Jira provides software issue tracking while Confluence offers a web-based corporate wiki. Opsgenie is an alert service and Atlassian Access is an identity and access management service.

Onsite JIRA installations have not been affected. Self-managed servers are on the way out, however: back in October 2020, Atlassian announced the discontinuation of its server products; it stopped selling new licenses on February 2, 2021, and plans to end support for its server products on February 2, 2024. The reason, the company explained last year, is that the cloud is the future.

"We know this outage is unacceptable and we are fully committed to resolving this," Atlassian's spokesperson said. "Our global engineering teams are working around the clock to achieve full and safe restoration for our approximately 400 impacted customers and they are continuing to make progress on this incident."

As we reported earlier on Monday, the software biz says it has restored functionality for more than 35 per cent of those affected by the service outage.

Atlassian said that the company is doing everything it can to restore service as fast as possible but until today it had been unable to provide a likely recovery date due to the complexity of the rebuilding process.

"While we are beginning to bring some customers back online, we estimate the rebuilding effort to last for up to two more weeks," the company said.

That's quite a bit longer than the "<6 hours" recovery time promised by the company for Tier 1 services like Jira and Confluence.

"We know this is not the news our customers are hoping for, and we apologize for the length and severity of this incident. We dont take this issue lightly and are taking steps to prevent future reoccurrence."

See the article here:
At last, Atlassian sees an end to its outage ... in two weeks - The Register

Read More..

The Falkirk experience of building a hybrid cloud – UKAuthority.com


The council has shown how it is possible to develop a model that optimises public and private clouds while maintaining high levels of security, writes Andrew Puddephatt, UK public sector director at Nutanix

Cloud first might be a long-term ambition for public authorities, but the demands of migrating applications, maintaining security and managing costs mean that, for now, most have their sights set on optimising the use of a hybrid cloud model in their digital infrastructure.

Falkirk Council is among those that have made significant progress, taking a 'cloud as appropriate' approach, with a combination of private and public clouds and its own data centre providing the most suitable environment for each of its business applications.

"Cloud first is a journey," says its team leader for network, infrastructure and cyber security, Murat Dilek. "We'll get there one day, but not within the next few years."

He says that, for the foreseeable future, Falkirk is focused on a hybrid cloud approach in which the Nutanix platform is playing an important role.

In 2019 the council began by looking at its applications and saw that it could not even think about moving some of them to the cloud. But it already had Office 365 running in the Microsoft public cloud, and some virtualisation within its infrastructure with which it could work.

This prompted a focus on a hybrid model in which each application was assessed to see where it works best, taking into account the availability and cost of moving it to software-as-a-service or running it on a public cloud such as Azure or AWS.

This led to some legacy and Oracle-based applications remaining within the council's on-premise infrastructure, while others, including its HR, housing, social care and disaster recovery systems, are in public clouds.

As the third element, it has developed its private cloud on the Nutanix platform, utilising a virtual desktop infrastructure with 70 virtual servers and 100 physical-to-virtual conversions, along with the appropriate back-up functions.

This has provided the advantages of private cloud: it is scalable on demand; brings down operational expense by reducing the amount of on-premise equipment to support; helps to automate workloads for streamlined processes and increased productivity; reduces the management overhead as the system can be monitored and maintained centrally; provides more options for business continuity through a fast recovery of applications and data; and reduces the floor space and carbon emissions of the on-premise data centre.

The platform has also provided the capability to extend more applications from the private to the public cloud, or vice versa, when doing so helps to optimise operations.

Along with this has been a focus on high security to underpin hybrid working during the pandemic, and the outlook for a hybrid workplace over the long term. The council aimed to develop a zero trust access (ZTA) model, in which home workers have to enter the corporate network through a firewall and load balancer, then go into the virtual desktop infrastructure to access applications hosted in the data centre, and move back out to those in public clouds.

ZTA is reinforced through a multi-device management system that includes anti-virus endpoints, patch management, device restriction and encryption, and the need for complex passwords or passphrases to permit entry. In addition, access to on-premise applications and resources goes through an encrypted tunnel, and the route back out to the internet involves local internet breakouts and web gateways with web filtering and SSL inspection.

The result is a system that provides users with internet access with full inline inspection to block the bad and protect the good, and connects them to the applications for which they are authorised, whether located in the data centre or in a public or private cloud. It does this while providing the same experience as if they were on premises.

This has created a situation in which, Dilek says, the internet has effectively become the new corporate network for Falkirk, with lower costs, increased reliability of applications, lower risks, scalability and the foundation for a more agile approach in the council's digital operations.

This is relevant to a couple of key trends in the public sector. One is a gradual move away from using on-premise data centres. Research for the Nutanix Enterprise Cloud Index has shown that, while 25% of respondents from the sector are still deploying IT in traditional data centres, less than 5% expect to be doing so in three years' time.

The other is that 64% expected to be operating in a multicloud environment within three years, and 83% agreed that a hybrid combining public and private clouds would be the ideal model. In this respect, the UK public sector is ahead of the norm.

But it comes with challenges, notably around managing costs and security and integrating data across clouds. The interoperability of multiple cloud environments and the ability to move applications from one to another are also crucial.

The latter is often difficult to achieve, but the experience of Falkirk Council with the Nutanix platform shows it is possible. There is scope to build a hybrid cloud that meets all the demands and equips an organisation for a future of hybrid working and adoption of new applications. It is something all public sector bodies should explore.

Excerpt from:
The Falkirk experience of building a hybrid cloud - UKAuthority.com

Read More..

Introduction to Data Mining – University of Minnesota

Avoiding False Discoveries: A completely new addition in the second edition is a chapter on how to avoid false discoveries and produce valid results, which is novel among other contemporary textbooks on data mining. It supplements the discussions in the other chapters with a discussion of the statistical concepts (statistical significance, p-values, false discovery rate, permutation testing, etc.) relevant to avoiding spurious results, and then illustrates these concepts in the context of data mining techniques. This chapter addresses the increasing concern over the validity and reproducibility of results obtained from data analysis. The addition of this chapter is a recognition of the importance of this topic and an acknowledgment that a deeper understanding of this area is needed for those analyzing data.
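Purely as an illustration of one concept the chapter covers, the sketch below runs a simple permutation test on two hypothetical samples, estimating a p-value without distributional assumptions; it is a generic example, not material taken from the book.

```python
# Illustrative permutation test: is the observed difference in means significant?
import numpy as np

rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.5, scale=1.0, size=50)   # hypothetical sample A
group_b = rng.normal(loc=0.0, scale=1.0, size=50)   # hypothetical sample B

observed = group_a.mean() - group_b.mean()
pooled = np.concatenate([group_a, group_b])

n_permutations = 10_000
count = 0
for _ in range(n_permutations):
    rng.shuffle(pooled)                              # break any real group structure
    diff = pooled[:50].mean() - pooled[50:].mean()
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / n_permutations
print(f"observed difference {observed:.3f}, permutation p-value {p_value:.4f}")
```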

Classification: Some of the most significant improvements in the text have been in the two chapters on classification. The introductory chapter uses the decision tree classifier for illustration, but the discussion of many topics (those that apply across all classification approaches) has been greatly expanded and clarified, including topics such as overfitting, underfitting, the impact of training size, model complexity, model selection, and common pitfalls in model evaluation. Almost every section of the advanced classification chapter has been significantly updated. The material on Bayesian networks, support vector machines, and artificial neural networks has been significantly expanded. We have added a separate section on deep networks to address the current developments in this area. The discussion of evaluation, which occurs in the section on imbalanced classes, has also been updated and improved.

Anomaly Detection: Anomaly detection has been greatly revised and expanded. Existing approaches (statistical, nearest neighbor/density-based, and clustering-based) have been retained and updated, while new approaches have been added: reconstruction-based, one-class classification, and information-theoretic. The reconstruction-based approach is illustrated using autoencoder networks that are part of the deep learning paradigm.

Association Analysis: The changes in association analysis are more localized. We have completely reworked the section on the evaluation of association patterns (introductory chapter), as well as the sections on sequence and graph mining (advanced chapter).

Clustering: Changes to cluster analysis are also localized. The introductory chapter added the K-means initialization technique and an updated discussion of cluster evaluation. The advanced clustering chapter adds a new section on spectral graph clustering.
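The initialization technique referred to is presumably a k-means++-style seeding; as a generic illustration rather than the book's own presentation, the sketch below chooses each new center with probability proportional to its squared distance from the nearest center already picked.

```python
# Generic k-means++-style seeding sketch: spread the initial centers apart.
import numpy as np

def kmeans_pp_init(points: np.ndarray, k: int, rng=np.random.default_rng(0)) -> np.ndarray:
    """Pick k initial centers, each chosen with probability proportional to its
    squared distance from the nearest center selected so far."""
    n = len(points)
    centers = [points[rng.integers(n)]]          # first center: uniform at random
    for _ in range(k - 1):
        dists = np.min(
            [np.sum((points - c) ** 2, axis=1) for c in centers], axis=0
        )                                        # squared distance to nearest chosen center
        probs = dists / dists.sum()
        centers.append(points[rng.choice(n, p=probs)])
    return np.array(centers)

# Tiny hypothetical dataset: three well-separated 2D clusters.
data = np.vstack([np.random.default_rng(1).normal(m, 0.3, (30, 2)) for m in (0, 3, 6)])
print(kmeans_pp_init(data, k=3))
```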

Data: The data chapter has been updated to include discussions of mutual information and kernel-based techniques.

Exploring Data: The data exploration chapter has been removed from the print edition of the book, but is available on the web.

Visit link:

Introduction to Data Mining - University of Minnesota

Read More..

Advanced analytics can help achieve fuel optimization in open-pit mining – McKinsey

Metals mining contributes 3 to 4 percent of global CO2 emissions. For open-pit mining, about 45 percent of carbon emissions are typically Scope 1, of which about 35 percent comes from fuel consumed in hauling.

Fuel optimization achieved by harnessing existing data and machine learning can reduce carbon emissions immediately while alternative technologies to diesel for off-highway trucks are developed and scaled.

A proven machine-learning platform enables discovery of correlations and highlights drivers of fuel consumption based on a truck fleet's past performance by connecting fleet management, enterprise asset management, machine IoT, and other operational data (for example, tire pressure, road layout and quality sensors, and fuel quality). In addition, creation of a digital twin makes it possible to solve for fuel efficiency while maintaining productivity and integrating with both internal and external data sets.
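As a highly simplified sketch of how such drivers might be surfaced from operational data (this is not McKinsey's platform; every column name and coefficient below is invented for illustration), the snippet fits a gradient-boosted model on synthetic haul-cycle records and ranks feature importances.

```python
# Illustrative driver analysis: rank hypothetical features by their influence on fuel burn.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "payload_tonnes": rng.uniform(150, 240, n),
    "haul_distance_km": rng.uniform(2, 12, n),
    "road_grade_pct": rng.uniform(0, 10, n),
    "tire_pressure_kpa": rng.uniform(550, 750, n),
    "idle_minutes": rng.uniform(0, 45, n),
})
# Synthetic target: fuel consumption driven mostly by distance, grade and payload.
df["fuel_litres"] = (
    3.0 * df["haul_distance_km"]
    + 1.5 * df["road_grade_pct"]
    + 0.05 * df["payload_tonnes"]
    + 0.1 * df["idle_minutes"]
    + rng.normal(0, 2, n)
)

features = df.drop(columns="fuel_litres")
model = GradientBoostingRegressor(random_state=0).fit(features, df["fuel_litres"])
ranking = sorted(zip(features.columns, model.feature_importances_),
                 key=lambda kv: kv[1], reverse=True)
for name, importance in ranking:
    print(f"{name:>18s}: {importance:.2f}")
```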

As ore grades decrease and pits become deeper, hauling, and its associated costs, becomes increasingly important to mine operating expenditures. McKinsey's experience shows that leveraging proven machine-learning-based solutions, along with a change in management strategy, can improve hauling fuel efficiency relatively quickly and with limited investment.

Download the full commentary for more insight (PDF, 76.3 KB).

Continued here:

Advanced analytics can help achieve fuel optimization in open-pit mining - McKinsey

Read More..

Data from friends and strangers show where you are – Futurity: Research News


Turning off your data tracking doesn't mean you're untraceable, a new study warns.

Data about our habits and movements are constantly collected via mobile phone apps, fitness trackers, credit card logs, websites visited, and other means. But even with it off, data collected from acquaintances and even strangers can predict your location.

"Switching off your location data is not going to entirely help," says Gourab Ghoshal, an associate professor of physics, mathematics, and computer science at the University of Rochester.

Ghoshal and colleagues applied techniques from information theory and network science to find out just how far-reaching a person's data might be. The researchers discovered that even if individual users turned off data tracking and didn't share their own information, their mobility patterns could still be predicted with surprising accuracy based on data collected from their acquaintances.

Worse, says Ghoshal, almost as much latent information can be extracted from perfect strangers that the individual tends to co-locate with.

The researchers analyzed four datasets: three location-based social network datasets composed of millions of check-ins on apps such as Brightkite, Facebook, and Foursquare, and one call-data record containing more than 22 million calls by nearly 36,000 anonymous users.

They developed a colocation network to distinguish between the mobility patterns of two sets of people: those socially tied to an individual, and strangers who simply tend to visit the same places at the same times.

By applying information theory and measures of entropy (the degree of randomness or structure in a sequence of location visits) the researchers learned that the movement patterns of people who are socially tied to an individual contain up to 95% of the information needed to predict that individual's mobility patterns.

However, even more surprisingly, they found that strangers not tied socially to an individual could also provide significant information, predicting up to 85% of an individual's movement.
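To make the entropy measure concrete, here is a toy example (not the study's actual estimator, which works on far richer data): it computes the Shannon entropy of a short, hypothetical sequence of location visits, where lower entropy means more regular, more predictable movement.

```python
# Toy illustration: Shannon entropy of a hypothetical sequence of location visits.
from collections import Counter
from math import log2

visits = ["home", "work", "home", "work", "gym", "home", "work", "home"]

counts = Counter(visits)
total = len(visits)
entropy = -sum((c / total) * log2(c / total) for c in counts.values())

print(f"entropy: {entropy:.2f} bits")  # lower values -> more structure, easier to predict
```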

The ability to predict the locations of individuals or groups can be beneficial in areas such as urban planning and pandemic control, where contact tracing based on mobility patterns is a key tool to stopping the spread of disease. In addition, many consumers appreciate the ability of data mining to offer tailored recommendations for restaurants, TV shows, and advertisements.

However, Ghoshal says, data mining is a slippery slope, especially because, as the research shows, individuals sharing data via mobile apps may be unwittingly providing information about others.

"We're offering a cautionary tale that people should be aware of how far-reaching their data can be," he says. "This research has a lot of implications for surveillance and privacy issues, especially with the rise of authoritarian impulses. We can't just tell people to switch off their phones or go off the grid. We need to have dialogues to put in place laws and guidelines that regulate how people collecting your data use it."

Additional coauthors of the paper are from the University of Exeter, the Federal University of Rio de Janeiro, Northeastern University, and the University of Vermont.

Source: University of Rochester

Read more:

Data from friends and strangers show where you are - Futurity: Research News

Read More..