
Taking the Complexity Out of the Cloud Journey – CIO Insight

According to a study by Wakefield Research, 92 percent of organizations are either in the midst of app modernization or are planning to modernize. Unfortunately, many of these modernization projects run into trouble. As they progress, the projects grow more complex, more expensive, and riskier.

Around 80 percent of software developers and architecture engineers admit to a failure in an app modernization effort. Three out of four survey respondents complained about cost; many reported that a typical application modernization project costs nearly $1.5 million. Another 58 percent said such projects usually take around 16 months, with 27 percent saying they can run for two or more years. Developers, too, felt that a lack of integration tools was holding them back.

These reports aren't discouraging organizations from migrating more applications to the cloud. Instead, app modernization teams are seeking tools that automate and simplify the cloud journey. Such tools must be able to connect and integrate applications between clouds, as well as between the cloud and those applications that must remain on-premises. This is a lot harder than it sounds, as many in IT have discovered.

Many organizations hear about how Google Cloud, Amazon, Facebook, and other hyperscalers have a cloud-only architecture. They admire the flexibility this offers and become envious of the capabilities, performance, and cost-efficiency these providers build into their data centers and application portfolios. Businesses want that for themselves, and they often rush headlong into the cloud in their digital transformation efforts.

But they are forgetting one major point that explains why the hyperscalers make it look easy: starting with cloud-native applications. Hyperscalers had the luxury of being able to architect everything for the cloud, so they weren't bogged down in decades-old legacy applications.

Most established organizations are coming from a very different position. Many of their apps were designed for on-premises deployment. Even when they develop newer apps for the cloud, they often discover that these apps still need to interact with legacy on-premises systems.

For example, banking systems that are cloud-based and customer-facing typically suffer from dependencies that require data to be passed through a legacy system or at least interact with that system for verification purposes. There are also a great many regulations that require data to remain on-premises, not leave the country, guarantee privacy, and meet other standards.

Financial services requirements can get particularly complex: regulations about how and where money can be moved, the many taxation jurisdictions and responsibilities, data sovereignty rules, etc. These factors tend to mire down application modernization efforts.

All of this makes the move to the cloud far more challenging. When you then factor in the complexity within modern IT itself, from Kubernetes clusters and virtualization to software-defined computing and other factors, it's no wonder so many digital transformation and cloud enablement projects are stalling.


Fortunately, several providers are going to bat for those wanting to move to the cloud. Companies like vFunction and Ori are developing tools to eliminate this complexity and automate the cloud journey. Ori, for example, offers the Ori Global Cloud as a service. It functions as a kind of middleware or automated orchestration/integration platform that promises to take apps to the cloud at Internet speed.

"It is now possible to deploy a single app to the cloud in minutes and as many as 30 common apps to the cloud within a day," said Rick Taylor, CTO at Ori. The Ori Global Cloud reduces the need to develop in-house technology expertise by abstracting away complexity and eliminating time-consuming manual plumbing. This allows staff to focus on higher-level IT functions.

The software achieves this efficiency via a combination of automation and intelligent orchestration. AI built into Ori Global Cloud administers application compute destinations based on criteria such as availability, operational cost, location, and performance requirements.

Plus, the supporting automation features take the manual labor out of the move to the cloud. Ori automatically manages the underlying networking, security, and installation processes for all deployments. IT only has to lay out a few application requirements, and the platform takes care of the rest.

If such platforms deliver what they promise, this could bring a new lease of life to the world of digital transformation and cloud migration.


Read the original post:
Taking the Complexity Out of the Cloud Journey - CIO Insight


Army IT leader pledges quicker cloud uptake in ‘year of action’ – C4ISRNet

AUGUSTA, Ga. The U.S. Army will make swift, significant strides in cloud migration and utilization in the coming 12 months, according to the service's top uniformed information technology official.

Dubbing the next year a period of "action and acceleration," Lt. Gen. John Morrison, deputy chief of staff, G-6, on Aug. 17 pledged much more rapid movement to the cloud now that the groundwork has been laid.

The Army considers cloud migration and widespread, secure use foundational to the broader modernization of its networks, computers and collaboration capabilities. Mastering cloud computing will also help realize artificial intelligence and machine learning for cyber warfare, according to the 2020 Army Cloud Plan.

"We are putting the requisite capabilities into the hands of our operational formation so they can understand the applications that now need to move to the cloud, and we are aligning the requisite combat power to assist in that migration," Morrison told reporters at the AFCEA TechNet Augusta conference. "It is going to be much faster."

The Army requested $16.6 billion in cyber and IT funding for fiscal 2023, which starts Oct. 1, or more than 9% of the service's $178 billion budget blueprint. Hundreds of millions would be invested in cloud, officials said.

Morrison and others are coordinating with Army Chief Information Officer Raj Iyer to audit data centers that the service eventually wants to shutter, to better understand what is out there and what needs to be relocated. Such analysis will speed cloud uptake, according to Morrison.

"What we have learned very quickly is it's not about the data center, it's about the applications and the data that's in the data center," the general said. "What is cloud ready? What is not cloud ready? Et cetera. And that's sort of been where we've gotten a little pitchy at times."

Iyer in June described the coming year as an inflection point along the Army's digital transformation journey. The CIO said he expected great progress to be made on cloud initiatives, as well, based on previous advancements in fiscal 2021 and 2022.

"We need to make sure that the investments that we have are appropriately aligned to the Army's priorities and to the DoD priorities, quite honestly," Iyer told reporters this year. "There are clearly some priorities that we have invested in. All of you know that digital means that we have to adopt at scale cloud, data and AI."

The Army's cloud efforts are tied to the Joint Warfighting Cloud Capability, the Department of Defense's $9 billion follow-up to the failed Joint Enterprise Defense Infrastructure endeavor. The department axed the lucrative JEDI deal, won by Microsoft, in 2021 after years of delays and accusations from Amazon that the Trump administration interfered in the competition.

The JWCC is meant to beef up the department's cloud-computing capabilities by bridging unclassified, secret and top-secret tranches while still reaching the military's farthest edge.

The Pentagon last year contacted Amazon, Google, Microsoft and Oracle about the JWCC, and earlier this year said proposals remained under review. Awards are expected to be made by the end of December, after an April deadline was deemed premature.

"JWCC is still in the throes of moving through the acquisition process," Morrison said. "So I would sit there and say we're well nested, and the DoD CIO understands everything that we're doing."

Colin Demarest is a reporter at C4ISRNET, where he covers military networks, cyber and IT. Colin previously covered the Department of Energy and its National Nuclear Security Administration namely Cold War cleanup and nuclear weapons development for a daily newspaper in South Carolina. Colin is also an award-winning photographer.

Go here to read the rest:
Army IT leader pledges quicker cloud uptake in 'year of action' - C4ISRNet


Google’s Andy Murphy: Agencies Should Keep Pace With Innovation Through Cloud – GovCon Wire

Andy Murphy, head of customer engineering for federal civilian agencies at Google, said government agencies looking to meet their modernization objectives, take advantage of new technical capabilities and accomplish their missions should adopt cloud computing technologies.

"Cloud providers can often develop and release a new feature or service long before an agency would be able to procure the appropriate infrastructure, develop the software, receive an authority to operate and implement production," Murphy wrote.

"Furthermore, this frees an agency to focus more of their time on public interactions and services rather than spending time on the underlying infrastructure," he added.

Murphy discussed how public cloud services could provide agencies unprecedented scale and speed when it comes to ingesting and analyzing petabytes of data in order to generate insights and make data-driven decisions.

He mentioned BigQuery and how Google Cloud's multicloud data warehouse could enable agencies to store and analyze large data volumes without the need to manage the underlying infrastructure.
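
As a rough illustration of that serverless model, the sketch below uses the google-cloud-bigquery Python client to run a standard SQL query; the public dataset and the query itself are placeholders, and the snippet assumes Google Cloud credentials are already configured in the environment.

```python
# Minimal sketch: querying BigQuery without managing any infrastructure.
# Assumes `pip install google-cloud-bigquery` and that application-default
# credentials (or a service account) are already set up.
from google.cloud import bigquery

client = bigquery.Client()  # project is inferred from the environment

# Hypothetical query against a public dataset; an agency would point this at its own tables.
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 10
"""

for row in client.query(query).result():  # result() waits for the query job to finish
    print(f"{row.name}: {row.total}")
```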

Murphy also discussed Google Cloud's use of open-source tools and interfaces and the company's Kubernetes-based application management platform, Anthos.

"Google created Kubernetes to run on whichever on-premise or cloud infrastructure our customers prefer and give users an automated, fully-managed, single control plane for orchestrating and operating all of their containerized applications," he added.

See original here:
Google's Andy Murphy: Agencies Should Keep Pace With Innovation Through Cloud - GovCon Wire


The climate and clouds conundrum – Protocol

Good day, Protocol Climate friends. Today we're talking all about clouds. We'll be exploring the best way to cut down on contrails and their sneaky bad climate impact as well as the most and least carbon-intensive places to site data centers. Float along with us!

Contrails. They look so innocuous up there, their puffy little tendrils stretching across the sky. But they have a dark climate secret, albeit one with an easy solution that could have immediate benefits for our overheating world.

Contrails have a surprisingly large impact on the climate. The clouds of ice crystals that form in the wake of a plane are responsible for more than 50% of flights' climate impact and up to 2% of total global warming.

Airlines have focused on cutting carbon dioxide, but addressing contrails could be a quick fix. Despite contrails' outsized impact on the climate, they have largely flown under the radar (aviation joke) outside academic circles.

Some airlines and tech companies are starting to explore options to kill contrails. Step one comes with measuring how and when they form. "There's no good way to account for them yet, but it is progressing," said Sola Zheng, an aviation researcher at the International Council on Clean Transportation.

Ultimately, reducing warming contrails could buy us a little climate breathing room while the world works to cut carbon emissions. That's because contrails can dissipate in a few hours while carbon dioxide remains in the atmosphere for centuries. So there's no reason not to get to it.

Read more about aviation's dirty secret here.

Michelle Ma

Data centers have long been energy hogs, but just how much carbon pollution is tied to their energy use is often an open question. A new report makes it clear, though, which data center hubs are the most and least carbon intensive. The findings point to the challenges holding the sector back from reducing carbon emissions, as well as ways tech companies can mitigate the climate toll of their cloud computing demands.

Renewables are good, but they don't guarantee a clean cloud. The report, released Thursday by cloud management platform Cirrus Nexus, looked at carbon intensity in regions that host clusters of data centers. Carbon intensity is a measure of carbon dioxide emitted per unit of electricity generated.

Managing carbon will be key to keeping the cloud cool. Chris Noble, CEO and co-founder of Cirrus Nexus, said that while there's not a simple answer for companies wondering where to locate their workloads in order to minimize their climate toll, there are some best practices.

Still, if cloud customers do start to ask for more climate-friendly computing, it could have a major influence on the industry and even have a surprising impact on the grid.

Click here to find out about that impact and read more about the report.

Lisa Martine Jenkins

Why on-demand talent could be exactly what companies need right now: If you thought remote work, independent contracting and contingent work rose sharply during the pandemic, just wait until the next few months, when you'll see an even higher uptick in the on-demand talent economy.

Read more from Upwork

Geothermal is getting a boost: the startup Fervo Energy raised $138 million in its latest funding round, led by DCVC, bringing its total investment up to $177 million in five years. Fervo plans to use the funds to make its plan for lower-cost geothermal power plants a reality.

Atom Power, an eight-year-old startup joining the growing number of companies aiming to improve EV charging in the U.S., got a $100 million investment from Korea's SK.

The carbon management platform Carbon Direct raked in $60 million in an equity investment co-led by Decarbonization Partners and Quantum Energy Partners. The former is a joint venture between BlackRock and Singapore's state-owned holding company Temasek.

The software startup Zitara promises to make batteries safer and more profitable via both physics and machine learning, and this week closed a $12 million series A funding round led by Energy Impact Partners.

Worldfavor, a sustainability reporting platform, amassed nearly $10.2 million in its series A funding round, led by the Nordic SEB Private Equity.

French company Koolbooks aims to bring solar-powered refrigerators and freezers to Africa, where erratic power can make traditional refrigeration a challenge. The startup raised $2.5 million in seed funding, led by the Nigeria-based Aruwa Capital Management.

Mantel, a carbon capture startup, received a $2 million investment led by the MIT spin-off The Engine. The company uses molten salts that absorb carbon dioxide in hot environments like boilers and kilns, used in the notoriously hard-to-decarbonize industrial sector.

In funding news beyond the venture capital world, Volkswagen's leaders have said the auto giant plans to take a stake in Canadian mining companies in order to guarantee it has sufficient raw materials for batteries for its growing EV business.

California put another nail in the internal combustion engine coffin. The state is set to ban the sale of new gas-powered vehicles by 2035. More than a dozen states could follow its lead, further jumpstarting the EV revolution.

Google Maps could make our lives easier and help the climate. The service already offers lower-emissions driving routes. But it could go even further by giving users the ability to link biking and public transit in one trip.

You can get paid to turn your truck into a battery. That's what a new smart-charging program from utility Duke Energy and Ford is offering lucky F-150 Lightning drivers who sign up to bolster the grid.

No one line should have all that power. Or maybe it should. Phil Anschutz, the billionaire owner of Coachella music festival and the Los Angeles Kings, wants to build a power line from Wyoming to California to transport clean energy.

The new Cold War could heat up the planet as China and the U.S. suspend climate talks. Here are five things to know about what's next for the world's two biggest greenhouse gas emitters.

Why on-demand talent could be exactly what companies need right now: The biggest benefit of leveraging on-demand talent is often tapping into the talent and skills that businesses can't find elsewhere. Upwork's recent report highlights that 53% of on-demand talent provide skills that are in short supply for many companies, including IT, marketing, computer programming and business consulting.

Read more from Upwork

Thanks for reading! As ever, you can send any and all feedback to climate@protocol.com. See you next week!

View post:
The climate and clouds conundrum - Protocol


VMware posts solid earnings and revenue beat as it waits to be acquired by Broadcom – SiliconANGLE News

Virtualization software giant VMware Inc. delivered solid second-quarter results that beat expectations on profit and revenue, in what is likely to be one of its last earnings reports as a public company.

The company reported a net profit for the quarter of $347 million, down slightly from the $411 million profit it recorded in the same quarter last year. Earnings before certain costs such as stock compensation came to $1.64 per share, with revenue rising just over 6% to $3.34 billion.

It was a solid if unspectacular performance, with Wall Street modeling earnings of just $1.57 per share on sales of $3.3 billion.

Not surprisingly, VMware's stock barely moved in extended trading, since the company is on the verge of being acquired by the chipmaker Broadcom Inc. in a blockbuster $61.2 billion deal that's likely to be completed in the coming months. Although the exact date for closing isn't known, Broadcom has said it should get the deal done during its fiscal year 2023, which starts in November.

VMware looks set to be a useful acquisition for Broadcom. Founded in 1998, the company has become synonymous with virtualization software, which enables applications and other computing workloads to be consolidated onto a smaller number of servers. In this way, servers can run multiple applications at once, increasing data center efficiency.

The company is far and away the most dominant player in the virtualization software space, but many believe it could be much more successful than it currently is. VMware's trouble is that it's firmly rooted in corporate, on-premises data centers.

With the rise of cloud computing, its value proposition became somewhat uncertain. The company tried and failed to launch its own cloud offerings, and ultimately went on to do deals with public cloud leaders such as Amazon Web Services Inc. However, those efforts haven't sparked much growth at VMware.

Broadcom is best known for making computer chips, but it also has a growing data center software business, and analysts believe that's where VMware's assets will come in handy. Broadcom says the deal will help ensure it becomes a leader in enterprise and cloud computing markets, offering hardware and software for a broad range of customers. For instance, analysts say VMware's NSX platform can help Broadcom strengthen its position in the networking industry.

Given the pending acquisition, VMware executives did not hold a conference call relating to today's earnings results, a common practice for public companies waiting to be acquired. Instead, VMware Chief Executive Raghu Raghuram (pictured) offered a short statement, saying he was pleased with the company's second-quarter performance.

"Our momentum continues next week at VMware Explore, where we will showcase new innovative offerings while also highlighting how we are helping customers continue to transform their businesses," he said. "We remain committed to helping organizations unlock the full potential of multi-cloud."

The company provided a breakdown of its revenue. It said software-as-a-service and license revenue rose 15% from a year ago, to $1.74 billion in the quarter. Subscription and SaaS revenue rose 22%, to $943 million. The company also reported subscription and SaaS annual recurring revenue rose 24%, to $3.89 billion.

"Our Q2 financial results reflect the continued commitment of the entire VMware team to accelerate innovation for our customers as they move to a multicloud environment," said VMware Executive Vice President and Chief Financial Officer Zane Rowe.

Follow this link:
VMware posts solid earnings and revenue beat as it waits to be acquired by Broadcom - SiliconANGLE News


Preparing a secure cloud environment in the digital new norm – Backend News

By Allen Guo, Country Manager for the Philippines, Alibaba Cloud Intelligence

As hybrid or remote working is adopted by many companies globally and becomes the new norm for millions of workers, cyberattacks continue unabated. In the past year, millions of workers in the Philippines, particularly those in work-from-home set-ups, were subject to cybersecurity threats. And in just the first half of this year, the country's national Computer Emergency Response Program (CERT-PH), under the Cybersecurity Bureau of the Department of Information and Communications Technology (DICT), handled hundreds of cybersecurity incidents, the majority of which were APT cyberattacks and compromised websites and systems.

Building a secure and reliable IT environment has therefore become an increasingly important priority for many businesses who are exploring opportunities in the global digital economy. The same is underscored by the DICT's National Cybersecurity Plan 2022, which outlines the department's plan of action to protect not just businesses and supply chains, but also government networks and individuals from cyber risks.

For businesses and enterprises, moving to the cloud and using cloud-based security features is a good way to counter cyber risks. But it's also important to delve deeper into how best to construct a secure and reliable cloud environment that can fend off even the most determined attacker.


In today's digital environment, discussions about cybersecurity best practices have never been more important. In this article, I would like to share some thoughts on how to create a secure cloud environment, from building the architecture to adopting cutting-edge security technologies and putting in place important security management practices, in the hope of inspiring more thorough conversations on this subject.

Design the next-generation enterprise security architecture

A resilient and robust security architecture is essential for creating a cloud environment capable of assuring an organization about the availability, confidentiality, and integrity of its systems and data.

From the bottom up, the architecture should include security modules of different layers, so that companies can build trustworthy data security solutions on the cloud layer by layer from the infrastructure security, data security, and application security to business security layers.

In addition to the security modules of all of the layers, there are a variety of automated data protection tools that enable companies to perform data encryption, visualization, leakage prevention, operation log management, and access control in a secure computing environment. Enterprises can also leverage cloud-based IT governance solutions for custom designs of cloud security systems to meet compliance requirements from network security and data security to operation auditing and configuration auditing. This ensures full-lifecycle data security on the cloud, with controllable and compliant data security solutions in place.

Another consideration is to build a multi-tenant environment, abiding by the principle of least privilege and adopting consistent management and control standards to protect user data from unauthorized access. In addition, establishing strict rules for data ownership and operations on data, such as data access, retention, and deletion, is also pivotal in creating a safe environment.

Moreover, enterprises can embrace the zero-trust security architecture and build a zero-trust practice by design to protect the most sensitive systems. The architecture requires everything (including users, devices, and nodes) requesting access to internal systems to be authenticated and authorized using identity access protocols. As such, the zero-trust security architecture cuts down on automatic trust, or trust without continuous verification, addressing modern challenges in securing remote working environments, hybrid cloud settings, and increasingly aggressive cyber threats.

Adopt cutting-edge security technologies

Cutting-edge security technologies such as comprehensive data encryption, confidential computing, and many more emerging tech solutions, can be leveraged to ensure we stay on top of the trends in cybersecurity.

Comprehensive data encryption provides advanced data encryption capabilities on transmission links (i.e., data-in-motion), compute nodes (i.e., data-in-use), and storage nodes (i.e., data-at-rest). Key Management Service and Data Encryption Service help users securely manage their keys and use a variety of encryption algorithms to perform encryption operations.
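
To make that key-management pattern concrete, here is a minimal, provider-agnostic sketch of envelope encryption in Python using the widely available cryptography package: a data key encrypts the payload, and a master key (which a KMS would normally hold) wraps the data key. This is an illustration of the concept only, not Alibaba Cloud's Key Management Service API.

```python
# Envelope encryption sketch (illustrative only; a real KMS or HSM would hold the master key).
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()   # in practice, generated and stored by the KMS
data_key = Fernet.generate_key()     # per-object data encryption key

# Encrypt the payload with the data key (data-at-rest protection).
ciphertext = Fernet(data_key).encrypt(b"sensitive record")

# Wrap the data key with the master key; only the wrapped key and ciphertext are stored.
wrapped_data_key = Fernet(master_key).encrypt(data_key)

# To read the data back: unwrap the data key, then decrypt the payload.
recovered_key = Fernet(master_key).decrypt(wrapped_data_key)
assert Fernet(recovered_key).decrypt(ciphertext) == b"sensitive record"
```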

Another emerging technology to safeguard the cloud environment is confidential computing, which is dedicated to securing data in use while it is being processed, protecting users' most sensitive workloads. Confidential computing, based on trusted execution environments (TEEs), ensures data security, integrity, and confidentiality while simplifying the development and delivery of trusted or confidential applications at lower costs. At Alibaba Cloud, we apply confidential computing to the hardware layer, virtualization layer, container layer, and application layer, so that data can be protected in the most comprehensive way.

Security management practices in place

It is equally important to adopt proper security management practices and mechanisms to maximize the security protection of ones critical system and important data.

One essential mechanism to protect the cloud environment is to develop a comprehensive disaster recovery system, which enables businesses to configure emergency plans for data centers based on factors such as power, temperature, and disasters, and establish redundant systems for basic services such as cloud computing, network, and storage. It helps companies to deploy their business across regions and zones and build disaster recovery systems that support multiple recovery models.

Setting up an effective review and response mechanism for cloud security issues is also imperative. First, having vulnerability scanning and testing in place is important to assess the security status of systems; second, it is vital to use cloud-native monitoring tools to detect any anomalous behavior or insider threats; furthermore, establishing proper procedures and responsibility models to quickly and accurately assess where vulnerabilities exist and their severity will help ensure that quick remedial actions can be taken when security problems emerge.

In the future, developing the security architecture, technologies, management, and response mechanism will no longer be perceived as a cost-center burden for companies but rather as critical capabilities to safeguard the performance and security of daily business operations. Crafting a comprehensive cloud security plan, adopting the best industrial practices, and choosing a professional cloud service provider with strong security credentials to work with should be imperative items on a CXO's agenda.

Alibaba Cloud, founded in 2009, is a cloud computing and artificial intelligence company that provides services to enterprises, developers, and government organizations in more than 200 countries and regions. Committed to the success of its customers, Alibaba Cloud provides reliable and secure cloud computing and data processing capabilities as a part of its online solutions.


View original post here:
Preparing a secure cloud environment in the digital new norm - Backend News


Global Microduct Cable market stood at USD 992.15 million in 2021 and is forecast to grow at a CAGR of 14.18% through 2027 to reach USD 2,130.57 million…

New York, Aug. 26, 2022 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Microduct Cable Market - Global Industry Size, Share, Trends, Competition, Opportunity, and Forecast, 2017-2027" - https://www.reportlinker.com/p05916953/?utm_source=GNW

Microduct cables are small plastic pipes which sub-divide the inner space of the pipe into smaller compartments where microwires can be blown, sprayed, or pushed in. Usually, microduct cables are small, flexible or semi-flexible ducts that provide simple, constant and low-friction paths for optical cables with relatively low pulling tension limits.

They are in compliance with current designs and building configurations, including cable blowing devices, for both riser and full-grade applications. They also enable cables to be securely deployed via pull lines or cords with less than 50 lbs of force, and cable blowing with a typical 100-200 feet per minute deployment rate.

Growing Construction and Electronics Industry

The Government of India issued the National Electronics Policy 2018, with the goal of achieving domestic electronics manufacturing by 2025. This is likely to increase the uptake of modern technologies like 5G, IoT, AI, and machine learning.

Therefore, a new generation of fiber optic cable and high-speed connection technologies is laying the groundwork for 5G networks, and thus increasing demand for microduct cables. Even though 5G is a wireless technology, it necessitates a greater number of fibre and copper connections to connect equipment within the radio access network domain and back to the routing and core network architecture.

Furthermore, the demand for construction is increasing due to huge economic growth in developing countries and low interest rates in a number of developed countries. Also, factors such as rising private sector investments in construction, technological development, and rising disposable income are anticipated to propel the growth of the microduct cable market during the forecast period.

Moreover, increased infrastructure and housing spending by governments across the globe is responsible for the huge installation of microduct cables.

Advancements in Cloud Computing and Communication Networks

Cloud computing helps enterprises use remote servers hosted on the internet to store, manage, and process critical data. The increasing volume of data generation in websites and mobile apps, the rising focus on delivering customer-centric applications for driving customer satisfaction, and the growing need to control and reduce capital expenditure (CAPEX) and operational expenditure (OPEX) are a few factors driving the growth of the emerging technologies.

Emerging technologies, such as big data, AI, and machine learning (ML), are gaining traction, leading to the growth of cloud computing globally. Major factors, such as data security, faster disaster recovery (DR), and meeting compliance requirements, are driving the growth of cloud computing services.

As a result of advancements in cloud computing and video subscription services, as well as support for 5G, communication traffic has increased rapidly in recent years. Meanwhile, due to physical constraints in the internal spaces of ducts, there is a growing demand for thin ultra-high-density (UHD) fiber-optic cables that contain optical fibers at a high density, which is anticipated to propel microduct market growth.

Increasing Number of Data Centers

Reflecting the recent advancement of cloud computing and other big-data processing technologies, a growing number of large-scale data centers are currently being constructed. As an increase in data transmission capacity between these facilities is expected in the future, the demand for high-count, high-density optical cables like microduct cables is growing.

Optical cables that connect these data centers are usually installed in cable ducts located outdoors, which requires technology that allows high-density installation of these cables in a limited conduit space. To meet this demand, we have developed a series of high-fiber-count, high-density optical cables that are flexible in all directions.

Therefore, such cables' high data transmission capacity is why most data centers adopt fiber optic telecom cables. Hence, this trend will likely boost the market for telecom and microduct cables in data centers.

Expansion of Fiber Optic Networks to Connect Data Centers

The increased deployment of data centers is expected to fuel the expansion of fiber optic cable installation and thus the microduct cable market. Fiber optic cables are used for intra-data center and inter-data center communications.

For intra-data center connectivity, data is transmitted within data centers located in buildings or on campuses using optical interconnects. Inter-data center optical interconnects, on the other hand, operate at the metro or long-haul interconnect levels because they connect two or more data centers.

The optical link between two data centers can be thousands of kilometers long and must transmit data at high speeds. As a result, massive amounts of data bandwidth are required for these data centers to send massive amounts of data over long distances.

As a result, the global market for microduct cables is being driven by the growing demand for bandwidth and power in data centers.

Market Segmentation

The Global Microduct Cable market can be segmented by Installation Environment, Type, Duct Type, Diameter, Material, Application, and Region. Based on Installation Environment, the market is segmented into Direct Buried, Duct/Direct Install, Aerial, and Indoor.

Based on Type, the market is segmented into Smoothwall, Corrugated, and Ribbed. Based on Duct Type, the market is segmented into Thick-Walled Ducts and Tight Protected Ducts.

Based on Diameter, the market is segmented into Up to 5mm, 5-10mm, 10-15mm, and Above 15mm. Based on Material, the market is segmented into PVC, HDPE, Nylon, and Others.

Based on Application, the market is segmented into Electrification, Transmission Network Development, Telecoms, Automotive, Construction, Others.

Company Profiles

Corning Incorporated, Prysmian Group, Nexans S.A., Dura-Line Corporation, Hexatronic Group, Leoni AG, Fujikura Ltd., Emtelle UK Limited, Hyesung Cable & Communication Inc., and Clearfield, Inc. are among the major market players in the Global Microduct Cable Market.

Years considered for this report:

Historical Years: 2017-2020
Base Year: 2021
Estimated Year: 2022E
Forecast Period: 2023F-2027F

Report Scope:

In this report, the Global Microduct Cable market has been segmented into the following categories, in addition to the industry trends which have also been listed below:

Microduct Cable Market, By Installation Environment:
o Direct Buried
o Duct/Direct Install
o Aerial
o Indoor

Microduct Cable Market, By Type:
o Smoothwall
o Ribbed
o Corrugated

Microduct Cable Market, By Duct Type:
o Thick-Walled Ducts
o Tight Protected Ducts

Microduct Cable Market, By Diameter:
o Up to 5mm
o 5-10mm
o 10-15mm
o Above 15mm

Microduct Cable Market, By Material:
o HDPE
o Nylon
o PVC
o Others

Microduct Cable Market, By Application:
o Electrification
o Telecoms
o Transmission Network Development
o Automotive
o Construction
o Others

Global Microduct Cable Market, By Region:
o North America (United States, Canada, Mexico)
o Europe (Germany, United Kingdom, France, Italy, Spain)
o Asia-Pacific (China, South Korea, Japan, India, Australia)
o Middle East and Africa (UAE, Saudi Arabia, South Africa, Kuwait)
o South America (Brazil, Argentina, Colombia)

Competitive Landscape

Company Profiles: Detailed analysis of the major companies present in Global Microduct Cable Market.

Available Customizations:

With the given market data, we offer customizations according to a company's specific needs. The following customization options are available for the report:

Company Information

Detailed analysis and profiling of additional market players (up to five).

Read the full report: https://www.reportlinker.com/p05916953/?utm_source=GNW

About Reportlinker

ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.


Read the rest here:
Global Microduct Cable market stood at USD992.15 million in 2021 and is forecast to grow at a CAGR of 14.18% through 2027 to reach USD2130.57 million...


Local-feature and global-dependency based tool wear prediction using deep learning | Scientific Reports – Nature.com

In this section, an experiment was designed to test the performance of our proposed LFGD-TWP method.

The machining experiment was carried out in a milling operation, and the experimental equipment and materials used in this experiment are shown in Table 1. The cutting force acquisition system mainly consists of a sensor, a transmitter, a receiver and a PC. The sensor and signal transmitter are integrated into a toolholder, which can directly collect the force data during machining and send it out wirelessly. The signals are collected at a frequency of 2500 Hz. The collected data from the sensor is transmitted wirelessly to the receiver, which in turn transmits the data to the PC via a USB cable. The signal collection process is shown in Fig. 6.

The Anyty microscope was fixed inside the machine tool as shown in Fig. 7. The coordinate at which a clear image of the tool wear can be taken is recorded in the CNC so that the spindle can move to this fixed position for wear measurement after each milling pass. This measurement method avoids the errors caused by repeated removal and installation of cutters, which improves the efficiency and accuracy of tool wear measurement. A sample photo from the microscope is shown in Fig. 8.

A sample photo of tool wear.

The orthogonal experimental method was adopted in this paper in order to test the performance of our method under multiple working conditions. Tool wear experiments were conducted using nine cutters under nine different sets of cutting parameters. The 9 cutters are marked as C1, C2, …, C9. The milling parameters were set as shown in Table 2, and the cutting width was fixed at 7 mm. Each row in the table corresponds to a new cutter. Every 1000 mm of cutting was one cut, and the tool wear was measured after every cut. The cutter and cutting parameters were replaced when the tool wear exceeded the threshold or the cutter was broken.

The data acquisition files have three columns, corresponding to: bending moment in two directions (x, y) and torsion. Each cutter has a corresponding wear file. The wear file records the wear values of the four flutes corresponding to each cut. The cutting quality will become poor if the wear value of any edge exceeds a certain value. Therefore, this paper takes the maximal flank wear of all flutes as target.

Considering that the multisensory input contains three channels, the bending moment in the X direction is used as an example to illustrate the data preparation process in this paper. First, the original signal of each cut is truncated to obtain the valid data segment containing 10,240 recorded values from the middle part of the signal. Then, the data is equally divided into 10 segments, denoted as \(X_{fx} = \left[ X_{1}, X_{2}, \ldots, X_{10} \right]\).

The maximum level of decomposition in DWT is related to the length of the signal and the chosen wavelet. In this paper, db5 is used for decomposition and we select the optimal level of decomposition by comparing performance under different levels. Decomposition levels 3, 4, 5 and 6 were chosen for comparison, and the results showed that level 5 had the best performance. Therefore, \(X_{1}, X_{2}, \ldots, X_{10}\) are each converted to multi-scale spectrogram images by 5-level wavelet decomposition using db5, denoted as \(WS = [ws_{1}, ws_{2}, \ldots, ws_{10}]\), where \(ws = [c_{1}, c_{2}, \ldots, c_{6}]\), with lengths [512, 256, 128, 64, 32, 32], is the set of multi-scale vectors corresponding to each segment.
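
A minimal sketch of this decomposition step using PyWavelets is shown below; the periodization signal-extension mode is an assumption made here because it reproduces the quoted coefficient lengths [512, 256, 128, 64, 32, 32] for a 1024-point segment.

```python
# Sketch of the 5-level db5 wavelet decomposition of one 1024-point, 3-channel segment.
# Assumes `pip install PyWavelets numpy`; mode="periodization" is our assumption so that
# the coefficient lengths come out as [512, 256, 128, 64, 32, 32].
import numpy as np
import pywt

segment = np.random.randn(3, 1024)  # placeholder for one three-channel force segment

coeffs = pywt.wavedec(segment, wavelet="db5", level=5, mode="periodization", axis=-1)
# wavedec returns [cA5, cD5, cD4, cD3, cD2, cD1]; reverse it so the multi-scale vector
# reads c1..c6 from the finest detail to the coarsest approximation.
multi_scale = coeffs[::-1]
print([c.shape[-1] for c in multi_scale])  # -> [512, 256, 128, 64, 32, 32]
```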

For each segment, 1D-CNNs are used to extract single-scale features from \(c_{1}, c_{2}, \ldots, c_{6}\) respectively. The structure and parameters of the model are shown in Table 3.

The activation function of the convolution layers is ReLU. Every convolution layer of \(c_{1}, c_{2}, c_{3}, c_{4}\) is followed by a max-pooling layer with region 1 × 2 to compress the generated feature maps. The input channel of the model is set to 3 because of the three-channel sensory data.
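
Since Table 3 is not reproduced here, the PyTorch sketch below only illustrates the shape of one single-scale branch: a 3-channel 1D input, a ReLU-activated convolution with batch normalization, and 1 × 2 max-pooling for the longer scales. The number of layers, kernel sizes, and channel counts are assumptions rather than the paper's exact configuration.

```python
# Illustrative single-scale 1D-CNN branch (layer sizes are assumptions, not Table 3).
import torch
import torch.nn as nn

class SingleScaleCNN(nn.Module):
    def __init__(self, use_pooling: bool = True):
        super().__init__()
        layers = [
            nn.Conv1d(in_channels=3, out_channels=32, kernel_size=3, padding=1),
            nn.BatchNorm1d(32),
            nn.ReLU(),
        ]
        if use_pooling:  # pooling is applied for the longer scales c1..c4
            layers.append(nn.MaxPool1d(kernel_size=2))
        self.net = nn.Sequential(*layers)

    def forward(self, x):  # x: (batch, 3, scale_length)
        return self.net(x)

branch = SingleScaleCNN()
features = branch(torch.randn(8, 3, 512))  # e.g. the 512-point scale c1
print(features.shape)                      # -> torch.Size([8, 32, 256])
```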

After the single-scale feature extraction by 1D-CNNs and the concatenation of single-scale features, a feature image of size \(32 \times 6 \times 32\) is obtained, which is used as the input of our multi-scale correlation feature extraction model. Finally, the local feature size of each segment after automatic extraction is 150.

In this case, the dimension of automatic feature vector is 50, and the dimension of manual feature vector is 30. The adopted manual features are shown in Table 4. Therefore, the dimension of the hybrid features of each segment is 80.

The number of segments is T = 10, so the shape of the input sequence of the Global Time Series Dependency Mining Model is 80 × 10. The Mean Squared Error (MSE) was selected as the model loss during model training. An Adam optimizer (Ref. 32) is used for optimization in this paper and the learning rate is set to 0.001. MSE was calculated on the test data set for models having one, two, and three layers and 100, 200, 300, 400, 500 hidden units. The results show that the most accurate model contained 2 layers, 300 hidden units in the LSTM layers, and 400 hidden units in the FC layer. In order to improve the training speed and alleviate overfitting, we apply batch normalization (BN) (Ref. 33) to all convolution layers of the Single-Scale Feature Extraction Model, and apply the dropout method (Ref. 34) to the fully connected layer. To get a relatively optimal dropout value, we set different values to train the model, i.e., p = 0, p = 0.25, p = 0.5, p = 0.75, where p is the probability of an element being zeroed. The results show that a dropout setting of 0.5 gives a relatively optimal result. After updating the parameters of the model with the training data, the trained model is applied to the testing data to predict tool wear.
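
A minimal PyTorch sketch consistent with this description is given below: a two-layer LSTM with 300 hidden units reads the sequence of ten 80-dimensional hybrid feature vectors, and a 400-unit fully connected head with dropout regresses the wear value. Details not stated above, such as which time step feeds the regression head, are assumptions.

```python
# Sketch of the global time-series dependency model described above
# (2-layer LSTM, 300 hidden units, 400-unit FC head, dropout 0.5, MSE loss, Adam, lr 1e-3).
import torch
import torch.nn as nn

class GlobalDependencyModel(nn.Module):
    def __init__(self, feat_dim=80, hidden=300, fc_hidden=400):
        super().__init__()
        self.lstm = nn.LSTM(input_size=feat_dim, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden, fc_hidden), nn.ReLU(),
            nn.Dropout(p=0.5),            # dropout applied to the fully connected layer
            nn.Linear(fc_hidden, 1),      # predicted flank wear
        )

    def forward(self, x):                 # x: (batch, T=10, 80)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # regress from the last time step (assumed)

model = GlobalDependencyModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

pred = model(torch.randn(16, 10, 80))     # dummy batch of 16 cut sequences
loss = loss_fn(pred.squeeze(-1), torch.randn(16))
loss.backward()
optimizer.step()
```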

In order to quantify the performance of our method, mean absolute error (MAE) and root mean squared error (RMSE) are adopted as measurement indicators to evaluate regression loss. The equations of MAE and RMSE over n testing records are given as follows:

$$ MAE = \frac{1}{n}\sum\limits_{i = 1}^{n} \left| y_{i} - \hat{y}_{i} \right| , \tag{5} $$

$$ RMSE = \sqrt{ \frac{1}{n}\sum\limits_{i = 1}^{n} \left( y_{i} - \hat{y}_{i} \right)^{2} } , \tag{6} $$

where \(y_{i}\) is the predicted value and \(\hat{y}_{i}\) is the true value.
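
For reference, the two metrics in Eqs. (5) and (6) can be computed directly, for example:

```python
# Direct implementation of Eqs. (5) and (6).
import numpy as np

def mae(y_pred, y_true):
    return np.mean(np.abs(y_pred - y_true))

def rmse(y_pred, y_true):
    return np.sqrt(np.mean((y_pred - y_true) ** 2))

y_pred = np.array([101.2, 95.4, 88.0])  # example predicted wear values
y_true = np.array([100.0, 97.0, 90.5])  # corresponding measured wear values
print(mae(y_pred, y_true), rmse(y_pred, y_true))
```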

To analyze the performance of our method, cross validation is used to test the accuracy of the model. Eight cutter records are used as training sets and the remaining one is used as the testing set, until every cutter has been used as the testing set. For example, when records of cutters C2, C3, …, C9 are used as the training sets and records of cutter C1 are used as the testing set, the testing case is denoted as T1. Then the records of cutter C2 are used as the testing set and the records of the remaining cutters are used as the training sets; this testing case is denoted as T2. The rest can be done in the same manner. The nine different testing cases are shown in Table 5.
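
This leave-one-cutter-out protocol amounts to the loop sketched below, where train_and_evaluate is a hypothetical helper standing in for model training and metric computation.

```python
# Leave-one-cutter-out cross validation over the nine cutter records C1..C9.
# `train_and_evaluate` is a hypothetical helper, not part of the paper's code.
cutters = [f"C{i}" for i in range(1, 10)]

results = {}
for test_cutter in cutters:  # testing cases T1..T9
    train_set = [c for c in cutters if c != test_cutter]
    # Each testing case is repeated 10 times and averaged to mitigate random factors.
    runs = [train_and_evaluate(train_set, test_cutter) for _ in range(10)]
    results[test_cutter] = {
        "MAE": sum(r["MAE"] for r in runs) / len(runs),
        "RMSE": sum(r["RMSE"] for r in runs) / len(runs),
    }
```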

To mitigate the effects of random factors, each testing case is repeated 10 times and the average value is used as the result of the model. Moreover, in order to demonstrate the effectiveness of the hybrid features in this paper, two models are trained, namely the network with hybrid features and the network with automatic features only. The results of each testing case are shown in Table 6.

It can be seen from Table 6 that our proposed LFGD-TWP achieves low regression error. In most cases, the model with hybrid features performs better than the model with automatic features only. By calculating the average performance improvement, we reach a 3.69% improvement in MAE and a 2.37% improvement in RMSE. To qualitatively demonstrate the effectiveness of our model, the predicted tool wear for testing cases T2 and T7 is illustrated in Fig. 9. It can be seen from Fig. 9 that the closer to the tool failure zone, the greater the error. The reason for this may be that the tool wears more quickly at this stage, resulting in a relatively small number of samples. Or it could be that the signal changes more drastically and the noise is more severe due to the increasing tool wear, leading to greater error.

Tool wear predicted by LFGD-TWP.

Two statistics are adopted to illustrate the overall prediction performance and generalization ability of the model under different testing cases: mean and variance. The mean is the average value of the results under different testing cases and indicates the prediction accuracy of the method. The variance measures how far each result is from the mean and thus indicates the stability of generalization under different testing cases. The equations of the mean and variance of the two measurement indicators over n testing cases are given as follows:

$$ Mean = \overline{r} = \frac{1}{n}\sum\limits_{i = 1}^{n} r_{i} , \tag{7} $$

$$ Variance = \frac{1}{n}\sum\limits_{i = 1}^{n} \left( r_{i} - \overline{r} \right)^{2} , \tag{8} $$

where \(r_{i}\) is the mean value of the results for each testing case.

The definition of mean and variance shows that the smaller their values are, the better performance of the model will be. In our proposed method, the means of MAEs and RMSEs are 7.36 and 9.65, and the variances of MAEs and RMSEs are 0.95 and 1.65.

Other deep learning models are used to compare performance with the proposed LFGD-TWP. They are CNN (Ref. 24), LSTM (Ref. 30) and CNN-BiLSTM (Ref. 19), and the structures of these models are as follows.

Structure of the CNN model in brief: The input of the CNN model is the original signal after normalization, and the signal length is 1024. The input channel of the model is set to 3 because of the three-channel sensory data. The CNN model has 5 convolution layers. Each convolutional layer has 32 feature maps and 1 × 4 filters and is followed by a max-pooling layer with region 1 × 2. The feature maps are then flattened. Finally, there is a fully connected layer with 250 hidden units. The dropout operation with probability 0.5 is applied to the fully connected layer. The loss function is MSE, the optimizer is Adam, and the learning rate is set to 0.001, which are kept the same as in the proposed model. The means of the MAEs and RMSEs are 12.64 and 16.74, and the variances of the MAEs and RMSEs are 10.74 and 18.90.
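
Under the reading of 1 × 4 convolution kernels and 1 × 2 pooling regions stated above, the baseline can be sketched as follows; padding choices are assumptions.

```python
# Sketch of the baseline CNN comparison model described above
# (5 conv layers, 32 feature maps each, kernel 4, pool 2, FC 250, dropout 0.5).
import torch
import torch.nn as nn

def conv_block(in_channels):
    return nn.Sequential(
        nn.Conv1d(in_channels, 32, kernel_size=4, padding=2),
        nn.ReLU(),
        nn.MaxPool1d(kernel_size=2),
    )

baseline_cnn = nn.Sequential(
    conv_block(3), conv_block(32), conv_block(32), conv_block(32), conv_block(32),
    nn.Flatten(),
    nn.Linear(32 * 32, 250),  # length 1024 halves five times to 32 with this padding
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(250, 1),
)

print(baseline_cnn(torch.randn(4, 3, 1024)).shape)  # -> torch.Size([4, 1])
```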

Structure of the LSTM model in brief: The model is of the many-to-one type. The input of the LSTM is the manual features in Table 4; therefore, an LSTM cell has an input dimension of 30. The MAE and RMSE values were calculated for models with one, two, and three layers and 100, 200, 300, 400 hidden units; therefore, 12 structures of the LSTM model were constructed to find the most accurate one. Also, the timesteps are 10, the loss function is MSE, the optimizer is Adam, and the learning rate is set to 0.001, which are kept the same as in the proposed model. The results show that the most accurate model contained 2 layers and 200 hidden units. The means of the MAEs and RMSEs are 10.48 and 13.76, and the variances of the MAEs and RMSEs are 5.12 and 9.28.

The structure of the CNN-BiLSTM model is shown in Ref. 19, and the input of this model is the original signal after normalization. The means of the MAEs and RMSEs of this model are 7.85 and 10.24, and the variances of the MAEs and RMSEs are 2.71 and 5.06. Comparison results of our method (LFGD-TWP) and the popular models are shown in Table 7. Compared to the most competitive result, achieved by CNN-BiLSTM, the proposed model achieves better accuracy owing to its multi-frequency-band analysis structure. Further, it can be seen that the proposed model achieves lower variances in MAE and RMSE, which means that the proposed model has better overall prediction performance and better stability of generalization under different testing cases.

To further test the performance of our proposed method, we additionally use the PHM2010 data set (Ref. 35), which is a widely used benchmark. The machining experiment was carried out in a milling operation, and the experimental equipment and materials used in this experiment are described in Ref. 19. The running speed of the spindle is 10,400 r/min; the feed rate in the x-direction is 1555 mm/min; the depth of cut (radial) in the y-direction is 0.125 mm; the depth of cut (axial) in the z-direction is 0.2 mm. There are 6 individual cutter records named C1, C2, …, C6. Each record contains 315 samples (corresponding to 315 cuts), and the working conditions remain unchanged. C1, C4 and C6 each have a corresponding wear file; therefore, C1, C4 and C6 are selected as our training/testing dataset. Also, cross validation is used to test the accuracy of the model, and the results are shown in Fig. 10.

Tool wear (PHM2010) predicted by LFGD-TWP.

In our proposed method, the mean of the MAEs is 6.65 and the mean of the RMSEs is 8.42, compared with a mean MAE of 6.57 and a mean RMSE of 8.1 in Ref. 19. The reason for the slightly poorer performance may be that, in order to enhance adaptability to multiple working conditions, the architecture of the model is more complex, which leads to overfitting. Although the proposed architecture might overfit the PHM2010 case, the complexity of the architecture ensures that more complex scenarios, like the test cases in this paper, can be handled.

Read more:

Local-feature and global-dependency based tool wear prediction using deep learning | Scientific Reports - Nature.com


Understanding the genetics of viral drug resistance by integrating clinical data and mining of the scientific literature | Scientific Reports -…


CAS PubMed Article Google Scholar

Langley, D. R. et al. Inhibition of hepatitis B virus polymerase by entecavir. J. Virol. 81, 39924001 (2007).

CAS PubMed PubMed Central Article Google Scholar

Huang, H., Chopra, R., Verdine, G. L. & Harrison, S. C. Structure of a covalently trapped catalytic complex of HIV-1 reverse transcriptase: implications for drug resistance. Science 282, 16691675 (1998).

ADS CAS PubMed Article Google Scholar

Guermouche, H. et al. Characterization of the dynamics of human cytomegalovirus resistance to antiviral drugs by ultra-deep sequencing. Antiviral Res. 173, 104647 (2020).

More here:

Understanding the genetics of viral drug resistance by integrating clinical data and mining of the scientific literature | Scientific Reports -...

Read More..

Predictive Analytics: The Holy Grail in Business Intelligence – Newsweek

In today's world, businesses have a wealth of data at their fingertips. However, that data is of little use if it is not turned into insights that inform decisions and improve business operations. Business intelligence, or BI, helps businesses achieve this goal: it is a technology-driven way of analyzing information and delivering actionable insights that help managers, executives and end users make better-informed decisions.

While traditional BI tools primarily report on historical and current data, predictive analytics applies statistical algorithms, data mining and machine learning to that historical data to estimate likely future outcomes. In addition to showing what happened in the past and why it happened, it helps you understand what could happen next. By identifying opportunities early, it allows businesses to be proactive and agile.
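To make that distinction concrete, here is a minimal Python sketch (not from the article; the monthly sales figures are invented and a simple least-squares trend stands in for the statistical and machine learning models the article alludes to). The first part is descriptive BI, summarizing what already happened; the second is the predictive step, fitting a trend to the same history and projecting it one period forward.

```python
# Hypothetical monthly sales history (illustration data, not real figures).
monthly_sales = [120, 132, 128, 141, 150, 158, 163, 171]

# Descriptive BI: report on the past.
total = sum(monthly_sales)
average = total / len(monthly_sales)
print(f"Total sales: {total}, average per month: {average:.1f}")

# Predictive analytics: fit a least-squares trend line y = a + b*t
# to the history and extrapolate one period into the future.
n = len(monthly_sales)
t = list(range(n))
t_mean = sum(t) / n
y_mean = average
b = sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, monthly_sales)) / \
    sum((ti - t_mean) ** 2 for ti in t)
a = y_mean - b * t_mean
next_period_forecast = a + b * n
print(f"Forecast for next month: {next_period_forecast:.1f}")
```

Real predictive analytics tools use far richer models and many more input variables, but the shape of the workflow is the same: learn from history, then project forward.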

For businesses, predictive analysis is crucial. Digital transformation and rising competition have raised the stakes, and using predictive analysis is like having a strategic map of the future that lays out the opportunities and threats ahead. Companies should therefore look for predictive models suited to their industry and goals.

Any industry can use predictive analytics to forecast sales, detect risks and improve operations. A financial institution, for example, can use it to detect fraud, evaluate credit risk or find new investment opportunities. Manufacturers can use predictive analytics to identify factors that lead to quality problems, production failures and distribution risks.

With predictive analytics, sales forecasting can create real value for businesses, and many other business decisions depend on accurate sales forecasts. Yet forecasting remains a time-consuming activity for sales professionals, who often rely on Excel spreadsheets and other tools that do not provide enough analytics and insight to forecast accurately. With advanced predictive analytics, sales teams can automate rolling forecasts and gain more transparency and smarter decision support.
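The sketch below shows, under stated assumptions, what "automating a rolling forecast" can mean in practice: the sales history is invented, and a simple three-month moving average stands in for whatever forecasting model a real tool would use. The point is that each period's forecast is recomputed automatically from the latest actuals rather than maintained by hand in a spreadsheet.

```python
# Hypothetical monthly sales history and a 3-month window (illustration only).
monthly_sales = [120, 132, 128, 141, 150, 158, 163, 171, 166, 175]
WINDOW = 3

def rolling_forecasts(history, window):
    """Yield (month_index, forecast, actual) for each month with enough history."""
    for i in range(window, len(history)):
        forecast = sum(history[i - window:i]) / window  # recomputed every period
        yield i, forecast, history[i]

for month, forecast, actual in rolling_forecasts(monthly_sales, WINDOW):
    error = actual - forecast
    print(f"month {month}: forecast {forecast:.1f}, actual {actual}, error {error:+.1f}")
```

Tracking the forecast error period by period, as the loop does, is also what gives teams the transparency the article mentions: you can see at any time how reliable the forecasting rule has been.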

Using an ensemble of machine learning algorithms, AI-based forecasting optimizes forecasts: depending on which business metric you're forecasting, the system selects the model that suits it best. The process consists of a series of steps, illustrated in the sketch below.
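This is a minimal sketch of that selection step, not a description of any particular product. The candidate "models" are deliberately simple rules, and the metric histories are invented; a real system would use proper machine learning models and much more data. Each candidate is backtested on a metric's history and the one with the lowest average error is chosen for that metric.

```python
# Candidate forecasting rules (stand-ins for a real ML ensemble).
def naive_last(history):           # forecast = last observed value
    return history[-1]

def moving_average(history, k=3):  # forecast = mean of the last k values
    return sum(history[-k:]) / k

def linear_trend(history):         # forecast = last value + average recent change
    diffs = [b - a for a, b in zip(history, history[1:])]
    return history[-1] + sum(diffs) / len(diffs)

CANDIDATES = {"naive": naive_last, "moving_avg": moving_average, "trend": linear_trend}

def backtest(model, history, start=4):
    """Average absolute one-step-ahead error over a rolling backtest."""
    errors = [abs(model(history[:i]) - history[i]) for i in range(start, len(history))]
    return sum(errors) / len(errors)

metrics = {  # hypothetical business metrics being forecast
    "revenue": [120, 132, 128, 141, 150, 158, 163, 171],
    "churn_rate": [5.1, 5.0, 5.2, 5.1, 5.0, 4.9, 5.0, 5.1],
}

for name, history in metrics.items():
    scores = {m: backtest(fn, history) for m, fn in CANDIDATES.items()}
    best = min(scores, key=scores.get)
    print(f"{name}: best model = {best}, next forecast = {CANDIDATES[best](history):.2f}")
```

Because the trending revenue series and the flat churn series favor different rules, the example also shows why per-metric model selection beats applying one model everywhere.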

Regardless of the business model, forecasting is extremely important for businesses because it provides a hedge against uncertain future outcomes. In addition to detecting and mitigating potential issues in advance, it helps organizations make informed decisions and set budgets and business goals. AI helps businesses manage all of these aspects by increasing the accuracy of the forecasting process.

Read this article:

Predictive Analytics: The Holy Grail in Business Intelligence - Newsweek

Read More..