
Canonical debuts new Ubuntu with Active Directory integration, support for SQL Server and Flutter – SiliconANGLE News

Ubuntu developer Canonical Ltd. debuted an important new release of its flagship operating system today, adding key capabilities such as Microsoft Active Directory integration, use of the Wayland display protocol by default and a new software development kit for the Flutter framework.

The Ubuntu distro is one of the most popular Linux operating systems. It has a big presence in the enterprise, used to run virtual machines, servers and cloud computing services, as well as personal devices and robots.

Canonical, which leads Ubuntu's development, also revealed today that it's working with former rival Microsoft Corp. on performance optimization and joint support for Microsoft SQL Server on Ubuntu.

The company said the integration with Microsoft Active Directory in Ubuntu 21.04 is an important milestone that makes it possible for Ubuntu machines to join an Active Directory domain at the time of installation to enable central configuration. Active Directory is a technology that's used to manage fleets of computers and devices on a network. With the integration, Active Directory administrators can now manage Ubuntu workstations and simplify compliance with company policies, Canonical said.

In addition, Ubuntu 21.04 gains the ability to configure system settings from an Active Directory domain controller, the company said. So, with a Group Policy Client, administrators can now specify security policies on all connected client devices, such as password and user access control policies. It also works for desktop environment settings such as login screen, background and favorite apps.

Meanwhile, Ubuntu is taking a significant leap forward in security through its default use of Wayland for graphics, Canonical said.

"Firefox, OBS Studio and many applications built with Electron and Flutter take advantage of Wayland automatically, for smoother graphics and better fractional scaling," the company stated.

The new Flutter SDK is another milestone. Flutter is a software framework that's used by developers to build native apps on multiple operating systems, including Android, iOS, Windows and macOS. The idea is that they can write their apps just once using Google's Dart programming language and have them run across all of those platforms, without needing to tinker with the code for each version.

The framework is designed to enable what Google calls ambient computing. That's where people can access their favorite apps and services from any location, be it at home or at work, on any kind of device, using a consistent set of methods and commands.

Canonical, which first revealed it was working to support Flutter last year, said the new Flutter SDK snap build integration means the framework is now compatible with the Ubuntu platform too, and that developers can now use it to publish multi-platform apps for one-click installation on numerous Linux devices.

The support for Microsoft SQL Server is a big deal too. Canonical said SQL Server's database management system and its command line interface are now both available on optimized Ubuntu images on Microsoft Azure, providing Ubuntu users with access to Microsoft's renowned, high-performance and extremely reliable database platform.

Canonical said it and Microsoft will provide joint support for Ubuntu with Microsoft SQL Server that's deployed on-premises or through the Azure Cloud Marketplace for mission-critical workloads.

"Native Active Directory integration and certified Microsoft SQL Server on Ubuntu are top priorities for our enterprise customers," said Canonical Chief Executive Mark Shuttleworth. "For developers and innovators, Ubuntu 21.04 delivers Wayland and Flutter for smoother graphics and clean, beautiful, design-led cross-platform development."


Continued here:
Canonical debuts new Ubuntu with Active Directory integration, support for SQL Server and Flutter - SiliconANGLE News

Read More..

A Reference Architecture for Fine-Grained Access Management on the Cloud – InfoQ.com

Key Takeaways

Access management is the process of identifying whether a user, or a group of users, should be able to access a given resource, such as a host, a service, or a database. For example, is it okay for a developer to be able to log in to a production application server using SSH, and if so then for how long? If an SRE is attempting to access a database during off-call hours, should they be allowed to do so? If a data engineer has moved to a different team, should they continue having access to the ETL pipeline's S3 buckets?

Before the proliferation of various infrastructure and data services on the cloud, access management was a relatively simple problem for DevOps and Security teams to solve. VPNs and bastion hosts were (and still are) the preferred mechanisms to cordon off all critical resources at the network level. Users first authenticate with the VPN server, or log on to the bastion host, before they can access any resource on the private network.

This works well when the resources are static and their number relatively small. However, as more and more resources dynamically spring up in different parts of the private network, the VPN / bastion host solutions become untenable.

Specifically, there are three areas where VPNs and bastion hosts fall short as an effective mechanism for access management.

In this article, we will define a new reference architecture for cloud-native companies that are looking for a simplified access management solution for their cloud resources, from SSH hosts, databases and data warehouses to message pipelines and cloud storage endpoints.

It solves the following specific challenges VPNs and bastion hosts aren't able to overcome:

Additionally, it enables the following business benefits for organizations with sensitive data:

The architecture is built upon the following three core principles, whose implementation allows DevOps and Security teams to exercise full control over their entire environment while improving user productivity with a simple and consistent experience.

The following figure shows the reference architecture and its components.

The VPN / bastion host from the previous figure has been replaced with an Access Gateway. The Access Gateway is actually a collection of micro-services and is responsible for authenticating individual users, authorizing their requests based on certain attributes, and ultimately granting them access to the infrastructure and data services in the private network.

Next, let's look at the individual components to see how the core principles outlined before are accomplished.

The key insight underpinning this architecture is the delegation of user authentication to a single service (the Access Controller) rather than placing that responsibility with each service to which the user may need access. This kind of federation is commonplace in the world of SaaS applications. Having a single service be responsible for authentication simplifies user provisioning and de-provisioning for application owners and accelerates application development.

The Access Controller itself will typically integrate with an identity provider, such as Auth0 or Okta, for the actual authentication sequence, thus providing a useful abstraction across a wide array of providers and protocols. Ultimately, the identity provider guarantees non-repudiation of the user's identity in the form of a signed SAML assertion, a JWT token, or an ephemeral certificate. This obviates the need to rely on a trusted subnet as a proxy for the user's identity. It also allows configuring access policies down to the granularity of a service, unlike VPNs, which permissively grant users access to all services on the network.

An additional advantage of delegating authentication to identity providers is that users can be authenticated using zero trust principles. Specifically, identity provider policies can be created to enforce the following:

While the Access Controller enforces authentication for users, the Policy Engine enforces fine-grained authorization on their requests. It accepts authorization rules in a human-friendly YAML syntax (check out examples at the end) and evaluates them on user requests and responses.

The Open Policy Agent (OPA), an open-source CNCF project, is a great example of a policy engine. It can be run as a microservice on its own or used as a library in the process space of other microservices. Policies in OPA are written in a language called Rego. Alternatively, it's easy to build a simple YAML interface on top of Rego to simplify policy specifications.

Having an independent policy engine separate from the security models of the infrastructure and data services themselves is advantageous for the following reasons:

Both the Infrastructure Gateway and Data Gateway depend on the Policy Engine for evaluating infrastructure and data activity, respectively, by users.

The Infrastructure Gateway manages and monitors access to infrastructure services such as SSH servers and Kubernetes clusters. It interfaces with the Policy Engine to determine granular authorization rules and enforces them on all infrastructure activity during a user session. For load balancing purposes, the gateway may comprise a set of worker nodes, be deployed as an auto-scaling group on AWS, or run as a replica set on a Kubernetes cluster.

Hashicorp Boundary is an example of an Infrastructure Gateway. It's an open-source project that enables developers, DevOps, and SREs to securely access infrastructure services (SSH servers, Kubernetes clusters) with fine-grained authorization, without requiring direct network access or the use of VPNs and bastion hosts.

The Infrastructure Gateway understands the various wire protocols used by SSH servers and Kubernetes clients, and provides the following key capabilities:

This involves making a copy of every command executed by the user during a session. The captured commands will typically be annotated with additional information, such as the identity of the user, the various identity provider groups they belong to, the time of day, the duration of the command, along with a characterization of the response (whether it was successful, whether there was an error, whether data was read or written, etc.).

Monitoring takes the notion of session recording to the next level. In addition to capturing all commands and responses, the Infrastructure Gateway applies security policies on the users activity. In the case of a violation, it may choose to trigger an alert, block the offending command and its response, or terminate the users session altogether.

The Data Gateway manages and monitors access to data services such as hosted databases (MySQL, PostgreSQL and MongoDB), DBaaS endpoints such as AWS RDS, data warehouses such as Snowflake and BigQuery, cloud storage such as AWS S3, and message pipelines such as Kafka and Kinesis. It interfaces with the Policy Engine to determine granular authorization rules and enforces them on all data activity during a user session.

Similar to the Infrastructure Gateway, the Data Gateway may comprise a set of worker nodes, be deployed as an auto-scaling group on AWS, or run as a replica set on a Kubernetes cluster.

Due to the wider variety of data services compared to infrastructure services, a Data Gateway will typically have support for a large number of wire protocols and grammars.

An example of such a Data Gateway is Cyral, a lightweight interception service that is deployed as a sidecar for monitoring and governing access to modern data endpoints such as AWS RDS, Snowflake, BigQuery, AWS S3, Apache Kafka, etc. Its capabilities include:

This is similar to recording infrastructure activity and involves making a copy of every command executed by the user during a session and annotating it with rich audit information.

Again, this is similar to monitoring infrastructure activity. For example, the policy below blocks data analysts from reading sensitive PII of customers.
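The article's original policy listing is not reproduced in this copy. Purely as an illustration, a rule in the human-friendly YAML style described earlier might look like the sketch below; the field names and group names are hypothetical, not Cyral's or OPA's actual schema:

  # Hypothetical YAML rule: block reads of customer PII by data analysts
  policy: block-analyst-pii-reads
  data:
    labels:
      - CUSTOMER_PII          # e.g., emails, SSNs, credit card numbers
  rules:
    - identities:
        groups:
          - data-analysts
      reads:
        allow: false          # deny any read that touches labeled PII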

Unlike infrastructure services, data services grant users read and write access to sensitive data related to customers, partners, and competitors that often resides in databases, data warehouses, cloud storage, and message pipelines. For privacy reasons, a very common requirement for a Data Gateway is the ability to scrub (also known as tokenization or masking) PII such as emails, names, social security numbers, credit card numbers, and addresses.

Let's look at some common access management scenarios to understand how the Access Gateway architecture provides fine-grained control compared to using VPNs and bastion hosts.

Here's a simple policy to monitor privileged activity across all infrastructure and data services in a single place:
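The listing itself is missing from this copy; a sketch in the same spirit, with all field and group names hypothetical, could look like this:

  # Hypothetical YAML rule: log and alert on privileged activity everywhere
  policy: monitor-privileged-activity
  identities:
    groups:
      - admins
      - sre-oncall
  services:
    - "*"                      # applies to all infrastructure and data services
  actions:
    audit: full                # record every command and response
    alert:
      severity: medium         # raise an alert for each privileged action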

The next policy shows an example of enforcing zero standing privileges -- a paradigm where no one has access to an infrastructure or data service by default. Access may be obtained only upon satisfying one or more qualifying criteria:
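Again as an illustrative sketch only (a hypothetical schema, not an actual Cyral or OPA policy), a zero-standing-privileges rule might be written along these lines:

  # Hypothetical YAML rule: deny by default, grant temporary access on approval
  policy: zero-standing-privileges
  default: deny                      # no one can reach the service by default
  exceptions:
    - identities:
        groups:
          - oncall-sre
      services:
        - prod-postgres
      conditions:
        mfa: required
        approved_ticket: required    # e.g., an open incident or change ticket
      max_duration: 1h               # access expires automatically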

The last policy shows an example of data governance involving data scrubbing:
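The original listing is likewise missing here; a hypothetical masking rule in the same YAML style might look like the following:

  # Hypothetical YAML rule: mask PII fields for a support role
  policy: mask-customer-pii
  data:
    labels:
      - EMAIL
      - SSN
      - CREDIT_CARD
  rules:
    - identities:
        groups:
          - support-analysts
      reads:
        action: mask               # return tokenized values instead of raw PII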

We saw that VPNs and bastion hosts are inadequate as effective access management mechanisms for highly dynamic, agile cloud environments. A new access management architecture with a focus on a non-repudiable user identity, short-lived certificates or tokens, and a centralized fine-grained authorization engine effectively solves the challenges that VPNs and bastion hosts fail to solve. In addition to providing comprehensive security for users accessing critical infrastructure and data services, the architecture helps organizations achieve their audit, compliance, privacy and governance objectives.

We also discussed a reference implementation of the architecture using well-known, developer-focused open-source solutions such as Hashicorp Boundary and OPA in conjunction with Cyral, a fast and stateless sidecar for modern data services. Together they can provide a fine-grained and easy-to-use access management solution on the cloud.

Manav Mital is the co-founder and CEO of Cyral, the first cloud-native security service that delivers visibility, access control and protection for the Data Cloud. Founded in 2018, Cyral works with organizations of all kinds, from cloud-native startups to Fortune 500 enterprises, as they embrace DevOps culture and cloud technologies for managing and analyzing their data. Manav has an MS in Computer Science from UCLA and a BS in Computer Science from the Indian Institute of Technology, Kanpur.

View post:
A Reference Architecture for Fine-Grained Access Management on the Cloud - InfoQ.com

Read More..

Cloud Infrastructure Service Market to Create Lucrative Opportunities for Existing Companies as Well as New Players KSU | The Sentinel Newspaper -…

The cloud infrastructure for data storage offers numerous options for sourcing, approach and control. It brings a well-defined set of services that are perceived by customers to have continuous availability, infinite capacity, improved cost efficiency and increased agility. To attain these attributes in customers' minds, information technology (IT) must move from its traditional server-centric approach to a service-centric approach. This entails that IT must go from organizing applications in silos, with minimal leverage among environments, to delivering applications on pre-determined, standardized platforms with agreed service levels. A hybrid strategy that uses numerous cloud options at the same time would become the norm, since organizations choose a mix of several cloud models to meet their specific needs.

To remain ahead of your competitors, request a sample here: https://www.persistencemarketresearch.com/samples/5090

Infrastructure-as-a-Service (IaaS) is a model where an organization is able to outsource the equipment used to support operations, including hardware, storage, networking components and servers. The service provider owns the equipment and is accountable for running, housing and maintaining it. Cloud IaaS adoption is growing, as enterprises are turning to a cloud-based IT model to decrease capital expenditure. Characteristics and several components of IaaS are:

Cloud infrastructure-as-a-service is one of the three fundamental service models of cloud computing, beside Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). As with other cloud computing services, it offers access to computing resources in a virtualized environment, the cloud, across a public connection, normally the internet. In the case of IaaS, the computing resource offered is specifically virtualized hardware, in other words, computing infrastructure. The definition covers offerings such as virtual server space, bandwidth, network connections, IP addresses and load balancers. The pool of hardware resources is drawn from a multitude of networks and servers, normally distributed among several data centers, all of which the cloud provider is responsible for maintaining. The client, on the other hand, is given access to the virtualized components in order to create their own IT platforms.

Some important factors supporting the growth of the cloud infrastructure-as-a-service market include decreased IT infrastructure, disaster recovery plans and support for business continuity, an improved compliance and security profile, and reduced IT staff. Cloud IaaS helps reduce complexity by eliminating software, servers, disaster recovery and backups. However, concerns about application reliability and performance, security risks and an unwillingness to relinquish control are factors that act as a challenge to this market.

Some of the major players in the cloud infrastructure-as-a-service market include Amazon Web Services, Bluelock, CA Technologies, Cloud Scaling, Datapipe Inc., Rackspace, Hewlett Packard, Logicworks, GoGrid, Layeredtech, Verizon, Savvis, OpSource and NaviSite, among others. Amazon Web Services is the leader in this market, followed by Rackspace and Verizon.

To receive an extensive list of important regions, ask for the TOC here: https://www.persistencemarketresearch.com/toc/5090

Key geographies evaluated in this report are:

Key features of this report

Pre-Book Right Now for Exclusive Analyst Support: https://www.persistencemarketresearch.com/checkout/5090

About us:

Business intelligence is the foundation of every business model employed by Persistence Market Research. Multi-dimensional sources are being put to work, which include big data, customer experience analytics, and real-time data collection. Thus, working on micros by Persistence Market Research helps companies overcome their macro business challenges.

Persistence Market Research is always way ahead of its time. In other words, it tables market solutions by stepping into the companies'/clients' shoes much before they themselves have a sneak peek into the market. The pro-active approach followed by experts at Persistence Market Research helps companies/clients lay their hands on techno-commercial insights beforehand, so that the subsequent course of action can be simplified on their part.

Strategic assets form the repository of Persistence Market Research's industry-specific solutions. This is evident from the range of clients, right from fast-growing startups to Fortune 500 companies, looking upon Persistence Market Research as their trusted solution partner.

Contact us:

Persistence Market Research

305 Broadway, 7th Floor, New York City, NY 10007, United States

Phone: +1-646-568-7751

Email: sales@persistencemarketresearch.com

Original post:
Cloud Infrastructure Service Market to Create Lucrative Opportunities for Existing Companies as Well as New Players KSU | The Sentinel Newspaper -...

Read More..

GCP, AWS to become main drivers of global server demand – evertiq.com

Image: Jakub Jirsak / Dreamstime.com. Analysis | April 19, 2021

As such, the rise of CSPs has in turn brought about a gradual shift in the prevailing business model of server supply chains from sales of traditional branded servers (that is, server OEMs) to ODM Direct sales instead. Incidentally, the global public cloud market operates as an oligopoly dominated by North American companies including Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP), which collectively possess an above-50% share in this market. More specifically, GCP and AWS are the most aggressive in their data center build-outs. Each of these two companies is expected to increase its server procurement by 25-30% YoY this year, followed closely by Azure.

TrendForce indicates that, in order to expand the presence of their respective ecosystems in the cloud services market, the aforementioned three CSPs have begun collaborating with various countries' domestic CSPs and telecom operators in compliance with data residency and data sovereignty regulations. For instance, thanks to the accelerating data transformation efforts taking place in the APAC regions, Google is ramping up its supply chain strategies for 2021. As part of Google's efforts at building out and refreshing its data centers, not only is the company stocking up on more weeks' worth of memory products, but it has also been increasing its server orders since 4Q20, in turn leading its ODM partners to expand their SMT capacities. As for AWS, the company has benefitted from activities driven by the post-pandemic new normal, including WFH and enterprise cloud migrations, both of which are major sources of data consumption for the AWS public cloud.

Conversely, Microsoft Azure will adopt a relatively more cautious and conservative approach to server procurement, likely because the Ice Lake-based server platforms used to power Azure services have yet to enter mass production. In other words, only after these Ice Lake servers enter mass production will Microsoft likely ramp up its server procurement in 2H21, during which TrendForce expects Microsoft's peak server demand to take place, resulting in a 10-15% YoY growth in server procurement for the entirety of 2021.

Finally, compared to its three competitors, Facebook will experience a relatively more stable growth in server procurement owing to two factors. First, the implementation of GDPR in the EU and the resultant data sovereignty implications mean that data gathered on EU residents are now subject to their respective country's legal regulations, and therefore more servers are now required to keep up with the domestic data processing and storage needs that arise from the GDPR. Secondly, most servers used by Facebook are custom-specced to the company's requirements, and Facebook's server needs are accordingly higher than its competitors'. As such, TrendForce forecasts a double-digit YoY growth in Facebook's server procurement this year.

Chinese CSPs are limited in their pace of expansion, while Tencent stands out with a 10% YoY increase in server demand

On the other hand, Chinese CSPs are expected to be relatively weak in terms of server demand this year due to their relatively limited pace of expansion and service areas. Case in point, Alicloud is currently planning to procure the same volume of servers as it did last year, and the company will ramp up its server procurement going forward only after the Chinese government implements its new infrastructure policies.

Tencent, which is the other dominant Chinese CSP, will benefit from increased commercial activities from domestic online service platforms, including JD, Meituan, and Kuaishou, and will therefore experience a corresponding growth in its server colocation business; Tencent's demand for servers this year is expected to increase by about 10% YoY. Baidu will primarily focus on autonomous driving projects this year. There will be a slight YoY increase in Baidu's server procurement for 2021, mostly thanks to its increased demand for roadside servers used in autonomous driving applications. Finally, with regards to ByteDance, its server procurement will undergo a 10-15% YoY decrease since it will look to adopt colocation services rather than run its own servers in the overseas markets due to its shrinking presence in those markets.

Looking ahead, TrendForce believes that as enterprise clients become more familiar with various cloud services and related technologies, the competition in the cloud market will no longer be confined within the traditional segments of computing, storage, and networking infrastructure. The major CSPs will pay greater attention to emerging fields such as edge computing as well as the software-hardware integration for the related services. With the commercialization of 5G services taking place worldwide, the concept of cloud, edge, and device will replace the current cloud framework. This means that cloud services will not be limited to software in the future, because cloud service providers may also want to offer their branded hardware in order to make their solutions more comprehensive or all-encompassing. Hence, TrendForce expects hardware to be the next battleground for CSPs.

Read the original post:
GCP, AWS to become main drivers of global server demand - evertiq.com

Read More..

Microsoft Partner Liquidware Offers an Alternative to Azure Monitor for WVD – Redmond Channel Partner


Microsoft only recently released its Azure Monitor product to help organizations assess their Windows Virtual Desktop (WVD) environments, but partner Liquidware is touting a solution of its own as a cheaper and better alternative.

Using Azure Monitor, which Microsoft released last month, requires organizations to dive into some potentially thorny details, such as setting up log data storage and understanding Azure Monitor's component-use structure and pricing model. The Azure-based management portal for WVD is also late on the scene, given that the WVD service, which runs from Microsoft's datacenters, was commercially released in September 2019.

Meanwhile, Chicago-based Liquidware has long offered its Stratusphere UX user experience monitoring and diagnostics solution. Stratusphere UX provides support for WVD assessment, design, monitoring and diagnostics.

Liquidware also participates as an Azure Migrate Program partner to help jumpstart organizations getting started with WVD. "Our Azure Migrate Program arrangement with Microsoft enables customers with licenses of Stratusphere UX and our entire Adaptive Workspace Management suite. The purpose of the program is to equip enterprises considering Microsoft WVD with licenses to get them started on their path to adoption of WVD," said Jason E. Smith, vice president of products at Liquidware.

I recently spoke with Smith about the state of WVD monitoring. Liquidware is a pioneer in virtual desktop infrastructure (VDI) solutions, and also counts Citrix and VMware as partners. Smith offered some perspective for organizations working with WVD or considering it. Organizations may do native WVD provisioning, or work with Citrix or VMware, but Liquidware has been around since 2009 offering VDI solutions for all sorts of Windows workloads, and even Linux workloads.

WVD and the Cloud Mystery

In essence, things that run on Azure are running on someone else's computer. Microsoft's WVD service runs on Azure virtual machines that are hosted by Microsoft in its datacenters. One of the basic issues for organizations wanting to use the WVD service is simply how to monitor it.

"When you enter into this new brave new world of hosting desktops in the cloud on Azure, there's a question about what is here in my environment and what's in-between, with all of the connections," Smith said. "You will want to drill down and do inventory. There are questions about what's in your datacenter and how well it's performing. Accountability is a consideration, such as service-level agreement (SLA) assurances, especially if you're doing this as a partner. And that's where the need for monitoring really is mandated. Any enterprise is going to want to know what that is."

Azure Monitor gives organizations insights into the use of Azure resources, but its use also delivers surprises.

"I can tell you what our customers are telling us," Smith explained. "They're telling us that they tried Azure Monitor, and that the storage format may not be as efficient as Stratusphere UX yet. This may cause monitoring costs to rack up really, really quickly."

Monitoring WVD Clients

Smith noted there are differences between using Azure Monitor to track Azure services and using it to track WVD clients.

"Azure Monitor was first developed to help you monitor resources in Azure cloud, especially servers," he noted. "Desktops are very different, and Microsoft is only starting to monitor them with Azure Monitor."

In contrast to using Liquidware's Stratusphere UX, Azure Monitor users will need to figure out what to monitor for their WVD instances, Smith contended.

"The other thing that's unique with Azure Monitor is that you'll need to know what you're looking for, even in the desktop," he said. "That's a big difference in comparison with Stratusphere UX, which has been doing this sort of thing since 2009. We have a turnkey virtual appliance that's based on Linux, and it can be hosted on Azure. The setup of Stratusphere UX literally takes as little as 15 minutes -- and then you roll out a few agents on to the endpoint that you want to monitor, and they start reporting back right away. Our efficient database stores historical and near-real-time data cost effectively. The storage may not be even one-tenth of the cost of Azure Monitor storage."

Stratusphere UX users get "turnkey desktop-centric reports," Smith explained.

"You simply click on them and then you can drill down deeper, and they will tell you things like the overall user experience," he said. "They will tell you about the connectivity of the desktop. They'll tell you about your Wi-Fi access points, if people are working from home, and how strong their connection is, and how far they are from the endpoint. They'll tell you about the input and output associated with a desktop, or even an individual application, that may be causing a detrimental user experience. Our solution is very efficient in the way that it stores data on Azure, and it's very cost effective."

About the Author

Kurt Mackie is senior news producer for 1105 Media's Converge360 group.

Follow this link:
Microsoft Partner Liquidware Offers an Alternative to Azure Monitor for WVD - Redmond Channel Partner

Read More..

Global Machine Translation Market Report 2021-26: Global Size, Share and Industry Trends The Courier – The Courier

The latest report by IMARC Group, titled "Machine Translation Market: Global Industry Trends, Share, Size, Growth, Opportunity and Forecast 2021-2026," finds that the global machine translation market grew at a CAGR of around 14% during 2015-2020. Machine translation (MT) refers to automated translation in which computer software is used to translate a text from one natural language to another. This tool interprets and analyzes all the elements in the text by using extensive expertise in grammar, syntax, and semantics in both the source and target language. Google Translate and LingoHub are some well-known machine translation engines used across the globe.

Request Free Sample Report: https://www.imarcgroup.com/machine-translation-market/requestsample

The global MT market is primarily driven by the reinvention of translation tools and the growth of adaptive machine translation. Besides this, the demand for cloud-based applications, which eliminate the need to invest in in-house hardware development or installations and provide access to different services via cloud servers, is also influencing the market growth. Moreover, several key players are launching advanced MT systems to enhance the productivity of human translators. This technology has also made it easier to disseminate healthcare information about the outbreak of coronavirus disease (COVID-19) in various regional languages. These factors are expected to have a positive impact on the market over 2021-2026. Looking forward, IMARC Group expects the global machine translation market to exhibit strong growth during the next five years.

Breakup by Technology Type:

Breakup by Deployment Type:

Breakup by Application:

Breakup by Region:

Competitive Landscape with Key Player:

Ask Customization and Browse Full Report with TOC & List of Figure: https://www.imarcgroup.com/machine-translation-market

As the novel coronavirus (COVID-19) crisis takes over the world, we are continuously tracking changes in the markets, as well as the industry behaviors of consumers globally, and our estimates of the latest market trends and forecasts are made after considering the impact of this pandemic.

If you want the latest primary and secondary data (2021-2026) with cost module, business strategy, distribution channel, etc., click "request free sample report"; the published report will be delivered to you in PDF format via email.

Other Reports by IMARC Group:

http://www.marketwatch.com/story/3d-medical-imaging-equipment-market-report-2021-2026-global-industry-share-size-growth-trends-outlook-key-players-analysis-and-forecast-2021-03-23

https://www.marketwatch.com/press-release/allergy-immunotherapies-market-size-share-2021-industry-report-growth-rate-top-companies-trends-scope-and-forecast-by-2026-2021-03-23

https://www.marketwatch.com/press-release/semiconductor-materials-market-report-2021-2026-global-industry-trends-share-size-growth-opportunity-and-forecast-2021-03-23

https://www.marketwatch.com/press-release/calcium-stearate-market-price-trends-2020-2025-global-industry-report-size-share-growth-top-companies-and-forecast-2021-03-23

https://www.marketwatch.com/press-release/helpdesk-automation-market-report-2020-2025-global-industry-trends-growth-outlook-share-size-top-companies-and-forecast-2021-03-23

About Us

IMARC Group is a leading market research company that offers management strategy and market research worldwide. We partner with clients in all sectors and regions to identify their highest-value opportunities, address their most critical challenges, and transform their businesses.

IMARC's information products include major market, scientific, economic and technological developments for business leaders in pharmaceutical, industrial, and high technology organizations. Market forecasts and industry analysis for biotechnology, advanced materials, pharmaceuticals, food and beverage, travel and tourism, nanotechnology and novel processing methods are at the top of the company's expertise.

Contact Us

IMARC Group
30 N Gould St, Ste R, Sheridan, WY (Wyoming) 82801, USA
Email: Sales@imarcgroup.com
Tel No: (D) +91 120 433 0800
Americas: +1 631 791 1145 | Africa and Europe: +44-702-409-7331 | Asia: +91-120-433-0800, +91-120-433-0800

Read more from the original source:
Global Machine Translation Market Report 2021-26: Global Size, Share and Industry Trends The Courier - The Courier

Read More..

Laying the IT Groundwork for a Crowded Space Economy – Via Satellite

Private companies from OneWeb, Boeing, and Amazon to SpaceX are busy flooding Low-Earth Orbit (LEO) with thousands of small satellites to deliver high-speed internet and other services to the most remote corners of Earth. Add to that dozens of more specialized mini-constellations that track anything from ship movements to natural catastrophes and greenhouse gas emissions. In short, we are wrapping our planet with a novel type of nervous system that can detect minute events or disturbances with a resolution down to a few meters or feet.

While these boom times thrill launch companies and hardware and software developers, most discussions around the impending traffic jam miss a crucial point. If we want to make sure the space economy takes off, we must lay a reliable terrestrial groundwork now. That means putting an IT architecture in place that's simple, safe, secure, and scalable to accomplish several objectives simultaneously.

As the industry keeps growing, hundreds of startups plus aerospace incumbents will add thousands of employees to deal with design, testing, launches, and operations, plus analyzing the rich data streams those satellites generate and which companies want to monetize. Companies need to manage a rapidly growing workforce, prevent unauthorized access and intrusions and be ready to add new services as their portfolio will almost certainly expand.

Based on my company's work with satellite clients such as GHGSat, Momentus, and High Precision Devices (now part of FormFactor), I have seen that all segments of the industry, from builders and operators to the designers of sensor packages, face very similar challenges.

These companies have grown quickly and had to find a way to consolidate their IT operations without slowing down the launch preparations. This is not an easy task if you're constantly adding new employees who have to be onboarded and whose rights and permissions have to be carefully managed to make sure they only work with apps and data sets they're supposed to see or manipulate.

The reality is that most companies' systems have been cobbled together over the years, with some parts on-premise and some in the cloud. Departments often add new services and servers, which eventually leads to a tangled mess. For instance, engineers have to remember multiple sign-on ID and password combinations, which wastes time and creates unnecessary tech support issues when someone is accidentally locked out.

What's worse, the ID and password mess increases the risk of intruders gaining access for mischief, espionage, or sabotage. Most attacks still are carried out by social engineering tricks or using a human vector to get into a target system. Managing such a jumble of IT components, multiple operating systems and servers, and confusing user roles is the bane of every startup coping with plenty of other growing pains. It's even more relevant for highly sensitive and costly aerospace operations, where access to, sometimes classified, data is highly compartmentalized.

Imagine what inventive hackers could do, for instance, if they were able to tap into the satellite feeds and analytics stream around the greenhouse gas emissions of a large oil and gas company or the maintenance schedule of a satellite constellation.

Many companies have found a way to simplify their terrestrial ops by battening down the hatches. They deploy a unified system with a single sign-on across all parts of the organization and maintain a centralized local database of their users. It lets them manage the roles and permissions for every team member on their own server instead of entrusting it to a big cloud provider. In fact, even installing that ID server is usually handled by internal staff only, not outside contractors.

Going that route has several benefits. It makes onboarding of new employees and managing existing staff easier, thereby hardening the whole IT architecture. The same goes for a clear and clean audit trail, often mandatory for regulatory and government compliance. If a satellite company has everything on one system and in one dashboard in-house, there's little wiggle room when questions come up about who had access to what data or apps at what times and made what changes.

Localizing terrestrial ops has another advantage. It lets companies maintain better control over all their data, starting with the seemingly innocuous metadata. While the proprietary files themselves may be encrypted in transit and/or at rest with a cloud provider, the metadata wrappers around them, from timestamps to IP addresses or locations, rarely are and can, in fact, be sold to third parties.

Logging in at a certain location or joining a corporate Wi-Fi network can provide outsiders with valuable intelligence as to which company is negotiating the next big deal with whom. Aerospace startups are therefore well advised to check with their IT providers how they handle metadata. Again, a local, open-source option is in many cases the safer bet.

As programs for small and cubesats proliferate and the cost of launching one keeps dropping, the danger of data breaches in this industry is both real and growing. These hacks will have costly consequences long before a mishap in space garners headlines. It's high time to think about safety on the ground before you hit the launch button.

Kevin Korte is the President of Univention North America, where he is responsible for the US team and helps clients use open source identity management systems.

Original post:
Laying the IT Groundwork for a Crowded Space Economy - Via Satellite

Read More..

5 Ways Developers Can Get the Most out of Edge Computing Platforms – ITPro Today

Edge computing is one of the buzzwords du jour of the IT world. Arguably, it's merely a new term for an old idea. But, either way, if you're not up to speed with edge computing concepts and priorities, now's the time to learn. Toward that end, here's a primer on what developers should know about edge computing platforms: how edge computing platforms work, how they relate to the cloud and data centers, and how to approach application development for the edge.

Edge computing is a broad term that refers to any type of application deployment architecture in which applications or data are hosted closer to users--in a geographic sense--than they would be when using a conventional cloud or data center.

The big idea behind edge computing is that by bringing workloads closer to end users you can reduce network latency and improve network reliability--both of which are key considerations in an age when applications in realms like IoT, machine learning and big data require ultra-fast data movement.

At its core, edge computing is an architectural concept, not a development concept. Applications don't need to be designed or programmed in any particular way to run on edge computing platforms.

Nonetheless, there are a number of things developers can do to help their organizations get the most out of edge computing.

For applications to take full advantage of edge architectures, it's important for application instances to be able to start quickly. It's hard to benefit from an ultra low-latency network when your applications take 30 or 40 seconds to start.

That's one reason to consider containerizing applications that will be deployed on an edge platform. Containers can start and scale more quickly, enabling organizations to capitalize on the agility and speed that edge computing platforms offer.
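As a minimal sketch of what this can look like in practice, the manifest below pins a containerized service to edge nodes in a Kubernetes cluster; the image name, labels, and zone value are hypothetical, and Kubernetes is only one of several ways to run containers at the edge:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: edge-telemetry-api              # hypothetical service name
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: edge-telemetry-api
    template:
      metadata:
        labels:
          app: edge-telemetry-api
      spec:
        nodeSelector:
          topology.kubernetes.io/zone: edge-us-west   # schedule onto edge nodes only (hypothetical zone)
        containers:
          - name: api
            image: registry.example.com/edge-telemetry-api:1.0   # small image, starts in seconds
            resources:
              limits:
                cpu: "500m"
                memory: 256Mi

Because the container image is small and self-contained, new instances can come up quickly on whichever edge location needs them.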

In some cases, edge computing platforms involve hardware devices that you wouldn't find in a conventional data center. You may be dealing with IoT devices or with mobile phones that serve as a device edge (which means that the devices perform processing tasks that would traditionally be handled on the server side). Not only can the hardware profiles of these devices vary tremendously, but they may also not offer the ability to virtualize hardware (and, by extension, standardize computing environments).

For this reason, it's wise to choose a development strategy that can support any type of device or hardware configuration. Even if your edge applications run today on conventional servers, you may want to extend them in the future into more specialized devices. Sticking to programming languages, libraries and processes that help you do that will future-proof your organization's edge strategy.

In addition to the aforementioned device edge, edge computing platforms come in the form of what's known as the cloud edge. In the latter edge computing model, data processing happens in the cloud rather than on end user devices. However, the cloud data centers in a cloud edge are geographically closer to users than they would be in a conventional, highly centralized cloud architecture.

The device edge and the cloud edge both help to improve application performance and reliability, but in different ways. Developers should understand the differences and decide which type of edge model makes sense for their applications. For a device edge, they'll need to build applications that can optimize data processing directly on end-user devices. Applications in cloud edge environments look more like traditional server-side applications.

It can be tempting to view edge computing as an alternative to cloud computing, or even as the antithesis of it. In fact, edge extends the cloud rather than competes with it.

From a development perspective, this means that you can and should take full advantage of cloud services when it makes sense while building an edge application. Edge apps don't need to avoid reliance on the cloud. However, they should be capable of running in an environment where traditional cloud data centers are not available.

The fact that edge applications are deployed outside of traditional data centers also makes software testing extra important when you are developing for an edge computing platform. Not only do you need to ensure that you test each release for all of the environment configurations you will be deploying to, but you should also factor in how varying levels of network availability, proximity to content delivery networks, and even (if you are deploying to a device edge) battery life on end user devices can impact application performance.

In other words, testing edge applications requires planning for more variables and unique test cases than you would traditionally have to handle when building a standard application.

Again, developers are only one set of stakeholders in edge computing. Cloud architects, data architects, and network and security engineers also have important roles to play in ensuring that businesses capitalize on the benefits that edge computing platforms stand to offer.

But developers can do their part by writing applications that are high-performing under any and all edge configurations that their organizations may choose to use--now or in the future.

See original here:
5 Ways Developers Can Get the Most out of Edge Computing Platforms - ITPro Today

Read More..

Engineering-related Program Changes Approved by State Higher Education Coordinating Board – ASU News

04/23/2021

JONESBORO - Action today by the Arkansas Higher Education Coordinating Board clears the way for Arkansas State University to offer the Bachelor of Science in Engineering Management Systems (BSEMS) degree, as well as an undergraduate certificate of proficiency in controls and automation.

"The board's action represents a huge step forward in the college's effort to respond to the needs of students as well as employers," emphasized Dr. Abhijit Bhattacharyya, dean of the College of Engineering and Computer Science. "The BSEMS, which will be offered both on-campus and online, will enable traditional as well as non-traditional students to go on to positions of leadership in engineering companies."

Bhattacharyya expressed his gratitude to Dr. Alexandr Sokolov, assistant professor of engineering management, for his extensive work in developing the program, along with Dr. Brandon Kemp, professor of electrical engineering, who is serving as director of the engineering management unit.

"Oftentimes engineers find themselves in management and leadership positions, when, in most cases, they have had little, or no, business or management training," noted Jim Chidester, senior mechanical engineer and project manager for Batson Inc., an engineering firm in Little Rock, as he emphasized the value of the degree program, which will develop both engineering and business skills.

The need for training in controls and automation also is widely acknowledged by employers, such as Hytrol Conveyor Co., Inc.

"This type of advancement in learning is only available because of the partnerships between universities and the private sector. We are seeing time and time again that when this type of partnership happens, everyone wins," stated David Peacock, Hytrol president, adding Arkansas State offering students an undergraduate certificate in controls and automation will be a huge benefit to employers in our area.

"This is a rapidly growing field and is becoming more and more desired every day," added David Williamson, president of Automation Outfitters Inc. "It is a field that, due to the constant evolution of technology and application processes, can provide the challenges and continued growth that new electrical engineering and computer science students often desire once they enter the workforce. This will open many doors and broaden the opportunities for job placement for many new grads as well. As a business leader in the automation industry, I am excited for what this means. This gives us much more opportunity to find and fill open positions without sacrificing our needs in those roles."

The undergraduate certificate, most appropriate for those majoring in mechanical engineering and electrical engineering, offers students a significant additional competitive edge when they enter the workforce, the dean added.

Controls and automation are ubiquitous these days; some examples are conveyor systems, cruise control in cars, use of a thermostat to regulate heating and cooling in buildings, or controlling the flight path of an aircraft. As for employers, it offers them access to local talent trained in theoretical and practical aspects of control systems that they did not have before.

The certificate is the outcome of a collaboration between Dr. Shubhalaxmi Kher, electrical engineering program director, and Dr. Shivan Haran, mechanical engineering program director as well as the program faculty. The dean is grateful for their joint leadership in developing the certificate program.

"The college is all about listening to its stakeholders, students and employers, and delivering on their needs," Bhattacharyya continued. "Aside from the two very important developments today, ADHE approved the name change of the technology program to engineering technology a little more than a year ago. That important change aligned the name of the program to its content and instantly made it more recognizable to students and employers alike."

Engineering technology is a more applied discipline compared to engineering, and graduates are very involved in manufacturing operations while engineers are more involved in design and project execution.

Doug Imrie, president of Southern Cast Products, expressed his satisfaction with the name change of the program to engineering technology.

"We have always actively recruited students in the engineering technology program at A-State. We find these students to be well-trained in several different technology aspects at our foundry," he said. "Changing the name of this program from technology to engineering technology will not only make the program more attractive for prospective students, but it will also serve to better characterize the students in this respected field."

"The name change also positions the program to complement the College of Engineering and Computer Science in such areas as quality, safety and project management," added Kevin Hart, project engineer for Anchor Packaging, Inc.

"This change will shed new light on the program and the value these graduates bring to the services and manufacturing industries," he added.

Dr. Rajesh Sharma, director of the engineering technology program, emphasized the strong partnerships A-State enjoys with local industry, which supports students with internships, and graduates with full-time job opportunities.

"The program name change was a response to students, alumni and employers who proposed the idea," Sharma added. "This will better develop the identity of the program that is well understood by all stakeholders, including industrial and corporate entities, high schools, community colleges, and especially students and their parents."

More details about programs in the College of Engineering and Computer Science are available online.

Follow this link:

Engineering-related Program Changes Approved by State Higher Education Coordinating Board - ASU News

Read More..

The GTO Engineering Squalo Needs A Space In Your Dream Car Garage – Forbes

The Squalo by GTO Engineering is due to go into production in 2023.

It's time to free up some space in your lottery-win dream car garage, because Ferrari specialist GTO Engineering has an update to share on its new car.

Formerly known by its project name, Moderna, the car is now called the Squalo, Italian for shark, and the British firm has revealed renders of what it will look like when production begins in 2023. As per the original sketches, the GTO Engineering Squalo has a design influenced by the Sixties Ferraris the company is famous for restoring.

As well as revealing what it will look like, GTO Engineering has confirmed the Squalo will weigh under 1,000kg and be powered by a bespoke 4.0-liter, quad-cam V12 engine with a manual transmission. Given the company's reputation for building new examples of the Colombo-designed V12 famously used by the Ferrari 250 SWB and 250 GTO, its first own-brand engine promises to be very special indeed.

Inspired by Sixties sports cars, the Squalo is claimed to weigh under 1,000kg.

Speaking of the engine, GTO said this week that the V12 has undergone significant aesthetic updates and lightweight upgrades, including the removal of the carburettor surround to ensure the intake trumpets can be seen when the hood is open. More details on the engine, including its power output and weight, will be revealed in May.

Reminiscent of a 250 SWB, the Squalo also features a Zagato-style double-bubble roof, as well as more modern touches like a set of bespoke 18-inch wheels. These are to be clad in a set of custom tires currently in development in conjunction with a leading manufacturer, GTO Engineering says.

The company, which is headquartered in Twyford, England and has an outpost in Los Angeles, also revealed today how finer exterior details have now been signed off, including the door handles, wing mirrors and alloy wheels.

The Squalo will be powered by a bespoke quad-cam V12 engine with exposed intake trumpets.

Mark Lyon, founder and managing director of GTO Engineering, said: "There's been an outpouring of admiration for what we're doing here and, we realize, a little bit of scepticism whether we're actually making this: a V12-powered, sub-tonne sports car with a Sixties feel but modern reliability, enjoyment and manufacturing quality. We're here to hopefully set the record straight to say yes, it's happening and we're sticking to our original ethos for the car as well as timing promises for production."

Lyon added: "We are also delighted to have early adopters and customer orders received already, and we thank them for the trust in our vision and business." GTO Engineering hasn't said publicly how many examples of the Squalo it plans to produce, or what the price will be.

The Squalo is the first car to be built by GTO Engineering, a UK-based Ferrari specialist.

The company currently charges from £850,000 for its Ferrari 250 SWB Revival, but unlike that car, the Squalo is not a restored example of an existing car by another manufacturer. This is a car built from scratch by GTO Engineering itself, and will be badged as such.

Lyon said: "It's often the small parts of a car that take the longest time. We're now at a stage where the design models are being created here in the UK and soon we will announce our technical partners working with us on the exterior manufacturing and interiors, as well as wheels and tyres... I've never been as excited about the creativity of manufacturing and design as I am now."

The car will be built at GTO's Twyford headquarters and the first customer deliveries are scheduled for 2023.

See more here:

The GTO Engineering Squalo Needs A Space In Your Dream Car Garage - Forbes

Read More..