
Save $484 on Dell’s PowerEdge small business server with this promo code – ZDNet

Dell is holding a sale where you can save up to 50% on select PowerEdge servers for small businesses. There are quite a few discounted models available, but we spotted one deal in particular that works with a promo code, bringing the price down another $300.

Disclosure: ZDNet may earn an affiliate commission from some of the products featured on this page. ZDNet and the author were not compensated for this independent review.

The Dell PowerEdge T40 tower server can be the "building block" for your small business, according to Dell, as it can handle common workloads such as file consolidation, storage, and sharing. Its chassis holds up to three 3.5-inch hard drives, and it ships with an Intel Xeon processor, 8GB of memory, and a 1TB HDD.

This small business server is normally $833, but it's been reduced as part of Dell's PowerEdge server sale, and if you apply the promo code PD349SERVER at checkout, you can cut an extra $300 from the price, bringing the total to $349. That's a $484 savings.

Dell's PowerEdge T40 is a well-rated, dependable, on-site tower server that you won't regret investing in to support your small business. You can use it to locally manage your files and expenses while also avoiding the hassle of cloud processing and storage costs. It even comes with Dell EMC support, as well as a one-year hardware warranty.

Original post:
Save $484 on Dell's PowerEdge small business server with this promo code - ZDNet

Read More..

Evolution of Infrastructure as a Service – App Developer Magazine

Infrastructure-as-a-Service (IaaS) has changed the world. In the past, a company had to invest time and resources into building and operating its own servers. So much effort went into maintaining infrastructure that little time remained for focusing on the company mission. IaaS changed all that. Today, a provider like AWS builds the server farm, a software company uses those servers to build applications, and its customers leverage both to develop their own products.

IaaS has changed the world because the best thing for the market is equilibrium. Equilibrium can only be achieved when every company specializes in what it does best. Cloud infrastructure enables a focus on core competencies. When organizations are able to focus on what they do best, they can deliver focused, better products, which in turn better serve end customers' needs.

Dudai: The aaS acronym refers to a cloud solution for business that is provided as-a-service. Infrastructure-as-a-service (IaaS) more specifically refers to self-service, pay-per-use storage space, networking equipment, and services. It is highly scalable, automated, self-provisioned and gives users far more granular control over their environments.

IaaS has evolved entire industries because it enables companies to deliver a more focused, better product to customers. It allows companies to build specialized technology stacks that help them do what they do best, ultimately differentiating them from the competition.

Dudai: Some would say that IaaS, SaaS, and PaaS are part of a family tree. SaaS is one of the more widely known as-a-service models where cloud vendors host the business applications and then deliver to customers online. It enables customers to take advantage of the service without maintaining the infrastructure required to run software on-premises. In the SaaS model, customers pay for a specific number of licenses and the vendor manages the behind-the-scenes work.

The PaaS model is more focused on application developers and providing them with a space to develop, run, and manage applications. PaaS models do not require developers to build additional networks, servers or storage as a starting point to developing their applications.

Dudai: When first introduced, IaaS provided mostly compute (virtual servers such as EC2) and object storage (such as S3). Today it has matured to provide a much richer set of managed services, from databases to containers, serverless functions, message queues and more. This has enabled companies to leverage more and more capabilities in the cloud and spend more time differentiating their product.

Dudai: IaaS is now enabling more disruption across all markets and industries, as the same capabilities available to larger companies are now also available to the smallest startup in a garage. This includes advances in AI and machine learning (as a service), data analytics, serverless technologies, IoT and much more. It is also requiring large companies to behave as nimbly as a startup.

Dudai: IaaS enables businesses to deliver a more well-honed, better product to their customers. IaaS is like a microeconomy, where supply equals demand, creating an equilibrium. With IaaS, everyone in the market is specialized and can do their best work. IaaS enables companies to build the best app, one that focuses on solving problems for their customers, rather than spending resources spinning up and managing their own infrastructure.

Dudai: This balance opens the door for broader, deeper, richer and more robust products and services, with faster turnaround time and faster time to market.

Dudai: Without it, companies run the risk of falling behind and losing business to their more agile competition.

Sagi Dudai is Chief Technology Officer of Vonage. In this role, he is responsible for leading Vonage's technology vision, architecture, and design, overseeing all aspects of technology development, including new products, their enabling technologies, and R&D. Prior to being named CTO, Mr. Dudai was Senior Vice President of Software Engineering for Vonage, responsible for software development company-wide.

Mr. Dudai has more than 20 years of experience at the forefront of the fastest-moving technology industry trends, including cloud, mobile, machine learning and artificial intelligence (AI). Prior to joining Vonage in 2012, he held engineering leadership roles at various technology companies, including three startups (Mercury Interactive, TelMap and fring) that were later acquired by Hewlett-Packard, Intel and GenBand, respectively. Earlier, he also worked on a number of classified software and hardware engineering projects for Israeli intelligence.

He graduated from the prestigious IDF computer science training program and earned his B.A. in Computer Science & Business Administration from Tel Aviv University. He also holds an Executive M.B.A. from Northwestern University's Kellogg School of Management.

Go here to see the original:
Evolution of Infrastructure as a Service - App Developer Magazine

Read More..

Turn government paperwork into efficient workflows: here's the proof – ITBusiness.ca

For the people they serve, governments want accountability and transparency. The realities of bureaucracy mean decisions need input, review and signoffs from multiple stakeholders. That's understandable, but the process consumes forests.

"There are complex approval chains in government that are totally locked in on paper," says Jared Kolb, director of partnerships at Proof and head of the firm's Toronto office.

Why the continued reliance on paper processes? "A lot of inertia is built up in the system," says Kolb.

That's changing. As governments embrace innovation, they're looking to software-as-a-service organizations to reimagine their business operations. Proof (proofgov.com) is filling the urgent need to help governments go paperless.

Proof's technology is designed for government approval processes. The company's intuitive web interface allows public servants and government officials to build and customize routings specific to their workflows.

Users attach documents on secure cloud servers and can share, search and retrieve information across multiple locations and repositories. Proof's workflow management software provides automated audit trails, which include approval sequences, document revisions and timestamps.

Approvals can be signed at the click of a button. With Proof, e-signatures are legally binding and unique to each user.

Proof digitizes citizen-facing and internal forms to ensure information is collected and stored seamlessly. Requests from submitted forms can be assigned to team members, with the status of the forms quickly tracked.

Kolb estimates that administrators using Proof save an average of one hour a day. For a 100-FTE department, Proof calculates a savings of $2.2 million in staff time. "When you multiply this out, there are serious savings in time and ultimately dollars," says Kolb.
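As a quick sanity check on that figure, here is a minimal back-of-the-envelope sketch, assuming 250 working days a year and a fully loaded staff cost of about $88 an hour (both assumptions are illustrative, not from Proof):

```sh
# One hour saved per FTE per day, per the article; days/year and hourly
# rate are assumed figures, not Proof's
fte=100; hours_per_day=1; days_per_year=250; rate_per_hour=88
echo $(( fte * hours_per_day * days_per_year * rate_per_hour ))   # 2200000
```

At those assumed rates, the claimed $2.2 million a year works out exactly.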

Proof aims to increase efficiency in government. Users can more closely monitor and prioritize approvals, and Proof provides real-time analytics on workflows in all departments. Dashboards reveal the volume and flow of work across teams, which helps leaders make decisions and improve processes.

"The data can be unlocked," says Kolb. "Once we digitize these processes, we begin to understand where the bottlenecks are."

Proof was co-founded by CEO Ben Sanders (based in Victoria), Chief Operating Officer Luke Decoste (based in Halifax), and Chief Technology Officer Wes George (based in Whitehorse). Between them, they have experience in software and engineering, leading startups and working with various government bodies.

Proof did its first pilot with the Yukon government in 2018 and is now working with governments at all levels, from the City of Winnipeg to Service Nova Scotia, to Transport Canada.

Kolb is excited to lead the Proof effort in Toronto. For one, there's the market potential: "There's an open posture towards our approach; we share a mind with many of the governments across Ontario."

Beyond that interest, Kolb says, Toronto is Canada's tech hub, and there's "an immense amount of talent in the Toronto startup community."

Proof was part of the 2019 class of the Techstars Toronto Accelerator, whose goal is to help entrepreneurs succeed through access to mentorship, talent and tech support.

"That culminated in a successful fundraising round, which enabled us to expand our team," says Kolb.

As Proof grows, it continues to help governments keep an eye on processes, to see where things are moving through the system or falling behind. "Our goal," says Kolb, "is to make government better."

Go here to see the original:
Turn government paperwork into efficient workflows: here's the proof - ITBusiness.ca

Read More..

Locking Down the Kernel and Securing the Container – Container Journal

Containers have taken the place of virtual machines as the go-to technology when multiple programs must run in isolation from one another on a single server. Virtual machines made processes such as cloud computing and web hosting possible. With virtual machines, the operating system and program code are stored together, and each guest operating system runs on the assumption that it has a server to itself. In reality, it is sharing a single server with several other operating-system-and-program packages, which is made possible by the host operating system. If this sounds confusing, think of it this way: several virtual machines are running independently of one another on the same server.

While this approach solved many problems in computing, it also has some major issues, namely the processing overhead required to run numerous emulated servers. Most laptops today are not capable of running multiple virtual machines efficiently.

This is where container technology has stepped in. "Container tech has become extremely popular over the last decade, with large corporations such as Amazon and Google jumping on board," says Chael Anderson, a tech writer at Australia2write and Nextcoursework.

To mitigate the heavy compute costs often associated with virtual machines, containers hold only the application and the namespaces it needs. The containerized program interacts directly with the host operating system; to be clear, the only operating system on the server is the host's, and it serves all of the containers. To use containers, a kernel is required.
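To see that isolation in action on a Linux host, here is a minimal sketch using the `unshare` utility from util-linux; the flags create fresh PID and mount namespaces while the shell continues to share the host's kernel (requires root):

```sh
# Run a shell in new PID and mount namespaces; `ps` inside sees only
# the processes of this namespace, not the rest of the host
sudo unshare --pid --fork --mount-proc /bin/sh -c 'ps aux'
```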

A kernel acts as a middleman between the operating system and a container; in fact, all the containers on a host share a single kernel. The kernel limits which programs can access which data. For example, it may not be desirable for program A to have full access to the data and information stored within program B, even though the two will need to share information at some point. The operating system, on the other hand, has full access to all data in kernel mode (when all memory is accessible).

"Any compromise to kernel security can lead to major issues in regard to sensitive data. Although container technology has made computing less costly than the previous virtual machine approach, it has created a new type of security concern," writes Josh Playfair, a web developer at Britstudent and Writemyx.

Next, we will look at some things to keep in mind when locking down a kernel to secure the container.

Failure to lock down a kernel can result in a wide variety of issues, the most damaging being a malicious actor gaining access to parts of a container they were not meant to. Here are some best practices for preventing this.

Keep the kernel updated: The kernel should be updated to the newest version as soon as the host is created. The issue is not usually with the kernel itself, but rather with containers; it is not uncommon for containers to have vulnerabilities. Although these vulnerabilities are usually resolved quickly, failure to update to the latest version will still leave the container exposed to these issues. Checking whether one has the latest kernel installed is simple; it requires running the code shown below:
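A minimal sketch, assuming a Debian/Ubuntu host (the RHEL-family equivalent is noted in a comment):

```sh
# Show the kernel release currently running
uname -r

# Refresh package lists and apply any pending kernel update
sudo apt-get update
sudo apt-get install --only-upgrade linux-image-generic

# RHEL/CentOS equivalent:
# sudo yum update kernel
```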

Use only SSH authentication and remove root user: The purpose of doing this is simple: if a container escapes isolation, we do not want the special privileges of the root user exposed to the host.

Furthermore, SSH passwords are, by design, less secure than keys and best disabled. To disable SSH password authentication, execute the following code:
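A minimal sketch of that change, assuming key-based logins are already configured so you do not lock yourself out:

```sh
# Turn off password logins and direct root login in the OpenSSH config
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config

# Reload the daemon (the unit is named "ssh" on Debian/Ubuntu)
sudo systemctl restart sshd
```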

Use container security tools: Many quality container tools perform scans and alert the user if there is a security issue with a container. Docker is one of the most popular programs that offer this service.
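For instance, recent Docker releases bundle a Snyk-backed `docker scan` command, and the open-source Trivy scanner does a similar job; `myapp:latest` below is a placeholder image name:

```sh
# Scan a local image for known vulnerabilities
docker scan myapp:latest

# Open-source alternative
trivy image myapp:latest
```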

The three tips listed above are some of the simplest yet most effective ways to lock down kernels and ensure container security. That being said, no container is ever fully secure or safe from attack, so it is important to monitor continuously. Those concerned with container security should look into host security as well; the two topics go hand in hand in information security.


Read this article:
Locking Down the Kernel and Securing the Container - Container Journal

Read More..

Cloud Hosting Service Market Insights with Statistics and Growth Prediction 2020 to 2026 – Instant Tech News

The Cloud Hosting Service Market report is a compilation of first-hand information, qualitative and quantitative assessment by industry analysts, inputs from industry experts and industry participants across the value chain. The report provides in-depth analysis of parent market trends, macro-economic indicators and governing factors along with market attractiveness as per segments. The report also maps the qualitative impact of various market factors on market segments and geographies.

Get Sample Copy of this Report:

https://www.marketinsightsreports.com/reports/08061383468/global-cloud-hosting-service-market-size-status-and-forecast-2019-2025/inquiry?source=instanttechnews&Mode=11

Cloud hosting is where your site is stored on multiple servers, which lets you pull resources from a variety of different places. This makes cloud hosting a very scalable, reliable, and flexible type of hosting, perfect for sites that experience hikes and dips in things like traffic. Note that there are different types of cloud hosting. Traditional web hosts, such as DreamHost and HostGator, offer cloud hosting packages that are priced similarly to their other web hosting packages (typically in the shared or VPS range). These small-business-friendly cloud hosting solutions are what we're primarily focused on in this roundup.

Top leading companies of the global Cloud Hosting Service market are A2 Hosting, SiteGround, InMotion, HostGator, DreamHost, 1&1 IONOS, Cloudways, Bytemark Cloud, Hostwinds, Liquid Web Hosting, AccuWeb, FatCow, BlueHost and others.

The regional outlook of the Cloud Hosting Service market report includes the following geographic areas: North America, Europe, China, Japan, Southeast Asia, India and ROW.

On The Basis Of Product, The Cloud Hosting Service Market Is Primarily Split Into

Linux Servers Cloud
Windows Servers Cloud

On The Basis Of End Users/Application, This Report Covers

Commercial Operation
Government Department
Others

This allows readers to understand the market and benefit from any lucrative opportunities that are available. Researchers have offered a comprehensive study of the existing market scenario while concentrating on the new business objectives. There is a detailed analysis of the change in customer requirements, customer preferences, and the vendor landscape of the overall market.

Browse Full Report at:

https://www.marketinsightsreports.com/reports/08061383468/global-cloud-hosting-service-market-size-status-and-forecast-2019-2025?source=instanttechnews&Mode=11

The following are the major table-of-contents sections of the Cloud Hosting Service industry report:

Furthermore, this study will help our clients solve the following issues:

Cyclical dynamics: We foresee dynamics of industries by using core analytical and unconventional market research approaches. Our clients use insights provided by us to maneuver themselves through market uncertainties and interferences.

Identifying key cannibalizers: A strong substitute for a product or service is the most important threat. Our clients can identify a market's key cannibalizers by procuring our research. This helps them in aligning their new product development/launch strategies in advance.

Spotting emerging trends: The report helps clients to spot upcoming hot market trends. We also track the possible impact and disruptions a market would witness from a particular emerging trend. Our proactive analysis helps clients gain an early-mover advantage.

Interrelated opportunities: This report will allow clients to make decisions based on data, thereby increasing the chances that their strategies will perform better, if not best, in the real world.

We offer customization of the report based on specific client requirements:

Free country-level analysis for any 5 countries of your choice.
Free competitive analysis of any 5 key market players.
Free 40 analyst hours to cover any other data point.

About Us:

MarketInsightsReports provides syndicated market research on industry verticals including Healthcare, Information and Communication Technology (ICT), Technology and Media, Chemicals, Materials, Energy, Heavy Industry, etc. MarketInsightsReports provides global and regional market intelligence coverage, a 360-degree market view which includes statistical forecasts, competitive landscape, detailed segmentation, key trends, and strategic recommendations.

Contact Us:

Irfan Tamboli (Head of Sales) Market Insights Reports

Phone: +1 704 266 3234 | +91-750-707-8687

[emailprotected] | [emailprotected]

Read more from the original source:
Cloud Hosting Service Market Insights with Statistics and Growth Prediction 2020 to 2026 - Instant Tech News

Read More..

Will VMware’s New Fees Trigger Rush to the Cloud? – Toolbox

With chip makers packing more processing power into CPUs, a leading maker of virtualization software is overhauling its prices significantly to reflect the development. Will VMware's new fees push enterprise users into the cloud?

It's a reckoning that IT executives will be forced to consider come April. That's when the company's new fee structure kicks in for hypervisor products that run multiple operating systems on a single piece of silicon. Some multinationals could face significant price increases under the new structure.

The Dell EMC subsidiary changed license fees for its vSphere hypervisor kit. Instead of charging per CPU socket (the connection on the motherboard), it will now base its fee on the number of cores in the CPU.

It will require one license for up to 32 cores. If a CPU has more than 32 cores, VMware will require a second license. And that's on top of the fees that users must fork over for the software that operates their physical and virtual machines.
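In effect, the rule is ceiling division: a CPU needs ceil(cores / 32) licenses. A small illustrative sketch:

```sh
# Licenses required per CPU under a per-32-core rule (integer ceiling division)
for cores in 28 32 48 64; do
  echo "$cores cores -> $(( (cores + 31) / 32 )) license(s)"
done
# 28 and 32 cores need one license; 48 and 64 cores need two
```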

The price increase plays into the hands of cloud vendors as chips gain in performance. They can now sell large corporations on the cost attractiveness of the cloud's scalability and on the concurrent reductions in their IT maintenance and upgrade expenses.

That flexibility underpins VMware's market share. Its hypervisors are running in three-quarters of data center servers, according to industry analysts, with license fees accounting for around 40% of its $9 billion annual revenue in fiscal 2019.

Market leader Intel is touting a pair of chip lines called Ice Lake and Cooper Lake that contain 38 and 48 cores, respectively. Meanwhile, AMD's EPYC microprocessor family, launched last year, offers up to 64 cores in its Rome CPU.

Away from the x86 instruction set architecture those chips share, a start-up called Ampere Computing is working on an 80-core processor. That chip is intended for cloud platform operators and built on an ISA licensed from Arm. The British chip designer also is pushing for its own piece of the data center market with a line of 32-core Cortex chips.

Huawei launched a 64-core chip based on an Arm design last year. And Amazon Web Services is using Graviton2, a customized Arm-based processor, in its data centers.

Nevertheless, that hasn't quelled criticism about the change. It ranges from jabs at the language used to justify it to whether VMware is pushing customers to a preferred chipmaker, given that most Intel chips are below the 32-core threshold.

The larger issue is whether VMware provides the impetus for digital transformation. As Dell EMC offers cloud hosting of computing, storage and networking services, the impact among competing vendors that run VMware in their data centers could reshape that market, too.

Excerpt from:
Will VMware's New Fees Trigger Rush to the Cloud? - Toolbox

Read More..

Infoblox Core DDI and Cloud Platform appliance products are now certified as Nutanix Ready – Help Net Security

Infoblox, the market leader in next-level networking and DDI services, and Nutanix announced that Infoblox Core DDI and Cloud Platform appliance products, which are part of the Nutanix Elevate Program, have been certified as Nutanix Ready.

Infoblox will support its customers with the integration of Nutanix and Infoblox NIOS DNS, DHCP, IPAM (DDI) solutions, including NIOS virtual appliances running on Nutanix AHV and Nutanix Calm support for orchestrated DNS/IPAM workflow. NIOS is now the only DDI solution that runs on and supports automated workload orchestration on Nutanix Enterprise Cloud.

The NIOS integration with Nutanix will automate the steps of IP address allocation and DNS updates during spin up and spin down of virtual machines, addressing problems caused by lengthy, manual workload provisioning and will further simplify infrastructure management by running completely in the Nutanix environment.
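The announcement does not document the integration's internals, but NIOS automation of this kind is exposed through Infoblox's REST API (WAPI). Here is a minimal sketch of the sort of call involved in the spin-up half of that workflow; the grid master hostname, credentials, WAPI version and network below are all illustrative placeholders:

```sh
# Ask NIOS for the next free address in 10.10.0.0/24 and create the
# matching host (DNS) record for a newly provisioned VM in one call
curl -k -u admin:infoblox -H "Content-Type: application/json" \
  -X POST "https://gridmaster.example.com/wapi/v2.10/record:host" \
  -d '{"name": "new-vm01.example.com",
       "ipv4addrs": [{"ipv4addr": "func:nextavailableip:10.10.0.0/24"}]}'
```

The `func:nextavailableip` directive has NIOS allocate the address and register DNS in a single step; spin-down would delete the same record, freeing the IP.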

"We're thrilled to announce this new integration with Nutanix," said Dave Signori, Senior Director, Product Management at Infoblox. "It enables our customers to realize the benefits of automated DDI network services deployment and workload orchestration, along with the simplicity and security that hyperconverged architecture brings."

"As IT processes are becoming increasingly automated, we are committed to empowering IT professionals with the tools they need to run their networks with more efficiency, security and reliability," said Prasad Athawale, Sr Director, Strategic Alliances and Partnerships at Nutanix.

"We are looking forward to partnering with Infoblox to provide automated workload orchestration on our platform."

According to IDC, DDI platforms are essential components in building a modern datacenter that relies heavily on automation and programmability. Cloud computing platforms have created a new paradigm for hosting and accessing applications, and they have also driven IT organizations to modernize their internal datacenter operations to provide cloud-like agility on their own premises.

Infoblox sees continued, steady growth in the DDI market as enterprises continue to automate DDI and the number of IP addresses in enterprise networks continues to rise. With this partnership, Infoblox can continue to deliver a safe and secure next-level networking experience to these enterprises.

See original here:
Infoblox Core DDI and Cloud Platform appliance products are now certified as Nutanix Ready - Help Net Security

Read More..

HMRC chief gives thumbs up to five-year cloud migration programme – PublicTechnology


HM Revenue and Customs chief executive Jim Harra has concluded that the department's five-year programme to migrate from physical datacentres to the cloud is value for money and deliverable.

The tax agency's boss has provided an assessment summary on HMRC's Securing Our Technical Future programme. These assessments are mandatory for any new or altered programmes of work in the government's major projects portfolio.

Securing Our Technical Future is, according to Harra, a five-year programme "to secure the technical future of HMRC's IT services by removing technical debt, reducing reliance on non-government owned datacentres, and migrating eligible services to cloud".

The scheme will see the department move large volumes of services and data from its existing datacentre environment, which is largely comprised of Fujitsu infrastructure, to Crown Hosting Data Centres' co-location facilities or public cloud from Microsoft and Amazon Web Services.

"The programme will update HMRC's current ageing, and increasingly out of support, IT estate," Harra said. "The existing estate, comprised of more than 600 services, lacks agility and is costly to run. Changes delivered by the programme will allow HMRC to generate more cost-effective arrangements with suppliers, while ensuring a more resilient and flexible infrastructure."

As part of major project assessments, accounting officers are required to review the programme in question's regularity, propriety, value for money, and feasibility.

Harra's report, which he said was somewhat based on the findings of his predecessor as CEO, Sir Jon Thompson, who conducted a full review in June 2018, endorsed the project's credentials on all fronts.

Of the cloud-migration scheme's value for money, he said: "[It] offers the highest potential to meet critical success factors and minimise IT delivery risks for HMRC, providing best value for money."

He indicated that the plan had been compared with three other options, one of which was to do nothing.

He acknowledged there are significant delivery challenges in successfully managing a complicated datacentre migration and transformation involving numerous stakeholders and delivery partners.

But he claimed work so far is on track, and on target for timely delivery.

"The programme has already delivered several milestones and has successfully crossed several assurance and approvals hurdles, including an independent assurance review and HM Treasury approval of the business case," he said. "Further independent assurance reviews and HMT approvals are planned at appropriate points throughout its lifetime."

Harra added: "The programme leadership, supported by delivery partners, has the skills and experience needed to ensure the technical feasibility of the project deliverables and achieve the major milestones in the timeframe."

Delivering the aims of Securing Our Technical Future, which was formerly known as the Columbus Cloud programme, has been an ambition of HMRC for several years.

However, the urgency of the department's Brexit-related work imperilled the scheme; as of mid-2018, delivery of the migration programme was being reconsidered. It was one of more than 100 transformation projects that were halted, paused, or merged as part of a review and reprioritisation exercise.

But the programme is now seemingly back on track, and the department has previously set a target of completing migration by June 2022. However, this date was specified in a contract published last year that described the project as a three-year programme, two years shorter than the timeframe indicated by Harra in his assessment summary.

The HMRC chief executive said he would provide an updated assessment if any of the factors related to the scheme's successful delivery change materially during the lifetime of the programme.

Read the original:
HMRC chief gives thumbs up to five-year cloud migration programme - PublicTechnology

Read More..

How can government manage the growing digital market choice? – The Mandarin

As government transitions to digital-first solutions in engaging with and providing services to its citizens, it needs to reach everywhere, connect everyone and integrate everything. To achieve this, government needs to be at the digital edge to connect people, locations, cloud infrastructure and data quickly and securely.

"The public sector, much like the private sector, has quickly realised that digital engagement with their counterparts and consumers is largely unavoidable and that doing so effectively requires a departure from the past," Don Wiggins, Senior Global Solutions Architect with Equinix, explained.

Sophisticated applications combined with exponential data collection, he said, are helping to drive the growth in demand for real-time analytics supporting internal and external government decision-making. "The speed and scale required to deliver digital government services needs physical adjacency to clouds, networks and service partners."

But the traditional IT architecture that exists in government agencies can be decades old, and these legacy systems can be a barrier to providing responsive and secure services in the digital age.

"A stove-piped isolation approach is no longer sustainable," Wiggins said.

New IT architectures and platforms that support a digital-first government require global location coverage, private interconnection, and the capability to integrate, standardise and simplify control. Cloud services, API-based platforms and other external services are increasingly becoming the go-to solutions for government to fill the capability and infrastructure void that exists.

But as demand increases, the market is expanding with more flexible options that can be optimised over time to keep pace with rapid change.

The core business of an agency still needs to be at the centre of its decision to go digital, and it needs to be responsive. Service providers need to provide fast, highly scalable and pay-as-you-go solutions that integrate new functionality as it is required.

And to get the most out of these services, there needs to be a solution in between to help government manage service providers, re-architect applications, and enable digital intelligence.

Data centres and whole-of-government hosting strategies have been part of the Australian government landscape since 2010. But the adoption of cloud solutions has been slow, with the government transition not keeping pace with public demand and the market. New hosting strategies promoted by the Digital Transformation Agency are supporting greater choice in service providers and this is helping to support a rapidly expanding digital government landscape.

Equinix is a provider of digital enabling solutions. From its history in providing telecom peering exchange and colocation solutions, it now serves a growing need for optimised global interconnection. Through Platform Equinix, government clients have access to a cloud ecosystem with direct access to Amazon Web Services, Microsoft Azure and Oracle Cloud, as well as government-hosted clouds.

The approach provides choice and control for the digital future of their clients, and concerns about security, performance and vendor lock-in become a thing of the past.

Interconnection is the focus of these enabling platforms. The digital audience can access multiple clouds and consume services as needed, from where needed. And agencies can scale seamlessly and provision new services as demand changes, with pay-as-you-grow models making solutions affordable.

This was the case with Yarra City Council, which has embarked on a cloud-first journey.

Over 100 legacy systems existed within the local council, and in 2014 the cost in time and money of managing these became too much. A digital transformation strategy was established to update its IT architecture and enable a better customer experience.

Yarra City Council deployed its new architecture inside a Melbourne-based data centre that provided security and performance guarantees. The move also gave the council a bridge to interconnect with business partners and service providers, as well as to migrate staff to Office 365 and roll out a range of other software applications, including Oracle's cloud-based customer request system.

The hybrid cloud strategy it has implemented has enabled the council to respond rapidly to needs: it can now roll out services almost overnight, where previously it took eight months. This responsive IT architecture has helped future-proof its systems and will ensure the council remains responsive to changing digital demands from both internal and external stakeholders.

Read this article:
How can government manage the growing digital market choice? - The Mandarin

Read More..

The long read: 20 years in ITS – Highways Magazine

Iain McDonald, ITS business manager at Colas Ltd, says: "The biggest issue over the last 20 years has been the move from the old dial-up, PSTN broadband and landlines to wireless mesh and on to 5G."

"Communication [in highways] has become a two-way process that has allowed the industry and authorities to start helping drivers by influencing their journeys, the directions they take, the speeds they drive. I expect that to only increase, and we could have multi-directional communication in the age of 5G and the internet of things."

This connectivity explosion has changed the focus of ITS and intelligent highway systems, says Matthew Vincent, marketing director for intelligent traffic solutions at Siemens Mobility. "The availability and adoption of much-improved communications infrastructure and cloud platforms, together with new machine learning techniques, have enabled new software services and business models to be considered," he says.

"The predominantly driver-centric mindset that has been prevalent for many years, where outcomes were focused on, for example, better journey times, is changing. A broader consideration for all road users is now much more common, with outcomes increasingly based on improved safety for all road users and of course reduced vehicle emissions."

Siemens Mobility has brought about its own innovations in ITS and is at the forefront of how the sector tackles the relatively new issue of air quality concerns. Mr Vincent says: "There have been a few big developments for Siemens. The most significant has been the move to cloud hosting, opening up new ways of delivering ITS services to both our customers and to road users. This is core to our portfolio across our ITS and enforcement solutions in particular."

"Our technology sits at the heart of Low Emission and Clean Air Zones and is helping deliver real benefits in terms of cleaner air and improved driving conditions, and we are also excited to begin a new era of distributed traffic control with Plus+ [which uses distributed intelligence with simple power and data cabling]."

The process by which new innovations come to market has been another dramatic change for the sector in recent years.

As of 22 April 2016, when the Traffic Signs Regulations and General Directions 2016 came into force, the statutory type approval system previously required under Direction 56 ceased. Type approval had been provided on behalf of the secretary of state by the former Highways Agency under the auspices of the Department for Transport (DfT).

When this was removed, companies had to self-certify their products under the governance of EU and UK standards.

Mr McDonald says: "This opened up the industry somewhat and meant that some technology, such as radio temporary traffic signals, could come in a lot quicker. However, it meant that companies had to buy in the skills to self-certify products."

The removal of type approval led to the creation of TOPAS (Traffic Open Products & Specifications), which was set up to co-ordinate the management and development of technical specifications for traffic control and related equipment. TOPAS offers a straightforward means for customers to verify manufacturers' compliance with the specifications through a new product registration system.

TOPAS has been endorsed by the DfT, the Association for Road Traffic Safety and Management and council directors body ADEPT, which provided the initial funding.

It comprises four delegates from industry, four from local government through ADEPT and four from the governments of England, Scotland, Wales and Northern Ireland. TOPAS is a limited company but effectively operates like a highly technical voluntary organisation that charges a fee for its certification of manufacturers' self-certification.

Colas is currently going through the TOPAS process with its work in Lincolnshire, where it is bringing the Colas M@estro traffic signal controller system over from France. This is the first time it will be used in the UK, and the company has spent three years making sure it is compatible with the UK's systems, which gives you some idea of how complex this sector is.

"We are currently in the process of presenting the papers to TOPAS to show them that we have met the self-certification standards," Mr McDonald reveals.

ITS expert Dr Mark Pleydell is director of Pleydell Technology Consulting Ltd and a member of the TOPAS Management Board.

He suggests the biggest developments in ITS over the last 20 years have been mobile comms, single-board computers and Linux, and adds that a major focus for the industry has been accommodating developments without disrupting the existing and prevailing systems.

Dr Pleydell argues that the industry is still grappling to know what to do with the advance of mobile phone data over the last generation, "but I think that is largely because we don't necessarily understand the problems well enough to define the questions".

Looking at some of the barriers faced by ITS, especially around its integration with other sectors, he says: "The rapid development of advanced Driver Assistance Systems, Cooperative Intelligent Transport Systems, and Connected and Autonomous Vehicles may lead to a disconnect between the existing and prevailing established systems of ITS and traffic control."

"Under the Traffic Management Act, traffic managers have a duty to manage and improve the movement of people and goods within their purview. That allocation of responsibility and accountability supported the delivery of focused and directed work, balancing the new with the existing to good effect. There seems to be a disconnect now between global mobility solutions and local objectives."

"The professional bodies and trade organisations in the ITS and traffic control sectors have recognised the need for clarity and are joining together to share concerns and discuss problems with the aim of asking for advice or working with government and users. Activities like these may not yield obvious deliverables but they do inform a sense of direction and purpose for us."

Highlighting some best practice, he cites the Traffic Management Cell at Dartford, a Highways England project managed by Connect Plus. This project brought together a diverse team and implemented a unique solution for protecting tunnel infrastructure by detecting and removing a sub-set of vehicles from the huge traffic flows; during its creation it required (at least) two completely new products to be developed and deployed.

The integration of civil engineering, town planners and ITS is an issue, according to Mr McDonald. "Sometimes ITS can be an afterthought. ITS is the package that sits on top. This is probably not the best way to do things but it is hard to change the culture."

One of the best ways to influence this is perhaps through an alliance system, which could integrate civil engineering and ITS at an earlier stage. Colas was part of the Midland Metro Alliance and provided signalling for the light rail system.

Mr Vincent says that ITS systems have tended to operate within their own ITS ecosystem, making good use of appropriate data from the roadside, but were not necessarily designed with data interoperability in mind.

"Aside from a willingness to share data, there need to be appropriate mechanisms (and business cases, financial or otherwise) to make it happen. Fortunately, the advances in technology and communications can only help with integration, particularly as the benefits of shared data become clearer and the use cases requiring it become more prevalent."

John Nightingale is a director at JCT Consultancy Ltd, which runs the JCT symposium, a key event in the ITS calendar since 1996.

Looking ahead to the future, Mr Nightingale says: "What if the road surface and its constituent materials were in itself smart? If it were possible to install a smart mesh in the highway during construction, it could present significant opportunities for monitoring, control, maintenance and even power."

"So what would a smart mesh be? Well, it would probably consist of a collection of materials with power and data capabilities and piezoelectric, thermocouple, hygroscopic and inductive properties. It would be flexible, come on a roll and would be installed as a component (or layer) during construction or resurfacing."

"By being a mesh it could be truly continuous and be an embedded component of the highway structure and, barring complete severance, it may be robust enough to suffer damage without loss of service."

This type of technology has been looked into by Shell, Highways understands, so may not be as futuristic as one might think.

These types of ambitious ideas are great fuel for an ITS sector that has come a long way in the last 20 years and has every reason to be ambitious about the next generation.

Read more:
The long read: 20 years in ITS - Highways Magazine

Read More..