
IMF Urges Banks to Invest In Cryptocurrencies – Investopedia

A June 2017 staff discussion note from the International Monetary Fund (IMF) suggests that banks should consider investing in cryptocurrencies more seriously than they have in the past. According to the IMF staff team responsible for the note, including prominent economists such as Dong He, Ross Leckow, and Vikram Haksar, "rapid advances in digital technology are transforming the financial services landscape." These members of the IMF feel that such transformations generate new opportunities for consumers as well as service providers and regulators. The ultimate message of the report seems to be one of support for cryptocurrencies, as it outlines some of the ways that the fintech industry might be able to provide solutions for consumers related to trust, security, financial services, and privacy in this area.

One of the key findings of the IMF report is that "boundaries are blurring." This means that the borders between intermediaries, service providers, and markets, previously well-defined, have become blurry with the advent of new technology related to digital currencies and cross-border payments. Along with the blurring of these boundaries, the authors of the report suggest that "barriers to entry are changing." This does not, however, mean that barriers to entry are universally being lowered. Rather, they are being lowered in some situations but raised for others, particularly "if the emergence of large closed networks reduces opportunities for competition."

Absolutely key, in the view of the report's authors, is that "trust remains essential." With less reliance on traditional intermediaries, consumers are turning more toward new networks and providers. Facilitating this transfer on a large scale requires significant levels of trust in security, privacy, and efficiency. Along with this, and perhaps contributing to a new sense of trust, is the authors' conclusion that "technologies may improve cross-border payments" by offering better and more cost-efficient services, by lowering compliance costs, and by helping to combat terrorism financing.

In the view of the IMF authors, the financial services sector is poised to make the change toward cryptocurrency involvement. That said, the report suggests that "policymaking will need to be nimble, experimental, and cooperative" in order to navigate this transition successfully. At the same time, regulatory authorities will have a careful job to do: they must balance efficiency gains against stability tradeoffs. Before embracing this world, regulatory authorities will likely need reassurance that risks including cyberattacks, money laundering, and terrorism financing can be mitigated without harming the innovative progress of the digital currency world. To do this, the authors believe that regulators may need to sharpen their focus on activities, and that governance will need to be strengthened. If all of these things take place, the IMF authors believe that banks could integrate cryptocurrencies successfully.

See the original post here:
IMF Urges Banks to Invest In Cryptocurrencies - Investopedia


Cryptocurrency Market: Is There a Price Drop around the Corner? – newsBTC

The cryptocurrency market may register a momentary overall price drop, led by Bitcoin and Ether, as it undergoes a correction before recovering.

The cryptocurrency price rally may soon hit a temporary hurdle, leading to a slight fall, before picking up the pace again. The forecast was made by Fred Wilson, a cryptocurrency investor and partner at Union Square Ventures. Wilson's prediction closely follows a general price drop pattern exhibited by most of the cryptocurrencies in the past 24 hours.

Wilson's statement, made in a recent blog post, was quoted by one of the financial news outlets. He said,

My gut says we are headed for a selloff in the crypto sector.

However, given the volatile nature of cryptocurrencies and the variety of factors driving their prices, Wilson takes a step back to note that even though all the indications of a selloff are present, it still might not happen as expected. He follows this with encouraging words about the long-term future of cryptocurrencies. Wilson wrote,

But of course, I could be wrong about that. I am wrong a lot. But honestly, I don't really care. I will keep buying into this correction or rally, whatever it turns out to be. Because the more important question is where these assets will be in five or ten years. And I have a lot more conviction about that one.

Ethereum has emerged as a strong cryptocurrency in recent weeks, with its price surging from under the $100 mark to cross $300 in no time. As it continued to trade strongly on various exchanges, the unexpected flash crash of ether on Coinbase's GDAX exchange, where the price momentarily fell to $0.10, caused a significant disruption to the ongoing trading trend. Coinbase has since been working on compensating traders for their losses.

In the same blog piece, Wilson also offers some investment advice to millennials looking to invest in cryptocurrencies. He advises people to invest small amounts at a regular frequency and not to keep all their investments in cryptocurrencies. By spreading funds across assets and over time, investors minimize the risks associated with cryptocurrency volatility and any other developments that might impact the value of digital currencies.
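Wilson's "small amounts at regular frequency" advice is dollar-cost averaging, and a few lines of Python make the arithmetic concrete. The budget and prices below are invented for illustration; this is a sketch of the mechanism, not investment advice or Wilson's actual model.

    # Dollar-cost averaging: spend a fixed amount each interval, whatever
    # the price. All figures here are hypothetical.
    monthly_budget = 100.0
    prices = [250.0, 300.0, 180.0, 220.0, 310.0]  # invented coin prices

    coins = sum(monthly_budget / p for p in prices)
    spent = monthly_budget * len(prices)
    avg_cost = spent / coins

    # Fixed spending buys more coins when prices dip, so the average cost
    # per coin (about 242 here) lands below the simple average price (252).
    print(f"coins bought: {coins:.4f}, average cost per coin: {avg_cost:.2f}")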

Read the original here:
Cryptocurrency Market: Is There a Price Drop around the Corner? - newsBTC


China experimenting with its own cryptocurrency – Neowin

The People's Bank of China is reportedly testing its own cryptocurrency, prompting speculation that China could be the first country in the world to run its own national cryptocurrency. According to reports, tests of the prototype digital currency have begun, with mock transactions being made with the country's commercial banks.

The bank has not made any official statement on the development of a national cryptocurrency, so no timetable indicating a launch date is available. Since May, both the State of Palestine and Russia have indicated that they want their own national cryptocurrencies. In the case of the State of Palestine, a digital currency would solve two significant problems: firstly, they don't have money-printing facilities to make physical money, and secondly, they wouldn't have to import the money and get it past the Israelis.

In China's case, the money could be used by millions of Chinese who don't have easy access to conventional bank services due to a lack of infrastructure. Additionally, charges on cross-border payments could be significantly less than those incurred when using physical currency.

The introduction of a national cryptocurrency would also be a boon for the ruling Communist Party of China (CPC). Firstly, Xi Jinping's agenda of clamping down on corruption would be helped along, as transactions would be more traceable; secondly, real-time economic insights could be extrapolated from the data. These insights would be very important to the CPC, which could use them when laying out its Five-Year Plans and making other laws.

Source: CGTN

Visit link:
China experimenting with its own cryptocurrency - Neowin


IBM Simplifies Object Storage for Cisco Customers – PR Newswire (press release)

LAS VEGAS, June 26, 2017 /PRNewswire/ -- Today at Cisco Live, Cisco's annual IT and communications conference, IBM (NYSE: IBM) announced that companies using Cisco UCS servers can now manage their data-intensive workloads securely and efficiently on-premises with the IBM Cloud Object Storage (COS) System, now available as a VersaStack Solution for Cloud Object Storage. This pre-validated, tested and supported solution is designed to offer modern, flexible storage for unstructured data for use cases such as active archive, backup, content repository, enterprise collaboration and cloud application development.

Businesses rely on a variety of storage architectures, including file and block storage for traditional workloads and performance-centric applications. Today's organizations are increasingly adding object storage for its massive scalability, cost efficiency and ability to manage rich, user-definable metadata.

IBM and Cisco have collaborated to simplify and speed object storage adoption for customers with the VersaStack Solution for Cloud Object Storage. The combination of the Cisco UCS S3260 Storage Server, C220 Rack Servers and Cisco Nexus 9K switches with the IBM Cloud Object Storage System is ideal for data-intensive workloads, supporting IT organizations with an easily scalable solution.

Customers can use the same on-premises Cisco hardware and server management tools to add IBM Cloud Object Storage to their current IT environments, allowing them to manage their data from petabyte to exabyte scale with reliability, security, availability and disaster recovery, all without replication. This new offering will modernize storage access, so that when clients are ready to extend their workloads into a hybrid cloud storage environment, the processing and tools are already in place for them to use their IBM Cloud Object Storage software in either a private or hybrid cloud environment.
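To picture what object storage looks like from the application side, here is a hedged Python sketch that writes and reads an object through an S3-compatible API of the kind IBM COS exposes. The endpoint, credentials, bucket and key are placeholders, not details from the announcement.

    # Minimal sketch: storing and retrieving an object via boto3 against
    # an S3-compatible endpoint. All connection details are placeholders.
    import boto3

    cos = boto3.client(
        "s3",
        endpoint_url="https://cos.example-region.example.net",  # placeholder
        aws_access_key_id="ACCESS_KEY",                         # placeholder
        aws_secret_access_key="SECRET_KEY",                     # placeholder
    )

    # Objects are addressed by bucket and key rather than a file path,
    # which is what lets the namespace grow to exabyte scale.
    cos.put_object(Bucket="archive-bucket",
                   Key="backups/2017-06-26.tar.gz",
                   Body=b"...backup bytes...")

    obj = cos.get_object(Bucket="archive-bucket",
                         Key="backups/2017-06-26.tar.gz")
    print(len(obj["Body"].read()), "bytes retrieved")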

"The growth of unstructured data in the enterprise is driving the need for a highly scalable, cost effective storage architecture," said Satinder Sethi, vice president, data center solutions, Cisco. "Cisco UCS S-Series Storage Servers are built to deliver rapid scalability and performance coupled with maximum investment protection through multi-generational system design. Cisco is collaborating with IBM to extend VersaStack with a new cloud object storage solution, that is built on UCS S3260 storage server, 40G fabric and IBM Cloud Object Storage that can scale from terabytes to petabytes in minutes."

IBM Cloud Object Storage redefines the availability, security and economics of data storage, requiring less storage, power, floor space, personnel and cost than traditional storage options. It complements IBM's high- and medium-performance flash and disk storage options, available for Cisco customers via IBM VersaStack, by providing a different kind of storage environment: one that is built for cloud-native applications; unstructured data like video, audio, images and documents; and archive and other data protection needs.

IBM takes a "software-defined/hardware-aware" approach to object storage. IBM Cloud Object Storage provides information about the status of both the logical and physical elements of the system in one view. This includes statistics on the health of disk drives, fans, NICs and the temperature of major system components. Intelligence delivered in the IBM COS Manager makes it easy for a single administrator to manage 10s of petabytes of storage, which can significantly lower the total cost of ownership for large scale object storage systems.

"At IBM, we want to make it easy for organizations to adapt their IT environments when business needs change," said Phil Buckellew, general manager, IBM Cloud Object Storage. "Providing IBM Cloud Object Storage for Cisco hardware customers does just that it allows them to use their existing investments to gain massive scalability for large volumes of data or changing business needs, with the option to extend into the IBM Cloud if and when it makes sense for them."

The VersaStack Solution for Cloud Object Storage is backed by a Cisco Validated Design (CVD), which provides guidance on design and deployment of the solution, enabling Cisco customers and channel resellers to repeat the successful approaches they have taken with other Cisco validated solutions and seamlessly add object storage to their IT tool kit. It also simplifies the integration process for Cisco customers, who can now use their existing Cisco purchasing agreements and support structure to implement a flexible IBM Cloud Object Storage environment.

The VersaStack Solution for Cloud Object Storage is available now, joining a growing family of VersaStack solutions jointly developed by IBM and Cisco. VersaStack solutions now have 19 validations (combined CVDs and Redbooks), enabling IBM and Cisco's joint customers to address a wide range of workloads and use cases.

About IBM Cloud Object Storage: For more information, please visit http://ibm.biz/cloud_object_storage.

All product and company names herein may be trademarks of their registered owners.

Contact: Betsy Rizzo, IBM Media Relations, betsy.rizzo@us.ibm.com

To view the original version on PR Newswire, visit: http://www.prnewswire.com/news-releases/ibm-simplifies-object-storage-for-cisco-customers-300479431.html

SOURCE IBM

http://www.ibm.com

Read more:
IBM Simplifies Object Storage for Cisco Customers - PR Newswire (press release)


How IaaS cuts time for app deployment and maintenance costs while improving innovation – Cloud Tech

More than half of respondents in a survey from Oracle say moving to infrastructure as a service (IaaS) has significantly cut their time to deploy new applications and services, while three in five claim it is easier to innovate through it.

The study, conducted alongside Longitude Research and which surveyed more than 1,600 IT professionals across nine countries, also found IaaS had significantly cut ongoing maintenance costs for a majority (54%) of those polled.

Naturally, the more experience organisations have with IaaS, the more confident they are of its success. 56% of experienced users agreed with the statement that IaaS provides "world-class availability and uptime", compared with 49% of established users, 45% of recent adopters, and only 29% of non-adopters. The pattern was similar for the statement that IaaS provides "world-class speed", with 52% of experienced users agreeing against 25% of non-adopters.

When it came to more negative perceptions surrounding IaaS, the UK came out on top. 57% of respondents grumbled that IaaS was not secure enough for most critical data, compared to only 39% in Germany, while 55% and 43% respectively were concerned over losing control of on-premises systems.

"When it comes to cloud adoption, there has always been a case of perception lagging behind reality," said James Stanbridge, vice president of IaaS product management at Oracle. "Cloud is still relatively new to a lot of businesses and some outdated perceptions persist."

"We are now seeing high levels of success and satisfaction from businesses that are saving money, cutting complexity and driving exciting innovation thanks to cloud infrastructure," Stanbridge added. "Those resisting the move need to challenge the perceptions holding them back, because the longer they wait, the further ahead their competitors will pull."

Oracle's push towards IaaS is not a huge surprise, given the company has said it is an important focus. Speaking to analysts after last week's results, in which quarterly cloud revenue reached an impressive $1.36 billion, Larry Ellison said that during the current fiscal year the company expects both its IaaS and PaaS (platform as a service) businesses to accelerate into hyper growth. SaaS revenue hit $964 million in the most recent quarter, compared with $397 million for PaaS and IaaS combined.

You can read the report here (UK-centric).

Link:
How IaaS cuts time for app deployment and maintenance costs while improving innovation - Cloud Tech


How safe is iMessage in the cloud? – Macworld

Examining privacy and security in the world of Apple

Of all the problems iMessage has, Apple says it plans to solve a persistent one: having access to all your conversations on every device, instead of messages and data lying scattered across all the Macs, iPhones, and iPads you use. But is this the right problem to solve?

Apple's Craig Federighi explained at the 2017 Worldwide Developers Conference that iMessage will be stored in iCloud with end-to-end encryption, but provided no other details. Later, he mentioned that Siri training will sync across iCloud instead of being siloed on each of your Apple devices, and that training and marking faces in Photos' People album will do the same, also with end-to-end encryption.

Despite that encryption promise, this concerns me. It's better to pass the least possible amount of personal and private information through other systems, sending it directly between two devices instead. It's especially good to have the least amount of private data stored elsewhere, unless the encryption for that data is firmly under your control or fully independently vetted.

That storage issue is particularly problematic with iMessage. While Apple's design for at-rest storage could be terrific, iMessage itself is way behind its competition in providing an effective, modern encryption model. Notably, if a party sniffs and records encrypted iMessage data from a privileged position and a later flaw allows the recovery of an encryption key, all previously encrypted data can be unlocked. The way to prevent that is forward secrecy, which the Signal protocol from Open Whisper Systems employs in the Signal app and in WhatsApp.
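To make the forward-secrecy idea concrete, here is a hedged Python sketch using the cryptography library: each message derives its key from fresh ephemeral Diffie-Hellman key pairs, so recording traffic today and recovering a long-term key tomorrow unlocks nothing. It illustrates the principle only; it is not the actual Signal or iMessage protocol.

    # Forward secrecy via ephemeral X25519 key agreement: per-message keys
    # are derived, used, and discarded. Illustrative sketch only.
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def message_key(my_private, peer_public):
        shared = my_private.exchange(peer_public)
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"message-key").derive(shared)

    # Fresh ephemeral keys for this message; once deleted after use, the
    # derived key cannot be reconstructed even if long-term keys leak.
    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()
    k_alice = message_key(alice, bob.public_key())
    k_bob = message_key(bob, alice.public_key())
    assert k_alice == k_bob  # both ends derive the same per-message key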

Craig Federighi explains how Siri training syncs among devices using end-to-end encryption.

While I've queried Apple for more details on how all this will work, it's likely they won't provide any until closer to the OS updates, or even afterwards. If you're installing developer or public betas, you should consider how this might affect you without having all the details to hand.

Apple designed its iCloud Keychain sync in an admirable way. It uses a zero-knowledge approach, which is the gold standard for hands-off data transfer and storage. With a cloud-storage system like Dropbox, or the way Apple handles email, contacts, calendars, photos, and other iCloud data, all information has an encryption overlay while in transit and another form of encryption at rest on the cloud servers.

However, that at-rest encryption lies under the control of the company offering the service. It possesses all the keys needed to lock your data on arrival and unlock it to transmit it back. Thus, it's susceptible to internal misuse, hacking, legitimate government warrants, and extralegal government intrusion.

With iCloud Keychain and other similar syncing, such as that used by 1Password and LastPass (which I discussed in a recent column), a secret gets generated by software running only on client devices, and that secret is stored only there. The company that runs the sync or storage service never has possession. Data is encrypted by the mobile or desktop OS and transmitted.

When multiple devices need access to the same pool of data, systems typically use device keys to encrypt a well-protected encryption key that in turn protects the data. (This is the approach used as far back as PGP in the 1990s.) That way, there's a process to enroll and remove devices from the pool of legitimate ones that can access the actual data encryption key.
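Here is a hedged Python sketch of that pattern, assuming RSA device keys wrapping a symmetric data key; it illustrates the generic envelope-encryption scheme the paragraph describes, not Apple's actual iCloud Keychain design.

    # Envelope encryption: one data key protects the pool; each enrolled
    # device holds a copy of that key wrapped with its own key pair.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    data_key = Fernet.generate_key()                 # protects the data pool
    ciphertext = Fernet(data_key).encrypt(b"synced secrets")

    # Enroll three devices: wrap the data key with each device's public key.
    devices = [rsa.generate_private_key(public_exponent=65537, key_size=2048)
               for _ in range(3)]
    wrapped = [d.public_key().encrypt(data_key, oaep) for d in devices]

    # Any enrolled device can unwrap the data key and decrypt the pool;
    # removing a device means dropping its wrapped copy (and rotating keys).
    recovered = devices[0].decrypt(wrapped[0], oaep)
    assert Fernet(recovered).decrypt(ciphertext) == b"synced secrets"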

I fully expect this is what Apple is using: an expansion of iCloud Keychain to more kinds of data. iCloud Keychain has a sometimes funky enrollment process that, when it hiccups, can leave users adrift. I receive email every several weeks from those who have iOS iCloud Keychain errors that they can't fix or permanently dismiss, even by un-enrolling and re-enrolling in that iCloud option.

But it's the right way to do it, when you consider the intensely personal information in text messages, Siri training data, and Photos' facial recognition and tagging. Imagine someone gaining full access to all that in a form they could decode. (We're also not yet sure whether the encrypted information will be created in such a way that it's not useful without source data on devices.)

Its reasonable to worry about centrally stored and synced data, because it represents such a weak point in data protection. Given that Apple is stepping up the kind of data you can sync and store, it should also be upgrading its under-the-hood encryption techniques and disclosing more information about how it works. And it should submit its work to external independent auditing and provide more transparency to allow outsiders to monitor for government or third-party intrusion.

All of this can be done without compromising security; all of it would, in fact, dramatically improve the integrity of your data from outside examination. Apples stance on keeping our information unavailable to it is admirable. But it needs to give more assurances that nobody else could possibly access it either.

Read more:
How safe is iMessage in the cloud? - Macworld


Overspending in the cloud: Lessons learned – ZDNet

One of the reasons virtualization (the precursor to cloud computing) gained popularity in the early 2000s is that companies had too many servers running at low utilization. The prevailing wisdom was that every box needed a backup and under-utilization was better than maxing out compute capacity and risking overload.

The vast amounts of energy and money wasted on maintaining all this hardware finally led businesses to datacenter consolidation via virtual machines, and those virtual machines began migrating off-premises to various clouds.

The problem is, old habits die hard. And the same kinds of server sprawl that plagued physical datacenters 15 years ago are now appearing in cloud deployments, too.

According to a recent survey from RightScale, 35 percent of cloud spending is wasted on VM instances that are over-provisioned and not optimized. The report found that most enterprises run their virtual instances 24/7, many VMs are running at less than 40 percent of CPU and memory capacity, and old backup snapshots and other unattached data repositories are clogging up cloud storage resources.
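The audit such numbers imply is simple enough to sketch. The hedged Python below flags always-on instances sitting under the 40 percent thresholds the report cites; the fleet data is invented, and a real audit would pull metrics from a cloud provider's monitoring API.

    # Toy utilization audit: flag 24/7 instances below CPU and memory
    # thresholds as candidates for downsizing or scheduling.
    from dataclasses import dataclass

    @dataclass
    class Instance:
        name: str
        avg_cpu_pct: float
        avg_mem_pct: float
        hours_on_per_week: int

    fleet = [  # invented sample data
        Instance("web-prod-1", 72.0, 65.0, 168),
        Instance("batch-dev-3", 11.5, 22.0, 168),  # dev box left on 24/7
        Instance("report-vm", 35.0, 30.0, 168),
    ]

    candidates = [i for i in fleet
                  if i.avg_cpu_pct < 40 and i.avg_mem_pct < 40
                  and i.hours_on_per_week == 168]
    for i in candidates:
        print(f"{i.name}: consider downsizing or an off-hours shutdown schedule")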

It turns out that the ease and elasticity of the cloud are a double-edged sword. When spinning up new instances is effortless, who has the discipline to keep track of and sunset resources when they're no longer needed?

Expensive lesson

This was one of the lessons learned at Ecolab, a global provider of water, hygiene, and energy technologies and services. Ecolab works with large-scale facilities around the world, monitoring and managing water systems using a vast network of sensors and probes. A team of some 60 developers works on mining this data for performance insights and trendspotting.

Craig Senese, Director of Analytics and Development at Ecolab, says the transition from on-premises datacenter to cloud was critical, as physical resources were reaching their limits. Ecolab was already using Microsoft technologies to manage its infrastructure and analytics, so the Microsoft Azure Cloud was a logical fit.

Once Azure was deployed, however, developers began to leverage resources without focusing on optimization and cost-efficiency.

"I think the biggest lesson that we've learned to this point is that it's a different model," Senese said. "You've gone from having our own servers, having our own datacenter working through IT to get the resources you need, to basically carte blanche for our developers where they can add and remove resources as needed. The lesson learned there has been that we really need to make sure that everyone is educated on our plan as an architecture, our plan as a resource model, because it's very easy to spend. We need to make sure that we control that and we're not spinning up resources uncontrollably.

"We have a large team, and making sure everyone is on the same page with the strategy of how we want to deploy in the cloud is important."

Being new to cloud computing, Senese and his team weren't sure where and how tweaks could be made to optimize Ecolab's cloud usage and efficiency. Fortunately, Microsoft reps helped assess the environment and workloads, then build out a plan.

"We started by working with Microsoft to see where we could optimize, and they were great in helping us understand where we could optimize our spend," Senese said. "We do a lot of compute. We do a lot of data analytics, and we wanted to see whether we can optimize spending, because we were new to this space."

Once the team found out more about Microsoft's strategies and created a resource model that could support existing workloads and scale as needed, they were able to spread the word among other areas of the business.

To find out more about Ecolab's setup and how Microsoft's experts can help you guide the discussion forward, please visit zdnet.com/Microsoft-cloud.

Excerpt from:
Overspending in the cloud: Lessons learned - ZDNet


Telefónica launches data centre in Lima, Peru – The Stack

Telefónica, the global broadband and telecom provider, has opened Phase I of a new cloud data center in central Lima, Peru. Rather than build a new facility, the company chose to repurpose a central office in an effort to reduce capital expenditures and reuse assets, while reducing construction timelines.

The new cloud data center, located in the Lince district of Lima, will contain 6,000 square meters of floor space, built in three phases. The final product will have 584 cabinets with 2.6 megawatts of total IT power. 100 cabinets are currently available with the completion of Phase I of the project.

The new facility was built to conform to the Uptime Tier III standard, with 99.98% availability and online maintenance of equipment.

Telefónica selected Huawei to provide infrastructure integration services, helping to design and construct the new data center inside the previous central office building.

Huawei managed construction personnel and subcontractors, delivering HVAC, power, and cabling along with five additional systems. Huawei was also responsible for optimizing the construction process, delivering Phase I of the project in just five months.

The completion of the Lima data center helps to cement Telefónica's presence as a premier cloud provider in the Peruvian market. The company has noted interest from financial, transportation, and municipal enterprises who are drawn to the project in part due to its Tier III-certified high reliability.

Telefónica has been expanding its presence in the Central and South American data center market, having recently opened facilities in Santiago, Chile; São Paulo, Brazil; and Mexico City. The Lima data center will provide OpenCloud and Cloud Server services to Peruvian customers.

Recently, Telefónica announced a new cloud solution for customers in Europe and the Americas. Known as VDC, or Virtual Data Center 3.0, the new solution is targeted at assisting medium to large-scale enterprises in moving workloads to the cloud. The VDC 3.0 solution provides customers with virtualization technology from VMware, delivered on Huawei servers in Telefónica data centers. VDC 3.0 is currently available for customers in the U.S., Spain, and the UK, as well as throughout Central and South America.

Telefónica also announced plans to launch Cloud Foundation, a new corporate cloud solution aimed at providing a fully hybrid environment for enterprise customers. Cloud Foundation is expected to be available by the end of 2017.

Go here to read the rest:
Telefónica launches data centre in Lima, Peru - The Stack


Cloud security: The castle vs open-ended city model – Cloud Pro

With cloud security, the boundary of the system stops being the edge of your physical network and becomes the individuals who use it.

When you see major breaches of either cloud services or corporate networks, it's not usually the external boundaries of the organisation that have been compromised; it's more often the identity of an individual.

The Verizon Data Breach Investigations Report 2017 shows that security is continually having to change in order to keep up with fluctuations in the threat landscape. With 81% of hacking-related breaches leveraging either stolen or weak passwords, it's no wonder that identity is a new focal point.

Changing boundaries

How are the boundaries changing for organisations in terms of security? In the last ten years, security boundaries have changed so much that they have become invisible or, at the very least, barely recognisable. In this redefined state, security now starts with identity, authentication, and account security.

Adoption of cloud-based services is partly to blame, according to Richard Walters, CTO of CensorNet, as unstructured data now resides in cloud-storage applications.

"Work is no longer a place. It's an activity," he says. "Users have an expectation of instant, 24/7 access to apps and data regardless of location, using whichever device is convenient and close to hand. Just when we thought we'd got a handle on things, along came millions of IoT devices that connect to cloud servers. The identity of things is becoming as important as the identity of human beings."

IT's shift beyond the physical boundaries of a company means the goalposts have moved, with security focusing on protecting applications, data and identity instead of simply guarding entrances and exits to the network.

"This radically changes the role of the traditional firewalls," says Wieland Alge, EMEA general manager at Barracuda Networks.

"For a while, experts predicted that dedicated firewalls would eventually be absorbed by network equipment and become a feature of a router. Since we build infrastructures bottom-up now, everything starts with users and their access to applications, regardless of where they are physically; the firewalls not only need to be user- and application-aware, but also to show the same agility and deployment flexibility as the respective entities they protect."

The castle vs the open city

Is security in the modern digital world like an open city, as opposed to traditional corporate computing, which is more like a castle?

A castle's spiral stairs turn clockwise to give an advantage to right-handed sword-wielding defenders. According to Memset's head of security, Thomas Owen, that kind of subtlety and defence in depth (plus the motte and bailey, moat, keep, etc.) are where the state of the cyber-security art now lies.

"The increase in adoption of identity federation or outsourced/crowdsourced Security-as-a-Service capabilities, such as Tenable.io or HackerOne, speaks of democratisation and an increase in trust of third parties, but if you're lazy on patching or have flabby access control in place you're still going to get hacked," he says.

"Open cities still have rings of trust, policers/enforcers, strictly private spaces, laws, etc. We've not been in a place where a single castle wall is sufficient for decades."

Nigel Hawthorn, chief European spokesperson at Skyhigh Networks, says that another issue with the castle-based cybersecurity approach is that there are a lot of keys to secure.

"Each employee who has access to networks is a potential threat. They could begin acting maliciously or have their details stolen by cybercriminals who then have keys to the kingdom. With the number of credential thefts ever-increasing, no company that utilises a castle approach is truly safe," he says.

Stopping hackers acquiring identities

Hawthorn says that businesses must become better at detecting when an employee's credentials have been hijacked.

He says the issue is that many still rely on a single authentication process, with access being granted on the basis of having a company email address and password. For example, the heist on the Central Bank of Bangladesh, in which $81 million was stolen, took place after hackers gained the SWIFT log-in credentials of a few employees. Had the bank had more stringent identity checks, the attack might have been mitigated.
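One concrete step beyond a single authentication factor is a time-based one-time password (TOTP) as a second check. The hedged Python sketch below uses the pyotp library to show the idea; the enrollment and login flow around it is invented for illustration.

    # Second factor with TOTP: a stolen password alone no longer grants
    # access, because the login also needs a short-lived code.
    import pyotp

    secret = pyotp.random_base32()   # provisioned once per employee
    totp = pyotp.TOTP(secret)

    # At login: after the password check (not shown), require the 6-digit
    # code from the employee's authenticator app. valid_window=1 tolerates
    # one time step of clock skew.
    submitted_code = totp.now()      # stand-in for the user's input
    assert totp.verify(submitted_code, valid_window=1)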

The best approach is behavioural analytics, which works in a similar way to how credit card companies detect and prevent fraud, according to Barry Shteiman, director of Threat Research at Exabeam.

It creates a baseline of normal activity for each individual person, then compares each new activity against the baseline. In the same way that Visa would block a UK-based consumer from buying a TV in Beijing for the first time, corporations will detect hackers trying to use valid but stolen credentials.

He says that one customer, a national retailer, suddenly saw an employee in the HR department attempt to access 1,500 point-of-sale systems in its retail stores.

"She'd never done it before. In fact, no one in her department had done so before. It turns out that she was on holiday and her corporate credentials had been stolen and were being used by a hacker to steal credit card info. The password was valid, so the question wasn't 'can she access this system?' but instead 'should she be accessing this system?'," says Shteiman.
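That anecdote maps directly onto code. The hedged Python sketch below keeps per-user and per-department baselines of observed actions and flags activity that is novel on both counts, a toy version of the behavioural analytics described above; real products weigh far more signals.

    # Toy behavioural baseline: an action is high-risk when neither the
    # user nor anyone in their department has performed it before.
    from collections import defaultdict

    user_baseline = defaultdict(set)
    dept_baseline = defaultdict(set)

    def record(user, dept, action):
        user_baseline[user].add(action)
        dept_baseline[dept].add(action)

    def is_anomalous(user, dept, action):
        return (action not in user_baseline[user]
                and action not in dept_baseline[dept])

    record("hr-employee", "HR", ("read", "employee-records"))
    # An HR account suddenly reaching for point-of-sale systems:
    print(is_anomalous("hr-employee", "HR", ("access", "pos-terminal")))  # True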

Evolving security models

Over the next few years, security models will need to be updated to include cloud-based monitoring and controls, says Jeremy Rasmussen, director of cybersecurity at Abacode.

"Typically, there is a shared security responsibility for systems hosted in the cloud. The cloud service provider is responsible for security of the underlying infrastructure. However, protecting anything stored on that infrastructure, from the operating system up to applications, is the responsibility of the individual organisation," he says.

Hawthorn says that as the cloud and applications continue to become more vital to operations, businesses must begin to view them as an extension of the firm.

"Data controls need to be enforced at the cloud application level, as opposed to stopping at the business network perimeter. Companies and their cloud third parties are being forced into a shared responsibility model due to GDPR, so there will be a greater focus on protecting data wherever it is in its journey."

Original post:
Cloud security: The castle vs open-ended city model - Cloud Pro


PitchBook moves to a microservices infrastructure: scaling the business through scalable tech – Network World

PitchBook is a data company. Its reason for being is to provide a platform that tracks a plethora of different aspects of both private and public markets. Want to know what's happening in venture capital, private equity or M&A? Chances are PitchBook can give you the answer. The company is a subsidiary of Morningstar and has offices in Seattle, New York, and London.

But here's the thing: PitchBook was founded in 2007, when cloud computing was pretty much just beginning and there was no real awareness of what it meant. In those days, enterprise IT agility meant leveraging virtualization to gain efficiencies. Now don't get me wrong, moving from a paradigm of racking and stacking physical servers to being able to spin up virtual servers at will is a big deal; it's just that since 2007, there has been massive further innovation in the infrastructure space.

So if you're PitchBook, built in the early days of the cloud in a monolithic way, and you want to scale to your stated business ambition of hosting data about 10 million companies, what do you do? Well, one thing you can do is rethink your entire infrastructure footprint to take advantage of modern approaches. And this is what PitchBook has done, moving from a monolithic infrastructure to microservices, which should enable PitchBook developers to easily scale the platform.

"Breaking from a monolithic environment will allow us to easily make changes under the hood of different modules without affecting any of the other services tied to it. This ultimately is pushing the PitchBook Platform into a new era, defined by greater scale and usability," said Alex Legault, lead product manager at PitchBook. "With an aggressive product roadmap that involves loading massive datasets, leveraging modern cloud techniques and enabling more machine learning, a microservices infrastructure will provide the right framework to execute on our plans, quickly and efficiently."

The PitchBook journey piqued my interest, and so I sat down (in the modern sense of the phrase, where "sit down" means "get email answers to questions") with Legault to learn more about this journey. Without further ado, here's the PitchBook story.

What tech are you using? K8S? Docker? Mesos? Serverless?

We made a lot of moves to new tech with our front end in this release: React and ES2016 (ECMAScript 2016, a version of JavaScript). Spring too. We're currently evaluating Docker and K8s.

Why did you make the decision to migrate to microservices?

Our clients need to move fast and require timely access to data and new datasets. To meet these needs, we require an architecture that will allow our product team to run fast and scale. Microservices provides this. At PitchBook, we're at a critical inflection point where we're growing at a rapid pace, and the platform needs to keep up, both from a data perspective as well as from a feature set and scalability standpoint. While a monolithic infrastructure could have met our needs, as our platform gets bigger and more complex, it would get increasingly challenging to make changes or updates. With microservices, each service becomes its own module, allowing our developers to easily make changes without impacting other services.


In some instances, microservices can lend itself to an explosion of modules/services that need to be managed within an enterprise. Did you think about that going into this migration and what sorts of management technology have you implemented to avoid the chaos that some companies are facing?

Moving to microservices naturally creates the problem of module explosion. There are a few recipes to avoid or minimize this:

1) Mini-services versus nano-services approach. We tried not to be too idealistic and not design microservices as nano-services. Getting too small and too specific with the services can quickly introduce a headache. For us, it made sense to start with bigger modules, which we call mini-services first, then adapt and split down further when necessary. Each team can control this process and split things only when it serves a real purpose or advantage to do so.

2) Unify the service interface and infrastructure; use containerization and orchestration. Our ideal end state is a fully programmable and automated infrastructure (infrastructure as code, or IaC), which requires a formalized DevOps function. Can't state enough how important having good DevOps folks is in making this transition successful.
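To picture the smallest unit of this pattern, here is a hedged Python sketch of a containerizable mini-service exposing the health endpoint an orchestrator such as Kubernetes probes. PitchBook's stack is Spring and JavaScript, so this standard-library example illustrates the shape of a service, not their code.

    # Minimal mini-service: one HTTP endpoint, no shared state, ready to
    # be packaged into a container image and scaled independently.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Orchestrators probe /health to decide whether to route
            # traffic to this instance or restart it.
            if self.path == "/health":
                body = json.dumps({"status": "ok"}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()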

What will this switch allow you to do? What's next in the roadmap where microservices will play a huge role?

There are several benefits microservices provides us, including:

It will allow us to speed up delivery of new features, innovations and data sets. Our goal is to eventually host 10 million private and public companies within the platform and microservices will help us get there faster and with scale.

We can also more easily adopt different technology where needed, and aren't bound to the same databases or languages in any part of the application.

Redeployment will become easier. While the system is more fragmented, it's less fragile, so when individual services are down, it doesn't bring down the entire system.

It allows us to scale individual services that are the bottleneck; it's not just one big instance anymore. This helps us with scaling as our datasets grow.

On the horizon, we have several initiatives related to high-speed data visualization and analysis. We have such great datasets, so how can we generate and surface more insights to customers? Microservices will play a huge role in enabling this.

How will your customers benefit from the switch?

We're all about serving our customers, which is why we made this move. Institutional investors are under more pressure than ever before to make intelligent investment decisions and generate higher returns, making access to quality data absolutely essential. New technology that can help us recommend, analyze and surface personalized insights to customers is hitting the jackpot; we're confident microservices can unshackle us so we can go after these initiatives. Customers can expect to start seeing more releases, more innovation and a platform that can handle much larger scale while staying fast.

Technology is a progression: mainframes to physical x86 to virtualization. Microservices is but the latest move in this process, and we can already see things on the horizon (event-driven infrastructure, for example) that will take organizations like PitchBook to the next level. It is interesting to get a glimpse inside and explore the thinking that goes into a significant platform shift.

Continue reading here:
PitchBook moves to a microservices infrastructure: scaling the business through scalable tech - Network World
