
India's cryptocurrency tax kicks in from April, here's what investors need to know – Business Insider India

A tax rate on par with the lottery is only the tip of the iceberg, though; crypto investors will need to be aware of other provisions as well to remain on the right side of the law in the financial year 2022-23.

India is said to have almost ten million cryptocurrency users, seeing about $100 billion in trading volume in 2021. By the calculations of the founder of WazirX, an Indian crypto exchange, that could yield $100 million (or ₹750 crore) in additional income tax in a year.

A person who bought a crypto asset that increased greatly in value, but is yet to sell it, has by definition made no profit yet. Such crypto holdings, where the gains have not been realised, will not attract tax until some portion is sold.

For example, if you had bought Bitcoin worth ₹40,000 and sold it at the same price without any profit, you would get back only ₹39,600 after the 1% TDS (tax deducted at source) on the sale. If you then invest the same ₹39,600 into buying Ethereum or NFTs, and again sell at no profit, you would again lose 1% to TDS and get back only ₹39,204. The TDS collected can be set off against the total income tax owed at the end of the year.
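To make the arithmetic concrete, here is a minimal sketch of how the flat 1% TDS compounds across back-to-back break-even trades. The figures match the example above; the set-off note at the end reflects the article's point, not any official calculator.

```python
# Sketch: how the flat 1% TDS erodes capital across break-even trades.
# The figures match the example above: Rs 40,000 -> 39,600 -> 39,204.

TDS_RATE = 0.01  # 1% tax deducted at source on every sale

def sale_proceeds(sale_value: float) -> tuple[float, float]:
    """Return (net proceeds after TDS, TDS withheld) for one sale."""
    tds = sale_value * TDS_RATE
    return sale_value - tds, tds

capital = 40_000.0
total_tds = 0.0
for trade in (1, 2):  # two break-even trades: Bitcoin, then Ethereum/NFTs
    capital, tds = sale_proceeds(capital)  # sold at cost, no profit made
    total_tds += tds
    print(f"After trade {trade}: capital = Rs {capital:,.0f}, "
          f"TDS withheld so far = Rs {total_tds:,.0f}")

# The Rs 796 withheld here is not lost outright: as the article notes,
# it can be set off against the total income tax owed at the end of the year.
```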

This effect of making people think twice about whether a prospective trade is truly worth it, and thus clamping down on speculative trades, could be intentional. As experts have pointed out, this TDS could steeply reduce the volume of crypto trade in India once it takes effect in July 2022.

Avoiding the 30% crypto tax by showing crypto profits as capital gains, which are taxed at up to 20% plus surcharge, will not be allowed either.

As for cryptocurrency mining, the government is mulling over whether to tax the activity as a good or a service, to bring it into the fold of GST. The government also wants to make crypto trades on foreign crypto exchanges subject to GST.

Professionals and business people will not be able to set off gains or losses between their primary income and crypto income.

Until the current fiscal year, employees, students and senior citizens whose overall income added up to less than the minimum tax threshold (₹2.5 lakh) paid no tax. But now, with a targeted crypto tax, it isn't clear whether those earning less than the tax threshold will still need to pay tax on their crypto income.

For the period ending in March 2022, tax filings by crypto investors can still show business expense deductions. However, those who are liable for advance tax payment will have to move fast. The last day for paying advance tax is 15 March 2022, and delaying adds interest of one percent of the tax owed for each month's delay.

While the tax rate on crypto stands at a flat 30% for the year 2022-23, the tax rate on stock trading can range from zero (if filed as business income in the zero tax slab) to 15% (if filed as short-term capital gains).

The proposed framework for regulating crypto is yet to be presented in Parliament, but the finance ministry is said to be working on a consultation paper, which is expected to be released for public comments in six months.

Disclaimer: This is not intended to be financial advice. We recommend making any major decisions after speaking to your tax consultant. Crypto products and NFTs are unregulated and can be highly risky. There may be no regulatory recourse for any loss from such transactions.


NK hackers stole $400m in cryptocurrency last year: report – The Korea Herald


North Korean hackers stole an estimated $400 million worth of cryptocurrency in 2021, a 40 percent increase from 2020, when they stole about $300 million, according to Jonathan Levin, co-founder of Chainalysis, in written testimony submitted to the US Senate Committee on Banking, Housing and Urban Affairs for a Thursday hearing on digital assets and illicit finance.

He said that the attacks primarily targeted investment firms and exchanges, deploying techniques such as phishing lures, code exploits and malware to siphon funds out of the organizations' hot wallets and then move them into North Korea-controlled addresses.

"Once North Korea gained custody of the funds, they began a careful laundering process to cover up and cash out," he said.

In the testimony, he noted that many of last year's attacks were carried out by the Lazarus Group, a hacking outfit run by the North's primary intelligence bureau, the Reconnaissance General Bureau, on which the US has imposed sanctions.

The Lazarus Group, which was accused of orchestrating the notorious Sony Pictures hack in 2014 and the WannaCry attack in 2017, has in recent years concentrated its efforts on digital asset crime -- a strategy that has proven immensely profitable, the testimony added.

From 2018 on, the group has stolen and laundered massive sums of virtual currency every year, typically in excess of $200 million, it said.

The revenue generated from these hacks goes to support North Korea's weapons of mass destruction and ballistic missile programs, the report said, citing the UN Security Council.

North Korea appears to be turning to digital money laundering to evade international sanctions. The United Nations panel of experts monitoring sanctions on Pyongyang said early this year that cyberattacks, particularly on cryptocurrency assets, remain an important revenue source for the regime.

The North Korean hackers targeted a variety of cryptocurrencies last year, with ethereum accounting for 58 percent of the funds stolen and bitcoin for 20 percent, while the remaining 22 percent were either ERC-20 tokens or altcoins, according to Chainalysis.

Levin noted that more than 65 percent of the North's stolen funds were laundered through so-called mixers -- software tools that pool and scramble digital assets from thousands of addresses -- in an attempt to obscure the money's origin.

By Ahn Sung-mi (sahn@heraldcorp.com)


Scottish cryptocurrency donations to help people in war-torn Ukraine – The Scotsman

At least £10,000 worth of Scotcoin, Scotland's first and only digital currency, has been transferred to the country to help provide much-needed goods and services.

Funds from the Glasgow-based firm are part of more than £37.5 million of crypto gifted to the Ukrainian nation and non-governmental organisations so far.

The move follows calls from Ukraine for donations of Bitcoin, Ethereum, Polkadot and Tether currencies, amid fears the war could cause chaos for the regular banking sector, leaving people without access to cash or rendering it worthless.

Temple Melville, chief executive of Scotcoin Project Community Interest Company (CIC), says the currency is playing an incredibly important role in supporting the people of Ukraine and fulfilling the function it was initially created for.

He contends digital funds are more secure than traditional money and can be transferred in milliseconds without involving big transaction fees.

"We are all deeply moved by what we have seen happening in Ukraine," he said.

"It is a human tragedy on a scale that is difficult to process.

"For many people, including ourselves, the most practical way to provide support is through donations, and cryptocurrency is one of the best ways of ensuring it gets to the intended recipient securely and quickly.

"This is exactly the type of situation cryptocurrency was set up to help with, and the vast amount of it that has been sent to the Ukrainian government and organisations on the ground demonstrates the power it has to support those in need."

He says the transparency involved in crypto transfers is an important factor in situations such as this.

"Anyone can view transactions on the blockchain and check donations have been delivered to the address intended," he said.

"While there have been calls to block Russian users, the way the networks are set up means their transactions could also be easily traced.

"The Ukrainian government has indicated it will start to take more cryptocurrencies soon, opening up more opportunities to directly donate to a country that desperately needs our help."
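That on-chain transparency is easy to demonstrate. Below is a minimal sketch, using the web3.py library (v6 API) against a public Ethereum RPC node, of the kind of read-only query anyone can run to check an ERC-20 donation address; the endpoint and addresses are placeholders for illustration, not Scotcoin's actual contract details.

```python
# Sketch: anyone can audit an ERC-20 donation address on Ethereum.
# Assumes web3.py (pip install web3); the RPC endpoint, token contract
# and donation address are placeholders, not Scotcoin's real details.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://eth.example-rpc.org"))  # any public node

# Just enough ABI to call the standard ERC-20 balanceOf() view function.
ERC20_ABI = [{
    "name": "balanceOf", "type": "function", "stateMutability": "view",
    "inputs": [{"name": "owner", "type": "address"}],
    "outputs": [{"name": "", "type": "uint256"}],
}]

token = w3.eth.contract(
    address=Web3.to_checksum_address("0x" + "11" * 20),  # token contract (placeholder)
    abi=ERC20_ABI,
)
donation = Web3.to_checksum_address("0x" + "22" * 20)    # donation wallet (placeholder)

# A read-only call: no account, permission or fee is needed to verify it.
balance = token.functions.balanceOf(donation).call()
print(f"Donation address holds {balance} raw token units")
```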

Scotcoin aims to use its currency, known as SCOT and part of the Ethereum network, as a vehicle to drive meaningful change for the economy, environment and society.

It's thought to be the only purpose-driven cryptocurrency in existence, with funds being used to support social and environmental projects at home and abroad, including tree-planting schemes and seaweed cultivation in Scotland.

The company is actively building a network of organisations, including start-ups, that will trade locally in the currency and support the project's mission.

Target industries include those with significant issues of over-supply or waste, where goods and services can be diverted to people in need, and in green initiatives such as carbon capture and offset.

Scotcoin made its donation through BBX UK, a network where organisations can use cryptocurrency to buy spare products and services from other companies.

Five billion SCOT have been created and there will be no more.

Two billion have been issued and three billion retained by Scotcoin in treasury, of which 20 million will be released annually to deliver social and environmental aims.


Vultr aims at the big clouds with new virtual machines – The Register

Cloud hosting provider Vultr has expanded its portfolio of cloud infrastructure with a new range of high-performance virtual machine instances, including its first products based on AMD Epyc processors, all with NVMe SSD storage.

Vultr, which has data centers across North America, Europe, Asia and Australia, said it was aiming to appeal to cloud customers looking for an alternative to the big platforms. This may be particularly relevant given Google's recent price changes that may see customers paying more for the same services.

The firm's compute platform now features three discrete product lines: Optimized Cloud Compute, Cloud Compute, and Bare Metal. Optimized Cloud Compute is entirely new, while the Cloud Compute lineup is refreshed with this launch.

Vultr said that the Optimized Cloud Compute instances offer dedicated vCPUs, which means that user application performance is not affected by the "noisy neighbour" problem of another user consuming all the resources.

Optimized Cloud Compute VMs start at $28 per month, and are described as elastically scalable. Cloud Compute VMs start at $5 per month, because they run on shared vCPUs. Bare Metal instances start at $120 per month, and provide dedicated physical CPUs.

According to Vultr, the addition of instances powered by 3rd Generation AMD Epyc processors marks the first time it has offered AMD-powered virtual machines. Because of their high per-core performance and maximum frequency of 3.675GHz, the new Epyc-based VMs have now become Vultr's recommended option for most users.

JJ Kardwell, CEO of Vultr's parent company Constant, said in a statement that AMD's efforts with the design, architecture, and performance of its CPUs had helped Vultr to implement an alternative to solutions like AWS EC2.

"This launch represents a big step in helping businesses and developers transition from the Big Tech clouds to our easy-to-use, affordable platform," he claimed.

Optimized Cloud Compute instances are offered in a number of tailored options: General Purpose, CPU Optimized, Memory Optimized and Storage Optimized.

According to Vultr, the General Purpose VMs provide a typical balance of CPU, RAM, and NVMe SSD storage, while the CPU Optimized VMs provide proportionally more CPU, as their name suggests, for workloads like video encoding, high-performance computing (HPC), and analytics processing.

In the same vein, the Memory Optimized instances are for memory bound applications, providing more RAM for applications such as databases, in-memory databases and caching. The Storage Optimized VMs provide generous proportions of NVMe SSD storage and target use cases such as large non-relational databases like MongoDB, and high frequency online transaction processing (OLTP).

Vultr said that it aims to provide users with a simple interface, plus predictable and transparent pricing, with the performance and enterprise features of bigger players. In the last year, Vultr added VPC Peering and Direct Connect features to make it a more attractive option for enterprise customers.


8 Types of Edge Computing to Know in 2022 | Techfunnel – TechFunnel

Whenever service providers talk to their customers about edge computing, there is always one pertinent question that every customer asks: what is their edge? And how does the service provider define it? The answer is that edge computing can be defined by the location where the system is deployed and the capabilities that it possesses.

Let us consider the first reference point, which is the location of these computing resources. If we consider this element, then we are talking about some of the standard edge types such as sensor edge, device edge, router edge, branch edge, local area network edge, enterprise edge, data center edge, cloud edge, and mobile edge.

However, instead of absolute distance, if we look at a relative distance, then we can look at some other parameters such as near-edge or far-edge. If the reference points are defined based on capabilities, then they will be defined as thin-edge, thick-edge, micro-edge, and intelligent-edge.


Now let us turn our attention to the different types of edge computing, with examples based on the physical location of the computing resources.

If we look at a typical closed-loop system, sensors are the initial point that sends events to the backend systems. For instance, consider the functioning of a video camera: the optimal method is to send out live streams only when there is motion.

Motion detection and tripwire detection are some of the capabilities that can eliminate the need to send constant, continuous traffic to the cloud server. These functionalities require edge computing at the sensor level, though the computing involved is usually very minimal, as illustrated in the sketch below.
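As a rough illustration of this sensor-edge pattern, here is a minimal frame-differencing sketch in Python using OpenCV; the camera index, threshold values and the upstream stub are all illustrative assumptions, not part of any particular product.

```python
# Minimal sensor-edge motion gate: only send frames upstream when motion
# is detected, instead of streaming continuous video to the cloud.
# Assumes OpenCV (pip install opencv-python) and a camera at index 0;
# the threshold values are illustrative, not tuned for any real camera.
import cv2

MOTION_PIXELS = 5_000  # changed-pixel count that we treat as "motion"

def send_upstream(frame) -> None:
    # Stand-in for the real uplink (RTSP push, MQTT publish, HTTP POST...).
    print("motion detected - sending frame upstream")

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    delta = cv2.absdiff(prev_gray, gray)                # per-pixel change
    _, mask = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > MOTION_PIXELS:          # enough change?
        send_upstream(frame)                            # the edge decides
    prev_gray = gray

cap.release()
```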

Customers deploy different types of devices to execute specific functions: X-ray machines, vending machines, motors, and so on. Data collected from these devices can be analyzed to help them run seamlessly. In this case, the computing resources are deployed in close proximity to the devices so that data processing workloads can be handled easily.

The primary function of a router is to deliver packets between networks; routers are essentially the dividing line between internal networks and external systems. A few enterprise routers provide built-in compute modules and can be used to host applications.

A branch is an office that is separate from the head office and is created to perform a specific type of function. Every branch uses different types of applications depending upon its requirements and the role it plays. For instance, in the retail sector this could be a point-of-sale application used at the storefront, while a medical center would run Electronic Medical Records.

These are business-critical applications, and they need to be hosted on an edge network at the branch level to ensure that users don't experience any latency while accessing these systems.

When organizations operate in a distributed environment, with multiple branches across a region, the computing resources can be used by these branches in a shared mode. This is primarily to achieve economies of scale and ease the process of management.

In this method, instead of having edge computing devices installed at each location, it is hosted on a shared site, which is connected to the enterprise network.

Today, more customers are shifting from their own data centers to the cloud. As a result, small data centers are sprouting up so that rapid deployment and data portability for specific events can be achieved easily; in this case, the edge can be deployed closer to the customer.

Cloud service providers deliver specific services closer to the customer, to ensure that functions such as content delivery work in an optimized manner. Cloud edge is sometimes equated with content delivery networks (CDNs), but CDNs were not developed to host general workloads.

Wireless service providers deliver their services over a distributed network, with service locations relatively closer to users than cloud or data center edge computing. When the objectives of these multi-purpose locations are combined, the model is unique and delivers some key benefits. In this edge computing model, the computing resources are deployed at service access points (SAPs).

These service access points are located at the network core, and the applications that run on these edge computing servers can be accessed from various mobile endpoints over 4G or 5G network connectivity.

Other than location, if we consider latency as another parameter for evaluating mobile edge computing, there are "5Cs" of latency that influence it.

While other edge computing methods may have advantages over mobile edge, those advantages apply only to specific parameters. Taking a holistic view, mobile edge delivers the right balance. In most of the other models, the hardware component is located at the customer site.

This requires additional efforts to handle the space, cooling, power, and physical safety of the hardware component. On the other hand, mobile edge computing ensures that users can consume all the applications as services. This makes it easier for customers to access applications that have low latency, without any hardware deployment in the network.

To summarize, every edge computing model has strengths and its own share of challenges. Experts usually recommend starting with the requirements of the customer's applications and then proceeding to evaluate and select the best-fitting edge computing model.


Five Cloud Startups Going After AWS’ Blind Spots – The Information

Amazon Web Services has built a commanding lead in the cloud computing market by listening to what services and features its customers want and then delivering them. But some application developers believe AWS, in its relentless pursuit of Fortune 500 customers to fuel revenue growth, has become more aligned with corporate IT departments than with the coders who initially propelled its rise starting more than 15 years ago.

This has prompted several former AWS employees, as well as those from other cloud giants like Microsoft and Google, to launch and join startups selling software that makes AWS easier to use for individual app developers. The startups provide back-end services, such as spinning up the cloud servers and databases that power websites or automating the creation of the application programming interfaces that let apps share data. Despite representing a much larger pool of corporate spending, these businesses have received less venture capital in recent years than those developing front-end tools for designing the look and feel of applications and websites.


Could Russia plug the cloud gap with abandoned Western tech? – Blocks and Files

What happens to a country when it runs out of cloud? We might just be about to find out, as Russia has apparently realized it'll be out of compute capacity in less than three months and is planning a grab for resources left by Western companies that have exited the country after Vladimir Putin's invasion of Ukraine.

A report in Russian newspaper Kommersant says the Kremlin is preparing for a shortage of computing power, which in the coming months may lead to problems in the operation of state information systems. Initial translations of the report referred to a shortage of storage.

The Russian Ministry of Digital Transformation reportedly called in local operators earlier this month to discuss the possibility of buying up commercial capacity, scaling back gaming and streaming services, and taking control of the IT resources of companies that have announced their withdrawal from the Russian Federation.

Apparently, authorities are conducting an inventory of datacenter computing equipment that ensures the uninterrupted operation of systems critical to the authorities. The ministry told the paper it did not envisage critical shortages, but was looking at mechanisms aimed at improving efficiency.

The report cited a 20 percent rise in public-sector demand for computing services, adding that one major driver is the use of smart cities and surveillance systems. Its source apparently explains that due to the departure of foreign cloud services, which were also used by some departments, the demand for server capacity instantly increased.

Meanwhile, the report continues, Russia's datacenter operators are struggling, swept up in sanctions and economic turmoil and facing the challenge of sourcing kit while the ruble is collapsing. And they are effectively left with just one key supplier: China.

It's not like Russia was awash with datacenter and cloud capacity in the first place. According to Cloudscene, there are 170 datacenters, eight network fabrics, and 267 providers in Russia, which has a population of 144 million.

Neither AWS, Google nor Azure maintains datacenters in Russia, and while there may be some question as to what services they provide to existing customers, it seems unlikely they'll be offering signups to the Russian authorities. Alibaba Cloud apparently doesn't have any datacenters in Russia either.

By comparison, the UK, with 68 million citizens, has 458 datacenters, 27 network fabrics, and 906 service providers, while the US's 333 million citizens enjoy 2,762 datacenters, 80 network fabrics, and 2,534 providers.

It's also debatable how much raw tin is available in the territory. In the fourth quarter, external storage systems shipped in Russia totaled $211.5m, up 34.2 percent. Volumes slipped 12.3 percent from the third quarter, while in the fourth quarter 50,199 servers were delivered, up 4.1 percent, though total value was up 28.8 percent at $530.29m.

Server sales were dominated by Dell and HP. Storage sales were dominated by Huawei at 39.5 percent, with Russian vendor YADRO on 14.5 percent, and Dell on 11.2 percent by value, though YADRO dominated on capacity.

Now, presumably, Dell and HP kit will not be available. Neither will kit from Fujitsu, Apple, Nokia or Ericsson, and cloud services from AWS, Google or Azure.

Chinese brands might be an option, but they'll still want to be paid, and the ruble doesn't go very far these days. Chinese suppliers will have to weigh the prospect of doing business in Russia against the possibility of becoming persona non grata in far more lucrative markets like Europe and, perhaps more scarily, being cut off from US-controlled components. Kommersant reported that Chinese suppliers have put deliveries on hold, in part because of sanctions.

So there are plenty of reasons for Russia to eke out its cloud compute and storage capacity. According to Kommersant: "The idea was discussed at the meeting to take control of the server capacities of companies that announced their withdrawal from the Russian market."

Could this fill the gap? One datacenter analyst told us that, in terms of feasibility, two to three months is doable, as what normally holds up delivery of services is permits, government red tape, and construction. If they are taking over existing datacenter space with connectivity and everything in place, they could stand up services pretty fast.

But it really depends on the nature of the infrastructure being left behind. This is not a question of annexing Western hyperscalers' estates, given they are not operating there. Which presumably leaves corporate infrastructure as the most likely target.

Andrew Sinclair, head of product at UK service provider iomart, said co-opting dedicated capacity that's already within a managed service provider or cloud provider might be fairly straightforward.

Things would be far more complicated when it came to leveraging dedicated private cloud infrastructure that's been aligned to the companies that are exiting. "These are well-recognized Fortune 500 businesses we've seen exiting. These businesses have really competent IT leaders. They're not just going to be leaving these assets in a state where people are going to be able to pick them up and repurpose them."

"From the Russian authorities' point of view, they would be going out and taking those servers, and then more than likely reintegrating them into some of these larger cloud service providers. Even from a security perspective, a supply chain perspective, from Russia's perspective, would that be a sensible idea? I don't know," Sinclair added.

The exiting companies would presumably have focused on making sure their data was safe, he said, which would have meant eradicating all the data and zeroing all the SAN infrastructure.

Following that, there's a question about whether they just actually brick all the devices that are left, whether they do that themselves, or whether the vendors are supporting them by releasing patches to brick them.

Connecting Fiber Channel storage arrays that have been left behind to a Fiber Channel network? Reasonable. But to be able to do that in two to three months, and to be able to validate that the infrastructures are free of security exploits, all the drives have been zeroed, and it's all nice and safe? I think that's an extreme challenge.

But he added: "When you're backed into a corner, and there's not many choices available…"

Of course, it's unwise to discount raw ingenuity, or the persuasive powers the Kremlin can bring to bear. It's hard not to recall the story of how NASA spent a million dollars developing a pen that could write in space, while the Soviets opted to give their cosmonauts pencils. Except that this is largely a myth: the Fisher Space Pen was developed privately. And Russia used it too.


TYAN Drives Innovation in the Data Center with 3rd Gen AMD EPYC Processors with AMD 3D V-Cache Technology – PR Newswire

"The modern data center requires a powerful foundation to balance compute, storage, memory and IO that can efficiently manage growing volumes in the digital transformation trend," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's Server Infrastructure Business Unit. "TYAN's industry-leading server platforms powered by 3rd Gen AMD EPYC processorswith AMD 3D V-Cache technology give our customers better energy efficiency and increased performance for a current and future of highly complex workloads."

"3rd Gen AMD EPYC processors with AMD 3D V-Cache technology continue to drive a new standard for the modern data center with breakthrough performance for technical computing workloads due to 768 MB of L3 cache, enabling faster time-to-results on targeted workloads. Fully socket compatible with our 3rd Gen AMD EPYC platforms, customers can adopt these processors to transform their data center operations to achieve faster product development along with exceptional energy savings," said Ram Peddibhotla, corporate vice president, EPYC product management, AMD.

Optimized for technical computing workloads to boost performance

Leveraging the breakthrough performance of 3rd Gen AMD EPYC processors with AMD 3D V-Cache technology, the TYAN Transport HX product line is built to optimize workloads like EDA, CFD, and FEA software and solutions. The Transport HX FT65T-B8030 is a 4U pedestal server platform featuring a single processor, eight DDR4-3200 DIMM slots, eight 3.5-inch SATA and two NVMe U.2 hot-swap, tool-less drive bays. The FT65T-B8030 supports four double-wide PCIe 4.0 x16 slots for professional GPUs to accelerate HPC applications.

The Transport HX TN83-B8251 is a 2U dual-socket server platform with eight 3.5-inch hot-swap SATA or NVMe U.2 tool-less drive bays. The platform supports up to four double-wide GPU cards and two additional low-profile PCIe 4.0 x16 slots, providing an optimized topology to improve HPC and deep learning performance.

Optimized for HPC and virtualization applications, the Transport HX TS75-B8252 and Transport HX TS75A-B8252 are 2U dual-socket server platforms with support for 32 DIMM slots and two double-wide, active-cooled GPU cards. The TS75-B8252 accommodates twelve hot-swap, tool-less 3.5-inch drive bays with up to four NVMe U.2 supported; the TS75A-B8252 accommodates 26 hot-swap, tool-less 2.5-inch drive bays with up to eight NVMe U.2 devices.

High memory footprints, multi-node servers to power big data computing

TYAN's Transport CX lineup is designed for cloud and data analytics workloads that require large memory capacity and fast data processing. The Transport CX GC79-B8252 and Transport CX GC79A-B8252 are 1U dual-socket server platforms that are ideal for high-density data center deployment with a variety of memory-based computing applications. These systems feature 32 DDR4 DIMM slots, two standard PCIe Gen 4 x16 expansion slots, and one OCP 3.0 LAN mezzanine slot. The GC79-B8252 platform offers four 3.5-inch SATA drive bays and four 2.5-inch NVMe drive bays with tool-less carriers, while the GC79A-B8252 platform offers twelve 2.5-inch drive bays, all with NVMe U.2 support.

The Transport CX TN73-B8037-X4S is a 2U multi-node server platform with four front-serviced compute nodes. Each node supports one AMD EPYC 7003 Series processor with AMD 3D V-Cache technology, four 2.5-inch tool-less NVMe/SATA drive bays, eight DDR4 DIMM slots, three internal cooling fans, two standard PCIe Gen 4 x16 expansion slots, two internal NVMe M.2 slots and one OCP 2.0 LAN mezzanine slot. The platform is suited for high-density data center deployments and targets scale-out applications with large numbers of nodes.

Hybrid storage servers to drive outstanding performance

The TYAN Transport SX lineup is designed to deliver massive I/O and memory bandwidth for storage applications. The Transport SX TS65-B8253 is a 2U hybrid software storage server for various data center and enterprise deployments, featuring dual-socket CPUs, 16 DDR4 DIMM slots and seven standard PCIe 4.0 slots. The platform is equipped with up to two 10GbE and two GbE onboard network connections, twelve front 3.5-inch tool-less SATA drive bays with up to four NVMe U.2 supported, and two rear 2.5-inch tool-less SATA drive bays for boot drive deployment.

TYAN's Transport SX TS65-B8036 and Transport SX TS65A-B8036 are 2U single-socket storage servers with support for 16 DDR4 DIMM slots, five PCIe 4.0 slots and one OCP 2.0 LAN mezzanine slot. The TS65-B8036 accommodates twelve front 3.5-inch drive bays with up to four NVMe U.2 supported, and two rear 2.5-inch hot-swap, tool-less SATA drive bays for boot drive deployment; the TS65A-B8036 offers 26 front and two rear 2.5-inch hot-swap, tool-less drive bays for high-performance data streaming applications, with the 26 front drive bays supporting up to 24 NVMe U.2 devices depending on configuration.

AMD EPYC 7003 processors with AMD 3D V-Cache technology can run on TYAN's existing AMD EPYC 7003 platforms through a BIOS update. Customers can enjoy faster time-to-results on targeted workloads powered by the new AMD EPYC 7773X, 7573X, 7473X, and 7373X processors.

SOURCE MiTAC Computing - TYAN


Greg Osuri creates ripples of growth with Akash Network in cloud computing – Newsd.in

The co-founder and CEO of Akash Network, Greg Osuri, is determined to transform the future of cloud computing.

Isn't it incredible to learn about people who cross boundaries and create a unique niche for themselves in everything they choose to take on? The world is filled with success stories, but a few rare gems like Greg Osuri strive to make a prominent difference in their industries with their brands and businesses. Taking on the technological world and finding his footing in the digital financial industry with his token $AKT, Greg Osuri has over the years come a long way as an entrepreneur of influence in the ever-evolving and competitive tech world.

The kind of innovations that have happened so far in the technological and digital world can be attributed to the rigorous efforts and astute ideas of passionate people like Greg Osuri, who have given their all to bring about a wave of change in their industries. He loves to build things for people who build things. As the co-founder and CEO of Akash Network, Greg Osuri has been transforming the future of cloud computing.

Cloud computing is the delivery of computing services over the internet, which is the cloud, including storage, servers, software, networks, analytics, intelligence, and databases. All of this is for paving the path of faster innovation, economies of scale, and flexible resources.

Speaking more about Akash Network, Greg Osuri says it is infrastructure that powers Web3 and a distributed peer-to-peer marketplace for cloud compute. It offers fast and simple deployment, where people can deploy their applications in minutes without having to set up, configure or manage servers. He further explains that any cloud-native, containerized application can be deployed on Akash Network's decentralized cloud, including decentralized projects, serverless apps, and traditional cloud-native workloads.

His clients have showered him with positive testimonials and thanked him for contributing heavily to the general Cosmos community and the blockchain industry as a whole. Greg Osuri is more than what we know him for; he is also a scientist, economist, artist, and storyteller through his photography. Follow him on Twitter: https://twitter.com/gregosuri.


Why machine identities matter (and how to use them) – Help Net Security

The migration of everything to the cloud and corresponding rise of cyberattacks, ransomware, identity theft and digital fraud make clear that secure access to computer systems is essential. When we talk about secure access, we tend to think about humans getting access to applications and infrastructure resources. But the real security blind spot is the computing infrastructure, i.e., the machines themselves.

The modern digital economy relies on a massive network of data centers with reportedly 100 million servers operating worldwide. These 100 million physical servers might represent nearly a billion virtual servers, each an entry point for hackers and state-sponsored bad actors. Additionally, depending on which analyst you listen to, the number of connected devices shows no signs of slowing down the installed base for the internet of things (IoT) was reportedly around 35 billion by the end of 2021, with 127 new devices hooking up to the internet every second. That is an incredible amount of machine-to-machine communication, even more so when you factor in the 24/7 demands of the connected society.

At the same time, denial of service (DoS) attacks and most hacking attempts are also automated. Human hackers write software exploits, but they rely on large fleets of compromised computers to deploy them.

In the dangerous world of cloud computing, the machines are hacking into machines.

For these reasons alone, it is not hyperbole to say that machine identities and secure access have become a priority for IT leaders and decision makers alike. In the 18 months since machine identity management made its debut on the Gartner 2020 IAM Hype Cycle, the trust that we need to have in the machines we rely on for seamless communication and access has become a critical part of business optimization.

The fundamental reason for the increase in successful hacking attempts is that machine-to-machine access technology is not as advanced as its human-to-machine counterpart.

It is well accepted that reliance on perimeter network security, shared accounts, or static credentials such as passwords is an anti-pattern. Instead of relying on shared accounts, modern human-to-machine access is now performed using human identities via SSO. Instead of relying on a network perimeter, a zero-trust approach is preferred.

These innovations have not yet made their way into the world of machine-to-machine communication. Machines continue to rely on static credentials, an equivalent of a password called the API key. Machines often rely on perimeter security as well, with microservices connecting to databases without encryption, authentication, authorization, or audit.

There is an emerging consensus that password-based authentication and authorization for humans is woefully inadequate to secure our critical digital infrastructure.

As a result, organizations are increasingly implementing passwordless solutions for their employees that rely on integration with SSO providers and leverage popular, secure, and widely available hardware-based solutions like Apple Touch ID and Face ID for access.

However, while they both outnumber humans and have the capacity to create more widespread damage due to scale and automation, machines are still frequently using outdated security methods like passwords to gain access to critical systems.

These methods include, but are not limited to, static credentials such as passwords and API keys.

If passwords are insufficient to protect applications and infrastructure resources for humans, we need to acknowledge that they are even worse for machines. But what should we replace them with? Without fingertips or a face, Touch ID and Face ID are non-starters.

I believe the answer is short-lived, cryptographically secure certificates. Every machine and every microservice running on it must receive a certificate and use it to communicate with others.

A certificate is superior to other forms of authentication and authorization in multiple ways.

First, it contains metadata about the identity of its owner. This allows production machines to assume a different identity from the staging or testing fleet. A certificate allows for highly granular access, so the blast radius from a compromised microservice will be limited only to resources accessible to that microservice. Certificates also expire automatically, so the loss of a certificate will limit the exposure even further.

Certificates are not new. They adhere to the open standard called X.509 and are already widely used to protect you when you visit sites like this one. The little lock in the address bar of your browser is the result of a Certificate Authority confirming that the website is encrypting traffic and has a valid SSL/TLS certificate. The certificate prevents a phony website from impersonating a legitimate one. Let's Encrypt is the most popular way to generate these certificates for websites and is currently used by over 260 million websites worldwide.
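As a concrete illustration, here is a minimal sketch of minting such a short-lived machine certificate with Python's cryptography package. The service name, fleet label and one-hour lifetime are illustrative assumptions, and a real deployment would have an internal CA sign the certificate rather than self-signing.

```python
# Sketch: a short-lived X.509 certificate that carries a machine's identity
# (service name, fleet) and expires on its own. Self-signed for brevity;
# in practice an internal CA would sign it. Names and TTL are illustrative.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())  # the machine's key pair

identity = x509.Name([
    x509.NameAttribute(NameOID.COMMON_NAME, "billing-service"),          # which microservice
    x509.NameAttribute(NameOID.ORGANIZATIONAL_UNIT_NAME, "production"),  # which fleet
])

now = datetime.datetime.now(datetime.timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(identity)
    .issuer_name(identity)  # self-signed here; a CA would put its own name
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(hours=1))  # auto-expires
    .sign(key, hashes.SHA256())
)

# The identity travels inside the credential itself:
print(cert.subject.rfc4514_string())  # e.g. OU=production,CN=billing-service
```

Because the certificate expires after an hour, a leaked credential is useful to an attacker for minutes rather than months, which is exactly the property static API keys lack.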

We need to adopt certificates for all forms of machine-to-machine communications. Like Let's Encrypt, this system should be open-source so anyone can use it regardless of ability to pay. It should be trivial to request, distribute, and renew certificates that uniquely identify a machine.

If all machines have an identity, organizations can manage access to infrastructure with one passwordless system that treats people and machines the same way. This simplicity is not only more secure, since complexity is the most common cause of insecurity, but also dramatically easier to implement. For example, companies already have rules that prevent an intern from getting root access on a production server. Now they can have a rule that dictates that a CI/CD bot should not be able to log in to a production database. Both users can be authenticated with the same technique (short-lived certificates), authorized using the same catalog of roles, and audited with the same logging and monitoring solutions.
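To make that unified model concrete, here is a minimal sketch of a single role catalog that authorizes humans and machines identically, keyed on the identity carried in each certificate; the role names and actions are hypothetical, not any specific product's policy language.

```python
# Sketch: one role catalog for every principal, human or machine.
# The identity (e.g. "intern", "ci-cd-bot") would come from the subject
# of a short-lived certificate; roles and actions here are hypothetical.

ROLE_CATALOG: dict[str, set[str]] = {
    "intern":    {"staging-server:ssh"},
    "sre":       {"staging-server:ssh", "prod-server:ssh", "prod-db:login"},
    "ci-cd-bot": {"staging-server:ssh", "prod-server:deploy"},  # no prod DB
}

def is_allowed(roles: list[str], action: str) -> bool:
    """Authorize any principal the same way, whatever kind it is."""
    return any(action in ROLE_CATALOG.get(role, set()) for role in roles)

# The same check (and the same audit log entry) covers both examples
# from the paragraph above:
print(is_allowed(["intern"], "prod-server:ssh"))   # False: no prod root
print(is_allowed(["ci-cd-bot"], "prod-db:login"))  # False: bot kept out
print(is_allowed(["sre"], "prod-db:login"))        # True
```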

The joy of being a human is increasingly mediated by machines. Maybe you are singing happy birthday via Zoom to a distant relative, or opening a college savings account for a grandchild. None of this is possible without a vast fleet of servers spread across the world. We all deserve to know that the machines making up this network have an identity, and that their identity is used to explicitly authorize and audit their actions. By moving machine identity out of the shadows, the world will be a safer place.
