
Five Johns Hopkins scientists named Sloan Research Fellows – The Hub at Johns Hopkins

By Aleyna Rentz

Five Johns Hopkins faculty members have been named 2024 Sloan Research Fellows, a prestigious award celebrating rising stars in academia. In all, 126 early-career scholars were recognized this year.

Awarded annually since 1955 by the Alfred P. Sloan Foundation, the fellowship honors exceptional U.S. and Canadian researchers whose creativity, innovation, and research accomplishments make them stand out as the next generation of leaders. Open to scholars in seven fields (chemistry, computer science, Earth system science, economics, mathematics, neuroscience, and physics), the Sloan Research Fellowships are awarded in close coordination with the scientific community. To date, fellows have gone on to win 57 Nobel Prizes and 71 National Medals of Science.

Image caption: The 2024 Sloan Research Fellows from Johns Hopkins are (clockwise from top left) Stephen Fried, Benjamin Grimmer, Justus Kebschull, Jonathan Lynch, and Yahui Zhang

Candidates must be nominated by their fellow scientists and winners are selected by independent panels of senior scholars based on a candidate's research accomplishments, creativity, and potential to become a leader in their field. More than 1,000 researchers are nominated each year. Winners receive a two-year, $75,000 fellowship which can be used flexibly to advance the fellow's research.

Including this year's winners, 87 faculty from Johns Hopkins University have received a Sloan Research Fellowship.

The five newest Sloan recipients from Johns Hopkins University are:

Assistant professor, departments of Chemistry and Biophysics

Artificial intelligence is surprisingly good at folding proteins into their correct 3-D structures, and yet proteins themselves are surprisingly not good at this task: oftentimes they are prone to "misfold" or stick together, which causes them to lose their functions and make cells sick. These processes are at the root of most neurodegenerative diseases such as Alzheimer's and may also be a factor in aging more broadly. Stephen Fried's research pioneered the use of mass spectrometry proteomics to interrogate protein folding on the scale of entire proteomes. These studies have provided an array of insights on questions as diverse as the molecular basis of aging, the origins of life, and the function of disorder in the yeast proteome.

Assistant professor, Department of Applied Mathematics and Statistics

Benjamin Grimmer has recently become fascinated with computer-aided optimization of the algorithms used to solve big real-world problems. A new wave of results in his field (optimization) has made computers provably good at this. Many of our now strongest algorithmic guarantees have only been made possible thanks to computer-assistance. Grimmer's research also recently had breakthrough results, covered by Quanta Magazine, showing that a new computer-aided analysis approach can beat the well-established textbook theory for gradient descent.

Assistant professor, Department of Biomedical Engineering

Justus Kebschull's research aims to understand the structure and function of the brain. To do so, he takes a comparative approach and engineers molecular, viral, and sequencing technologies to measure neuronal connectivity networks and gene expression at scale in disease models and a wide range of vertebrates. He developed the first barcode sequencing-based approaches to map neuronal connectivity, increasing throughput of single-neuron mapping by orders of magnitude and opening the door to single-cell comparative connectomics. He complements these barcoding approaches by in situ sequencing of barcodes and genes. Leveraging these technologies, his team asks questions including: How do new brain regions and connections evolve to support new computations? What are the organizing principles and fundamental circuit motifs of the vertebrate brain? And how do drugs of abuse and neurodevelopmental disorders break these principles? His work is highly interdisciplinary, residing at the interface of molecular engineering, neuroscience, synthetic and evolutionary biology, genomics, virology, and computational biology.

Assistant professor, Biochemistry, Cellular, and Molecular Biology Graduate Program

Animals, including humans, have stable relationships with communities of microorganisms collectively referred to as the microbiota. These communities profoundly influence the biology of their hosts, impacting host features such as immune function, metabolism, and even so-called "higher" traits such as cognition and social behavior. Due to the wide range of microbiota-associated effects on host biology, understanding host-microbe relationships is not only important for understanding the normal physiology of the host, but may also allow us to use the microbiota to intentionally shape host health. Jonathan Lynch focuses on several areas of host-microbe symbiosis, ranging from the fundamental features that govern these relationships to the translational prospects of using the microbiota to improve human health. This includes roles of intestinal bacteria in shaping host lipid and cholesterol metabolism, interactions between the microbiota and neurotransmitters, and the biophysical drivers of microbial colonization. He employs diverse techniques from molecular biology, biochemistry, and a variety of -omics platforms to explore our interactions with our microbial partners.

Assistant professor, Department of Physics & Astronomy

Yahui Zhang works on theoretical condensed matter physics, which studies quantum materials with novel emergent properties due to the collective motion of many electrons. His current focus is on two directions: exploring new platforms for high-temperature superconductors, for example in bilayer nickelate materials and in multilayer optical lattices; and engineering exotic fractional phases of matter in moiré superlattices formed by twisting two sheets of two-dimensional materials such as graphene.

View post:

Five Johns Hopkins scientists named Sloan Research Fellows - The Hub at Johns Hopkins


Jonathan Mayer Becomes Inaugural AI, S&T Chief of DOJ; Merrick Garland Quoted – Executive Gov

The Department of Justice has appointed its first-ever chief science and technology advisor and chief artificial intelligence officer, Jonathan Mayer.

He comes from Princeton University, where he is an assistant professor at the institution's Department of Computer Science and School of Public and International Affairs, the agency said Thursday.

"Jonathan's expertise will be invaluable in ensuring that the entire Justice Department, including our law enforcement components, litigating components, grantmaking entities, and U.S. Attorneys' Offices, is prepared for both the challenges and opportunities that new technologies present," said U.S. Attorney General Merrick Garland.

In his new role, Mayer will work under the Office of Legal Policy, leading technical and policy experts in developing the department's understanding of technology in various fields, such as cybersecurity and AI.

As chief AI officer, he will oversee the Emerging Technology Board, the DOJ's new division for governing novel innovations across the agency, and oversee collaborative work on AI and related issues.

Interested in learning more about how the U.S. government is approaching AI? The Potomac Officers Club's 5th Annual AI Summit on March 21 will bring together public and private sector leaders to discuss the most pressing challenges and exciting opportunities in federal AI adoption.

Read this article:

Jonathan Mayer Becomes Inaugural AI, S&T Chief of DOJ; Merrick Garland Quoted - Executive Gov


An MIT professor folds together origami and computer – The Boston Globe

Age: 42

Making a living: Demaine is a professor in computer science at MIT. "All the art that I make is related to the mathematics and computer science that I do," he said.

Originally from: Halifax, Nova Scotia

Lives in: Cambridge, with his girlfriend and his father.

Studio: MIT Computer Science & Artificial Intelligence Laboratory

How he started: Demaine got his PhD at 20, when computational origami was emerging. MIT hired him soon after. "We entered origami like, 'Here's some cool geometry problems to solve that no one had solved before,'" he said.


"My dad remembered a problem that he read about in Scientific American called the fold in one cut problem that no one had thought about in this new discipline," Demaine said. The question was: What shapes can you make by folding, one straight cut, and unfolding?

At 22, he was awarded a MacArthur Fellowship.

What they make: Curved crease origami. "When you're just folding along straight lines, it's pretty clear what happens," Demaine said. "When you fold along curved lines it's not even obvious what's happening geometrically."

After about 20 years of studying folded curves, they've made some progress.

"We still don't have good algorithmic design tools," he said. "All the design we're doing is by hand."

How they work: They fasten watercolor paper to a wood slab. Using a ruler rigged like a compass and outfitted with a burnisher, they score curves in the paper.

"We do half the creases on one side, half the creases on the other, so they end up being alternating mountain and valley," Demaine said. "Then we very carefully fold along all the lines."

Advice for artists: Cross disciplines and collaborate.

"We get stuck on a science problem and that inspires a new sculpture, or we get stuck trying to build a sculpture," said Demaine, "and that leads to new science."

As for collaborating, the Demaines rely on an improv principle known as "Yes, and": never meet a suggestion with a no; instead, build on it.

"It leads to more positive conversations," Demaine said, "which makes you more likely to want to keep working together."

Erik and Martin Demaine have origami on view at Mobilia Gallery, 348 Huron Ave., Cambridge, through March 16. 617-876-2109, https://mailchi.mp/mobilia-gallery/erikmartindemaine2024?e=36deb4f9e2

Cate McQuaid can be reached at catemcquaid@gmail.com. Follow her on Instagram @cate.mcquaid.

Go here to read the rest:

An MIT professor folds together origami and computer - The Boston Globe


SPONSORED POST: Bitcoin: A Spark of Decentralization And Individual Freedom In The Digital Age – SaharaReporters.com

In the ever-evolving landscape of technology and finance, Bitcoin stands as a beacon of a new paradigm: a decentralized, digital currency challenging the status quo of traditional financial systems. Its emergence has sparked fervent debate, captivating supporters with its promises of individual empowerment and freedom, while raising concerns from established institutions wary of its disruptive potential. This essay delves into the core of Bitcoin's essence, exploring its role in promoting decentralization and individual freedom, and highlighting the complexities and nuances surrounding its impact.

The very essence of Bitcoin lies in its decentralized nature. Unlike traditional currencies controlled by central banks and governments, Bitcoin operates on a peer-to-peer network, where transactions are validated and recorded on a public ledger known as the blockchain. This eliminates the need for intermediaries, empowering individuals to hold and manage their own finances without relying on traditional gatekeepers. This decentralization translates to several benefits:

Censorship resistance: Transactions are virtually impossible to block or manipulate, safeguarding individuals from financial censorship exercised by governments or institutions.

Transparency: All transactions are publicly viewable on the blockchain, promoting accountability and reducing the risk of fraud.

Resilience: No single entity controls the network, making it resistant to failures or attacks on any specific node.

These features empower individuals, fostering a sense of financial freedom previously unimaginable. Users gain control over their wealth, free from the limitations and restrictions imposed by traditional financial systems. They can send and receive funds globally without intermediaries, transact privately or pseudonymously, and store their wealth independently of any central authority.

However, Bitcoin's impact is not without its complexities. While decentralization brings benefits, it also presents challenges:

Volatility: Bitcoin's price fluctuates significantly, making it a risky investment and potentially unsuitable for everyday transactions.

Scalability: The current network faces limitations in transaction processing speed, hindering its widespread adoption for mass payments.

Environmental impact: Mining Bitcoin requires significant energy consumption, raising concerns about its sustainability.

Furthermore, the decentralized nature poses regulatory headaches for governments accustomed to controlling traditional financial systems. Concerns surrounding illegal activities and the potential for financial instability have led to varying degrees of regulation and scrutiny aimed at Bitcoin and the broader cryptocurrency space.

Despite these challenges, Bitcoin's influence on the evolution of finance and technology is undeniable. It has ignited a global conversation about decentralization, individual freedom, and the future of money. As the technology matures and solutions to its shortcomings emerge, Bitcoin's potential to reshape the financial landscape and empower individuals remains significant.

See more here:

SPONSORED POST: Bitcoin: A Spark of Decentralization And Individual Freedom In The Digital Age - SaharaReporters.com


Why Microsoft is spending billions on AI and cloud computing in Europe – ITPro

Microsoft has announced plans to quadruple its spending on cloud computing and AI infrastructure in Spain, part of a wave of investments the tech giant has been making across Europe in recent months.

The software giant said it is going to invest $2.1bn in AI and cloud infrastructure in Spain by 2025. It's Microsoft's largest investment in Spain in its 37 years of operating there.

As part of the move, Microsoft will open a data center cloud region located in Madrid, and plans to open another data center in Aragon that will serve European companies and public bodies. Across these two sites, Microsoft said it will provide its entire range of AI offerings to Spanish and European companies.

These investments can make quite a difference even outside of the technology sector. The data centers in Spain could add €8.4bn to Spanish GDP and contribute to the creation of 69,000 jobs in the period up to 2030, according to figures from tech analyst IDC.

Microsoft said it will work with the Spanish government on responsible AI and support industry and government cyber security, for example, by working with the Spanish National Cybersecurity Institute and offering access to telemetry and information on potential threats and cyber attacks.

It's certainly not the first big cloud and AI investment Microsoft has showcased recently.

Earlier this month, it unveiled plans to invest €3.2bn in Germany, which again will be its largest investment in the country in its 40-year history of operating there.

The tech giant plans to expand its data centers for cloud and AI applications and train more than 1.2 million people in digital skills by the end of 2025. Over the next two years, the company will expand its cloud region in Frankfurt am Main and its newly planned infrastructure in North Rhine-Westphalia.

Microsoft said German companies and startups using its AI service include Siemens, which has developed the Siemens Industrial Copilot with the Azure OpenAI Service, and Commerzbank which is developing an AI banking avatar using Microsoft Azure AI.

But that's not all.

In November 2023, Microsoft announced it will more than double its data center footprint in the UK, with plans to spend £2.5 billion over the next three years and bring 20,000 of the most advanced GPUs to the UK by 2026.

As part of the spending, again the single largest investment in its 40-year history in the country, Microsoft said it will grow its data center footprint across sites in London and Wales, with potential expansion into northern England.

At the time, Microsoft said it would also be extending its Accelerating Foundation Models Research (AFMR) program to include prioritized access to GPUs for the UK's science and research community, and train one million people with the skills they need to build and work with AI, including the first Professional Certificate on Generative AI.

So why is Microsoft investing so much, so fast? Part of the answer is the sheer demand the tech giant is seeing from European customers.

While cloud computing is not growing at the same incredible rates compared to previous years, for most organizations the direction of travel is still only towards the cloud.

Worldwide spending on cloud computing is expected to top $1.3tn by 2027 according to predictions from analyst IDC. Microsoft's Azure accounts for about a quarter of cloud infrastructure spending, according to Canalys, and that means building a lot of data centers to keep up with demand, especially if the AI boom continues.

Where those data centers are matters, too, and is likely to get more important.

Data sovereignty is an increasingly big issue for all sorts of organizations. That means making sure their data is being kept in the right physical place to make sure that organizations are meeting data protection obligations and ensuring that data is protected from the prying eyes of other governments.

According to tech analyst Gartner, by 2028, half of multinationals will have a digital sovereignty strategy, up from just 10% now. That means having data centers wherever the customers are, because they will increasingly want to keep their data close, too.

See the article here:
Why Microsoft is spending billions on AI and cloud computing in Europe - ITPro


Use cloud threat intelligence to protect critical data and assets – TechTarget

Many organizations now store more sensitive data and assets in the cloud than on premises -- and attackers have taken notice. Organizations need to know the threats attackers in the cloud pose. One way to keep abreast of potential attacks is using cloud threat intelligence.

Threat intelligence involves the collection, classification and exploitation of knowledge about adversaries. Teams collect security intelligence data from a variety of sources, including logs, security controls and third-party threat intelligence feeds, and then analyze that data to mitigate risks.
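
To picture that collect-classify-analyze loop, here is a minimal TypeScript sketch that matches indicators from a hypothetical threat feed against log events; the field names and severity scheme are illustrative assumptions, not any particular product's format.

```typescript
// Minimal sketch: correlate indicators from a hypothetical threat feed with log events.
interface Indicator {
  value: string;                       // e.g. an IP address or file hash
  type: "ip" | "hash";
  severity: "low" | "medium" | "high";
  source: string;                      // which feed reported it
}

interface LogEvent {
  timestamp: string;
  sourceIp: string;
  message: string;
}

// Classification step: index indicators by value for fast lookup.
function indexIndicators(indicators: Indicator[]): Map<string, Indicator> {
  const index = new Map<string, Indicator>();
  for (const indicator of indicators) {
    index.set(indicator.value, indicator);
  }
  return index;
}

// Exploitation step: flag log events whose source IP matches a known indicator.
function matchEvents(events: LogEvent[], index: Map<string, Indicator>) {
  return events
    .filter((event) => index.has(event.sourceIp))
    .map((event) => ({ event, indicator: index.get(event.sourceIp)! }));
}

// Example usage with made-up data.
const feed: Indicator[] = [
  { value: "203.0.113.7", type: "ip", severity: "high", source: "example-feed" },
];
const logs: LogEvent[] = [
  { timestamp: "2024-02-20T10:00:00Z", sourceIp: "203.0.113.7", message: "login attempt" },
];
console.log(matchEvents(logs, indexIndicators(feed)));
```

In practice this correlation would be handled by a SIEM or threat intelligence platform; the sketch only illustrates the shape of the workflow.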

As the cloud becomes more ubiquitous, it must become an integral part of the threat intelligence process. Security engineering and operations teams should dedicate time and resources to the development, collection and implementation of cloud-specific threat intelligence.

Organizations can collect cloud-specific threat intelligence from several external sources, including cloud service providers (CSPs), threat intelligence providers and managed security service providers.

Security teams need to develop both strategic and operational threat intelligence. Strategic threat intelligence involves executives and nontechnical stakeholders shaping risk management decisions.

Examples of strategic cloud threat intelligence include the following:

Operational threat intelligence is more tactical in nature. It helps inform security operations center (SOC), threat hunting, DevOps and other technical teams.

Examples of operational threat intelligence include the following:

To effectively implement cloud threat intelligence, organizations need the proper team and technologies.

A cloud-focused threat intelligence team should, depending on an organization's size and capabilities, include the following primary participants:

Secondary participants might include internal risk management teams and executive leadership. Third-party analysts can also provide threat intelligence and cloud security insights.

To facilitate building a base of consistent and usable cloud threat intelligence, organizations should implement and monitor the following technologies:

Security teams should define use cases and develop integration playbooks that make collected data actionable. This helps make informed risk decisions and enables more accurate and targeted threat hunting and response investigations. Building a dashboard of risk changes detected and monitored over time can also help distill cloud threat intelligence into metrics and KPIs for executives.

Dave Shackleford is founder and principal consultant with Voodoo Security; SANS analyst, instructor and course author; and GIAC technical director.

Read the original here:
Use cloud threat intelligence to protect critical data and assets - TechTarget


Securing Kubernetes in a Cloud Native World – The New Stack

Kubernetes has revolutionized the way cloud native applications are deployed and managed, but how can you mitigate those weak links in cloud environments?

Simply put, cloud native means building, deploying and managing your applications in cloud computing environments. Applications that are born to live in the cloud tend to be resilient, portable, easily scalable to meet the ups and downs of demand, and easy to update as needs change. Indeed, being cloud native means apps can be changed and updated quickly and frequently, with no impact on service delivery. Apps can be developed and optimized quickly, and then undergo continuous improvement based on user feedback all at the speed of business.

As the adoption of cloud native applications increases, Kubernetes has emerged as the go-to container orchestrator for many organizations. It automates the deployment, scaling and management of containerized applications, making it an essential part of modern DevOps environments. However, as powerful and prevalent as Kubernetes is, ensuring its security is a non-trivial task. With built-in security features and a growing market of third-party tools, creating a secure Kubernetes deployment requires careful planning, diligent implementation and ongoing management.

Securing your Kubernetes deployments requires a holistic and integrated approach from the earliest stages in the development process. Begin by hardening your infrastructure and host operating system to minimize potential attack vectors. Container images should always be vetted and secure before they are deployed.

Kubernetes includes an array of native security features, including role-based access control (RBAC), network policies and secrets management. RBAC is a fundamental tool that allows administrators to define roles and bind them to users or groups of users, allowing granular control over who can access and modify resources within the cluster. Network policies offer another layer of protection, providing control over how pods communicate with each other and other network endpoints. Secrets management helps in securely storing and managing sensitive information like passwords, tokens and API keys, and allows secrets to be stored and managed centrally within Kubernetes.
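
To make those controls concrete, here is a hedged sketch of a namespace-scoped, read-only RBAC Role and its binding, written as TypeScript objects that mirror the Kubernetes manifest fields; the names and namespace are placeholders, and in practice you would apply the equivalent YAML or JSON with kubectl.

```typescript
// Sketch of a read-only Role plus RoleBinding, expressed as plain objects that
// mirror the Kubernetes manifest structure (placeholder names and namespace).
const readOnlyRole = {
  apiVersion: "rbac.authorization.k8s.io/v1",
  kind: "Role",
  metadata: { name: "pod-reader", namespace: "demo" },
  rules: [
    {
      apiGroups: [""],                 // "" means the core API group
      resources: ["pods", "pods/log"],
      verbs: ["get", "list", "watch"], // read-only verbs, no create/delete
    },
  ],
};

const roleBinding = {
  apiVersion: "rbac.authorization.k8s.io/v1",
  kind: "RoleBinding",
  metadata: { name: "pod-reader-binding", namespace: "demo" },
  subjects: [{ kind: "User", name: "jane", apiGroup: "rbac.authorization.k8s.io" }],
  roleRef: { kind: "Role", name: "pod-reader", apiGroup: "rbac.authorization.k8s.io" },
};

// Kubernetes accepts JSON as well as YAML, so these objects can be serialized
// and applied, for example with `kubectl apply -f role.json`.
console.log(JSON.stringify([readOnlyRole, roleBinding], null, 2));
```

Network policies and secrets follow the same pattern: declarative objects that restrict pod-to-pod traffic or hold sensitive values, applied and versioned like any other manifest.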

Regular and continuous scanning of container images for vulnerabilities is critical to preemptive threat management. To maintain the integrity of containerized applications, signing and verification processes before deployment are also essential.

As the methods of malicious actors evolve, real-time threat detection systems can act as the last line of defense. These systems let you continuously monitor your Kubernetes environment to instantly identify and respond to threats, ensuring that your containerized landscape stays secure.

Successfully navigating Kubernetes security isn't just about setting up your security program correctly; it's an ongoing commitment. The path is riddled with challenges, such as properly configuring Kubernetes, securing container images, managing secrets and ensuring runtime monitoring. Perhaps the most demanding aspect is the need for continuous visibility over the full life cycle of Kubernetes deployments to detect misconfigurations and vulnerabilities promptly.

To achieve this, runtime container security requires agentless scanning across the full stack, including the container, cloud and workloads. Image scanning of running containers and container image registries is vital in this process.

Ensuring long-term security for Kubernetes deployments underlies the need for robust strategies. Regular updates, correct configuration, vulnerability scanning and strict adherence to best security practices are the cornerstones of a secure Kubernetes environment. Likewise, understanding and monitoring industry and regulatory rules is vital for Kubernetes security, ensuring compliance and avoiding data privacy issues.

Changing security regulatory standards make it vital for organizations to keep their Kubernetes deployments compliant. This eliminates various risks including security vulnerabilities, noncompliance penalties and system inefficiencies.

Despite its importance, maintaining compliance is not without challenges. First, the dynamic nature of Kubernetes deployments makes it difficult to track and manage all resources effectively. Second, a lack of visibility into configurations can result in noncompliant setups. Third, manual compliance checks are tedious, error-prone and don't scale well with the increase in Kubernetes clusters.

To meet these challenges head-on, there are several strategies. Automating compliance checks saves time and reduces errors, while introducing uniform policy enforcement across all deployments ensures better control and traceability.

Integrating compliance into the CI/CD pipeline allows for early detection of noncompliance issues, and thus easier remediation. Using these strategies ensures compliance and helps optimize the overall performance of your deployments.
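
As a rough illustration of automating such checks inside a pipeline, the TypeScript sketch below scans parsed workload manifests for two common noncompliant settings; the rules are examples only, and a real setup would more likely rely on a dedicated policy engine.

```typescript
// Toy compliance check: flag workloads that run privileged containers or omit
// resource limits. Manifests are assumed to be already parsed (e.g. from YAML).
interface Container {
  name: string;
  securityContext?: { privileged?: boolean; runAsNonRoot?: boolean };
  resources?: { limits?: Record<string, string> };
}

interface Workload {
  metadata: { name: string };
  spec: { template: { spec: { containers: Container[] } } };
}

function checkWorkload(workload: Workload): string[] {
  const findings: string[] = [];
  for (const container of workload.spec.template.spec.containers) {
    if (container.securityContext?.privileged) {
      findings.push(`${workload.metadata.name}/${container.name}: privileged container`);
    }
    if (!container.resources?.limits) {
      findings.push(`${workload.metadata.name}/${container.name}: missing resource limits`);
    }
  }
  return findings;
}

// In CI, a non-empty findings list would fail the pipeline step before deployment.
const findings = checkWorkload({
  metadata: { name: "payments-api" },
  spec: { template: { spec: { containers: [{ name: "app" }] } } },
});
if (findings.length > 0) {
  throw new Error(`Compliance check failed:\n${findings.join("\n")}`);
}
```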

Your organization must watch over your containerized applications, which are vulnerable to all kinds of exploits and threats. Identity and access management are your responsibility, along with all the various configurations, encryption, network traffic protection, segmentation and other details. Adopting industry-grade security best practices can significantly enhance your Kubernetes security profile. The following 10 best practices should guide your Kubernetes security program:

Kubernetes security is a complex but manageable challenge. Organizations can navigate the cloud native world securely by starting with a strong foundation, correctly implementing isolation and multitenancy, securing containers throughout their life cycle and fostering a culture of security.

Continuous monitoring and using the right tools further ensure that the Kubernetes environment remains resilient against evolving threats. As cloud native technologies continue to advance, staying informed and adaptable is key to maintaining a secure Kubernetes ecosystem.

To learn more about Kubernetes and the cloud native ecosystem, join us at KubeCon + CloudNativeCon Europe, in Paris, on March 19-22.


Read more:
Securing Kubernetes in a Cloud Native World - The New Stack


Why your GenAI investments must start with optimizing cloud strategy – TechRadar

Generative AI has gained widespread consumer interest and acceptance over the past year and implementing this technology is now a critical consideration within many sectors. In fact, economists estimate almost $200 billion could be invested globally into generative AI technology by 2025.

Across the global financial sector, despite overall caution being observed for generative AI in the short term, it clearly has the potential to accelerate banking models and redefine how clients and financial institutions interact. As financial services firms are treading cautiously in exploring its application, many need to address a platform engineering problem before they can experiment and evaluate the full potential of this disruptive technology.

Cloud is the engine that can power generative AI ambitions for any organization. As the financial services industry begins to deploy this technology across automation, coding, customer service and a variety of other business cases, the adoption of this transformative technology necessitates extensive data volumes, substantial computing power, sophisticated security measures, and swift scalability. All proven advantages of cloud.


Global Head of Cloud for Financial Services at Capgemini.

Financial services organizations are making significant cloud investments to support their deployment of future technologies like generative AI at scale. Recent research reveals 91% of banks and insurance companies have now initiated their cloud journey. That's up significantly from 2020, when only 37 percent of firms had started the process.

However, cloud investments to date have tended to focus on customer-facing applications such as self-service portals, client onboarding tools, and transaction-based operations, including payments. More than 50 percent of financial services executives said their organization had not yet migrated core business applications such as risk management and compliance systems, data management, and core banking and insurance systems. Since many of these back-end systems provide critical inputs into customer-facing applications, the outcome is often a poor overall user experience.

Similarly, 80 percent cite delivery of a superior customer experience as key towards overcoming current business challenges and driving growth. Firms will need to reimagine their data, processing, and consumption strategies. Enterprises will need to modernize their platforms beyond just moving data to the cloud as-is and instead rebuild them to handle small and big data, structured and unstructured data, and finally batch and stream data. Without good data, the application of AI findings is restricted since it serves as the foundation for many cutting-edge technologies. With the current framework in place for many organizations, generative AI may magnify the problem because, to be truly effective, it must have insight into all company operations. And today, the best way to achieve that is to make all operations cloud native.

Every firm's cloud journey is unique, but there are common elements that must be addressed to be successful.

To start, companies must clearly identify the outcomes they want to achieve. This means creating a well-defined business strategy for cloud, and a phased roadmap that identifies priorities for implementation. From personal experience, I can tell you that every technology firm working in cloud can provide many examples of enterprises that have struggled in their journey, where the problem could be traced back to a vision statement for cloud that did not focus on business value.

Organizations must also have the right skills, including cloud and cloud security experts, to oversee the creation and operation of the platform, the migration of data to it, and the deployment of cloud-powered solutions. Otherwise, having a cloud platform will be like having a Formula 1 race car but no driver or, worse, an unskilled driver with a learner's permit. If a firm does not have the required talent in-house, it must seek professionals who understand both cloud and the needs of the sector.

Importantly, when attempting to migrate to the cloud, many financial services firms face two technical roadblocks. Most organizational processes are not re-engineered for cloud, and many core systems continue to rely on legacy technologies in discrete silos. To address this, firms should engage with a strategic partner that has established links with the world's leading cloud solutions providers and has created cloud platforms customized for their sector. Ideally, these will be composable platforms that use pre-built modular components to easily integrate front-end and back-end functions. A modular approach provides enterprises with the flexibility to develop a roadmap that prioritizes functions for migration based on their business strategy. It also enables the cloud platform to evolve, helping to future-proof the investment.

The introduction of any new technology will always be disruptive, whether it's generative AI, quantum computing or something nobody has yet envisioned. What we know today is that legacy architectures will need to undergo a reboot to be relevant in the generative AI era. Therefore, it makes sense to remain focused on value creation, and let that objective determine how the enterprise will leverage new technology.

At the same time, it's important to recognise that becoming cloud native is essential to gain competitive advantages through generative AI and other innovations. As one industry executive shared: "Without cloud, there is no future for financial services firms."


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Read more:
Why your GenAI investments must start with optimizing cloud strategy - TechRadar


Beyond Micro Frontends: Effective Composable Decoupled Applications on Cloud Native Infrastructure – InfoQ.com

Transcript

Venditto: I want to tell you a story that is all about decoupling a frontend application. I hope that when I share my story and my learnings, by the end of this talk you'll have a new perspective on building robust and scalable applications that can meet the demands of modern cloud environments, but also of the modern frontend ecosystem, and that you can do it from a frontend perspective. You'll see soon what I mean. In 2018, I was working as a frontend technical lead for a very large platform that had undergone many transformations over time, and that had been maintained and developed by multiple vendors. As you probably know, when a large development project is managed by different vendors over time, it's not uncommon for technical debt to pile up, usually because the original ideas are abandoned or misunderstood. That may end up transforming that project that was once a well-architected monolith into a bloated and inefficient mess.

I don't know if you've been there too, and you know what I'm talking about. Certainly, most projects degrade over time. When they're handled by different teams, they do even more so. In this particular case, at some point, the effort required to analyze and fix bugs became increasingly unfeasible, and the cost of developing new features exponentially outweighed any potential benefits. Our team concluded that it was time to migrate. We had, at that time, two potential directions to explore, starting from scratch, or decoupling and modernizing the existing platform we were dealing with. If you look at the stats on screen of the platform I was working on, you probably already realized that the only feasible approach was to start splitting things apart, rebuilding them little by little, and recomposing. This story may resonate with you and it's not unique to frontend development at all. Teams worldwide are constantly lifting and shifting, or adopting microarchitectures to decouple services gradually. What may be novel about this talk is that I'll propose a different approach than the one you're used to.

Prior to the time of the event I'm telling you about, as a frontend tech lead transitioning to software architect myself, I had very limited knowledge of monolith decoupling. My understanding was that the only approach was to begin with the backend or services first, with frontend experts often brought in later to review specifications, to take a look at the designs, and align them with business requirements, but never early enough. The frontend was treated as a second-class citizen. Even when we succeeded in splitting monoliths in the backend and adopting a microservices approach, the frontend always remained a significant monolith. It's important to clarify that if a project has a strong modular strategy, and it also has a solid component library or design system that serves as a frontend monolith, or in a monolith fashion, that isn't necessarily a bad thing. In many cases, this approach is as effective as, or even more effective than, a micro-frontend strategy. However, in that particular case I'm referring to back in 2018, when we were decoupling this very tightly coupled content management system that was a very large deployment as well, it made more sense to split the new requirements into a decoupled frontend application. The requirement I'm talking about was a new blog capability. As we analyzed our platform to find the best way to add new features, it became increasingly clear that we had some user experiences that were relatively static in the frontend, while others were much more dynamic. You see, we had a homepage, we had the e-commerce, we had the user page, the landing page, all of those had different characteristics. Additionally, some of those experiences were updated more frequently from an authoring perspective, while others were being constantly authored. This realization was a critical turning point in our approach to monolith decoupling.

We started thinking about how users would interact with the page of a blog, and what their needs were, rather than just how the backend and the frontend were connected. By doing this, we were able to identify the specific functionality and data that the blog page required. We could build it using the appropriate tech stack and architecture in a decoupled way. We didn't need to be constrained by the existing architecture or technology choices made for the rest of the platform. This user-centric approach allowed us to create a tailored solution, which ultimately led to better user experience, not only for the blog, but every other capability as we were decoupling them from the monolith. In the end, we ended up deciding that the architecture would look like this, like what you see on screen. Of course, this is a very simplified, high-level overview. In general terms, it was our end goal. In closer inspection, what you see here is we have a container or a large container, where we deploy our main CMS application and additional containers where we're going to be deploying microservices as we split them apart. Of course, the orchestration tool here is not represented for the sake of simplicity. We do see we have another service there to the right, in blue, where we are deploying the blog capability, that then hits a function that then is routed to the data storage through an API gateway.

Now you're wondering, how do we accomplish this for real? This is just a high-level overview. How do we get this done and materialized? How do we think about it as frontend developers? Is it possible to participate of the architecture design and push for a user-centric approach to decoupling? It is if the end goal for everyone in the team and for every architect in the team is to build user experiences, and not just applications. By keeping a user-centric focus and considering cloud native opportunities, we can design fully distributed user experiences that solve business problems individually and incrementally in a way that allows an independent team to explore their options and make their own decisions and map them to a particular use case. That allows us to go from architectural models that are built to support the backend logic, or that are application centric, in essence, to using an API-first approach that made the API surface the center and the heart of the system, in a way that pumps life and data from end-to-end, and makes every component on each end of the stack pluggable and replaceable. By taking this API-first approach back in the day, each frontend for us became just another client of our API that you see there in red. It didn't matter what technology we were building each frontend with in the end, as long as the API contract was well-defined, any type of client can consume the data and the functionality provided by the API.
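
To picture what "just another client of the API" means in practice, here is a minimal TypeScript sketch of such a client; the /api/posts endpoint and the BlogPost shape are hypothetical stand-ins for whatever the agreed contract actually defines.

```typescript
// Minimal sketch of a frontend treating the shared API as its only integration point.
// The endpoint and response shape are hypothetical; the real contract comes from
// the agreed API specification.
interface BlogPost {
  id: string;
  title: string;
  publishedAt: string;
}

async function fetchPosts(baseUrl: string): Promise<BlogPost[]> {
  const response = await fetch(`${baseUrl}/api/posts`);
  if (!response.ok) {
    throw new Error(`API request failed with status ${response.status}`);
  }
  return (await response.json()) as BlogPost[];
}

// Any client (static site generator, SPA, server-rendered page) can call this the
// same way, because only the contract matters, not the frontend technology.
fetchPosts("https://example.com").then((posts) => console.log(posts.length));
```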

In the scenario I'm describing to you, we started with the addition of this new blog capability, and ended up with a new definition that allowed us to build and evolve each frontend independently from their own without having to worry about the impact on other components in the system. By using cloud native technologies, additionally, we were able to scale that API horizontally and make it highly available, ensuring that it could handle any amount of traffic and load. You see with this approach, the API surface becomes a central connection point between the frontend and the backend. It allows for flexibility because you can continue to plug more frontends as you go, and agility in development in the end. The development process becomes a lot more agile. Overall, adopting this user-centric approach to frontend development and to API-first architecture helped us build more effective and scalable solutions, all in composition. You can see that here we are having a different architecture approach for the homepage that was part of that main CMS, than we do for the blog page that in the end became a static site generator that was hitting against, again, this serverless function in origin and requesting data at build time to be rendered. Then we decided later on that we may use a hybrid approach for e-commerce, and a server-side rendered approach for the user page. We could integrate more innovations like edge computing, and requesting data on the fly.
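
As a hedged sketch of the serverless side of that setup, the handler below returns blog data that a static site generator could request at build time; the request and response types are simplified, generic stand-ins rather than the exact function signature of any specific cloud provider.

```typescript
// Generic sketch of an HTTP-triggered function serving blog data at build time.
// Types are simplified stand-ins for whatever FaaS platform and data store are used.
interface HttpRequest {
  query: Record<string, string | undefined>;
}

interface HttpResponse {
  status: number;
  headers: Record<string, string>;
  body: string;
}

// In the real system this would query the data storage behind the API gateway.
async function loadPostsFromStore(limit: number) {
  return [{ id: "1", title: "Hello blog", publishedAt: "2018-06-01" }].slice(0, limit);
}

export async function getPosts(req: HttpRequest): Promise<HttpResponse> {
  const limit = Number(req.query.limit ?? "10");
  const posts = await loadPostsFromStore(limit);
  return {
    status: 200,
    headers: {
      "content-type": "application/json",
      "cache-control": "public, max-age=300", // cheap to cache, since it is read at build time
    },
    body: JSON.stringify(posts),
  };
}
```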

My name is Natalia Venditto. I'm a Principal Program Manager at Microsoft. I'm leading the end-to-end experience for developer tools and services in Azure, for JavaScript and Node.js developers.

There's also one more thing I want to delve into, and it's why we decouple composable systems. Let's pause for a moment and ponder why we do it. We always decouple to satisfy some organizational performance strategy, or to better organize the capacity of our teams with respect to the business units and their needs. At the end, we may be satisfying non-functional requirements as well and end up with a better technology stack. That's never the main reason why we decouple. Typically, one of the most important requirements that is about organization, and that ends up impacting in a positive way each decoupled part, is the ability to release and deploy application components, services, or modules independently of one another. That obviously has many advantages, like faster time to market, reduced coordination overhead. With independent deployability, teams can release new features and functionality more quickly, and scale also individually. Services don't have to wait for another service to be integrated into a main branch, for example. These results are giving more agile teams faster response time to customer needs, and high-quality applications that are developed and deployed more efficiently. Let's now ask and answer some essential questions so we can proceed with design and execution. We need to ask ourselves, of course, what are micro-frontends? Secondly, we need to identify our user base. Thirdly, we must understand API-first from a user point of view or user experience point of view. Last but not least, we need to become acquainted with the cloud services mechanisms and infrastructure that are relevant to our work as frontend engineers.

Through my talk, I will avoid as much as possible the term micro-frontend, because we're speaking about rich decoupled applications in the cloud. Then I'm going to refer to them as composable decoupled frontends. What are those? They are pluggable and exchangeable frontend applications that have connectors and hatches to share state and integrate via a dedicated vertical integration surface as part of a much larger deployment system, typically in the cloud. How are they built? Micro-frontends or composable frontends can be integrated into the system using two types of splits, horizontal, which is multiple applications in one page load, or loading and being bootstrapped in one page load, or vertical, which is one application that usually maps to a URL or a page load. Let's think about the horizontal split, orchestrated with a tool or a technology like single SPA or any other technique, like for example, module federation that is useful at runtime, and integrating multiple frameworks in one page or view, like we mentioned. Or we can be using also a single framework and be leveraging island architecture, a technique we'll discuss, or we can be using web components and mix them with other frameworks using iframes, for example. They can be as tiny as a button connected to a serverless function, and triggering an HTTP request to fetch and display data, or it can be a whole catalog, a shopping cart. The layout of a page can be managed as a micro-frontend or decoupled frontend. It can be a fully-fledged application capability that is integrated in a view together with other applications. As you have noticed, they can be micro or they cannot be micro. This is why the term micro-frontend doesn't resonate with me too much.
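
For instance, the "button connected to a serverless function" case could be a small web component like the hedged sketch below; the element name and endpoint are made up, and the point is only that a fragment this small can be dropped into any host page regardless of framework.

```typescript
// Sketch of a tiny composable frontend: a custom element that fetches and displays
// data when clicked. The element name and endpoint are illustrative only.
class LatestPostButton extends HTMLElement {
  connectedCallback() {
    const button = document.createElement("button");
    button.textContent = "Show latest post";
    button.addEventListener("click", async () => {
      const response = await fetch(this.getAttribute("endpoint") ?? "/api/posts/latest");
      const post = await response.json();
      this.querySelector("span")?.remove(); // replace any previously shown title
      const label = document.createElement("span");
      label.textContent = post.title;
      this.append(label);
    });
    this.append(button);
  }
}

// Any host page can now use <latest-post-button endpoint="..."></latest-post-button>.
customElements.define("latest-post-button", LatestPostButton);
```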

On the other hand, a vertical split can also have combined elements, and they can radically differ in size and implementation. What we can demonstrate here is that that vertical pattern, where the micro-frontend loads entirely in one URL, or page load, or view, and the horizontal split, where we have multiple micro-applications orchestrated in one view, or route, or page, are not mutually exclusive, and they can be combined. And that a micro-frontend does not necessarily map to a microservice one-to-one, or have a single concern. For example, our vertical split that represents, in this case, let's imagine, a search capability, would have querying and filtering and data representation features, and that same capability would integrate multiple services, like a catalog and an authentication service, or aggregate data sources. This is why the term micro-frontend feels less correct than composable frontends, because if you think about it, we are composing multiple applications that may vary in size.

I know that many frontend developers are not familiar with techniques or methodologies to make architecture or technical decisions or definitions. We have data that says frontend engineers may come from many different backgrounds, including computer science, but compared with other engineers they are the most likely to come from a different background. They may not have the baseline to be able to produce a decision matrix for composability. I will propose a methodology that we can use to influence our teams and the architects in our teams, to architect for the user or for the user experience, and to build frontend applications in composability. I will offer you this framework, based on three focuses: design, development, and delivery, to ask and answer a series of high-level questions that, when answered, will guide you through making decisions to build frontend applications as cloud native components. When you have a decision matrix, you will be ready to participate in architectural discussions with solid argumentation in your favor.

For that, we will establish three categories as we mentioned before, which will be revealed when we color all the slots. We will start with the purple slots, and for that we will take the question "in which way?" as our starting point. We already answered this question before: we want to take an API-first approach. What does an API-first approach mean when we are architecting for the user? There are some proponents who prioritize designing the API specification before the user experience and frontend designs are completed. However, that reasoning may conflict with a user-experience-focused type of implementation, which prioritizes starting with an aspirational visualization of data and the desired state in the frontend, and then later specifying and executing APIs with the user in mind. In other words, when we design for the user, we always begin with a clear understanding of how we want the frontend to look and function. Then we walk backwards to create the API to support it. If we've always been focusing on frontend development, writing an API specification and implementation may be challenging, and one tool that can help with designing APIs in this way is the OpenAPI spec, a specification for building APIs that is machine readable and human readable, and can be used to generate documentation, code, and other artifacts that are going to be useful later in the cloud environment. By using OpenAPI, you can more easily map the frontend functionality to specifications, as well as validate and test them to ensure they are working correctly. There are other important aspects of designing a good API, for sure. As a frontend developer, you may be familiar with querying different types of APIs, RESTful or GraphQL. The API is the heart of this type of system, and we need to really focus on choosing our pattern wisely.
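
A hedged example of walking backwards from the UI: if the blog listing only needs a title, an excerpt, and a publish date, that can be captured in a minimal OpenAPI document like the one below, written here as a TypeScript object for readability; the path and schema names are illustrative.

```typescript
// Minimal OpenAPI 3.0 document expressed as a TypeScript object, derived from what
// the blog listing UI needs to render (illustrative path and schema names).
const blogApiSpec = {
  openapi: "3.0.3",
  info: { title: "Blog API", version: "1.0.0" },
  paths: {
    "/posts": {
      get: {
        summary: "List posts for the blog landing page",
        responses: {
          "200": {
            description: "A page of posts",
            content: {
              "application/json": {
                schema: {
                  type: "array",
                  items: { $ref: "#/components/schemas/PostSummary" },
                },
              },
            },
          },
        },
      },
    },
  },
  components: {
    schemas: {
      PostSummary: {
        type: "object",
        required: ["id", "title", "excerpt", "publishedAt"],
        properties: {
          id: { type: "string" },
          title: { type: "string" },
          excerpt: { type: "string" }, // exactly what the card in the UI shows
          publishedAt: { type: "string", format: "date-time" },
        },
      },
    },
  },
};

export default blogApiSpec;
```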

Building great APIs that allow the system to connect the user interface with data storage also requires some knowledge of who we are building the user interface for. This is absolutely not a trivial thing. "For whom?" may be the most important question to answer. It's not actually only for whom; it's rather for whom, that is where, and browses how, and for what reason are they browsing? When we design and make decisions for user interfaces, we deal with the unknown. We are not only designing for people with different abilities and needs, we are designing for people who use a diversity of devices by preference or because that's what they can afford, with very different capacities, and even screen sizes and resolutions, or who may not even have a screen, because it's assistive technology that is talking to the user. Not only that: when we design for enterprise, platforms tend to be really large and solve a variety of problems in a centralized way, like we saw before. The user that comes to our platform (let's imagine it's a bank) to read the blog that we were describing earlier, this blog capability, or to, for example, learn about stock prices or find a contact number, does not have the same needs, expectations, and urges as the one who visits to do online banking. There are unknown variables, like mentioned, but we also have a lot of data: it's no longer a surprise that most internet users browse from their phones and not from their desktops, and global sales of smartphones, although in decline for the last couple of years, have surprisingly grown by a flipping 1000x from 2007 to 2021.

What may come as a surprise to many frontend developers is that only 28% of the global population has an iPhone. It doesn't even mean that is the latest iPhone. Most developers I know and even myself that are working from EMEA and the UK or the U.S., they develop and test on a brand-new iPhone. The computers we use for development in these regions tend to be high-end, and the connection speed, the best one can have. It's important to remember that the synthetic testing results we attain when testing our development on these super-fast machines and connected to 5G, or fiber, are not what the average end users globally will get. The average low-end phone that most users will be browsing at a given time, is very likely to be a $200 Android device. Most of those are not even 5G compatible. Since we are speaking about 5G and internet speed, we have to remember also that this user base is also probably scattered around the globe, with different internet access rates and constraints. Sometimes ourselves and even our customers are convinced that their customers or their user base is only in Europe or the States or in places where 5G and fast connection speeds are available. Cloud providers are moving workloads and execution to globally distributed content networks with points of presence around the globe. That makes any application accessible everywhere. We need to be prepared to consider that there are remote users, and those remote users can be an opportunity for expansion and new market possibilities for our customers and ourselves.

Now we have the UX and UI settled: we have discussed the specification and the user experience, and we know the for whom, so we can better design user experiences. On top of that user experience and user interface design, because we want to deliver a great application experience, we can already start making technical stack decisions. Those technical stack decisions help us decide what we're going to be building each decoupled application with for our users. What are the most effective patterns and implementations that will help us deliver with performance in mind? Remember that everything goes back to delivering a good, or a great, user experience. That's the domain of the frameworks. Also, it's the domain of the web platform. This is where we need to know what latest advancements our ecosystem is working on to help us build better applications, while we respect the constraints of the users we may have at a device or connectivity level. Why? Because, again, performance matters and those numbers matter. They matter because a lot of the most reactive frontend experiences of today are probably experiencing a bounce rate increase of between 32% and 90%. That's a lot of money lost for our enterprise customers. Interestingly enough, Google tells us that the slowest industry to load pages is the financial industry. If you're working in the financial sector, you know that to roll out worldwide you need to know your user base, and all their constraints, as we proposed before.

A lot of the time, the largest negative impact on runtime performance is the amount and size of assets we request during page load. When we have a user-centric approach in mind, and we use it to provide a solution per use case, we can avoid shipping code and assets over the wire that are not meaningful or relevant to that specific use case. Like we said before, maybe the banking experience needs far fewer static resources or assets than we ship with a blog or landing page. Knowing our users and industry performance benchmarks will also help us make the best technical stack decisions. For that we need to know what's in store. We need to know what the modern frameworks are, because those decisions need to be linked to a strategy that satisfies the performance budget, like we mentioned before, and caters to all users. The frontend ecosystem is very dynamic in nature and is constantly working to improve. Sometimes it's working to solve problems we have introduced ourselves. That's another reality. Do you remember this slide where we explained the horizontal split and how it could be multi- or single-framework? At the same time as we decide how we're going to design a particular frontend solution for a specific use case with a user-centric composable approach, we will have to decide on a render strategy, and potentially a reactivity pattern. We will also have to deal with making decisions that are not very easy to make. I think that the most challenging aspect of composing decoupled applications is dealing with state management and routing. Although, obviously, data fetching mechanisms are very close behind.

What we want to do is use HTML-first and zero-JavaScript frameworks, when possible. We want to leverage the platform APIs and reduce third-party code and dependencies. We want to defer or async-load all render-blocking JavaScript, and in particular keep the critical rendering path lean. We want to define and respect performance budgets, and obviously follow JavaScript best practices, like named imports, so we can optimize code at build time, or compilation time. What are those new generations of frameworks that I was referring to earlier? These frameworks come equipped with mechanisms to leverage modern rendering patterns and architectures like the island architecture, promoted by many of them, based on the concept of partial hydration. Hydrating, so we have an overview, is a mechanism to bootstrap JavaScript into a completely static HTML render. What we do is we render the HTML, we load it, and then we bootstrap the JavaScript. With that, also, we inject the state and the dynamic functionality. With partial or progressive hydration, we only bootstrap JavaScript to some areas that become highly dynamic and are hydrated on the client side at runtime. Frameworks like 11ty, Astro, or Fresh, working on top of island architectures, propose for every one of those islands, every one of those tiny regions, to be hydrated independently instead of depending on a shell that controls that mechanism. Additionally, most of them serialize state before sending it to the browser, so everything becomes leaner, and there is a lot less execution on the client side.
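
To make partial hydration more tangible, here is a framework-agnostic sketch: the page ships as static HTML, and only elements marked as islands get their JavaScript loaded and bootstrapped, and only once they scroll into view. Real island frameworks generate this wiring for you; the data attributes and the hydrate convention below are invented for illustration.

```typescript
// Framework-agnostic sketch of island-style partial hydration: static HTML is
// rendered by the server, and only marked regions are hydrated, lazily, on the client.
// The data-island / data-module attributes are invented for this example.
const islands = document.querySelectorAll<HTMLElement>("[data-island]");

const observer = new IntersectionObserver(async (entries) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const element = entry.target as HTMLElement;
    observer.unobserve(element);

    // Load the island's code only when it is actually needed.
    const mod = await import(element.dataset.module!);
    // Assumed convention: each island module exports a hydrate(element) function.
    mod.hydrate(element);
  }
});

islands.forEach((element) => observer.observe(element));
```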

Another very interesting framework that proposes an even more advanced concept is Qwik. Just like the other options discussed, this framework tries to remove as much JavaScript execution from the client side as possible. While a hydration pattern renders everything server side and then bootstraps JavaScript onto the dynamic regions to inject the state, which may include a potential visual glitch, resumability picks up where the server left off. Meaning, you execute as much JavaScript as possible on the server side, then serialize everything and ship it to the frontend, and resume the execution where the server dropped it because it needed information that is only available on the client side. The absolute next stage is pure HTML-first frameworks that need no compilation step. Developers, in this case, when using an HTML-first approach, will be writing and shipping the same HTML to the user. Enhance.dev, for example, uses a functional web approach: most JavaScript computing and execution happens in a cloud function and not in the browser. You have probably noticed here how the connection between the modern frameworks and the cloud is established. Server-side rendering, so things happening on the server. Execution of JavaScript happening in a function. We are moving our intensive operations away from the client and to the backend, in most cases with the cloud in mind.
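
The following is a conceptual sketch of resumability, not Qwik's actual API: the server serializes state into the HTML, and the client registers one global listener and resumes execution only when the user interacts, instead of re-running component code up front. The handler names and serialization format are made up for illustration.

```typescript
// Conceptual sketch of resumability. The server has already shipped markup
// plus serialized state, for example:
//   <button data-handler="addToCart">Buy</button>
//   <script type="app/state">{"cart":[]}</script>

const handlers: Record<string, (state: { cart: string[] }) => void> = {
  addToCart: (state) => state.cart.push("sku-123"), // hypothetical handler
};

document.addEventListener("click", (event) => {
  const el = (event.target as HTMLElement).closest<HTMLElement>("[data-handler]");
  if (!el) return;
  // Deserialize the state the server shipped and resume where it left off,
  // instead of hydrating every component at load time.
  const state = JSON.parse(
    document.querySelector('script[type="app/state"]')?.textContent ?? "{}"
  );
  handlers[el.dataset.handler!]?.(state);
});
```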

Before we also move to the cloud, let's see what other questions we can immediately answer. The final questions in the pink slot are "where in?" and "to what?" If we have already made a technical stack choice, we can probably confidently define the required setup, our code structure, and dependency management, since they will be strictly linked to our tech stack decision. When we decide on our framework, we can also decide how we're going to lay out our code, how we'll be working with it, what our IDE will be, and what our developer toolset will be. We can also start responding to fundamental questions about our integrations, answering the "to what". To what are we connecting? What services do we need to talk to? Where are we fetching our data from, with those amazing APIs we designed? Probably, how will authors create the content? How will we analyze and observe our system to guarantee it is always healthy and performing according to all those benchmarks we established as best practices? This may also be a good time to expand our testing strategy from unit testing to integration and end-to-end testing.
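
As an example of that expansion toward end-to-end testing, a single Playwright test could exercise the composed page the way a user would; the URL, heading, and button labels below are hypothetical.

```typescript
// Minimal end-to-end check of a composed view: the static shell should render
// immediately and the hydrated island should respond to interaction.
import { test, expect } from "@playwright/test";

test("checkout island loads and responds", async ({ page }) => {
  await page.goto("https://staging.example.com/checkout"); // placeholder URL

  // The server-rendered shell is visible without waiting for hydration.
  await expect(page.getByRole("heading", { name: "Checkout" })).toBeVisible();

  // The interactive island works once its JavaScript has been bootstrapped.
  await page.getByRole("button", { name: "Place order" }).click();
  await expect(page.getByText("Order confirmed")).toBeVisible();
});
```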

This may also be a great time to think about orchestration and optimization with dependency management, tree-shaking, dead code elimination, bundling, and compression, everything to ship better and faster code and orchestrate better. When it comes to code optimization for bundles that we are going to be loading in composition, most bundlers can only perform static analysis at build time. That makes it impossible to optimize the code of bundles that are independently and remotely deployed at runtime. There is a Webpack plugin called Module Federation, which proposes a mechanism based on the concept of a host runtime and a remote container, roles that can be interchangeable depending on which runtime you load first, plus a shareScope that allows those different runtimes to share dependencies and perform that analysis at runtime. Aspirationally, this mechanism is very interesting, but it may not be possible to fully leverage it if we don't have very strong governance, because if we cannot align on what dependencies we're going to be using, for example our framework question, we may end up with version skew. Failing in isolation may also not be possible if we are composing horizontally, in the form of a horizontal split, because when we have multiple applications being loaded in the same view, if one of them fails it may completely impact the whole experience. If we are architecting for the user, these types of mechanisms need very strong governance and some clear definitions to be successful.
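
A minimal sketch of what that host-side setup can look like with webpack 5's ModuleFederationPlugin; the remote name, URL, and shared versions are placeholders, and entry/output settings are omitted.

```typescript
// webpack.config.ts for the host ("shell") runtime, sketching Module Federation.
import webpack, { Configuration } from "webpack";

const config: Configuration = {
  plugins: [
    new webpack.container.ModuleFederationPlugin({
      name: "shell", // this build acts as the host runtime
      remotes: {
        // The remote container is fetched at runtime from wherever the
        // owning team deployed it (placeholder URL).
        checkout: "checkout@https://cdn.example.com/checkout/remoteEntry.js",
      },
      shared: {
        // The shareScope: both runtimes agree on a single copy of these
        // dependencies, which is exactly where governance on versions matters.
        react: { singleton: true, requiredVersion: "^18.0.0" },
        "react-dom": { singleton: true, requiredVersion: "^18.0.0" },
      },
    }),
  ],
};

export default config;
```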

We're now in the final stages of questions for our methodology for building frontends with composition. The blue slots map to operations. We made a lot of decisions that helped us design, specify, select the technical stack, and define integrations. Where are we going to deploy our composable application to? That still needs to be answered. This is where cloud knowledge comes in handy. By learning about cloud infrastructure, services, and models, we as frontend developers gain a deeper understanding of how our code fits into the larger picture, into the larger system, and how it interacts with other components. This knowledge also enables us to design and architect applications for the user experience that are more scalable, like we mentioned earlier, more flexible, and more cost efficient, and that we can publish and continuously integrate to with relative ease. To determine the appropriate service to deploy our decoupled frontend, we must consider our containerization and container orchestration requirements. However, even though containers are a topic that is not very familiar to frontend developers, this does not imply that decoupled JavaScript micro-frontend applications can only be deployed to containers, or that you have to know about things like Docker, or that you now need to know how to orchestrate with Kubernetes or anything like that. In certain instances, they can even be deployed to object or blob storage. However, that is not feasible for frameworks that feature server-side rendering and rehydration, like we were explaining before, because those require a backend runtime. It's fine for static applications, but when we are leveraging a server-side rendered component or strategy, we require a backend runtime. If a team building a decoupled frontend does not want to develop and maintain a containerized backend runtime such as, for example, a Fastify or Express server for Node.js, they can also use cloud-native options as the execution context.
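
To illustrate why server-side rendering rules out plain object storage, here is a tiny Express server in TypeScript (a Fastify version would look very similar); renderPage is a hypothetical render entry point from whatever framework is in use.

```typescript
// Minimal sketch of why SSR needs a backend runtime: HTML is produced per
// request instead of being served as a prebuilt file from a bucket.
import express from "express";
import { renderPage } from "./ssr"; // hypothetical framework render entry

const app = express();

app.get("*", async (req, res) => {
  // Per-request work (auth, personalization, data fetching) is exactly what
  // a static bucket cannot do, hence the need for a runtime.
  const html = await renderPage(req.url);
  res.status(200).type("html").send(html);
});

app.listen(3000, () => console.log("SSR server listening on :3000"));
```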

However, I would like to emphasize the crucial point that serverless infrastructure is entirely managed and has the ability to scale down to zero. Serverless infrastructure eliminates the need to provision and maintain servers, which is especially appreciated by frontend developers with no infra skills, and by those who want to concentrate on writing code while also scaling down to zero when there is no need for compute. At a minimum, in order to better design applications using meta-frameworks with server-side rendering or hybrid server-side rendering plus static site generation capabilities, and to implement an effective hydration strategy, we as frontend developers should understand the benefits of serverless functions executed at the origin versus those executed at the edge of the network. What their pros and cons are, and how they differ, are important aspects to understand, especially in order to compose at the edge of the network. Alternate runtimes, such as Wasm (WebAssembly) and WASI (WebAssembly System Interface) shims, also provide the ability to execute code and integrate more closely with the user. The web platform APIs and cloud event, messaging, and streaming services enable data streaming, which in turn facilitates the creation of highly dynamic composition, from browser to cloud and cloud to browser. It's worth remembering, though, that fast compute only makes sense when the transfer protocols and infrastructure are just as fast. As we already discussed, most of the phones our users are browsing with are not built for speed, or are not connected to fast enough networks. If we think about the global deployment of 5G, it has only 25% coverage worldwide, and it will only reach the same coverage as 4G by 2027. Design with the user in mind, and with the elements they actually have access to, to facilitate the delivery of these very fast applications.
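
The same kind of handler expressed as an origin serverless function, sketched for AWS Lambda behind API Gateway; edge runtimes use different signatures, but the provision-nothing, scale-to-zero idea is the same, and the response content here is purely illustrative.

```typescript
// Origin serverless function: no server to provision, billed per invocation,
// and it scales to zero between requests.
import type { APIGatewayProxyHandlerV2 } from "aws-lambda";

export const handler: APIGatewayProxyHandlerV2 = async (event) => {
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    headers: { "content-type": "text/html" },
    body: `<h1>Hello, ${name}</h1>`,
  };
};
```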

We know that data matters; data is everything for our applications, especially as we continue to move in the direction of huge amounts of data collection, to an estimated 200 zettabytes in 2025. We need to learn to work with data in ways that don't completely deteriorate runtime performance. That can only be done by choosing the right database model to match each use case we're building independently and composing, because every database serves one use case or several. A database may fit all of your use cases across a system and be used by different teams. You can also consider having multiple databases and perhaps connect them in the system using an event-driven pattern: you can deploy different databases that are connected to a composable frontend, then basically dump all the data to a sink and consume it in an event-driven fashion. Event-driven architectures benefit from data streaming and event models in the cloud, and also in the browser, so we can use event grids or hubs to produce and consume messages across a whole system. We can use publication and subscription buses, natively in the cloud and in the browser, like, for example, the postMessage API in the browser. We can also consolidate endpoints in the cloud with an API gateway, and use those gateways as a proxy to validate tokens from browser to cloud, for example. Again, knowing the cloud-native infrastructure and what it has to offer is essential to building better composable frontend applications.
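
In the browser, that publish/subscribe idea can be as small as the platform's BroadcastChannel API (window.postMessage works similarly across frames); the channel name and event shape below are hypothetical.

```typescript
// Two decoupled micro-frontends on the same origin exchanging events through
// the platform, with no shared framework or custom event bus library.
type CartEvent = { type: "item-added"; sku: string };

// Consumer side, e.g. the mini-cart micro-frontend in the same view.
const consumer = new BroadcastChannel("cart-events");
consumer.onmessage = (event: MessageEvent<CartEvent>) => {
  if (event.data.type === "item-added") {
    console.log(`Cart now contains ${event.data.sku}`);
  }
};

// Producer side, e.g. the product-list micro-frontend.
const producer = new BroadcastChannel("cart-events");
const added: CartEvent = { type: "item-added", sku: "sku-123" };
producer.postMessage(added);
```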

At this point, we have almost all the definitions we need to be successful with decoupled frontends in the cloud. We have made decisions from developer setup to cloud services. The only missing piece is how we are going to go from code to cloud. We have most of it down, and in this case, because of the low-level nature of provisioning and deployment mechanisms, we will need to decide how we want to make those provisioning and deployment decisions repeatable. When dealing with larger and more intricate system distribution and composition, it is advantageous to learn a single declarative and repeatable approach to configuring, provisioning, and deploying the required services and artifacts to operate, secure, monitor, cache, and distribute applications. For small to mid-size applications, the provider may handle that for us entirely, but as the application grows, it is essential to have a comprehensive understanding of the configuration, provisioning, and even deployment process. We may never do this ourselves as frontend developers, but there are tools that can significantly help us get started if we want to.
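
One such declarative, repeatable approach, sketched here with the AWS CDK in TypeScript (Pulumi or Terraform can express the equivalent): a bucket for the static build output and a function for the server-rendered parts. All names and paths are placeholders.

```typescript
// Infrastructure described as code, so provisioning and deployment are
// repeatable instead of being a series of manual console steps.
import { App, Stack, aws_s3 as s3, aws_lambda as lambda } from "aws-cdk-lib";

const app = new App();
const stack = new Stack(app, "ComposableFrontendStack");

// Static output of the build (the islands' HTML/CSS/JS) goes here.
new s3.Bucket(stack, "StaticAssets", { versioned: true });

// The server-rendered compute lives in a function instead of a server.
new lambda.Function(stack, "SsrHandler", {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: "index.handler",
  code: lambda.Code.fromAsset("dist/server"), // placeholder path
});

app.synth();
```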

After all this work, with all the colored slots filled in, we should have a completed matrix of tools and technologies. That matrix maps to all stages of our development cycle. Like we explained before, the focus is on design, development, and deployment or delivery, which in turn makes possible the architecture, development, and operations, or DevOps, strategy for agile teams, and helps us go from idea to application to publication in the cloud, especially for highly agile teams working with composable architectures, also in the frontend. If you want to know more about decoupled frontends and the architect-for-the-user-experience approach on cloud native infrastructure, visit my site, https://www.microfrontend.dev.


See the original post here:
Beyond Micro Frontends: Effective Composable Decoupled Applications on Cloud Native Infrastructure - InfoQ.com


How Simplifying Our Architecture Saved Us Thousands Monthly – The New Stack

This is the second of two parts. Read Part 1: Beyond Terraform: How We Scaled DevOps.

My team is not alone in facing challenges in the quest for efficient DevOps. The complexity of cloud development and deployment is like a chain, anchoring many of us down and making it tough for our teams to soar to new and innovative heights.

At Drop Bio Health, an at-home digital health company, we grappled with the unwieldy beast that was Terraform and ultimately decided to revamp our DevOps approach and deployment pipeline using the Nitric framework (described in my previous article). This single action resulted in significant gains in developer productivity and our ability to deliver business value.

As part of this process, we revisited our project architecture and achieved valuable improvements to our deployment capabilities and cost efficiency. (Spoiler alert: We saw a 60% reduction in AWS hosting costs.) Here's our journey from our original architecture to a simplified approach that will serve our team even better as we scale.

The original architecture for our WellBeing test was designed for scalability and agility. We used a number of well-known, best-in-class tools.

There was nothing technically wrong with this architecture; in fact, these were configurations we had seen used in other modern cloud infrastructures and heavily documented in how-to guides for achieving scalability, redundancy and flexibility.

The challenge was that manually deploying this required our developers to be extremely familiar with each resource. For instance, the configuration of an Elastic Load Balancer (ELB) involves managing security groups, setting up listener rules, conducting health checks through target groups and integrating SSL certificates. When incorporating Auth0 into the mix, the configuration also required specialized routing and additional middleware for token validation, ensuring that authentication and authorization processes were seamlessly integrated.

Virtual private clouds (VPCs) required correctly setting up Classless Inter-Domain Routing (CIDR) blocks, sidestepping any overlaps and judiciously designating public and private subnets. This required ensuring the security groups were structured correctly to allow only necessary ports and manage ingress and egress rules. In cases where there was a need for communication with other VPCs, managing peering relationships was crucial.

For a company with a limited operations team, the cognitive load grew to become a burden on our efforts to work on our core services.

We needed to make it easier for our team to manage infrastructure and our deployment process. Though we had access to a plethora of tools, we wanted to find a way to achieve our constant goals of scalability, redundancy and flexibility with only the necessary pieces. Could we use fewer tools to minimize maintenance, lower cognitive load and still have confidence in our architecture's future-proofing?

As we adopted Nitric for infrastructure provisioning, we not only streamlined our infrastructure, but did so using tools that have been vetted by industry experts. Our new architecture centers our operations around the robust capabilities of AWS API Gateway and Lambda functions.

AWS API Gateway has become a cornerstone of this architecture because of several distinctive features:

API Gateway serves as a single, consolidated entry point for all incoming traffic. This eliminates the need for managing multiple endpoints or load balancers like ELB, as the API Gateway can distribute incoming requests to multiple backend services, including Lambda functions, efficiently.

Paired with serverless solutions like Lambda, API Gateway can automatically scale based on the incoming traffic. This dynamism allows it to accommodate sudden spikes or lulls in demand without any manual intervention, rendering traditional scaling mechanisms provided by virtual private cloud (VPC) and network address translation (NAT) gateways unnecessary.

API Gateway inherently offers robust security features like AWS Web Application Firewall (WAF) integration, throttling and built-in authentication and authorization mechanisms. This negates the VPC's requirement for added security layers, as one can directly integrate custom authorizers or platforms like Cognito for user authentication (a sketch of a custom authorizer follows below).

With API Gateway, the need to manage intricate networking setups using VPCs or ensure outbound traffic with NAT gateways is significantly reduced. The gateway handles the complex networking tasks, allowing developers to focus on building and deploying applications.
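
As referenced above, a custom token authorizer is one way the authentication middleware that used to sit behind the load balancer collapses into a single function. This is only a sketch: verifyJwt is a hypothetical stand-in for real Auth0 or Cognito token validation.

```typescript
// Custom (token) authorizer for API Gateway: allow or deny the request
// before it ever reaches the backing Lambda function.
import type {
  APIGatewayTokenAuthorizerEvent,
  APIGatewayAuthorizerResult,
} from "aws-lambda";

export const handler = async (
  event: APIGatewayTokenAuthorizerEvent
): Promise<APIGatewayAuthorizerResult> => {
  const allowed = await verifyJwt(event.authorizationToken); // hypothetical
  return {
    principalId: allowed ? "user" : "anonymous",
    policyDocument: {
      Version: "2012-10-17",
      Statement: [
        {
          Action: "execute-api:Invoke",
          Effect: allowed ? "Allow" : "Deny",
          Resource: event.methodArn,
        },
      ],
    },
  };
};

// Placeholder for real token validation against the identity provider.
async function verifyJwt(token: string): Promise<boolean> {
  return token.startsWith("Bearer ");
}
```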

In essence, by centering our architecture around API Gateway and Lambda functions through Nitric, we've not only achieved a more streamlined infrastructure but also capitalized on the intrinsic efficiencies and features these services offer, eliminating the need for components like ELB, VPC and NAT gateways.
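
A sketch in the spirit of the Nitric Node SDK examples, showing how a route declared in code maps to API Gateway and a Lambda function at deploy time; the API name and route are made up, and exact SDK signatures may differ between versions.

```typescript
// Routes declared in application code; on AWS, Nitric provisions an
// API Gateway API and deploys the handler as a Lambda function.
import { api } from "@nitric/sdk";

const wellbeingApi = api("wellbeing"); // hypothetical API name

wellbeingApi.get("/results/:id", async (ctx) => {
  const { id } = ctx.req.params; // each handler runs as a function
  ctx.res.body = `Result ${id}`;
  return ctx;
});
```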

Since API Gateway and Lambda functions follow a pay-as-you-go model, there are cost savings compared to maintaining always-on resources like ELB or NAT gateways. Plus, the ability of API Gateway to scale down to zero means no incurred costs when there's no traffic. In our case, this had a significant impact on our monthly AWS hosting costs: We've cut them by 60% as a result of this architectural shift.

Our transition from manual cloud deployment to a simplified and automated architecture has streamlined our operations significantly. We've replaced complex components like ELB, VPC and NAT gateways with API Gateway and Lambda functions, resulting in a leaner stack with far less to configure, monitor and maintain.

Notably, this transition has led to substantial cost savings due to the pay-as-you-go model and the ability of API Gateway to scale down to zero during low-traffic periods. This shift allows us to allocate resources more efficiently, focusing on innovation and customer solutions.

All in all, simplifying our architecture was a big win for our team. If you're interested in reading more about our Nitric implementation, check out the case study and take a look at its technology.


Go here to read the rest:
How Simplifying Our Architecture Saved Us Thousands Monthly - The New Stack
