
Ethereum Co-Founder Vitalik Buterin Decries State of Crypto Regulations as ‘Anarcho-Tyranny’ – Cryptonews

Last updated: June 30, 2024, 05:00 EDT | 2 min read

Ethereum co-founder Vitalik Buterin has voiced his frustration regarding the current state of cryptocurrency regulations and proposed a potential solution to address the issue.

Buterin's remarks came in response to a user on Warpcast, a social media platform built on the Farcaster protocol, where he highlighted the challenges faced by cryptocurrency developers due to existing regulatory efforts.

Buterin pointed out a perplexing phenomenon in crypto regulation, particularly in the United States, where projects that offer vague references to potential returns or engage in activities devoid of utility can operate without repercussions.

However, when developers attempt to provide clear explanations of returns or promise specific rights to their customers, they are often penalized because their projects get categorized as securities.

Buterin referred to this situation as "anarcho-tyranny," emphasizing that this gradient of incentives is more detrimental to the industry than anarchy or tyranny alone.

The prevalence of bad actors, scammers, and baseless hypesters on social media and sharing platforms contributes to the anarchic side of the industry.

To combat this, Buterin previously proposed three recommendations aimed at addressing the problem of useless cryptocurrency products and services.

These suggestions include limiting leverage, implementing audits and transparency measures, and introducing knowledge tests to regulate usage.

The practical implementation of cryptocurrency knowledge tests at a regulatory level or in individual and corporate settings remains uncertain.

However, it is likely that policy measures will be necessary to impose limitations on leverage within cryptocurrency projects and establish requirements for auditing and transparent reporting.

The sentiment within the cryptocurrency community is that the United States has a disproportionately high number of cryptocurrency users yet lacks a clear and consistent approach to regulation.

Buterin said he prefers a regulatory environment that provides greater protections to companies and projects with long-term visions and plans.

He believes that issuing a token without a clear narrative explaining its long-term economic value should carry more risk.

Buterin also acknowledged that achieving a regulatory framework that benefits the cryptocurrency industry will require sincere engagement from both regulators and industry participants.

The collaboration between these two parties is crucial to fostering an environment that encourages innovation while safeguarding investors and users.

More recently, Buterin also voiced his concerns regarding overly complicated Layer 2 scaling solutions.

At the time, the Ethereum mastermind highlighted the potential risks associated with complex Layer 2 networks and urged for a more balanced approach in the development of blockchain ecosystems.

In the blockchain community, there is a prevailing belief that Layer 1 networks should prioritize simplicity to minimize the risk of critical bugs and attack vectors.

Consequently, the responsibility for handling more complex features falls upon Layer 2 networks, which are designed to provide scaling solutions.

These networks bundle transactions executed on a separate network and submit them in batches for validation on Layer 1, enhancing throughput and reducing transaction fees.


Harsh Criticism from Ethereum Inventor: Anarcho-Tyranny! – Kriptokoin.com

Ethereum co-founder Vitalik Buterin harshly criticized crypto regulations in the US, calling the situation "anarcho-tyranny." Buterin said that this contradictory approach hinders real innovation and argued that it allows less credible projects to flourish. Elsewhere, Buterin explained his proposal to improve Ethereum's transaction confirmation times.

So, what does anarcho-tyranny mean? Simply put, it describes a situation where the government does not enforce laws against harmful activities but imposes excessive rules on honest, law-abiding citizens. This creates a confusing and flawed regulatory environment. Vitalik Buterin says that the current regulatory system allows projects with vague promises to operate freely. On the other hand, those that offer clear and transparent information face strict regulations and are often classified as securities. Buterin argues that this inconsistency discourages real innovation in the industry.

Vitalik Buterin's call for change comes at a time of intensified regulatory activity by the SEC under Chairman Gary Gensler. As you have been following on Kriptokoin.com, the SEC has filed several lawsuits seeking to classify various cryptocurrencies as securities. Most recently, the SEC targeted ConsenSys, accusing its MetaMask wallet of violating securities laws through its brokerage and staking services. This aggressive approach drew criticism from industry groups such as the Blockchain Association and the Crypto Freedom Alliance of Texas, which argued that the SEC exceeded its authority.

Vitalik Buterin proposes a different regulatory approach. While he supports projects with transparent and credible plans, he suggests making it riskier to issue tokens without a clear long-term value proposition. He believes that creating a balanced regulatory framework requires cooperation between regulators and the crypto industry. Ultimately, Buterin's call for balanced regulation highlights the need for a more thoughtful approach that encourages innovation while protecting investors.

Meanwhile, Vitalik Buterin penned an article in which he proposed ways to improve Ethereum's transaction confirmation times. Currently, Ethereum uses the Gasper consensus mechanism, which has a complex structure and finality times of around 12.8 minutes. To solve this problem, Buterin discusses a single-slot finality (SSF) mechanism inspired by the Tendermint consensus algorithm. This approach aims to finalize a block before the next block is created, significantly reducing confirmation times. A key feature of SSF is the inactivity leak mechanism, which ensures network stability and security during periods of low activity or participation.

By implementing the SSF mechanism, Ethereum could offer faster transaction confirmations and improve the user experience. It could also become more competitive with other blockchain platforms. Moreover, this development could increase the efficiency and overall performance of the network, attracting the interest of both developers and users.



Vitalik Buterin says crypto regulations have created anarcho-tyranny – StartupNews.fyi

The Ethereum co-founder says we'd all be better off with either anarchy or tyranny but not both.



Using artificial intelligence to improve efficiency, accuracy in agriculture is the topic of a meeting tonight in Burke – Drgnews

JUNE 26, 2024:

Researchers have been collecting data for centuries. Now, artificial intelligence is able to learn some of that information and use it for modern day applications in agriculture.

Austin O'Brien is an Associate Professor of Computer Science at Dakota State University in Madison, South Dakota. He says many colleges and universities use their own test crops and herds as a source of information, and it's time to put all of the research and data collection to work.

O'Brien says researchers are also looking into applications for AI use in animal production.

While AI may be a comparatively new tool, O'Brien says keeping information safe is something they've been doing at Dakota State for many, many years.

O'Brien says the use of AI could create even greater accuracy for things such as meeting nutrition requirements in livestock feed rations or the application of crop protection products in fields. Hear much more from O'Brien in this DRG Media Group Agriculture In-depth podcast.


JUNE 25, 2024:

The impact of Artificial Intelligence in agriculture will be discussed during a public event tonight (June 25, 2024) at the Community Building in Burke (822 Main Street).

Austin O'Brien, an Associate Professor of Computer Science at Dakota State University in Madison, is the keynote speaker. He says agriculture researchers have been collecting data for AI systems to use for over a decade now.

As for where artificial intelligence is getting information from to learn, O'Brien says many universities and colleges use their own test crops and herds. He says what they could really use is data from farmers and ranchers.

As humans do research and learn more about cancer, for example, the methods and medications used to treat it tend to improve in efficiency. To work properly, O'Brien says AI systems also need access to continually updated information.

The event in Burke is hosted by the South Dakota Farm Bureau and its local county affiliates as part of the organization's Summer Event Series. It begins at 6:30 p.m. and includes a complimentary meal.


ASU launching Doctor of Information Technology degree program in fall 2024 – Full Circle

Technology is ever changing and leadership must evolve to keep pace with advancements across high-tech fields. The rise in artificial intelligence, or AI, is creating new challenges in many sectors. In an era of great change, those in decision-making roles must develop the skills to manage uncertainty and disruption. Effective leaders must also be able to blend technological acumen with soft skills, creating work environments where their teams collaborate successfully.

To prepare for this future, The Polytechnic School, part of the Ira A. Fulton Schools of Engineering at Arizona State University, will launch a Doctor of Information Technology program this fall.

"Over the next few years The Polytechnic School will launch several graduate programs to develop leaders who can combine deep technical know-how with broad, principled know-why. We are starting with a Doctor of Information Technology because the faculty understand the demand for this training, but also because IT is everywhere. I can't imagine a better academic opportunity for a new generation of tech executives to benefit so many people," says Kurt Paterson, director of The Polytechnic School.

This professional doctorate, which focuses on developing solutions to practical problems rather than on academic research, will be geared toward working professionals who already have a master's degree in a related field and substantial professional experience, and who are looking to advance their careers and seek new leadership opportunities. The degree program is designed to leverage students' work experience to help enhance their management skills through academic theory and applied engineering.

"This doctoral program focuses on combining applied research, industry practices and leadership to innovate solutions to complex, multifaceted challenges. The emphasis on leadership, principled innovation and industry experience is a key difference between this program and other doctoral programs," says Tatiana Walsh, an assistant teaching professor and chair of the information technology program in The Polytechnic School.

Derex Griffin, an assistant teaching professor of information technology in The Polytechnic School, says the program is ideal for students already dedicated to their work in the information technology field and who see themselves as transformative leaders in an industry where leadership is especially consequential.

"We've built a comprehensive, hands-on experience to address the leadership challenge that comes with being a part of the next generation of information technology innovators," says Griffin.

There are key aspects of The Polytechnic School's new doctoral degree that set it apart from other programs.

This is the first program of its kind in the state of Arizona, which aligns with ASU's forward thinking and innovation. Students are given the opportunity to do practical research to develop best practices that can be immediately applied in the workplace. This prepares them to address today's information technology challenges and help advance the industry.

"Students will obtain experience in cloud computing, artificial intelligence, cybersecurity and data science," Walsh says. "Graduates will become technology leaders who have the skills required to lead enterprise initiatives that innovate highly reliable communications networks, implement resilient global security systems and leverage data analytics in daily big data business decisions."

Another advantage offered by the program is that students will be able to complete their studies in three years, which makes the degree more cost-effective than alternatives. In addition, The Polytechnic School's atmosphere is familial in nature, which contributes to a successful learning environment. The school's programs tend to foster long-term professional relationships that can lead to networking that benefits students in their future careers.

Upon graduation, students will have opportunities to advance into careers such as senior technology manager, consultant, researcher and chief information officer in fields like business, education, government and nonprofit organizations.

"Students develop the engineering skills and professional competencies in collaboration, communication, agility, ethics and teamwork to advance their leadership careers," says Walsh.

Undergraduate degrees that prepare students well for this program include information technology, information sciences, cybersecurity, computer science, computer systems engineering, computer engineering and software engineering.

To be awarded the Doctor of Information Technology, students must complete 60 credit hours, including an applied project course, and meet the requirements of both ASUs Graduate College and the Fulton Schools.

"Our program will help professionals to step into advanced technical leadership roles to lead initiatives that create principled solutions for real-world needs," Walsh says.

The application is now open for fall 2024 and spring 2025 semesters.


Computer Scientists Find New Security Loophole That Allows Spying on Internet Users – Tech Times

Computer scientists at Graz University of Technology have uncovered a new security vulnerability that could enable unauthorized surveillance of internet users' online activities.

According to the researchers, this loophole, known as "SnailLoad," bypasses conventional data protection measures such as firewalls, VPN connections, and browser privacy modes.

Unlike traditional methods involving malware or data interception, SnailLoad exploits fluctuations in internet connection speed to track users' actions online. This method does not require deploying malicious code, making it particularly insidious.

The researchers at the Institute of Applied Information Processing and Communication Technology (IAIK) noted that the vulnerability hinges on monitoring the latency variations that occur during the transfer of seemingly innocuous files from attacker-controlled servers to unwitting victims.

Stefan Gast from IAIK explained that every online activity leaves a distinct latency pattern similar to a unique fingerprint. When a user interacts with websites, watches videos, or engages in video calls, the pattern of latency fluctuations reflects the content being accessed.

By analyzing these patterns, attackers can reconstruct the sequence of a victim's online actions. The technique involves pre-collecting latency fingerprints of popular websites and YouTube videos.

The researchers identified users' activities during experiments by correlating observed latency patterns with the stored fingerprints. This approach effectively circumvents conventional security measures blind to such subtle indicators.
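To make the matching step concrete, here is a minimal, purely illustrative sketch of the idea described above: an observed latency trace is compared against a library of pre-recorded fingerprints, and the closest one is reported. The class and method names, the trace format, and the use of Pearson correlation are assumptions for illustration only; the researchers' actual SnailLoad tooling is not reproduced here.

```java
import java.util.Map;

// Illustrative sketch: compare an observed latency trace against stored
// fingerprints and report the closest match. Names and data format are hypothetical.
public class LatencyFingerprintMatcher {

    // Pearson correlation between two latency traces (e.g., milliseconds per probe).
    static double correlation(double[] a, double[] b) {
        int n = Math.min(a.length, b.length);
        double meanA = 0, meanB = 0;
        for (int i = 0; i < n; i++) { meanA += a[i]; meanB += b[i]; }
        meanA /= n; meanB /= n;
        double cov = 0, varA = 0, varB = 0;
        for (int i = 0; i < n; i++) {
            cov  += (a[i] - meanA) * (b[i] - meanB);
            varA += (a[i] - meanA) * (a[i] - meanA);
            varB += (b[i] - meanB) * (b[i] - meanB);
        }
        return cov / Math.sqrt(varA * varB);
    }

    // Return the label of the stored fingerprint most similar to the observation.
    static String bestMatch(double[] observed, Map<String, double[]> fingerprints) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Map.Entry<String, double[]> e : fingerprints.entrySet()) {
            double score = correlation(observed, e.getValue());
            if (score > bestScore) { bestScore = score; best = e.getKey(); }
        }
        return best;
    }
}
```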

Daniel Gruss, another researcher involved in the study, noted that the success rate of the SnailLoad attack can be remarkably high, especially under conditions where internet speeds are slower and data volumes are higher.


During tests, the researchers achieved up to 98% accuracy in identifying activities like video consumption, albeit with slightly lower success rates for simpler web browsing activities.

Gruss warned that the loophole is exceptionally difficult to mitigate. Traditional fixes like software updates or firewall adjustments are ineffective against a method that exploits inherent properties of internet traffic dynamics.

According to Gruss, the potential countermeasure of deliberately slowing down internet connections in a randomized manner could degrade the user experience, particularly for real-time applications such as video conferencing and online gaming.

The implications of SnailLoad extend beyond individual privacy concerns to a broader impact on cybersecurity strategies. The Graz University team advocates for heightened awareness and research into more robust defenses against such sophisticated tracking methods.

Gast and Gruss' team has established a website outlining SnailLoad. They plan to present their scientific paper on this vulnerability at the upcoming Black Hat USA 2024 and USENIX Security Symposium conferences.



Platform as a Runtime – The Next Step in Platform Engineering – InfoQ.com


Many companies turn to platform engineering to help scale their development teams and improve the developer experience and engineering efficiency. However, platform engineering usually stops at the CI/CD pipeline. As systems become larger and more complex, we need to take the concepts of platform engineering further, down to the code level, by creating platforms and abstractions that reduce cognitive load, help simplify and accelerate software development, and allow for easy maintenance and upgrades to the platform. This will reduce cross-company tasks like fixing the infamous Log4j security vulnerabilities. And while we are at it, let's see if we can also reduce our cloud cost by reducing the footprint of each microservice. Let's move from "platform" to "Platform as a Runtime".

Large and complex systems can hinder a company's ability to innovate and adapt quickly. These systems often demand that developers address large amounts of information and concerns, leading to cognitive overload. As an engineering manager, I've witnessed this firsthand. New feature development, regardless of its size, can be significantly slowed by the need to address a multitude of cross-cutting concerns, such as network contracts, regulations, and various non-functional requirements that exist alongside core business needs. This is especially true for Wix: Wix is an open platform that exposes many APIs to third-party developers, so all of its services need to work in the same way, and we have many guidelines on how to build a service and what the best practices are for handling scale and being part of the ecosystem.

For instance, a software platform may have a requirement that every database change operation sends a domain event. The fact that developers need to remember to define the domain event message and implement it on every DB operation adds to the cognitive load, time, and complexity of their feature. Every system has many additional requirements: it may need to support multiple languages and currencies, ensure compliance with GDPR regulations, handle "delete" notifications, implement best practices such as optimistic locking or having a version field in addition to a last-update date on every database schema, and integrate with other sub-systems like IAM or other components of the ecosystem it is part of. This ever-growing list of considerations and "best practices" can significantly impede the release cycle, especially as system complexity increases.

Usually, software systems start small, but as they progress and grow they become complex systems with intricate dependencies, making it harder to understand how changes in one part might affect another.

Software systems are getting increasingly large and distributed across multiple servers and cloud components. Managing and maintaining these distributed systems adds another layer of complexity. Each component and feature has its own best practices and requires special knowledge. For instance, to send a domain event you need to understand how to use Kafka, learn its APIs and delivery guarantees (at least once), and know the best practices for using it.
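As a rough illustration of what that learning curve involves, the sketch below publishes a single domain event with the standard Kafka Java client. The topic name, message key, and JSON payload are placeholders, not part of any platform described in this article; a production setup would add schemas, retries, and idempotence configuration.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Minimal sketch of publishing a domain event after a database write,
// using the standard Kafka Java client. Topic, key, and payload are placeholders.
public class DomainEventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all"); // wait for full acknowledgment: at-least-once delivery

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String payload = "{\"entityId\":\"123\",\"type\":\"OrderUpdated\"}";
            producer.send(new ProducerRecord<>("order-domain-events", "123", payload));
            producer.flush(); // make sure the event is actually sent before exiting
        }
    }
}
```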

The same goes for databases like MySQL or MongoDB, search engines like Elasticsearch, and even other internal services that you integrate with, like your feature flags system. Basically, you need to understand and learn how to best use every component you depend on.

Another point that contributes to the complexity is the lack of a standard way of developing software across teams and developers. For instance, one developer may define a database schema with a primary key as a UUID, while another uses a Long. One developer may implement the GDPR features for "delete" and "get my data", while another, under pressure from the business to release features quickly, only develops "get my data" without the "delete" functionality. There can even be different implementations: one developer implements "GDPR delete" as a hard delete, another as a soft delete, and a third as data anonymization without actually deleting the records. While these might all be valid solutions, when someone (say, the legal team) asks how you implement a GDPR delete, the answer should probably not be "it depends." Systems should behave in a predictable and consistent way.

It is almost impossible to ensure that all developers comply 100% with all the system's non-functional requirements. Even a simple thing like input validation may vary between developers. For instance, some will not allow nulls in a string field, while others will, causing inconsistency in what is implemented across the entire system.

Usually, the first step to aligning all developers on best practices and non-functional requirements is documentation, build and lint rules, and education. However, in a complex world, we can't build perfect systems. When developers need to implement new functionality, they are faced with trade-offs they need to make.

In many cases, we look at trade-offs in and between three pillars:

Code - When choosing how to build our system, for instance choosing between writing code in a monolith or a microservice, we face several concerns that may affect our decision: How easy is it to understand the existing code and the domain(s)? Can we break an API, and what will be the effect on the system? How easy is it to refactor and test code? And how can we scale our engineering organization so multiple teams can work with little or no dependency on other teams when writing their own features?

Deploy - In this pillar we make trade-offs in relation to the release lifecycle, i.e., can multiple teams release new versions of their code to production whenever they want? How easy and quick is the deployment process? What are the risks with each deployment (the more code you deploy, the greater the chance of a bug)? Another thing to consider is keeping backward compatibility and breaking APIs. In a monolith, for example, it is easy to refactor and break an (internal) API because you have control over the entire code base, as opposed to a microservices environment where breaking an API can cause unexpected incidents due to its distributed nature.

Run - In this pillar we consider the operational aspects of our system. What are the performance requirements, and how easy is it to scale parts of the system? When we run in production, how easy is it to understand (monitor) the system? In case of an incident, can we quickly find the owner of the part of the system that failed?

While documentation is a necessary step to define how we would like to develop software and what the recommended best practices are, in reality developers have a lot of freedom to choose what to implement and how. Multiple teams will have different internal libraries that implement parts of the guidelines and system contracts in different ways.

These varied implementations create ever-increasing technical debt for the system as a whole, since every change in a cross-system requirement requires multiple teams to make changes, and different bugs must be fixed in different implementations that basically do the same thing. Not so long ago we had the Log4j vulnerabilities, which required almost every team to work on a fix. Making sure that 100% of the code base was fixed was a tremendous task.

Complex environments demand standardized coding practices.

While defining these standards and consolidating technology stacks are crucial, simply documenting them isn't enough. As I mentioned earlier, too much documentation can overload developers with information.

The solution lies in codification. We can translate these standards, guidelines, and best practices into an opinionated development platform. What we need to provide is a coding platform that, when developed within, automatically takes care of most of the system's cross-cutting concerns and makes it very easy for developers to code within the guidelines, essentially creating a golden path to quick product feature development.

For example, consider encryption of PII fields. The platform should automatically handle encryption and decryption of these fields without the developer needing to learn, understand, or even use the encryption library. Just by annotating a field as @PII, the platform would automatically encrypt and decrypt the field as it is written to and read from the database, so developers don't even need to think about it in their code.
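A minimal sketch of that idea follows. The @PII annotation is the concept described above, while the EncryptionService interface, the Customer class, and the reflection-based field walk are illustrative stand-ins for whatever machinery the real platform uses; they are assumptions, not Wix's actual implementation.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

// Sketch: a field marked @PII is transparently encrypted before persistence.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface PII {}

class Customer {
    String id;
    @PII String email;   // the platform encrypts this on write and decrypts it on read
}

class PiiFieldProcessor {
    interface EncryptionService { String encrypt(String plain); }

    // Before an entity is written, replace every @PII string field with its ciphertext.
    static void encryptPiiFields(Object entity, EncryptionService crypto) throws IllegalAccessException {
        for (Field f : entity.getClass().getDeclaredFields()) {
            if (f.isAnnotationPresent(PII.class) && f.getType() == String.class) {
                f.setAccessible(true);
                String plain = (String) f.get(entity);
                if (plain != null) f.set(entity, crypto.encrypt(plain));
            }
        }
    }
}
```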

Since the cost of developing such a robust platform is very high, we try to limit our software stack as much as possible. Granting unrestricted freedom to deviate from the standard platform increases the system's complexity and maintenance burden, so any divergence should be carefully evaluated against the added complexity it introduces.

The need for standardization comes to mitigate scaling challenges. Microservices is another solution to try and handle scaling issues, but as the number of microservices grows, you will start to face the complexity of a Large-Scale Microservices environment.

In distributed systems, requests may fail due to network issues. Performance is degraded since requests flow across multiple services via network communication, as opposed to in-process method calls in a monolith. Monitoring the system becomes harder since calls are distributed across multiple services. Security becomes a bigger issue because, with every microservice we add, we increase the attack surface. And let's not forget the human factor: it becomes harder to maintain standards, quality, and protocols across multiple teams and services.

These are the obvious shortcomings, but the hidden issues that we encounter in large-scale systems are cost and maintainability. Let me explain:

When writing a microservice, you usually use some kind of framework like Spring, and you also have all the internal libraries and dependencies that you need, for instance logging libraries and JDBC drivers, which you build and package into your microservice. What that means is that over 90% of the code that runs in a microservice is actually the frameworks and libraries you package and deploy. The business logic you actually write in each microservice is less than 10% of the code at best, depending on the size of the microservice. In many cases, we even saw that the business logic is less than 1% of the code that is packaged within a microservice.

All this code is duplicated and deployed hundreds and thousands of times in your production environment, increasing the footprint with every new microservice. This, in turn, increases your cloud cost and makes it harder to align the different frameworks and library versions.

At Wix, we operate over 4000 clusters of microservices, which causes us some pain. So we tried to mitigate these issues. We approached this problem by building Platform as a Runtime (PaaR).

To analyze the problem domain, we looked at how developers write code and chose the technology stacks across three pillars: code, deployment, and runtime. We split the solution into two parts: Platform and Runtime.

The platform focuses on the developer experience by codifying best practices, contracts, regulations and, most importantly, integrations with the middleware components of our production environment directly into the code. Imagine it as a highly customized framework tailored to your company's specific needs. It handles non-functional requirements, reduces boilerplate code, and minimizes cognitive load. When developers work within the platform, things simply "work as expected."

We internally called this project "Nile". Its focus was to streamline software development, and its goal was to bring the most value to developers by focusing on the developer experience.

This approach goes beyond traditional frameworks and platform engineering; we took platform engineering from the CI/CD level to the code level. Most companies offer frameworks that developers utilize, but they fall short of creating a platform that seamlessly integrates the framework with the organization's operational practices.

For instance, consider GDPR compliance. To fulfill a GDPR data deletion request, you typically subscribe to a Kafka topic and listen to "delete my data" requests. A basic framework might allow you to easily subscribe to the topic, but developers would still need to code the message processing and deletion logic. A robust platform, however, would automatically subscribe to the GDPR topic, process the message, and initiate data deletion from the database, all without requiring additional developer intervention. The only thing a developer would need to do is annotate the PII fields; the platform does the rest automatically.
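The sketch below illustrates what such an automatic subscription could look like with the standard Kafka consumer API. The topic name, the assumption that the message key carries the user id, and the UserDataRepository interface are all hypothetical; they stand in for the platform's actual GDPR plumbing, which is not described in this article.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Sketch of the GDPR flow described above: subscribe to a "delete my data"
// topic and remove the user's records. Topic name and message format are assumptions.
public class GdprDeletionListener {
    interface UserDataRepository { void deleteAllDataFor(String userId); }

    public static void run(UserDataRepository repository) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "gdpr-deletion-handler");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("gdpr-delete-requests"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Assume the message key carries the id of the user to erase.
                    repository.deleteAllDataFor(record.key());
                }
            }
        }
    }
}
```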

The runtime component of PaaR focuses on optimizing service footprint and deployment strategy. Instead of bundling the entire platform and framework with your code artifact, the runtime holds the platform code and manages all network communication (incoming and outgoing). This eliminates the need to package the platform with every microservice, giving the platform release cycles independent of the "product" artifacts. Each deployed artifact simply connects to the runtime, resulting in a smaller service footprint; think of it as a runtime dependency as opposed to a build-time dependency.

By reducing artifact size, PaaR allows for greater density within nodes. The footprint of a guest (i.e., your microservice) is reduced dramatically since it is not bundled with all the frameworks and common libraries. A single runtime host can efficiently serve multiple guest services, creating a virtual monolith.

In order to support a wide range of programming languages, we embarked on a Platform as a Runtime initiative dubbed "SingleRuntime", which communicates with guest services using the gRPC protocol over a local network (localhost). This approach will enable us to develop in multiple languages while maintaining a unified platform.
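As a rough sketch of the guest side of that arrangement, the snippet below opens a plaintext gRPC channel to a runtime assumed to listen on a local port. The port number and the stub name mentioned in the comment are assumptions for illustration; the actual SingleRuntime interfaces are not public and would normally be generated from the platform's .proto definitions.

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

// Sketch of how a guest service might connect to a co-located runtime over
// localhost gRPC. The port and any stub names are hypothetical.
public class RuntimeConnection {
    public static ManagedChannel openLocalRuntimeChannel() {
        return ManagedChannelBuilder
                .forAddress("localhost", 50051) // hypothetical runtime port
                .usePlaintext()                 // local-only traffic, no TLS needed
                .build();
        // A generated stub (e.g., a hypothetical PlatformGatewayGrpc.newBlockingStub(channel))
        // would then carry all inbound and outbound calls through the runtime.
    }
}
```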

While PaaR is still a work in progress, we have experienced significant success with Nile. The platform brings a lot of value to developers; we managed to improve our internal developers' velocity by 50%-80%. Our developers' experience has improved since they can now focus on building the business logic of their products, as opposed to spending a lot of time writing boilerplate code and implementing all the non-functional requirements. This reduces the amount of code they have to write and test and lets them release products much faster than before.

The platform's impact is so significant that we as a company have decided it is worth rewriting all our legacy services (there are hundreds of them) into Nile over the next year.

Another underrated benefit of adopting a single standard platform that does a lot of the heavy lifting for you is improved product quality. Product developers are freed from repeatedly implementing non-functional requirements, as these are now provided by the platform and implemented according to best practices, in the most efficient way, by the platform team. Additionally, any new feature added to the platform is automatically available and active on all services built within the platform, saving cross-company efforts.

One example is data locality. Only a couple of services supported data locality before we moved to Nile, but as soon as we built data locality support into the Nile platform, hundreds of services that did not support it before gained support for it in a single day, all without involving any product developers. Once they were compiled with the new platform, they got data locality support "for free." This saved the company hundreds of person-weeks compared with supporting it without a unified platform.

Developing a Platform as a Runtime (PaaR) solution is a substantial undertaking best suited for organizations facing significant scaling challenges. If your microservice environment is relatively small, in the low hundreds of services, alternative, more cost-effective solutions for scaling might be preferable. You can start by enforcing standard libraries, maintaining rigid control over third-party dependencies, and building rules to enforce standards. One of the things we did at Wix was to build a "generally available" (GA) enforcer that forces everyone to deploy their artifacts to production with the latest libraries and frameworks at least once every two weeks.

Once you scale to thousands of microservices, you could start building a platform.

For organizations ready to embark on a PaaR journey, my advice is to prioritize building the platform first. Focus on automating and streamlining the development process and integrations by taking platform engineering to a higher level of abstraction: not just infrastructure, but also the software layer itself.

Focusing on business logic worked for us since the platform team's customers are our own product developers.

Building a platform involves a critical review of tens of thousands of lines of code. The way we approached it was to challenge ourselves with every line of code that we evaluated, asking the question: "Does this line of code belong here?" The design goal of the platform is to isolate core business logic within the codebase and codify everything else into the platform, instead of the product service itself. As Steve Jobs once said: "The line of code that's the fastest to write, the line of code that never breaks, that doesn't need maintenance, is the line you never had to write." As naive as it sounds, our KPI was the number of lines of code; we aimed to reduce as much as possible the lines of code a product developer has to write that are not business logic related.

An important lesson we learned is that we needed the platform team to be in the right mindset. We had to have a value-driven platform team. While this could be its own topic, I will mention one thing we discovered: the most crucial KPI for the platform team is developer adoption. If developers aren't using your platform, it might not be delivering real or enough value. This kind of thinking was crucial to the team. Collaboration with product developers is very helpful for adoption of the platform, since product developers are always involved in defining the platform's features and capabilities and in setting requirements that actually solve their real problems.

One last thing I would like to share: the path to achieving these goals was not easy. Aside from the technological challenges, there is also the human factor. Developers can be apprehensive about abstractions and unseen functionality. To win the hearts and minds of developers, consistent communication about progress and ongoing education regarding the platform's inner workings are vital. This transparency demystifies the "magic" and empowers developers to debug and contribute effectively.


RunLve Announces Groundbreaking Advances in AI-Powered Cloud Engineering – AccessWire

Revolutionizing Cloud Engineering with AI Innovations: RunLve Sets New Industry Standards

NEW YORK, NY / ACCESSWIRE / June 29, 2024 / RunLve, an international IT company founded in 2022, is thrilled to announce a series of groundbreaking innovations in the field of cloud engineering. These advancements promise to redefine industry standards and provide unprecedented benefits to businesses worldwide.

RunLve Introduces AI-Integrated Cloud Services

RunLve has developed a unique service model that leases AI servers and computing power to companies of all sizes. This model allows businesses to access high-performance AI servers hosted in the cloud without the need for significant hardware investments. By doing so, RunLve democratizes access to advanced computing resources while reducing environmental impact.

New AI-Driven Infrastructure for Real-Time Data Processing

RunLve's AI-driven cloud infrastructure integrates advanced AI algorithms directly into its services, enabling real-time data processing and enhanced performance optimization. This capability is critical for customers involved in AI model training and complex computing tasks. RunLve's infrastructure is designed to efficiently handle the most demanding computing tasks with minimal impact on the environment.

Proprietary Technology Optimizes Resource Allocation

RunLve has developed proprietary technology that optimizes resource allocation and energy consumption. This technology dynamically adjusts computing resources based on workload requirements, improving operational efficiency and contributing to substantial energy savings.

Market Impact and Future Plans

RunLve's innovative approach has been well-received in the market, with a growing number of businesses opting for their AI server leasing solutions. The company's services are particularly beneficial for SMEs, offering flexibility and scalability that were previously unattainable due to high infrastructure costs.

Looking ahead, RunLve is committed to further R&D, with plans to enhance cloud security and introduce next-generation AI capabilities. RunLve's goal is to continue to be at the forefront of cloud engineering and push the boundaries of AI and cloud technology.

About RunLve:

RunLve is an international IT company specializing in cloud engineering, artificial intelligence, and data analytics. Founded in 2022, the company is a market leader in low-cost cloud GPU rental, providing sustainable, cost-effective AI server solutions to a diverse range of clients.

MEDIA CONTACT

RunLve [emailprotected] https://www.runlve.com/

SOURCE: RunLve


Hayduk Engineering: from a one-man operation to thriving business – TBR News Media

By Serena Carpino

Hayduk Engineering recently celebrated the opening of its new office near MacArthur Airport.

"The firm's move became necessary because of the growth of the firm over the past 10 years. The addition of dozens of new employees resulted in the need for an expanded office space, leading to relocation from the former office space," Hayduk's director of business development, Peter Scully, said. He is a former NYSDEC regional director and Suffolk County deputy county executive.

Stephen G. Hayduk, who founded the company in 1984, began as a one-man operation from his house. Over the past 40 years, his firm has grown to 42 employees in what can be described as a rags-to-riches story.

After dropping out of Deer Park High School, Hayduk worked as a truck driver on Long Island before enlisting in the U.S. Army Aviation School at 18 years old. He spent three years serving in the Vietnam War as a member of the U.S. Army 57th Assault Helicopter Company.

According to Hayduk, Vietnam motivated him to improve himself. "I was a good student but I was interested in other things, not college. Vietnam helped me with my maturity and to put things back into perspective. When I came back, I used the GI Bill to go to college. So, even if it was in many ways a negative experience, I turned it into a positive one," he said.

Hayduk returned to Long Island in 1971 and soon enrolled in Suffolk County Community College, earning his associate degree in 1974. He furthered his education at Rochester Institute of Technology, graduating in 1976 with high honors and obtaining a bachelor's degree.

Over the next several years, Hayduk gained experience at local engineering firms, even becoming the chief engineer of a 75-employee firm. At the same time, he taught courses in the engineering technology division at SCCC.

In 1984, he founded Hayduk Engineering. That same year, he earned the Young Engineer of the Year award from the New York State Society of Professional Engineers and the Suffolk County Chapter of the NYSSPE.

The NYSSPE Suffolk County Chapter named him its 1990-91 Engineer of the Year. In 1991, Hayduk helped with the construction of the Suffolk County Vietnam Veterans Memorial, located atop Bald Hill. He described this experience as a very emotional time with thousands of people there. It inspired him to hold a reunion and, two years later, Hayduk met with 75 members of his unit and a Green Beret in Florida.

In 1992, Suffolk County Executive Robert Gaffney (R) appointed Hayduk as the commissioner of the county's Department of Public Works. Hayduk served in this position for five years, during which time he shut down his own firm.

As commissioner, Hayduk held many responsibilities, including the maintenance of county buildings, highways, bridges and waterways. In addition, he oversaw hazardous and toxic waste removal, county fleet maintenance and other related tasks. He also served as the chairman of the Suffolk County Sewer Agency, Dredging Project Screening Committee and the Space Management Steering Committee.

In 1997, Hayduk decided to return to operating Hayduk Engineering. He managed the firm as a sole proprietorship before turning it into an LLC in 2002.

Years later, in 2015, Hayduk Engineering was certified by New York State as a Service-Disabled Veteran-Owned Small Business.

"Hayduk Engineering's status as both a New York State Certified Service-Disabled Veteran-Owned Business and a federally Certified Service-Disabled Veterans Business Enterprise has contributed significantly to the growth of the firm since 2016," Scully said.

Today, the firm is run by Hayduk and his son, Stephen. They provide many services, including civil and site engineering, highway engineering, water and wastewater engineering, stormwater management and construction administration. The firm's clients include private-sector customers, such as developers and owners of private sewer plants, and government agencies, including the Nassau County and Suffolk County departments of public works; the towns of Babylon, Brookhaven, Smithtown and Southampton; the New York State Thruway Authority; the Port Authority of New York and New Jersey; the New York State Department of Transportation; the Long Island Rail Road; the Metropolitan Transportation Authority; and the State University of New York, among others.

Hayduk's work has been recognized with numerous awards, including several for the Town of Smithtown's Callahan's Beach project and one for Best Engineering Firm courtesy of Long Island Business News.

"The company is continually recruiting new staff and anticipates hiring additional employees in the months and years ahead," Scully said.


Fehr Graham recognized again as Engineering News-Record Top 500 design firm – Monroe Times

ROCKFORD - Fehr Graham, a leading Midwest engineering and environmental firm, again secured a spot on the Engineering News-Record (ENR) Top 500 Design Firms list, ranking No. 474. The firm ranked No. 494 in 2022 and No. 471 in 2023.

This marks the third consecutive year the firm has been featured on the prestigious list, which ranks the largest engineering consulting firms in the United States based on design service revenue for the previous year.

Fehr Graham President Mick Gronewold is happy with the firm's steady recognition.

"Every day, our team is committed to solving problems for clients," Gronewold said. "Being named to the ENR list again doesn't just recognize our firm's capabilities; it celebrates the positive change we make in the communities where we operate and where our employees call home."

Founded on Sept. 15, 1973, in Freeport, Illinois, by Al Fehr and Joe Graham, Fehr Graham has expanded from a single office with five employees to 15 offices across three states with 230 people. This growth includes a recent merger with Midwest Engineering Associates on April 1, and a strategic partnership with Trilon Group announced in May 2023.

"This recognition not only reflects Fehr Graham's growth but the dedication and expertise of our employees who drive our success," Gronewold said. "Their commitment to clients is all part of how we're able to serve a larger footprint and bring our small-firm feel with big-firm capabilities to new areas. It's a tribute to Al and Joe's original vision."

Fehr Graham partners with private and public entities to design and advance projects ranging from water and wastewater treatment plants to roads, bridges and highways. Our team also designs stormwater solutions, sanitary sewer collection systems, water distribution, production and storage facilities, parking lots, residential and commercial developments, parks and much more.

Midwest Engineering Associates joins Fehr Graham

EAST PEORIA, Ill. - Midwest Engineering, a 34-person engineering firm based in East Peoria, merged with Fehr Graham.

Midwest Engineering co-owners Robert Culp, PE, CFM, and David Horton, PE, chose Fehr Graham because the two firms share a focus on providing value and innovative solutions to clients' problems.

"We wanted a partner with similar values and business acumen," Culp said. "Fehr Graham has grown over the years but has maintained a small-firm feel by being attentive to clients, which resonated with our approach to client business."

For Fehr Graham, Midwest Engineering's established structural and transportation engineering services, combined with expertise in mechanical and electrical engineering, align with its strategic growth objectives. Fehr Graham President Mick Gronewold, PE, appreciates how well the strengths of the people from both firms complement each other.

"When we saw the level of services and expertise Midwest Engineering brought to the table, we knew they would be a perfect addition to our team," Gronewold said. "Their culture fits seamlessly with ours, and together, we're stronger."

During the transition, Midwest Engineering will be known as Midwest Engineering Associates, a Fehr Graham company. With a combined 240 employees, the merger promises clients a broader spectrum of professional resources.

"Joining Fehr Graham opens doors to a large pool of expertise that will significantly benefit our clients," Horton said. "It's about providing a richer, more comprehensive service experience."

Staff at Midwest Engineering Associates, a Fehr Graham company, will continue to operate from their respective offices in East Peoria and Peoria, Illinois, and Coralville, Iowa, ensuring uninterrupted service for clients.

Having served the Peoria region for 14 years, Midwest Engineering specializes in civil/site transportation, hydraulic, structural, construction engineering, mechanical, electrical, plumbing engineering, professional land surveying and landscape architecture services.

Midwest Engineering Associates, a Fehr Graham company, remains committed to its clients and community from its offices at 140 E. Washington St., East Peoria, Illinois; 1904 NE Monroe St., Peoria, Illinois; and 2000 James St., Suite 213, Coralville, Iowa.
