The software development landscape can be daunting: it's crowded and filled with myriad tools, techniques, and approaches to building applications. Software application design naturally evolves alongside the web, and the landscape is only getting busier as the scale and speed of the web continue to grow at an incredible rate.
Jenkins is one of the most popular open-source continuous integration and continuous delivery servers available today. It began as a product called Hudson, developed at Sun Microsystems in 2004-2005; the project was forked and renamed Jenkins in 2011 as the result of a dispute between the Hudson community and Oracle. Kohsuke Kawaguchi, the creator of Hudson/Jenkins, became the Chief Technical Officer of CloudBees in 2014, and CloudBees now offers Jenkins commercially as a cloud solution.
Continuous integration made integration a non-issue and brought us to the point where we always have a set of working and tested code that is ready to be deployed to production. Continuous Delivery and Continuous Deployment take the next step.
The DevOps world has matured dramatically in the past few years, enabling us to reduce development release cycles and iterate much more quickly, which has led to more rapid feature delivery and innovation. Over a decade ago we were introduced to a development practice called Continuous Integration, in which a server application automated the tasks of checking source code out from a source code repository, building it, and testing it whenever developers checked in code. Continuous Integration served us well and established the foundation for the next step in automating our build and deploy process: Continuous Delivery.
This three-part article series presents an overview of Continuous Integration, Continuous Delivery, and Continuous Deployment, and introduces Jenkins as a build tool that enables all three.
One of the biggest challenges in the IT industry is the overwhelming use of buzzwords, acronyms, and nomenclature, which often leaves us confused, both as readers and as writers.
Among the many things we've seen coming out of the Microsoft camp in the last couple of years is the opening up of platforms that were traditionally very closed. We are talking about the introduction of mainstream Microsoft properties on non-Microsoft platforms, not least of which is the recent arrival of PowerShell on multiple operating systems.
Given industry growth and consolidation among the bigger market participants, there is no margin for error in eCommerce applications, especially for smaller players. Customers expect responses within tens of milliseconds. These demanding requirements can only be met by understanding the architecture of a robust enterprise eCommerce website and its bottlenecks.
One of the most difficult tasks a system administrator faces during an application outage is performing a root cause analysis. Most traditional data center applications were built in a monolithic design, often with data and application all in one. Even where there was a shared data layer, message queuing wasn't often employed, because a single application still fronted the back-end database.
The first articles in this series introduced Apache Spark, presented the traditional flow of a Spark application, and reviewed the components that make Spark work; we then reviewed Spark's distributed architecture to better understand how it operates across a cluster of machines and walked through setting up a local Spark working environment.
The last articles (part 1, part 2) introduced Apache Spark, presented the traditional flow of a Spark application, and reviewed the components that make Spark work. In this article we are going to look at Spark's distributed architecture to better understand how it operates across a cluster of machines, and then we'll walk through setting up a local Spark working environment.