Illumina Innovates with Rancher and Kubernetes
Rancher is a complete container management solution, and to be a complete platform, we’ve placed careful consideration into how we handle networking between containers on our platform. So today, we’re posting a quick example to illustrate how networking in Rancher works. While Rancher can be deployed on a single node, or scaled to thousands of nodes, in this walkthrough, we’ll use just a handful of hosts and containers.
Setting up and Launching a Containerized Application

Our first task is to set up our infrastructure, and for this exercise, we’ll use AWS.
Today Docker acquired SDN software maker SocketPlane. Congratulations to both the Docker and SocketPlane teams. We have worked closely with the SocketPlane team since the early Docker networking discussions and have a great amount of respect for their technical abilities. We are also happy to see Docker Inc. make a serious effort to bring SDN capabilities to the Docker platform. Many customers have told us that the lack of multi-host networking is one of the last remaining gaps that impede the widespread production use of Docker containers.
In addition to managing container networking across cloud providers, we are excited to announce the following features in Rancher v0.2. First up, the team has exposed the building blocks for storage management.
Almost one year ago I started Stampede as an R&D project to look at the implications of Docker on cloud computing moving forward, and as such I’ve explored many ideas. After releasing Stampede, and getting so much great feedback, I’ve decided to concentrate my efforts. I’m renaming Stampede.io to Rancher.io to signify the new direction and focus the project is taking. Going forward, instead of the experimental personal project that Stampede was, Rancher will be a well-sponsored open source project focused on building a portable implementation of infrastructure services similar to EBS, VPC, ELB, and many other services.
Since I started playing with Docker I have been thinking that its network implementation is something that will need to be improved before I could really use it in production. It is based on container links and service discovery but it only works for host-local containers. This creates issues for a few use cases, for example when you are setting up services that need advanced network features like broadcasting/multicasting for clustering.
CNI, or Container Network Interface, is a standard for provisioning networking for containers, especially for multi-host orchestrators like Kubernetes. In this article, we'll describe what CNI is, why it's helpful, and then compare some popular CNI plugins for establishing the network for Kubernetes containers.
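To make the interface concrete, here is a minimal sketch of a CNI network configuration using the reference `bridge` and `host-local` plugins. The network name and subnet are hypothetical placeholders, not taken from any specific Rancher or Kubernetes setup:

```json
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```

A runtime such as Kubernetes reads a file like this (conventionally from `/etc/cni/net.d/`) and invokes the named plugin binary to attach each container to the network and allocate it an IP address from the configured range.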
Rancher Server has recently added Docker Machine support, enabling us to easily deploy new Docker hosts on multiple cloud providers via Rancher’s UI/API and automatically have those hosts registered with Rancher. For now Rancher supports the DigitalOcean and Amazon EC2 clouds, and more providers will be supported in the future. Another significant feature of Rancher is its networking implementation, because it enhances and facilitates the way you connect Docker containers and the services running on them.
Hello, my name is Alena Prokharchyk and I am a part of the software development team at Rancher Labs. In this article I’m going to give an overview of a new feature I’ve been working on, which was released this week with Rancher 0.16 - a Docker Load Balancing service. One of the most frequently requested Rancher features, load balancers are used to distribute traffic between Docker containers. Now Rancher users can configure, update and scale up an integrated load balancing service to meet their application needs, using either Rancher’s UI or API.
This article is a continuation in a series on migrating from Rancher 1.6 to Rancher 2.0. It explores how to expose Kubernetes workloads publicly using port mapping in Rancher 2.0.
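In Kubernetes terms, port mapping of this kind is typically expressed as a `NodePort` Service. The sketch below is a generic, hypothetical example (the names `web-nodeport` and `app: web`, and the port numbers, are placeholders), not a configuration taken from the article itself:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport    # hypothetical Service name
spec:
  type: NodePort
  selector:
    app: web            # matches Pods labeled app=web
  ports:
    - port: 80          # Service port inside the cluster
      targetPort: 8080  # container port the traffic is forwarded to
      nodePort: 30080   # port opened on every node's public interface
```

With a Service like this, traffic sent to any cluster node on port 30080 is forwarded to the matching workload's containers, which is how Rancher 2.0 exposes workloads publicly via port mapping.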
Introduction Containers have become a popular way of packaging and delivering applications. Though the underlying technology had been available in the Linux kernel for many years, it did not gain the current widespread adoption until Docker came along and made this technology easy to use. Despite runtime isolation being one of the major advantages, containers working in isolation are often not very useful. Multiple containers need to interact with each other to provide various useful services.
A little over a month ago I wrote about setting up a Magento cluster on Docker using Rancher. At the time, I identified some shortcomings of Rancher, such as its lack of support for load balancing. Rancher released support for load balancing and Docker Machine with 0.16, and I would like to revisit our Magento deployment to cover the use of load balancers for scalability as well as availability. Furthermore, I would also like to cover how the Docker Machine integration makes it easier to launch Rancher compute nodes directly from the Rancher UI.
In less than a week, over 24,000 developers, sysadmins, and engineers will arrive in Las Vegas to attend AWS re:Invent (Nov. 28 - Dec 2). If you’re headed to the conference, we look forward to seeing you there! We’ll be onsite previewing enhancements included in our upcoming Rancher v1.2 release:
Support for the latest versions of Kubernetes and Docker: As we’ve previously mentioned, we’re committed to supporting multiple container orchestration frameworks, and we’re eager to show off our latest support for Docker Native Orchestration and Kubernetes.
On July 25th, Luke Marsden from Weaveworks and Bill Maxwell from Rancher Labs led a webinar on ‘A Practical Toolbox to Supercharge Your Kubernetes Cluster’. In the talk they described how you can use Rancher and Weave Cloud to set up, manage and monitor an app in Kubernetes. In this blog, we’ll discuss how and why Weave developed the best-practice RED method for monitoring apps with Prometheus.
What is Prometheus Monitoring?
*Note: since this article was posted, we’ve released Rancher 1.2.1, which addresses much of the feedback we received on the initial release. You can read more about the v1.2.1 release on Github.* I am very excited to announce the release of Rancher 1.2! This release goes beyond the requisite support for the latest versions of Kubernetes, Docker, and Docker Compose, and includes major enhancements to the Rancher container management platform itself.
Last week we introduced our new project, Rancher.io, at AWS Re:Invent, and it was amazing. We’d been working on the software for months, talking with good friends, old customers and former colleagues about what we were building and wondering how it would be received by users. We were anxious to share it with new people and eager to get their feedback. We were also really nervous. Four of us flew out to Vegas, set up our little booth, tested our demos and organized our piles of stickers and t-shirts.
On April 29th, Shannon Williams and Darren Shepherd hosted an online meetup to talk about deploying microservices-based applications using Docker Compose and Rancher. The session included demonstrations of how to build a Docker Compose file, and how to use Rancher’s upcoming services capability to deploy, scale and manage Docker environments. The first hour of the video includes overview content and the demonstrations. The rest of the recording consists of questions from the attendees.
In the world of containers, Kubernetes has become the community standard for container orchestration and management. But there are some basic elements surrounding networking that need to be considered as applications are built to ensure that full multi-cloud capabilities can be leveraged.
The Basics of Kubernetes Networking: Pods

The basic unit of management inside Kubernetes is not a container; it is a pod. A pod is simply one or more containers that are deployed as a unit.
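A minimal pod manifest illustrates the idea; the pod, container, and image names below are hypothetical examples, not from the article. Both containers in this pod share the same network namespace, so they can reach each other over `localhost`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod          # hypothetical pod name
spec:
  containers:
    - name: app          # main application container
      image: nginx:alpine
    - name: sidecar      # helper container deployed in the same pod
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
```

Because the two containers are scheduled and networked as one unit, Kubernetes assigns a single IP address to the pod rather than to each container, which is the foundation the rest of Kubernetes networking builds on.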