
How to set up ASP.NET Core 2.2 Health Checks with BeatPulse's AspNetCore.Diagnostics.HealthChecks


ASP.NET Core 2.2 is out, and upgrading my podcast site was very easy. Once I had it updated I wanted to take advantage of some of the new features.

For example, I have used a number of "health check" services like elmah.io, pingdom.com, or Azure's Availability Tests. I have tests that ping my website from all over the world and alert me if the site is down or unavailable.

I've wanted to make my Health Endpoint Monitoring more formal. You likely have a service that does an occasional GET request to a page and looks at the HTML, or maybe just looks for an HTTP 200 response. For the longest time most site availability tests have been just basic pings, but recently folks have been formalizing their health checks.

You can make these tests more robust by actually having the health check endpoint check deeper and then return something meaningful. That could be as simple as "Healthy" or "Unhealthy" or it could be a whole JSON payload that tells you what's working and what's not. It's up to you!


Is your database up? Maybe it's up but in read-only mode? Are your dependent services up? If one is down, can you recover? For example, I use some 3rd party back-end services that might be down. If one is down I could use cached data, but my site is less than "Healthy," and I'd like to know. Is my disk full? Is my CPU hot? You get the idea.

You also need to distinguish between a "liveness" test and a "readiness" test. A liveness failure means the site is down, dead, and needs fixing. A readiness failure means the site is there but isn't ready to serve traffic yet - it might still be waking up, or just busy.

If you just want your app to report its liveness, use the most basic ASP.NET Core 2.2 health check in your Startup.cs. It'll take you minutes to set up.

// Startup.cs

public void ConfigureServices(IServiceCollection services)
{
    services.AddHealthChecks(); // Registers health check services
}

public void Configure(IApplicationBuilder app)
{
    app.UseHealthChecks("/healthcheck");
}

Now you can add a content check in Azure or Pingdom, or tell Docker or Kubernetes whether you're alive or not. Docker has a HEALTHCHECK directive, for example:

# Dockerfile

...
HEALTHCHECK CMD curl --fail http://localhost:5000/healthcheck || exit 1

If you're using Kubernetes you could hook the health check up to a K8s "readinessProbe" to help it make decisions about your app at scale.
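
Here's a minimal sketch of what that probe might look like, assuming the container serves the app on port 5000; the path matches the endpoint registered above, and the timing values are placeholders:

# deployment.yaml (fragment) - goes under the container spec of your Deployment
readinessProbe:
  httpGet:
    path: /healthcheck   # the endpoint mapped by UseHealthChecks above
    port: 5000
  initialDelaySeconds: 5
  periodSeconds: 10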

Now, since determining "health" is up to you, you can go as deep as you'd like! The BeatPulse open source project has integrated with the ASP.NET Core Health Check API and set up a repository at https://github.com/Xabaril/AspNetCore.Diagnostics.HealthChecks that you should absolutely check out!

Using these add-on packages you can check the health of everything - SQL Server, PostgreSQL, Redis, Elasticsearch, any URI, and on and on. Just add the package you need and then add the extension method you want.
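
For example, here's a hedged sketch assuming you've referenced the AspNetCore.HealthChecks.Redis and AspNetCore.HealthChecks.Npgsql packages (the connection strings are placeholders):

// ConfigureServices - requires the AspNetCore.HealthChecks.Redis and AspNetCore.HealthChecks.Npgsql packages
services.AddHealthChecks()
    .AddRedis("localhost:6379", name: "redis")
    .AddNpgSql("Host=localhost;Database=mydb;Username=me;Password=secret", name: "postgres");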

You don't usually want your health checks to be heavy, but as I said, you could take the results of the "HealthReport" list and dump it out as JSON. If this feels like too much code in one place (anonymous types, all on one line, etc.) then just break it up. Hat tip to Dejan.

app.UseHealthChecks("/hc",

new HealthCheckOptions {
ResponseWriter = async (context, report) =>
{
var result = JsonConvert.SerializeObject(
new {
status = report.Status.ToString(),
errors = report.Entries.Select(e => new { key = e.Key, value = Enum.GetName(typeof(HealthStatus), e.Value.Status) })
});
context.Response.ContentType = MediaTypeNames.Application.Json;
await context.Response.WriteAsync(result);
}
});

At this point my endpoint doesn't just say "Healthy," it looks like this nice JSON response.

{
  "status": "Healthy",
  "errors": []
}

I could add a Url check for my back-end API. If it's down (or in this case, unauthorized) I'll get a nice explanation. I can decide if this means my site is unhealthy or degraded. I'm also pushing the results into Application Insights, which I can then query on and make charts against.

services.AddHealthChecks()
    .AddApplicationInsightsPublisher()
    .AddUrlGroup(new Uri("https://api.simplecast.com/v1/podcasts.json"), "Simplecast API", HealthStatus.Degraded)
    .AddUrlGroup(new Uri("https://rss.simplecast.com/podcasts/4669/rss"), "Simplecast RSS", HealthStatus.Degraded);

Here is the response, cool, eh?

{
  "status": "Degraded",
  "errors": [
    {
      "key": "Simplecast API",
      "value": "Degraded"
    },
    {
      "key": "Simplecast RSS",
      "value": "Healthy"
    }
  ]
}

This JSON is custom, but perhaps I could use a built-in writer for a reasonable default response and then hook up a free default UI?

app.UseHealthChecks("/hc", new HealthCheckOptions()

{
Predicate = _ => true,
ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
});

app.UseHealthChecksUI(setup => { setup.ApiPath = "/hc"; setup.UiPath = "/healthcheckui";);

Then I can hit /healthcheckui and it'll call the API endpoint and I get a nice little bootstrappy client-side front end for my health check. A mini dashboard if you will. I'll be using Application Insights and the API endpoint but it's nice to know this is also an option!

If I had a database I could check one or more of those for health as well. The possibilities are endless and up to you.

public void ConfigureServices(IServiceCollection services)
{
    services.AddHealthChecks()
        .AddSqlServer(
            connectionString: Configuration["Data:ConnectionStrings:Sql"],
            healthQuery: "SELECT 1;",
            name: "sql",
            failureStatus: HealthStatus.Degraded,
            tags: new string[] { "db", "sql", "sqlserver" });
}

It's super flexible. You can even set up ASP.NET Core Health Checks to fire a webhook that sends a Slack or Teams message to let the team know the health of the site.
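
I haven't wired this up myself, but the HealthChecksUI package drives those notifications from configuration; the sketch below shows the general shape of an appsettings.json entry pointing at a Slack incoming webhook. The section and key names vary between package versions, so treat this as illustrative only and check the repository's README for the exact schema:

// appsettings.json (illustrative only - key names differ across HealthChecks.UI versions)
{
  "HealthChecks-UI": {
    "HealthChecks": [
      { "Name": "My Site", "Uri": "http://localhost:5000/hc" }
    ],
    "Webhooks": [
      {
        "Name": "Slack",
        "Uri": "https://hooks.slack.com/services/<your-webhook-id>",
        "Payload": "{\"text\": \"The site is unhealthy.\"}",
        "RestorePayload": "{\"text\": \"The site is healthy again.\"}"
      }
    ],
    "EvaluationTimeOnSeconds": 10,
    "MinimumSecondsBetweenFailureNotifications": 60
  }
}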

Check it out. It'll take less than an hour or so to set up the basics of ASP.NET Core 2.2 Health Checks.


Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.



© 2018 Scott Hanselman. All rights reserved.
     

Podman and user namespaces: A marriage made in heaven


Podman, part of the libpod library, enables users to manage pods, containers, and container images. In my last article, I wrote about Podman as a more secure way to run containers. Here, I'll explain how to use Podman to run containers in separate user namespaces.
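
The full article has the details, but the core idea looks something like this sketch (the --uidmap values are illustrative; they map container UID 0 onto a range of unprivileged host UIDs starting at 100000):

# run a container whose root user is mapped to unprivileged host UIDs
sudo podman run --uidmap 0:100000:5000 -it fedora sh -c 'id'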



Blog: Kubernetes Federation Evolution


Authors: Irfan Ur Rehman (Huawei), Paul Morie (RedHat) and Shashidhara T D (Huawei)

Deploying applications to a Kubernetes cluster is well defined and can in some cases be as simple as kubectl create -f app.yaml. The user story for deploying apps across multiple clusters has not been that simple. How should an app workload be distributed? Should the app resources be replicated into all clusters, replicated into selected clusters, or partitioned across clusters? How is access to the clusters managed? And what happens if some of the resources the user wants to distribute already exist, in some form, in all or some of the clusters?

In SIG Multicluster, our journey has revealed that there are multiple possible models to solve these problems, and there is probably no single solution that fits all scenarios. Federation, however, is the single biggest Kubernetes open source sub-project in this problem space, and it has seen the most interest and contribution from the community. The project initially reused the Kubernetes API to avoid adding any usage complexity for an existing Kubernetes user. This approach became non-viable because of problems best discussed in this community update.

What has evolved further is a federation specific API architecture and a community effort which now continues as Federation V2.

Conceptual Overview

Because federation attempts to address a complex set of problems, it pays to break the different parts of those problems down. Let’s take a look at the different high-level areas involved:

[Figure: Kubernetes Federation V2 Concepts]

Federating arbitrary resources

One of the main goals of Federation is to be able to define the APIs and API groups which encompass basic tenets needed to federate any given k8s resource. This is crucial due to the popularity of Custom Resource Definitions as a way to extend Kubernetes with new APIs.

The working group arrived at a common definition of the federation API and API groups: 'a mechanism that distributes "normal" Kubernetes API resources into different clusters'. In its simplest form, the distribution can be imagined as simple propagation of this 'normal Kubernetes API resource' across the federated clusters. A thoughtful reader can certainly discern more complicated mechanisms beyond this simple propagation.

During the journey of defining the building blocks of the federation APIs, one of the near-term goals evolved into 'being able to create a simple federation, aka simple propagation, of any Kubernetes resource or CRD while writing almost zero code'. What ensued was a core API group defining the building blocks as a Template resource, a Placement resource, and an Override resource per given Kubernetes resource, plus a TypeConfig to specify sync or no sync for the given resource, and associated controller(s) to carry out the sync. More details follow in the next section, Federating resources: the details. Further sections also talk about being able to follow a layered behaviour, with higher-level federation APIs consuming the behaviour of these core building blocks, and users being able to consume all or part of the API and associated controllers. Lastly, this architecture also allows users to write additional controllers, or to replace the available reference controllers with their own, to carry out the desired behaviour.

The ability to 'easily federate arbitrary Kubernetes resources', together with a decoupled API (divided into building-block APIs, higher-level APIs, and possible user-intended types, presented so that different users can consume parts of it and write controllers composing solutions specific to them), makes a compelling case for Federation V2.

Federating resources: the details

Fundamentally, federation must be configured with two types of information:

- Which API types federation should handle
- Which clusters federation should target for distributing those resources

For each API type that federation handles, different parts of the declared state live in different API resources:

- A template type holds the base specification of the resource - for example, a type called FederatedReplicaSet holds the base specification of a ReplicaSet that should be distributed to the targeted clusters.
- A placement type holds the specification of the clusters the resource should be distributed to - for example, a type called FederatedReplicaSetPlacement holds information about which clusters FederatedReplicaSets should be distributed to.
- An optional overrides type holds the specification of how the template resource should be varied in some clusters - for example, a type called FederatedReplicaSetOverrides holds information about how a FederatedReplicaSet should be varied in certain clusters.

These types are all associated by name, meaning that for a particular template resource with name foo, the placement and override information for that resource are contained by the override and placement resources with the same name and namespace as the template.
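
To make that by-name association concrete, here is an illustrative sketch of the three resources for a template named foo. The API group/version and the exact field layout shown are assumptions based on the era's user guide and may well differ from your deployment; consult the Federation V2 user guide for the real types:

# illustrative only - check the Federation V2 user guide for exact kinds and fields
apiVersion: types.federation.k8s.io/v1alpha1
kind: FederatedReplicaSet
metadata:
  name: foo                # the template
  namespace: demo
spec:
  template:                # base ReplicaSet spec distributed to target clusters
    spec:
      replicas: 3
---
apiVersion: types.federation.k8s.io/v1alpha1
kind: FederatedReplicaSetPlacement
metadata:
  name: foo                # same name/namespace ties it to the template
  namespace: demo
spec:
  clusterNames:
    - cluster-a
    - cluster-b
---
apiVersion: types.federation.k8s.io/v1alpha1
kind: FederatedReplicaSetOverrides
metadata:
  name: foo
  namespace: demo
spec:
  overrides:
    - clusterName: cluster-b
      clusterOverrides:
        - path: spec.replicas   # vary the template in this one cluster
          value: 5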

Higher level behaviour

The architecture of the Federation V2 API allows higher-level APIs to be constructed using the mechanics provided by the core API types (template, placement and override) and the associated controllers for a given resource. In the community we uncovered a few use cases and implemented the higher-level APIs and associated controllers useful for those cases. Some of these types, described in the following sections, also provide a useful reference for anybody interested in solving more complex use cases by building on top of the mechanics already available with the Federation V2 API.

ReplicaSchedulingPreference

ReplicaSchedulingPreference provides an automated mechanism for distributing and maintaining the total number of replicas for Deployment- or ReplicaSet-based federated workloads across federated clusters. This is based on high-level preferences given by the user. These preferences include the semantics of weighted distribution and limits (min and max) for distributing the replicas. They also include semantics to allow dynamic redistribution of replicas in case some replica pods remain unscheduled in a cluster, for example due to insufficient resources there. More details can be found in the user guide for ReplicaSchedulingPreferences.
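
For a flavor of the API (field names are recalled from that user guide, so verify before use), a ReplicaSchedulingPreference might look like:

# illustrative ReplicaSchedulingPreference - spreads 9 replicas 1:2 across two clusters
apiVersion: scheduling.federation.k8s.io/v1alpha1
kind: ReplicaSchedulingPreference
metadata:
  name: my-federated-deployment   # matches the name of the federated workload
  namespace: demo
spec:
  targetKind: FederatedDeployment
  totalReplicas: 9
  clusters:
    cluster-a:
      weight: 1
    cluster-b:
      weight: 2
      maxReplicas: 6              # cap this cluster regardless of weight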

Federated Services & Cross-cluster service discovery

Kubernetes services are a very useful construct in a microservice architecture. There is a clear desire to deploy these services across cluster, zone, region and cloud boundaries. Services that span clusters provide geographic distribution, enable hybrid and multi-cloud scenarios, and improve the level of high availability beyond single-cluster deployments. Customers who want their services to span one or more (possibly remote) clusters need them to be reachable in a consistent manner from both within and outside their clusters.

A Federated Service at its core contains a template (the definition of a Kubernetes service), a placement (which clusters it should be deployed into), an override (optional variation in particular clusters) and a ServiceDNSRecord (specifying details on how to discover it).

Note: The Federated Service has to be of type LoadBalancer in order for it to be discoverable across clusters.

Discovering a Federated Service from pods inside your Federated Clusters

By default, Kubernetes clusters come preconfigured with a cluster-local DNS server, as well as an intelligently constructed DNS search path which together ensure that DNS queries like myservice, myservice.mynamespace, some-other-service.other-namespace, etc issued by your software running inside Pods are automatically expanded and resolved correctly to the appropriate service IP of services running in the local cluster.

With the introduction of Federated Services and Cross-Cluster Service Discovery, this concept is extended to cover Kubernetes services running in any other cluster across your Cluster Federation, globally. To take advantage of this extended range, you use a slightly different DNS name (e.g. myservice.mynamespace.myfederation) to resolve federated services. Using a different DNS name also avoids having your existing applications accidentally traversing cross-zone or cross-region networks and you incurring perhaps unwanted network charges or latency, without you explicitly opting in to this behavior.

Let's consider an example (using a service named nginx and the query names described above).

A Pod in a cluster in the us-central1-a availability zone needs to contact our nginx service. Rather than use the service’s traditional cluster-local DNS name (nginx.mynamespace, which is automatically expanded to nginx.mynamespace.svc.cluster.local) it can now use the service’s Federated DNS name, which is nginx.mynamespace.myfederation. This will be automatically expanded and resolved to the closest healthy shard of my nginx service, wherever in the world that may be. If a healthy shard exists in the local cluster, that service’s cluster-local IP address will be returned (by the cluster-local DNS). This is exactly equivalent to non-federated service resolution.

If the service does not exist in the local cluster (or it exists but has no healthy backend pods), the DNS query is automatically expanded to nginx.mynamespace.myfederation.svc.us-central1-a.us-central1.example.com. Behind the scenes, this is finding the external IP of one of the shards closest to my availability zone. This expansion is performed automatically by the cluster-local DNS server, which returns the associated CNAME record. This results in a traversal of the hierarchy of DNS records, and ends up at one of the external IPs of the Federated Service nearby.

It is also possible to target service shards in availability zones and regions other than the ones local to a Pod by specifying the appropriate DNS names explicitly, and not relying on automatic DNS expansion. For example, nginx.mynamespace.myfederation.svc.europe-west1.example.com will resolve to all of the currently healthy service shards in Europe, even if the Pod issuing the lookup is located in the U.S., and irrespective of whether or not there are healthy shards of the service in the U.S. This is useful for remote monitoring and other similar applications.

Discovering a Federated Service from Other Clients Outside your Federated Clusters

For external clients, the automatic DNS expansion described above is currently not possible. External clients need to specify one of the fully qualified DNS names of the federated service, be that a zonal, regional or global name. For convenience, it is often a good idea to manually configure additional static CNAME records in your DNS, for example:

SHORT NAME          CNAME
eu.nginx.acme.com   nginx.mynamespace.myfederation.svc.europe-west1.example.com
us.nginx.acme.com   nginx.mynamespace.myfederation.svc.us-central1.example.com
nginx.acme.com      nginx.mynamespace.myfederation.svc.example.com

That way your clients can always use the short form on the left, and always be automatically routed to the closest healthy shard on their home continent. All of the required failover is handled for you automatically by Kubernetes Cluster Federation.

As further reading, a more elaborate guide for users is available in the Multi-Cluster Service DNS with ExternalDNS Guide.

Try it yourself

To get started with Federation V2, please refer to the user guide hosted on github.
Deployment can be accomplished with a helm chart, and once the control plane is available, the user guide’s example can be used to get some hands-on experience with using Federation V2.

Federation V2 can be deployed in both cluster-scoped and namespace-scoped configurations. A cluster-scoped deployment requires cluster-admin privileges to both host and member clusters, and may be a good fit for evaluating federation on clusters that are not running critical workloads. Namespace-scoped deployment requires access to only a single namespace on host and member clusters, and is a better fit for evaluating federation on clusters running workloads. Most of the user guide refers to cluster-scoped deployment, with the Namespaced Federation section documenting how a namespaced deployment differs. In fact, with Namespaced Federation the same cluster can host multiple federations, and the same clusters can be part of multiple federations.

Next Steps

As we noted in the beginning of this post, the multicluster problem space is extremely broad. It can be difficult to know exactly how to handle broad problem spaces without concrete pieces of software to frame those conversations around. Our hope in the Federation Working Group is that federation-v2 can be such a concrete artifact to frame discussions around. We would love to hear about the experiences folks have had in this problem space, how they feel about federation-v2, and what use-cases they're interested in exploring in the future. Please feel welcome to join us in the sig-multicluster Slack channel or at a future Federation WG meeting (7:30 PT)!


Visual Studio Code November 2018


Read the full article


Traefik: A Dynamic Reverse Proxy for Kubernetes and Microservices


An open source edge router with automated reconfigurability is finding a home in the world of Kubernetes-driven cloud native operations.

Emile Vauge created Traefik three years ago as a side project while developing for a Mesosphere-based microservices platform. He was frustrated with the existing options for edge routing. "Traditional reverse proxies were not well-suited for these dynamic environments," he told The New Stack.

Unlike traditional edge routers, Traefik reconfigures itself on the fly, without needing to be taken offline. This dynamic and automated reconfigurability can be essential for an architecture of containerized microservices, which can be moved around and scaled up on the fly by an orchestrator such as Docker Swarm or Kubernetes.

Traefik connects to the APIs for these orchestrators, updating its routing automatically as the orchestrators move their microservices around. “Each time something changes on the orchestrator, for example, if you deploy a new application, Traefik is notified and changes its configuration automatically,” said Vauge, who created a company around the technology, Containous.
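
As a flavor of what that automatic configuration looks like with the Docker provider, here is a 1.x-era sketch I put together (not from the article); the whoami image and hostname are placeholders, and the label syntax changed in later Traefik versions:

# docker-compose.yml (Traefik 1.x-era sketch)
version: "3"

services:
  traefik:
    image: traefik:1.7
    command: --api --docker            # watch the Docker API and reconfigure on the fly
    ports:
      - "80:80"                        # incoming traffic
      - "8080:8080"                    # Traefik dashboard
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  whoami:
    image: containous/whoami
    labels:
      - "traefik.frontend.rule=Host:whoami.localhost"   # Traefik picks this route up automatically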

This week, the company introduced an enterprise version of the software, Traefik Enterprise Edition (TraefikEE), which provides a scalable, highly available platform for business-critical deployments. The beta of this package debuted at KubeCon + CloudNativeCon, held this week in Seattle.

 

The software has been picked up and used in production by a number of large organizations moving to microservices. It has collected more than 19,000 stars on GitHub and has been downloaded more than 10 million times from Docker Hub.

TraefikEE offers a way to easily install distributed Traefik instances across a cluster, spread across multiple nodes using the Raft consensus algorithm. TraefikEE can safely store and replicate configurations and TLS certificates across the nodes, and communication between nodes is encrypted.

“This is the first reverse proxy that is able to be deployed in a cluster natively, without any third-party software,” Vauge said.

The control plane monitors the platform and services, stores topology changes, and reconfigures the separately managed data plane to update ingress routing dynamically.

Traefik itself also offers many of the standard features found on other edge routers, such as SSL termination.

KubeCon + CloudNativeCon is a sponsor of The New Stack, and provided transportation and lodging for the reporter to attend the event.

Feature image: Emile Vauge, at KubeCon + CloudNativeCon.



KubeCon: New Tools for Protecting Kubernetes with Policy


The dynamic nature of cloud native platforms and the simplicity of deployment that containers bring aren’t always an advantage if they let developers create systems that aren’t secure or break company policy. And while what you deploy with a containerized application is the same every time, it doesn’t always stay the same if someone ends up adding extra tools or permissions to a cluster to fix a problem in production. Those manual interventions don’t scale, and neither does having policy be something your devops team has to implement by hand.

Whether policy is about meeting security, governance and compliance rules or just codifying what you’ve learned from past incidents and mistakes to make sure they don’t get repeated, it has to be applied automatically rather than manually to keep up with the speed and scale of cloud native technologies.

The admission controller webhooks introduced in beta in Kubernetes 1.9 that let Istio inject Envoy sidecars and allow automated provisioning of persistent volumes are also an excellent way of applying policy without recompiling the Kubernetes API server, whether that’s validating an image repository before deploying an object or enforcing unique ingress hostnames. If one team is using a specific ingress hostname, you can block other teams from using that so there aren’t conflicts.

Admission webhooks also enable whitelisting or blacklisting container registries, so you could restrict developers to using a corporate repository.

These webhooks are executed whenever a resource is created, updated or deleted; they intercept requests to the Kubernetes API server after the request has been authenticated and authorized but before the requested object is persisted to etcd. They can be validating, mutating, or both.
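
For reference, registering one of these webhooks is itself just a Kubernetes object. A minimal validating sketch might look like the following; the service name, namespace, path and resource list are placeholders, and the caBundle is elided:

# validating-webhook.yaml (sketch)
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: policy-checks
webhooks:
  - name: policy.example.com
    rules:
      - apiGroups: ["*"]
        apiVersions: ["*"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods", "ingresses"]
    clientConfig:
      service:
        name: policy-webhook        # HTTPS service that receives AdmissionReview requests
        namespace: policy-system
        path: /validate
      caBundle: "<base64-encoded CA certificate>"
    failurePolicy: Fail             # reject the request if the webhook is unreachable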

Validating admission webhooks intercept requests and reject any that don’t comply with policy; they don’t make any changes to objects so they can run in parallel. That lets you restrict resource creation to match policy, like setting a team limit on the number of replicas a service can run with, blocking deployment of code that’s tagged as not ready for production or ensuring that all resources are labelled.

Mutating admission webhooks can’t run in parallel because they can make changes to objects by sending requests to the webhook server (which can be an HTTP server running in the cluster or a serverless function elsewhere); for example, adding the Envoy proxy as a sidecar mutates the object that’s deployed. Instead of simply rejecting requests, mutating admission webhooks can change the requests so they comply with policy and are allowed to complete; for example, adding required tags and labels to objects so they’re easy to audit by project or team, or changing the load balancer requested so it’s an internal load balancer.

 

Creating admission webhooks can be complex, and one option is to use Open Policy Agent (OPA), a CNCF-hosted sandbox project (which means it's experimental and not necessarily ready for production). This is a general-purpose policy engine that validates JSON against policies, so you can use the same tool to apply policy to, say, Kubernetes, Terraform, access to REST APIs, and remote connections over SSH.

Policies are written as rules or queries in Rego, OPA’s declarative policy language, with deny rules specifying policy violations. The inputs and outputs are both JSON so you can update the policies without recompiling (and the JSON output can be the modified request that meets policy).
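
As a flavor of what that looks like, here is a small Rego sketch of a deny rule for the Kubernetes admission case; the package name, input paths and registry hostname are my assumptions and depend on how OPA is wired into the webhook:

# policy.rego (illustrative)
package kubernetes.admission

# deny Pods that pull images from outside a trusted registry (hostname is made up)
deny[msg] {
  input.request.kind.kind == "Pod"
  image := input.request.object.spec.containers[_].image
  not startswith(image, "registry.corp.example.com/")
  msg := sprintf("image %q is not from the trusted registry", [image])
}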

The new open source Kubernetes Policy Controller project from Microsoft's Azure containers team is a validating and mutating admission webhook that uses OPA; introducing the project at KubeCon this week, Microsoft open source architect Dave Strebel said the Kubernetes Policy Controller would be moving into the OPA project soon.

The Kubernetes Policy Controller also extends the standard Kubernetes role-based access control (RBAC) authorization module to add a blacklist in front of it. All authorized requests can be blocked, including blocking the execution of kubectl commands on a pod. Or it can be used for auditing, to see if any policies are being violated on a specific cluster.

There are some sample policies in the repo already, including validating that ingress hostnames are unique across all namespaces and restricting all create, update or delete requests to resources to a named set of users. The project will also host sample policies contributed by the community, to give devops teams a library of policies to use; there’s a Slack channel for collaborating on policies.

The advantage of OPA and the Kubernetes Policy Controller is that you can decouple policy from applications, Strebel pointed out; policy can be written once and applied to multiple applications across the stack.

Using policies can add a little latency, although for most applications he suggested it would be negligible. The deployment for the Kubernetes Policy Controller is three containers with policy running in memory; that adds a little overhead but keeps it suitable even for applications that are very sensitive to latency.

Rego will be a new language for many developers, and because applying policy can mean that requested objects and resources aren't available, it's important to get the policy rules right. It's also important to be careful when mutating objects, Strebel noted; because the object gets changed, it isn't what the developer expected to get back, and that can cause unexpected behavior or different outcomes. But these are relatively minor drawbacks compared to the advantage of automatically enforcing policy on Kubernetes clusters and being able to audit that it's being enforced.

Expect these tools to develop quickly, because policy is going to become increasingly important as Kubernetes deployments are used for more enterprise applications where compliance and governance is critical.

The Cloud Native Computing Foundation, and KubeCon+CloudNativeCon are sponsors of The New Stack.

Feature image: CNCF Co-chair Liz Rice, at KubeCon 2018, demonstrates how admission controller webhooks could block malicious YAML code.

