This Week in Programming: The Next Y2K Is Already Here

I remember the night of New Year’s Eve 1999 vividly. Well, as vividly as those things go. But there’s one part that sticks with me, as we all huddled in a friend’s basement wondering if planes might, indeed, just fall out of the sky.

At precisely midnight, we picked up the landline (yes, we still had those) because we’d heard that so many people were expected to pick up their phones at the same time, just to see if they still worked, that this very act, rather than the Y2K bug itself, might take the phone system down. I guess if the world was going to burn, we wanted some part in it.

Well, no planes fell and the phones worked just fine and Y2K passed, to us at least, into the realm of joke and dismissive snicker. Of course, the reality is that it was a years-long effort by developers to the tune of $100 billion to make sure those planes didn’t fall out of the sky and that the infrastructure kept on keeping on.

Two decades have now passed since we cruised on by Y2K, and it’s time to look at the next date-time-related bug on the horizon: the 2038 problem. Much like the Y2K bug, 2038 is problematic because of “insufficient capacity of the chosen storage unit,” as the Wikipedia article on the topic explains. More specifically, Unix encodes time as a signed 32-bit integer representing the number of seconds elapsed since 00:00:00 UTC on Jan. 1, 1970, and times beyond 03:14:07 UTC on Jan. 19, 2038 cause an integer overflow.
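
If you want to see exactly where that boundary falls, the arithmetic is easy to check from a shell. This is just a quick illustration and assumes GNU date (found on most Linux systems; on macOS, use date -u -r 2147483647 instead):

# The largest value a signed 32-bit integer can hold
echo $(( (1 << 31) - 1 ))    # 2147483647

# Interpreted as seconds since the Unix epoch, that lands right on the 2038 boundary
date -u -d @2147483647       # Tue Jan 19 03:14:07 UTC 2038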

This is suddenly a popular topic not simply because last Sunday represents 20 years until the problem hits for all Unix-based systems, but rather because it has already begun, at least according to developer John Feminella, whose Twitter thread has more than made the rounds this past week.

Feminella tells the tale of a client whose “nightly batch job,” which had “never, ever crashed before, as far as anyone remembered or had logs for,” went down last Sunday — 20 years to the day before the 2038 problem was expected to hit. Coincidentally, some changes had been made, so focus first went there, but soon enough the problem was isolated to the bug that wasn’t supposed to hit for another couple of decades.

“But by then, substantive damage had already been done because contributions hadn’t been processed that day. It cost about $1.7M to manually catch up over the next two weeks. The moral of the story is that Y2038 isn’t ‘coming’. It’s already here. Fix your stuff,” Feminella concluded.

The funny thing is that one moral you might draw from that story is to not simply let code keep running forever without taking a look at it… and that’s pretty much how we ended up with the 2038 bug in the first place. The video below goes into a bit more detail…

This Week in Programming

  • “Another Unsafe Sh*tstorm”: If you’ve heard the rumblings but missed the story, DevClass has the tale of a Rust framework developer who said “I’m done with Open Source” before apparently having second thoughts. The apparent “ragequit” was cited widely last week as further evidence of the unsustainability of smaller open source projects, but there’s much more to the story in terms of morals and examples of people behaving badly online. That sounds like a reality TV show, doesn’t it? The project in question is actix, which was open sourced in 2017 by Microsoft engineer Nikolay Kim. At the heart of the issue is the amount of “unsafe” code used in the project, which many contend was far too much for a language built on the idea of memory safety. The article lays out the numerous back-and-forths, but the gist of the scenario is that actix is a small but very popular open source project that can’t keep up with the demands of the community using it. In response to a heated Reddit debate (which followed previous heated debates), the project’s sole creator up and quit before deciding that “it would be unfair to just delete repos” and handing over control to a contributor.
  • What Is Rust, Anyhow? Speaking of Rust, StackOverflow has an article for those of you who remain unfamiliar with the language, answering the questions of what Rust is and why it is so popular. As the article notes, the language has been the most loved language on StackOverflow for four years in a row, but the fact remains that roughly 97% of survey respondents haven’t used Rust. So what is it? “The short answer,” they write, “is that Rust solves pain points present in many other languages, providing a solid step forward with a limited number of downsides.” The blog post breaks it down into different sections for different developers, whether you’re used to garbage collected, statically typed, or other systems languages. So, if Rust is something you’ve just heard of but not really looked at, this is a good place to start to get all the ups, downs, and in-betweens.
  • A Haskell-esque Language for Distributed Systems: InfoWorld brings us the story of Unison, a functional language that touts immutable code and says it can “describe entire distributed systems with a single program.” The open source language was “founded on the core notion that code is immutable and identified by its content,” writes InfoWorld, and its other core idea is that it is built for distributed systems. Now in a public alpha release, with multiple milestone alpha releases planned, Unison is due for a production release later this year. If experimental, alpha languages are your cup of tea, head on over to the project website and the Unison GitHub repo to see more.
  • Python’s As Fast as Go and C++? When you think of Python, your first thought may not be “blazingly fast,” but iProgrammer brings us a scholarly paper on the topic that compares Python with Go and C++, using the N-queens problem as the benchmark. While the headline says that Python is as fast as Go and C++, that statement should of course be qualified, but it is nonetheless warranted. The crux comes here: “Implementing this in Python, Go and C++ quickly demonstrated that Python was slow, but included error checking that was missing from C++. To see if things could be improved, the same code was compiled using the Numba Python compiler. […] once compiled using the Numba compiler it becomes competitive with C++ and Go in terms of execution speed while still allowing for very fast prototyping.” Also of note, upstart language Julia is found to be among the fastest, although this mention is relegated to the appendix.
  • IntelliJ IDE in 2020: Lastly for the week, JetBrains’ IDE IntelliJ offers a peek into its features roadmap for 2020, this time looking at some more high level features. In the year ahead, IntelliJ IDEA will include localization for Asian markets, the ability to use the IDE as a general-purpose text editor (without it annoyingly creating an empty project file each time you do), git staging support, and further text completion based on machine learning. While we’re talking about JetBrains, all you Kotlin users (or non-users — they want you too) can go ahead and fill out the 2019 Kotlin census, which will help the team behind the popular Android Java alternative work in the year ahead to make the language better for you. If that’s not enough, know that by filling out the census, you also enter a raffle for KotlinConf tickets and a Kotlin t-shirt. So, there’s that.

Feature image by Elias Sch. from Pixabay.

The post This Week in Programming: The Next Y2K Is Already Here appeared first on The New Stack.

Deploying Your First Knative Service with the Serverless Framework

One of the biggest ongoing conversations that I see when talking about modern microservice architectures is people asking "Should I be running that on containers or serverless?". Well, that's not entirely true. In fact, it is usually more of a vehemently opinionated response about why I should be using one or the other. My favorite example of this ongoing conversation is probably Trek10's Serverless vs. Containers Rap Battle:

The somewhat surprising conclusion (considering the format of the discussion) is that both approaches are perfectly suited to different use cases.

How Kubernetes and Serverless Make Each Other Better

We can take this a step further and say that both architectural patterns provide us insight into the limitations and potential improvements of the other.

Kubernetes has a reputation for operational complexity that serverless infrastructure like AWS Lambda aims to eliminate entirely. The broader community around Kubernetes is constantly innovating to create tools like Knative that address these concerns and simplify the experience for developers and operators.

Serverless technologies on the other hand, have a reputation for provider-imposed limitations such as cold starts and runtime length limits. Many of these concerns are starting to be addressed or are now solved problems on some platforms.

At Serverless, it's safe to say we think this conversation is a legitimate one, and we want to contribute to it with new tools that support the best of both worlds. Because of this, the Serverless Framework now supports integrating with Knative - a tool to help build serverless applications on top of Kubernetes. We think that Knative can be a logical choice for many workloads, especially those that require multi-cloud portability due to internal or regulatory requirements.

Getting Started

There are a few prerequisite steps to getting started with the Serverless Framework Knative plugin. First, you'll need a Kubernetes cluster with Knative installed. Because of the open source nature of Kubernetes you have a lot of different options for this. You might choose to install it on any of a plethora of cloud providers or even in your own data center. For this demo, we'll leverage Google Cloud Platform.

Creating a Kubernetes Cluster on the Google Cloud Platform

To get your Kubernetes cluster up and running in GCP, you'll need to create a Google Cloud Platform account.

Create Your GCP Account

Go to https://cloud.google.com/ and create an account. As of this tutorial, Google offers a $300 credit towards using GCP. I'll try to keep you within that credit allotment and the default Google limitations, but keep in mind that while Kubernetes clusters can scale up and down, they still have a minimum node count of three and will be running even when they aren't in use. As the last step in this guide, I'll show you how to delete your cluster.

Create a Project in GCP

After you have an account up and running, you can create a project using the Google Cloud Console here. I named mine sls-kubernetes-project to keep things straight:

Screenshot of Google Cloud Project UI

Make sure you keep hold of that value, whatever you name yours, because we'll be using it later.

Install and Configure the Google SDK

To create your Kubernetes cluster and interface with it, you'll need the Google Cloud SDK. It provides a CLI (gcloud) to do everything you need. Depending on your operating system, you can get started here.
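
If you're on Linux or macOS, one way to install and initialize the SDK looks roughly like this (Google also offers OS-specific installers, so treat this as a sketch rather than the only path):

curl https://sdk.cloud.google.com | bash    # interactive installer
exec -l $SHELL                              # restart your shell so gcloud lands on your PATH
gcloud init                                 # sign in and choose a default project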

If you followed the installation instructions for the SDK you probably also authenticated it with your Google Cloud account. If not, you can do this with: gcloud auth login

When going through this process it should prompt you to select a project. Make sure to select the project you just created, sls-kubernetes-project in my case.

After that, let's set some environment variables to make creating our cluster a bit easier. We'll set one for our cluster name, our cluster zone (where we're deploying to in GCP), and our project name.

export CLUSTER_NAME=slsknative
export CLUSTER_ZONE=us-west1-c
export PROJECT=sls-kubernetes-project

I've used slsknative for my cluster name; you can use knative or something else that doesn't conflict with any other clusters you might have running and that follows the naming conventions for a cluster.

Now make sure the project is set as the default in your Google Cloud CLI settings. You can check this with gcloud config list. If the output includes your project name like this, you're good to go:

project = sls-kubernetes-project

Otherwise, set the existing project as your default with this command after you set the $PROJECT environment variable:

gcloud config set project $PROJECT

Next, let's enable some of the APIs for the services we're going to use on Google Cloud:

gcloud services enable \
  cloudapis.googleapis.com \
  container.googleapis.com \
  containerregistry.googleapis.com

After this command completes, we should be ready to create our Kubernetes cluster!

Create Your Kubernetes Cluster

Now for the hard part (sort of; Google makes this surprisingly easy). You'll use the following command to create a Kubernetes cluster in Google Cloud:

gcloud beta container clusters create $CLUSTER_NAME \
  --addons=HorizontalPodAutoscaling,HttpLoadBalancing,Istio \
  --machine-type=n1-standard-2 \
  --cluster-version=latest --zone=$CLUSTER_ZONE \
  --enable-stackdriver-kubernetes --enable-ip-alias \
  --enable-autoscaling --min-nodes=1 --max-nodes=10 \
  --enable-autorepair \
  --scopes cloud-platform

So, what's this doing? Well, we're asking GCP to create a new cluster and passing in some standard configuration so the cluster will work with Knative.

First, we add some addons like Istio that work well with Knative. We also specify the machine type we want for our cluster nodes. I'm using the slightly smaller n1-standard-2 machine type because Kubernetes clusters have a minimum of three nodes, and as of this demo Google Cloud limits newly created accounts to 8 vCPUs in a single region; three n1-standard-2 nodes use two vCPUs each, or six in total, which stays under that limit. You can spin up a more robust cluster with larger instances, but you might end up needing to activate the account and make sure your limits are increased.

You'll notice that I also have auto-scaling enabled in this command, but in this case I might end up hitting some of those account limits if I scaled too far.

After the cluster finishes creating, you'll need to grant yourself admin permissions on it. You can do that with this command:

kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value core/account)

Once you have those administrator permissions, you'll be able to use kubectl to interact with the cluster and install Knative. If you've installed Docker on your machine before, you may see a warning about kubectl here or later on that looks like this:

WARNING:   There are older versions of Google Cloud Platform tools on your system PATH.
  Please remove the following to avoid accidentally invoking these old tools:

  /Applications/Docker.app/Contents/Resources/bin/kubectl

Just make sure you restart your terminal at this point. The Google Cloud SDK installer likely changed your PATH, so you should then find kubectl at ~/google-cloud-sdk/bin/kubectl. If it didn't, just make sure you're using the Google Cloud SDK's kubectl installation or some other recent installation. The easiest way to verify this is to run which kubectl and confirm that it references the location in the Google Cloud SDK folder.
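
As a quick sanity check, something like this should do it (the exact path will differ depending on where the SDK ended up on your machine):

which kubectl
# expect a path inside the SDK, e.g. ~/google-cloud-sdk/bin/kubectl
# if it still points at the Docker.app copy, put the SDK's bin directory first on your PATH:
export PATH="$HOME/google-cloud-sdk/bin:$PATH"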

Installing Knative on Our Cluster

So now we're ready to install Knative on our Kubernetes cluster! First, we'll run this command which helps avoid race conditions in the installation process:

kubectl apply --selector knative.dev/crd-install=true \
--filename https://github.com/knative/serving/releases/download/v0.11.0/serving.yaml \
--filename https://github.com/knative/eventing/releases/download/v0.11.0/release.yaml \
--filename https://github.com/knative/serving/releases/download/v0.11.0/monitoring.yaml

Then we can actually complete the install with this command:

kubectl apply \
--filename https://github.com/knative/serving/releases/download/v0.11.0/serving.yaml \
--filename https://github.com/knative/eventing/releases/download/v0.11.0/release.yaml \
--filename https://github.com/knative/serving/releases/download/v0.11.0/monitoring.yaml

This will get all the Knative goodies we need into our Kubernetes cluster. While it installs, we just need to wait for a few minutes and monitor the installation of the Knative components until they are all showing a running status. We do that with these three commands:

kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-eventing
kubectl get pods --namespace knative-monitoring

Run each of them every few minutes and confirm that all of the results show a status of Running. When that's complete, you should have Knative up and running on Kubernetes in Google Cloud!
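
If you'd rather not re-run those commands by hand, kubectl can also watch a namespace for changes; this is purely a convenience and works the same way for the other two namespaces:

kubectl get pods --namespace knative-serving --watch    # press Ctrl+C once everything shows Running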

Using the Serverless Framework and Knative

Now that we've got our cluster and Knative set up, we're ready to start using the Serverless Framework!

First, make sure you have Node.js 8 or later installed on your local machine. Then, if you still need to install the Serverless Framework, run the following npm command to install it on your machine:

npm install --global serverless

Next up we need to create a new Serverless Framework project with the knative-docker template and then change directories into that project:

serverless create --template knative-docker --path my-knative-project

cd my-knative-project

Because we’re using the serverless-knative provider plugin we need to install all the dependencies of our template with npm install before we do anything else. This will download the provider plugin that was listed as a dependency in the package.json file.

Next, let's take a look at the serverless.yml file in our project which looks like this:

service: my-knative-project

provider:
  name: knative
  # optional Docker Hub credentials you need if you're using local Dockerfiles as function handlers
  docker:
    username: ${env:DOCKER_HUB_USERNAME}
    password: ${env:DOCKER_HUB_PASSWORD}

functions:
  hello:
    handler: hello-world.dockerfile
    context: ./code
    # events:
    #   - custom:
    #       filter:
    #         attributes:
    #           type: greeting
    #   - kafka:
    #       consumerGroup: KAFKA_CONSUMER_GROUP_NAME
    #       bootstrapServers:
    #         - server1
    #         - server2
    #       topics:
    #         - my-topic
    #   - awsSqs:
    #       secretName: aws-credentials
    #       secretKey: credentials
    #       queue: QUEUE_URL
    #   - gcpPubSub:
    #       project: knative-hackathon
    #       topic: foo
    #   - cron:
    #       schedule: '* * * * *'
    #       data: '{"message": "Hello world from a Cron event source!"}'

plugins:
  - serverless-knative

This is the Serverless Framework service definition which lists Knative Serving components as functions with their potential event sources as events.

You might be thinking: this looks too simple. How is the Serverless Framework connecting to my cluster? Well, by default, it uses the ~/.kube/config that was created on your machine when you set up your cluster. To get other developers started, you'll also need to make sure they have access to your Kubernetes cluster and have their own kubeconfig file.
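
For a GKE cluster like the one in this guide, one way for a teammate to generate their own kubeconfig entry is with gcloud, assuming they have the Google Cloud SDK installed and have been granted access to the project:

gcloud container clusters get-credentials $CLUSTER_NAME \
  --zone $CLUSTER_ZONE --project $PROJECT

That command writes an entry for the cluster into their ~/.kube/config, which is what the Serverless Framework reads by default.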

Also, one critical part of the above is the Docker Hub section. At the moment, that section allows you to specify credentials so that your local Docker image and the code in the code directory can be pushed to Docker Hub and used by Knative. For this to work, you'll need a Docker Hub account and to set the Docker credentials as environment variables locally. On a Mac you can set those environment variables like this:

export DOCKER_HUB_USERNAME=yourusername
export DOCKER_HUB_PASSWORD=yourpassword

Once the Docker Hub credentials are set as environment variables we can deploy a service to our Kubernetes cluster:

serverless deploy

After the process finishes, invoking our new service is as easy as:

serverless invoke --function hello

And congratulations! After you see a response, you've just deployed your first Serverless Framework service using Knative, Kubernetes and Google Cloud!

Now, if you need to remove the Knative Service you can use:

serverless remove

This should remove the Knative service but keep in mind that your Kubernetes cluster is still running! If you'd like to remove the cluster to save yourself some money you can run this command:

gcloud container clusters delete $CLUSTER_NAME --zone $CLUSTER_ZONE

That should delete your cluster, but to be safe make sure to also confirm that it worked by checking inside of the GCP UI for your cluster.

Now there's a lot more you can do as you continue to work with Knative. You'll probably want to try customizing your Docker containers with more interesting services and integrating your Knative cluster with events from sources like Google Cloud Pub/Sub, Kafka, or AWS Simple Queue Service. There are a lot of possibilities, and we can't wait to see what you do with it!

Are you interested in guides on particular event sources or topics related to Knative and Serverless Framework? Leave us a comment below!

The Serverless Framework Knative Integration

Modern Microservices - Containers and Serverless

Over the last decade a lot has changed in the cloud computing landscape. While most application workloads were deployed as monolithic applications on dedicated servers or VMs in early 2009, we are now seeing a shift towards smaller, self-contained units of application logic which are deployed individually and together make up the whole application. This pattern of application development and deployment is often dubbed "microservice architecture." Its adoption was greatly accelerated when Docker, a container creation and management tool, was first released in early 2013 and Google decided to open source Kubernetes, a container orchestration system.

Nowadays complex applications are split up into several services, each of which deals with a different aspect of the application such as "billing", "user management" or "invoicing". Usually different teams work on different services which are then containerized and deployed to container orchestration systems such as Kubernetes.

Given that such software containers are self-contained and include all the necessary libraries and dependencies to run the bundled application, AWS saw the potential to offer a hosted service based on such containerized environments, where individual functions could be deployed and hooked up to existing event sources such as storage buckets, which in turn invoke the function whenever they emit an event. AWS announced this new service offering, called AWS Lambda, in 2014.

While initially invented to help with short-lived, data-processing-related tasks, AWS Lambda quickly turned into the serverless phenomenon, where applications are now split up into different functions which are executed when infrastructure components such as API gateways receive a request and emit an event. We at Serverless, Inc. invested heavily in this space and released the Serverless Framework CLI, our open source tooling which makes it easier than ever to deploy, manage and operate serverless applications.

Given the huge adoption of serverless technologies due to properties such as cost, management, and resource efficiency, Google decided in 2018 to open source Knative, a serverless runtime environment which runs on top of Kubernetes. Since its inception, several companies have joined the Knative effort to make it easier than ever to deploy and run serverless workloads on top of Kubernetes.

Containers vs. Serverless

Given all those new technologies developers are often confused as to which technology they should pick to build their applications. Should they build their application stack in a microservice architecture and containerize their services to run them on top of Kubernetes? Or should they go full serverless and split their application up into different functions and connect them to the underlying infrastructure components which will invoke the functions when something happens inside the application?

This is a tough question to answer, and the right answer is: "it depends." Some long-running workloads might be better suited to running in containers, while other, short-lived workloads might be better deployed as serverless functions which automatically scale up and down to zero when not in use.

Thanks to Knative, it doesn't have to be a question of "either containers or serverless". Knative makes it possible to run both container and serverless workloads in one and the same Kubernetes cluster.

Announcing the Serverless Knative provider integration

Today we’re excited to announce the Serverless Framework Knative provider integration!

Our serverless-knative provider plugin makes it easy to create, deploy and manage Knative services and their event sources.

This first beta release comes with support to automatically build and deploy your functions as Knative Serving components and to connect them to event sources via the Knative Eventing component. All such workloads and configurations can be deployed on any Kubernetes cluster, whether it's running in the cloud, on bare metal or on your local machine.

While working on this integration we focused on ease of use and therefore abstracted some of the rather involved implementation details away into a cohesive developer experience our Serverless Framework users are already familiar with.

Are you excited and want to learn more? Take a look at our tutorial to get started with your first service!

How to Troubleshoot Serverless API’s

Building APIs is by an order of magnitude the most common use case we see for serverless architectures. And why not? It's so easy to combine API Gateway and AWS Lambda to create API endpoints that have all the disaster recovery and load management infrastructure you need by default. Combine that with the Serverless Framework and creating them is as easy as:

functions:
  myfunction:
    handler: myhandlerfile.myhandlerfunction
    events:
      - http:
          path: myendpoint
          method: get

But how do we go about debugging and troubleshooting our APIs? CloudWatch within AWS does (sort of) give us easy access to our Lambda logs, and we can turn on API Gateway logging. But this doesn’t provide us all the info we need if our API begins to have trouble.

This is pretty much the entire reason we created Serverless Framework Pro: to help users of the Serverless Framework monitor and debug their serverless services, APIs chief among them.

And if this is the first time you are hearing about this, let me introduce you to the Serverless Framework Pro dashboard with a 2 minute YouTube video to get you up to speed.

If you would like to know how to connect one of your services to the dashboard, make sure you have the most recent version of Serverless installed (npm i -g serverless, or serverless upgrade if you use the binary version) and then run the command serverless in the same folder as your service. You will be walked through setting everything up.

Log to CloudWatch

When you are trying to debug, you need data to help you determine what may have caused any problems. The easiest way to get it is to make sure you use your runtime's logging method when you need to. For example, in a Node.js Lambda we can capture any errors that come up when we make calls to other AWS resources such as DynamoDB. Writing code that logs the appropriate error data in this case may look something like this:

const AWS = require('aws-sdk')

// Query the user table for a single id; the query object is logged on failure for context
const query = {
  TableName: process.env.DYNAMODB_USER_TABLE,
  KeyConditionExpression: '#id = :id',
  ExpressionAttributeNames: {
    '#id': 'id'
  },
  ExpressionAttributeValues: {
    ':id': 'someid'
  }
}
let result = {}
try {
  const dynamodb = new AWS.DynamoDB.DocumentClient()
  result = await dynamodb.query(query).promise()
} catch (queryError) {
  console.log('There was an error attempting to retrieve the data')
  console.log('queryError', queryError)
  console.log('query', query)
  return new Error('There was an error retrieving the data: ' + queryError.message)
}

With this arrangement, if our query to DynamoDB errors out for some reason, looking at the logs will tell us exactly why. The same pattern can be applied to almost any code that has the possibility of erroring out while executing.

Aggregate monitoring

Before we can troubleshoot any specific error, we first need to know that errors are happening at all! Especially when you are dealing with a busy production system, it can be hard to tell whether your users are experiencing any errors, and this is where Serverless Framework Pro comes into its own with the service overview screen.

By just glancing at the charts provided here, you can immediately see if any API requests or Lambda invocations have returned as errors and in some way affected your users, even if they themselves are not aware of it.

Image showing error bars

With the image above, I don't need to wait for a user to complain or report an error; I can instantly see that some errors started happening around 7pm. But it doesn't end there. It would be even better if I didn't have to watch these charts at all and just got notified when something happens.

This is where the Serverless Framework Pro notifications come in. By going into my app settings and choosing notifications in the menu, I can have notifications sent to one or more email addresses, posted to a Slack channel, delivered to a webhook, or even published to SNS so that my own Lambda function, for example, can process those notifications however I want.

Notifications options

You can configure these per service and per stage and have as many notification configurations as you wish; perhaps dev stages report via email since they aren’t critical but errors in production always go to a Slack channel for the whole team.

Retrieving error details

Since I am now able to see and be alerted to errors, I need some way to help me figure out what the error is and how to fix it. This becomes relatively easy with Serverless Framework Pro again.

Overview showing errors

You start off with an overview screen such as this one, where I can see some errors. Let me click on that…

Errors List

Now I can see some summary information about the errors within that time frame. Let me select one to drill down further.

Stack trace and logs

Scrolling down a bit on the next view I can see that Serverless Framework Pro is giving me a stack trace of the line of code in my handler that threw the error so I know exactly where to look. And because of my detailed console.log lines, my CloudWatch log shows me the data related to the error. (Obviously I deliberately generated an error for demo purposes here, but the same applies for actual errors as well).

NOTE: CloudWatch logs are pulled in from your AWS account. They are not stored anywhere within Serverless Framework Pro, so when I open this detailed view, Serverless Framework Pro makes a request to your AWS account to retrieve the logs. If you delete the CloudWatch log from your account it won’t be visible here either.

Prevention is better than cure

Up till now we’ve been looking at how to react to errors. But we can even take it one step further and keep our eyes out for issues that may cause a problem later. For example, if we have Lambda functions that generally run for a certain amount of time, say between 50 and 100 ms, and suddenly there is a spike where our Lambdas are running for over 200ms, this could indicate a potential problem brewing; perhaps some downstream provider is having issues and if we could get some warning ahead of time we could perhaps head that off at the pass. The same thing could apply for invocation count. Maybe we usually get a very steady flow of activity on our Lambda invocations and any sudden spike in invocations is something we need to know about.

Serverless Framework Pro already creates these alerts for you automatically and you can choose to have notifications of these alerts sent to you using the notifications system shown before.

Performance tweaking

Troubleshooting doesn’t have to be all about errors. We may need to meet certain performance criteria, and Serverless Framework Pro gives us ways to assess this too.

Assessing execution time

Every Lambda function can have a memory size value set. But this setting is not just about memory; it also affects CPU and network allocation in a linear way: if you double the memory, you double the effective CPU and network. By clicking through to the functions section in the menu on the left, and then selecting a specific function, you can see duration statistics with dashed vertical lines marking deployments. Now you can immediately see how a change you make affects the average execution time of your invocations after a deployment.

Function Duration Change

And you can do exactly the same for memory usage...

SDK and HTTP requests

Often in a Lambda we need to make requests to other AWS services via the AWS SDK or even HTTP requests out to other 3rd party services, and these can have definite impact on the performance of our endpoints. So being able to gauge this impact would be really useful.

Again, Serverless Framework Pro makes it possible to investigate this. Within the detailed view of a Lambda, we can see the spans section that will indicate to us if our outgoing requests are slower than they should be. Remember the issue with third party services mentioned above? Well, with spans we can see how long requests can take and then take appropriate action.

Spans for AWS SDK

Pushing data at runtime

However, not all the data we want to look at is as vanilla and easy to capture as what we have seen so far. Sometimes we need to analyse metrics and data that are only available at runtime, which is why the Serverless Framework Pro SDK incorporates a number of features to make tracking this data a little easier. By default, Serverless Framework Pro overloads the context object at runtime and provides some additional functions for runtime data capture.

All these options are documented on the Serverless website and include options for Node and Python runtimes.

Capture Error

There may be cases where we would like to know about a potential error without actually returning an error to the end user making the request. So instead we can use the captureError method:

if (context.hasOwnProperty('serverlessSdk')) {
  context.serverlessSdk.captureError('Could not put the user')
}
return {
  statusCode: 200,
  headers: {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Credentials': true,
    'Access-Control-Allow-Headers': 'Authorization'
  }
}

As you can see from the above, we just push an error message out but ultimately return a 200 response. And our monitoring will show it as an error.

Captured Errors

Capture Span

And we can do the same for capturing any code that may take time to execute. We can wrap that code in our own custom span and see the performance data made available to us:

// assumes bcrypt was required earlier in the handler, e.g. const bcrypt = require('bcrypt')
if (context.hasOwnProperty('serverlessSdk')) {
  await context.serverlessSdk.span('HASH', async () => {
    return new Promise((resolve, reject) => {
      bcrypt.hash('ARANDMOMSTRING', 13, (err, hash) => {
        if (err) return reject(err)
        resolve(hash)
      })
    })
  })
}

The above produces the following span:

Custom Span of the Hash

You can immediately tell, just looking at that, that your focus for any optimisation needs to be on that HASH span. Trying to optimise anything else wouldn’t make sense.

Capture Tag

Lastly, there exists a way to capture key-value pairs from invocations at run time that can be filtered for in the explorer view. Maybe an example will make this a little easier to grasp.

You have built a checkout process that captures a user's credit card details and then passes those details on to a third-party payment provider. A lot of us will have built such functionality in the past. Usually the response, after passing those details along, will indicate success or failure and often even explain why it failed: lack of funds, expired card, declined by bank, etc. We can tag these various states to make them easier to search through later. It basically lets you pass a key, a value and additional context data if you need it:

if (paymentProvider.status === 'success') {
  context.serverlessSdk.tagEvent('checkout-status', event.body.customerId, paymentProvider.response)
}

This allows you to find all invocations that relate to a specific customer ID, so if we ever need the specific logs from the payment provider processing the card details, we can easily filter by that customer ID.


Serverless Framework Pro has a generous free tier for anyone building a Serverless application to use. It requires nothing more than signing up here.

If you would like to see these features in action, then feel free to sign up for our webinar (https://serverless.zoom.us/webinar/register/WN_7GpfDR5sT-qsUmovARuvrg) on 6 February.

F# Weekly #4, 2020 – Azure Functions 3.0 GA, How Fantomas works and F# on Win 3.11

Welcome to F# Weekly,
A roundup of F# content from this past week:

News

Videos and Slides

Blogs

F# vNext

GitHub projects

New Releases

That’s all for now. Have a great week.

Previous F# Weekly edition – #3, 2020

What does “Max # of APIs” mean in Postman?

This question popped up a few times from readers of our recent “Announcing Updated Postman Plans and Pricing” blog post, and we wanted to give you a full answer to the question with some additional background.

Here’s the short answer to this question:
The “maximum number of APIs” refers to the number of APIs you can create in Postman. It does NOT refer to how many APIs you can access with Postman; to be clear, there is no limit to the number of APIs you can access with Postman.

Here’s a bit more background on creating APIs in Postman:
Postman now enables you to create APIs directly within Postman via the API feature. Postman’s API feature includes:

  • The API tab and API elements
  • Extended schema support
  • Versioning and version tagging

With the API feature, you can define and manage different versions of your APIs, collection revisions, and other API elements linked to APIs like Postman collections, monitors, and mock APIs. You can also coordinate API changes easily with the help of API versioning and tagging.

And there are more benefits: your schema becomes the source of truth which defines every variation of the API; collections can be used as individual recipes that use endpoints made available by the API, tailored to specific use cases; and when you map your real-world APIs into the new API tab, you can define, develop, test, and observe them directly within Postman.

Note: The “maximum number of APIs” limit applies to all team members, regardless of whether the APIs are created via the API feature in a personal workspace or a team workspace.

If you have further questions about these API limits in relation to your Postman plan, feel free to drop us a line anytime at thepostmanteam@postman.com.

The post What does “Max # of APIs” mean in Postman? appeared first on Postman Blog.
