Integration specialist. Linux aficionado. Web tinkerer. BizTalk and .NET developer. Chief Propeller Head at StoneDonut, LLC.

Making it easier to contribute to Netflix OSS


by Travis McPeak and Andrew Spyker

Contributing to open source software can be a very rewarding experience that creates opportunities to learn about new problems and technologies, apply problem solving skills, meet and work with new people, and join a community pursuing a common goal. Getting started can also be confusing and full of questions:

  • What should you start working on?
  • Who do you ask for help or direction?
  • What kind of style expectations do the maintainers of the project have?

To make it easier to get started, each of the projects featured at our recent OSS meetup (Repokid, BetterTLS, Stethoscope, and Hub Commander) provides contributing guidelines, hosts an online community where developers can communicate, and has tagged issues where help is wanted.

Every open source contributor starts with a first project, and we know first-hand how difficult it can be to get started working on a new project. To make the project on-boarding process easier, we’re creating contributing guidelines for our projects. The contributing guidelines explain how to get started, test new features, write code that adheres to the project’s coding standards, and get reviews when a change is ready.

Even with guidelines, new contributors often need a quick way to get feedback about ideas, features, or the best way to implement something. Each of the above-mentioned projects has an online community chat (usually on Slack or Gitter) where developers can ask questions or bounce ideas off other project developers. Get started by finding the community in the contributing guidelines or the project README.

Finally, sometimes a contributor may want to get started on a project but not be sure what to work on. To help, we’re tagging issues that are appropriate for new contributors (generally with the tag “difficulty: newcomer”, but please check each project’s contributing guidelines). These issues are perfect starting points for a developer who is new to the project or to open source. Developers already familiar with the project may also want to look at issues tagged “help wanted”. Tackling these issues may take a while longer but will make a real difference for the project.

We hold regular open source meetups at Netflix. Our most recent meetup featured security projects that want new contributors and have taken steps to make contributing easier than ever. If these changes are successful at making it easier for new contributors, we’ll expand them to other projects in the future. We look forward to your contributions!


Making it easier to contribute to Netflix OSS was originally published in Netflix TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.


Introducing the Amazon DynamoDB DataMapper for JavaScript – Developer Preview


We’re happy to announce that the @aws/dynamodb-data-mapper package is now in Developer Preview and available for you to try via npm.

The Amazon DynamoDB DataMapper for JavaScript is a high-level client for writing and reading structured data to and from DynamoDB, built on top of the AWS SDK for JavaScript.

Getting started

You can install the @aws/dynamodb-data-mapper package using npm or Yarn:


$ npm install --save @aws/dynamodb-data-mapper
$ yarn add @aws/dynamodb-data-mapper

In this blog post, we will also use the @aws/dynamodb-data-mapper-annotations package, which makes it easier to define your models in TypeScript. Using this package requires an additional installation step (yarn add @aws/dynamodb-data-mapper-annotations) and that your project be compiled with the experimentalDecorators and emitDecoratorMetadata options enabled. For details on these options, see TypeScript’s handbook on decorators.
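
If these compiler options aren’t already enabled in your project, a minimal sketch of the relevant tsconfig.json settings might look like this (the target value is only an illustrative placeholder):

{
    "compilerOptions": {
        "target": "es2017",
        "experimentalDecorators": true,
        "emitDecoratorMetadata": true
    }
}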

If your application doesn’t use TypeScript, or you don’t want to enable decorator processing in your application, you can use the mapper without the annotations package. We provide a full example in the Defining a model without TypeScript section of this blog post.

Defining a model

You can map any JavaScript class to a DynamoDB table by using the decorators supplied by the @aws/dynamodb-data-mapper-annotations package:

import {
    autoGeneratedHashKey,
    rangeKey,
    table,
} from '@aws/dynamodb-data-mapper-annotations';

@table('posts')
class Forum {
    @autoGeneratedHashKey()
    id: string;
    
    @rangeKey()
    createdAt: Date;
}

The attribute, hashKey, and rangeKey decorators attempt to use typing data emitted by the TypeScript compiler to infer the correct DynamoDB data type for a given property:

import {
    attribute,
    autoGeneratedHashKey,
    rangeKey,
    table,
} from '@aws/dynamodb-data-mapper-annotations';

@table('posts')
class Post {
    @autoGeneratedHashKey()
    id: string;
    
    @rangeKey()
    createdAt: Date;
    
    @attribute()
    authorUsername: string;
    
    @attribute()
    title: string;
}

You can also define embedded documents by omitting the table annotation:

import {
    attribute,
    autoGeneratedHashKey,
    rangeKey,
    table,
} from '@aws/dynamodb-data-mapper-annotations';

class PostMetadata {
    @attribute()
    draft: boolean;
    
    @attribute({memberType: 'String'})
    tags: Set<string>;
}

@table('posts')
class Post {
    // Attributes as defined in the previous example
    
    @attribute()
    metadata: PostMetadata;
}

Or you can include untyped or loosely typed attributes:

import {
    attribute,
    autoGeneratedHashKey,
} from '@aws/dynamodb-data-mapper-annotations';

class MyClass {
    @autoGeneratedHashKey()
    key: string;
    
    @attribute()
    untyped: any;
    
    @attribute()
    untypedList: Array<any>;
}

Check out the annotation package’s README for more examples.

Defining a model without TypeScript

You can also define a model in JavaScript by attaching a table name and schema using the DynamoDbTable and DynamoDbSchema symbols. These symbols are exported by the @aws/dynamodb-data-mapper package. The Post model defined previously could be defined without annotations as follows:

const {
    DynamoDbSchema,
    DynamoDbTable,
    embed,
} = require('@aws/dynamodb-data-mapper');
const v4 = require('uuid/v4');

class Post {
    // Declare methods and properties as usual
}

class PostMetadata {
    // Methods and properties
}

Object.defineProperty(PostMetadata.prototype, DynamoDbSchema, {
    value: {
        draft: {type: 'Boolean'},
        tags: {
            type: 'Set',
            memberType: 'String'
        }
    }
});

Object.defineProperties(Post.prototype, {
    [DynamoDbTable]: {
        value: 'posts'
    },
    [DynamoDbSchema]: {
        value: {
            id: {
                type: 'String',
                keyType: 'HASH',
                defaultProvider: v4,
            },
            createdAt: {
                type: 'Date',
                keyType: 'RANGE'
            },
            authorUsername: {type: 'String'},
            title: {type: 'String'},
            metadata: embed(PostMetadata)
        },
    },
});

For more information about the supported field types, see the README.

Operations with DynamoDB Items
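
Before running the examples below, the table backing the model must already exist. Here is a minimal sketch of one way to create it with the low-level DynamoDB client; it assumes the posts schema defined earlier (Date values appear to be stored as numeric epoch timestamps, hence the 'N' attribute type), and the throughput values are placeholders:

import DynamoDB = require('aws-sdk/clients/dynamodb');

const client = new DynamoDB({region: 'us-west-2'});

// Create the 'posts' table with 'id' as the hash key and
// 'createdAt' as the range key.
client.createTable({
    TableName: 'posts',
    AttributeDefinitions: [
        {AttributeName: 'id', AttributeType: 'S'},
        {AttributeName: 'createdAt', AttributeType: 'N'},
    ],
    KeySchema: [
        {AttributeName: 'id', KeyType: 'HASH'},
        {AttributeName: 'createdAt', KeyType: 'RANGE'},
    ],
    ProvisionedThroughput: {
        ReadCapacityUnits: 5, // placeholder capacity values
        WriteCapacityUnits: 5,
    },
}).promise().then(() => {
    // Table creation has started; wait for the table to become
    // ACTIVE before reading or writing items.
});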

With a model defined and its corresponding table created, you can create, read, update, and delete objects from the table. Let’s create a post using the model defined previously:

import {Post, PostMetadata} from './Post';
import {DataMapper} from '@aws/dynamodb-data-mapper';
import DynamoDB = require('aws-sdk/clients/dynamodb');

const client = new DynamoDB({region: 'us-west-2'});
const mapper = new DataMapper({client});

const post = new Post();
post.createdAt = new Date();
post.authorUsername = 'User1';
post.title = 'Hello, DataMapper';
post.metadata = Object.assign(new PostMetadata(), {
    draft: true,
    tags: new Set(['greeting', 'introduction', 'en-US'])
});

mapper.put({item: post}).then(() => {
    // The post has been created!
    console.log(post.id);
});

With the post’s ID (logged in the previous example), you can retrieve the record from DynamoDB:

const toFetch = new Post();
toFetch.id = postId; // the ID logged when the post was created
const fetched = await mapper.get({item: toFetch});

Or you can modify its contents:

fetched.metadata.draft = false;
await mapper.put({item: fetched});

or delete it from the table:

await mapper.delete({item: fetched});

Querying and scanning

You can use the schema and table name defined in your model classes to perform query and scan operations against the table they represent. Simply provide the constructor for the class that represents the records within a table, and the mapper can return instances of that class for each item retrieved:

import {Post} from './Post';
import {DataMapper} from '@aws/dynamodb-data-mapper';
import DynamoDB = require('aws-sdk/clients/dynamodb');

const client = new DynamoDB({region: 'us-west-2'});
const mapper = new DataMapper({client});

for await (const post of mapper.scan({valueConstructor: Post})) {
    // Each post is an instance of the Post class
}

The scan and query methods of the mapper return asynchronous iterators and automatically continue fetching new pages of results until you break out of the loop. Asynchronous iterators are currently a stage 3 ECMAScript proposal, so they may not be natively supported in all environments; the for await syntax can be used in TypeScript 2.3 or later, or with Babel using the @babel/plugin-transform-async-generator-functions plugin package.
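
If for await isn’t available in your environment, you can drive the iterator by hand through its next() method. A minimal sketch, assuming the mapper and Post model from the previous example:

async function printAllTitles() {
    const iterator = mapper.scan({valueConstructor: Post});
    let result = await iterator.next();
    while (!result.done) {
        // result.value is an instance of the Post class
        console.log(result.value.title);
        result = await iterator.next();
    }
}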

To query a table, you also need to provide a keyCondition that targets a single value for the hash key and optionally expresses assertions about the range key:

import {MyDomainClass} from './MyDomainClass';
import {DataMapper} from '@aws/dynamodb-data-mapper';
import {between} from '@aws/dynamodb-expressions';
import DynamoDB = require('aws-sdk/clients/dynamodb');

const client = new DynamoDB({region: 'us-west-2'});
const mapper = new DataMapper({client});

const iterator = mapper.query({
    valueConstructor: MyDomainClass,
    keyCondition: {
        hashKey: 'foo',
        rangeKey: between(10, 99)
    }
});

for await (const item of iterator) {
    // Each item is an instance of MyDomainClass
}

With both query and scan, you can limit the results returned to you by applying a filter:

import {Post} from './Post';
import {DataMapper} from '@aws/dynamodb-data-mapper';
import {equals} from '@aws/dynamodb-expressions';
import DynamoDB = require('aws-sdk/clients/dynamodb');

const client = new DynamoDB({region: 'us-west-2'});
const mapper = new DataMapper({client});
const iterator = mapper.query({
    valueConstructor: Post,
    filter: {
        ...equals('User1'),
        subject: 'authorUsername'
    }
});

for await (const post of iterator) {
    // Each post is an instance of the Post class
    // written by 'User1'
}

You can execute both queries and scans against the base table or against an index. To execute one of these operations against an index, supply an indexName parameter when creating the query or scan iterator:

const iterator = mapper.scan({
    valueConstructor: Post,
    indexName: 'myIndex'
});

Get involved!

Please install the package, try it out, and let us know what you think. The data mapper is a work in progress, so we welcome feature requests, bug reports, and information about the kinds of problems you’d like to solve by using this package.

You can find the project on GitHub at https://github.com/awslabs/dynamodb-data-mapper-js.


Marvin.JsonPatch 2.0.0 Released


A new major release of Marvin.JsonPatch has just been published (2.0.0). This new release adds support for dictionaries – many thanks to MartyZhou for the PR! Moreover, thanks to the backport from Microsoft.AspNetCore.JsonPatch (which started out as a port of Marvin.JsonPatch + Marvin.JsonPatch.Dynamic), a lot of features and stability improvements that were added over the past few months are now also part of Marvin.JsonPatch.

Wondering what Marvin.JsonPatch is?

JSON Patch (https://tools.ietf.org/html/rfc6902) defines a JSON document structure for expressing a sequence of operations to apply to a JavaScript Object Notation (JSON) document; it is suitable for use with the HTTP PATCH method. The “application/json-patch+json” media type is used to identify such patch documents.
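
As a concrete illustration, a small JSON Patch document might look like this (the operations, paths, and values are made-up examples):

[
    { "op": "replace", "path": "/title", "value": "Updated title" },
    { "op": "add", "path": "/tags/-", "value": "json-patch" },
    { "op": "remove", "path": "/draft" }
]

Applied in order, these operations replace the document’s title, append a tag to the tags array, and remove the draft property.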

One of the things this can be used for is partial updates for RESTful APIs or, to quote the IETF: “This format is also potentially useful in other cases in which it is necessary to make partial updates to a JSON document or to a data structure that has similar constraints (i.e., they can be serialized as an object or an array using the JSON grammar).”

That’s what this package is all about. Web API supports the HttpPatch method, but there’s currently no implementation of the JsonPatchDocument in .NET, making it hard to pass in a set of changes that have to be applied – especially if you’re working cross-platform and standardization of your API is essential. The package can be used on the client and/or on the server.
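
Because the patch document is plain JSON, any HTTP client can produce one and send it to a server that applies it with Marvin.JsonPatch. A hedged sketch using the browser fetch API (the URL and patch contents are made-up examples):

const patch = [
    {op: 'replace', path: '/title', value: 'Updated title'},
];

fetch('https://api.example.com/posts/42', {
    method: 'PATCH',
    headers: {'Content-Type': 'application/json-patch+json'},
    body: JSON.stringify(patch),
}).then((response) => {
    // response.ok indicates whether the server accepted the patch
});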

You can find the new version on NuGet. Happy coding! :)


Introducing Team discussions


Working together on software is so much more than writing code. Processes like planning, analysis, design, user research, documentation, and general project decision-making all play a part in the build process. Now there's a new way to talk through projects with your team.

[Demo of team discussions]

Give every conversation a home (and a URL)

Team discussions provide your team and organization members a place to share information with each other. Gone are the days of having your issues cluttered with discussions or your pull requests flooded with lengthy conversations that aren’t related to your code changes. Team discussions give those conversations a home and a URL on GitHub, so they can be shared easily across the platform or saved to reference later.

Start discussions from your dashboard

To get started with team discussions, navigate to your dashboard while logged in and choose a team from the new "Your teams" section on the right sidebar. Then click on your team to go to the discussion view. From there you can start a new discussion or join in on an existing one.

Chat with your team in public or private

All organization members can see your discussion posts by default. Mark your post as private if you have something more sensitive to share. Only direct team members will have access to the private post and its replies.

[Screenshot of a private post]

Building on top of the nested teams functionality, notifications cascade from parent to child teams, making it even easier to share important information throughout your organization.

[Screenshot of team discussions]

Get updates on conversations you care about

Having trouble staying in the know about what other teams within your organization are working on? Watch a team that you’re not a member of to stay up to date on their public discussion activity. If you’re worried about getting too many notifications, that’s okay, too! You can always subscribe to or unsubscribe from individual posts, or un-watch an entire team if the flow of information is too much.

[Screenshot of team discussions view]

Support for team discussions in the GitHub API v3 and v4 and GitHub Enterprise is coming soon. Stay tuned for even more features and functionality. Our goal is to provide you with a place to organize your thoughts, discuss ideas, and work through your team's toughest problems on GitHub.

To learn more, check out the documentation!


Autoscaling in Kubernetes



Kubernetes allows developers to automatically adjust cluster sizes and the number of pod replicas based on current traffic and load. These adjustments reduce the number of unused nodes, saving money and resources. In this talk, Marcin Wielgus of Google walks you through the current state of pod and node autoscaling in Kubernetes: how it works and how to use it, including best practices for deployments in production applications.
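
As a quick taste of pod autoscaling, you can attach a Horizontal Pod Autoscaler to an existing workload with a single command. A minimal sketch, assuming a Deployment named my-app already exists in your cluster:

$ kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80

Kubernetes will then scale the number of my-app replicas between 2 and 10, targeting an average CPU utilization of 80%.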

Enjoyed this talk? Join us for more exciting sessions on scaling and automating your Kubernetes clusters at KubeCon in Austin on December 6-8. Register Now >> 

Be sure to check out Automating and Testing Production Ready Kubernetes Clusters in the Public Cloud by Ron Lipke, Senior Developer, Platform as a Service, Gannett/USA Today Network.

Jenkins on Azure update - ACI experiment and AKS support


I attended a Jenkins Meetup a while back and saw how the engineering team of a local company leveraged Jenkins pipelines and microservices architecture to simplify their build pipelines. It’s obvious to me that they have everything figured out: the Jenkins infrastructure is all set up, and things are running well. I asked myself, what could Azure bring to the table?

When you scale out to Azure using the Azure Container Agent, Azure Container Instances (ACI) are ideal for transient workloads like spinning up a container agent to run your build and tests and push the build artifacts to, say, Azure Storage. You get an economical way to scale out without the burden of provisioning and managing the infrastructure. ACI also provides per-second billing based on the capacity you need. And because spinning up an ACI agent is easy and fast, you can simply tear the agent down when your build is complete; you do not pay for any idle time.

And with Docker, you can create the build environment you need on demand. There is no need to update or patch servers, freeing up resources from maintaining and upgrading your build agents.

If by now you are curious and wonder if you can move part of your build workload to the cloud, check out the step-by-step tutorial:

Jenkins ACI experiment

Leave us feedback on the pages if you have questions or any requests. We are listening.

Also, if you are wondering about AKS (managed Kubernetes), we just added support for it to our Azure Container Agent, as well as to the plugin for deploying to Kubernetes on Azure.
