Integration specialist. Linux aficionado. Web tinkerer. BizTalk and .NET developer. Chief Propeller Head at StoneDonut, LLC.

The actor model and using Akka.NET


Actor Model

Around the same time that the first object-oriented languages were emerging, another concept, inspired by general relativity and quantum mechanics, was taking shape – the actor model. In general terms, the actor model was defined in 1973 and was developed on a platform of multiple independent processors in a network. Similar to the object-oriented approach, this essentially mathematical model revolves around the concept of actors. An actor is the smallest structural unit of the actor model, and just like objects, actors encapsulate data and behavior. Unlike objects, however, actors communicate with each other exclusively through messages. Messages in actors are processed in a serial manner. According to the full definition, in response to a message an actor can do three things:

  • send a finite number of messages to other actors
  • create a finite number of new actors
  • designate the behavior to be used for the next message it receives
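The serial-processing idea behind these rules is easier to see in code. Here is a minimal, hand-rolled sketch of the mailbox concept (illustrative C# only, not Akka.NET, and covering just serial message processing, not actor creation or behavior switching):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// A minimal hand-rolled actor: a mailbox plus a single consumer loop.
// Because only one loop ever reads the mailbox, messages are processed
// strictly one at a time, with no locks in user code.
public class TinyActor
{
    private readonly BlockingCollection<string> _mailbox = new BlockingCollection<string>();

    public TinyActor(Action<string> behavior)
    {
        Task.Run(() =>
        {
            foreach (var message in _mailbox.GetConsumingEnumerable())
                behavior(message); // serial processing of the mailbox
        });
    }

    // The only way to affect the actor from outside is to send it a message.
    public void Tell(string message) => _mailbox.Add(message);
}
```

For example, `new TinyActor(Console.WriteLine).Tell("hello")` hands the string to the actor's own processing loop instead of invoking behavior directly.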


Some authors claim that actors are actually the most stringent form of objects. Let's not forget that objects in Smalltalk-80 could hold state and send and receive messages, and that sounds awfully like the definition of an actor. Apart from that, we can definitely see a great benefit in this model, especially in concurrent, parallel processing environments and distributed systems. This is because actors can affect each other only through messages, which eliminates all locks. We can also find a use for this concept in the rising world of microservices: we can consider each microservice to be, in fact, an actor in its own process.
What is great about this model is that we can apply the best object-oriented practices to it. It seems natural to use actors in combination with the single responsibility principle and make each actor do one thing (again pushing us toward the concept of microservices). We should also notice the importance of messages. They are no longer just carriers of data but also, in a more abstract manner, carriers of behavior: what an actor will do depends on what message it received. This brings us to the fact that in actor systems, messages should be kept immutable, so they don't change in the middle of processing and thereby affect the behavior of the system. This also keeps race conditions to a minimum.
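To make the immutability point concrete, here is a general C# sketch (not tied to any framework; the type name is made up for illustration) of a message whose state is fixed at construction time:

```csharp
// An immutable message: both properties can only be assigned in the
// constructor, so no actor can mutate the message while another actor
// is still processing it.
public sealed class ArticleRead
{
    public string User { get; }
    public string Article { get; }

    public ArticleRead(string user, string article)
    {
        User = user;
        Article = article;
    }
}
```

With no setters at all (not even private ones), the compiler itself guarantees the message cannot change after it is sent.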
Another benefit of these systems is that they are inherently asynchronous. This can be considered a limitation too, because synchronous behavior is harder to achieve.


Akka.NET is a toolkit which allows us to create actor systems in an efficient and simple way in the .NET environment.


To start with Akka.NET, you should first install the package in your project, using the Package Manager Console:

Install-Package Akka

Also, to avoid a warning about deprecated serialization, install the Hyperion package too:

Install-Package Akka.Serialization.Hyperion -pre

and add this to your App.config file:

<configSections>
  <section name="akka" type="Akka.Configuration.Hocon.AkkaConfigurationSection, Akka" />
</configSections>

<akka>
  <hocon>
    <![CDATA[
      akka {
        actor {
          serializers {
            hyperion = "Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion"
          }
          serialization-bindings {
            "System.Object" = hyperion
          }
        }
      }
    ]]>
  </hocon>
</akka>

Simple Use Case

When working with an actor system, the first thing we need to define is the message type the actor will react to.

public class Message
{
    public string Text { get; private set; }

    public Message(string text)
    {
        Text = text;
    }
}

Once that is defined, an actor class can be created by extending the abstract ReceiveActor class:

public class MessageProcessor : ReceiveActor
{
    public MessageProcessor()
    {
        Receive<Message>(message => Console.WriteLine("Received a message with text: {0}", message.Text));
    }
}

And consume that actor like this:

public class Program
{
    static void Main(string[] args)
    {
        var system = ActorSystem.Create("ActorSystem");
        var actor = system.ActorOf<MessageProcessor>("actor");

        actor.Tell(new Message("This is first message!"));

        // Keep the process alive so the actor has a chance to process the message.
        Console.ReadLine();
    }
}


How about something more complicated?

Ok, that was one easy example to get you started on how Akka.NET works in general. But let's consider something a little bit more complicated. Let's build a system that will collect data about how long each reader spent reading an article. The system will look something like this:

It will contain the following actors:

  • Blog Actor – Drives the whole system and receives messages from the simulated frontend. It will delegate messages to the rest of the system.
  • Reporting Actor – Gathers data from users and the blog, and displays it on the console.
  • Users Actor – Parent of individual user actors, used to delegate messages to correct user.
  • User Actor – Calculates how much time a user has spent reading a certain article and forwards that information to the Reporting Actor.

In order to drive the whole system there will be three types of messages:

  • StartedReadingMessage – Indicates that a user started reading an article.
  • StopedReadingMessage – Indicates that a user stopped reading an article.
  • ReportMessage – Sent from the User Actors to the Reporting Actor, carrying the measured reading time.

Since they all carry similar information, there is a base Message class. Here is its implementation:

public abstract class Message
{
    public string User { get; private set; }
    public string Article { get; private set; }

    public Message(string user, string article)
    {
        User = user;
        Article = article;
    }
}

We can see that the base message contains information about the user and about the article. The rest of the messages carry information about the action that is performed:

public class StartedReadingMessage : Message
{
    public StartedReadingMessage(string user, string article)
        : base(user, article) { }
}

public class StopedReadingMessage : Message
{
    public StopedReadingMessage(string user, string article)
        : base(user, article) { }
}

public class ReportMessage : Message
{
    public long Milliseconds { get; private set; }

    public ReportMessage(string user, string article, long milliseconds)
        : base(user, article)
    {
        Milliseconds = milliseconds;
    }
}

The main program drives this simulation by initializing the system as a whole and sending messages to the Blog Actor:

static void Main(string[] args)
{
    ActorSystem system = ActorSystem.Create("rubikscode");

    IActorRef blogActor = system.ActorOf(Props.Create(typeof(BlogActor)), "blog");

    blogActor.Tell(new StartedReadingMessage("NapoleonHill", "Tuples in .NET world and C# 7.0 improvements"));

    // Used for simulation.

    blogActor.Tell(new StartedReadingMessage("VictorPelevin", "How to use \"Art of War\" to be better Software Craftsman"));

    // Used for simulation.

    blogActor.Tell(new StopedReadingMessage("NapoleonHill", "Tuples in .NET world and C# 7.0 improvements"));

    // Used for simulation.

    blogActor.Tell(new StopedReadingMessage("VictorPelevin", "How to use \"Art of War\" to be better Software Craftsman"));

    Console.ReadLine();
}


As mentioned before, the Blog Actor delegates messages to the rest of the actors. It is also in charge of creating the Users Actor and the Reporting Actor. You may notice the use of the actor's Context property, which is used for creating child actors, and of the Props configuration class, which specifies options for the creation of actors.

public class BlogActor : ReceiveActor
{
    private IActorRef _users;
    private IActorRef _reporting;

    public BlogActor()
    {
        _users = Context.ActorOf(Props.Create(typeof(UsersActor)), "users");
        _reporting = Context.ActorOf(Props.Create(typeof(ReportActor)), "reporting");

        Receive<Message>(message =>
        {
            // Delegate each message to the Users Actor; the Reporting Actor
            // also receives start messages so it can count article views.
            _users.Tell(message);

            if (message is StartedReadingMessage)
                _reporting.Tell(message);
        });
    }
}

The Users Actor caches information about users and routes messages to each individual User Actor.

public class UsersActor : ReceiveActor
{
    private Dictionary<string, IActorRef> _users;

    public UsersActor()
    {
        _users = new Dictionary<string, IActorRef>();

        Receive<StartedReadingMessage>(message => ReceivedStartMessage(message));
        Receive<StopedReadingMessage>(message => ReceivedStopMessage(message));
    }

    private void ReceivedStartMessage(StartedReadingMessage message)
    {
        IActorRef userActor;

        if (!_users.TryGetValue(message.User, out userActor))
        {
            userActor = Context.ActorOf(Props.Create(typeof(UserActor)), message.User);
            _users.Add(message.User, userActor);
        }

        userActor.Tell(message);
    }

    private void ReceivedStopMessage(StopedReadingMessage message)
    {
        IActorRef userActor;

        if (!_users.TryGetValue(message.User, out userActor))
            throw new InvalidOperationException("User doesn't exist!");

        userActor.Tell(message);
    }
}


The implementation of the User Actor goes as follows:

public class UserActor : ReceiveActor
{
    private Stopwatch _stopwatch;
    private bool _isAlreadyReading;

    public UserActor()
    {
        _stopwatch = new Stopwatch();
        Receive<StartedReadingMessage>(message => ReceivedStartMessage(message));
        Receive<StopedReadingMessage>(message => ReceivedStopMessage(message));
    }

    private void ReceivedStartMessage(StartedReadingMessage message)
    {
        if (_isAlreadyReading)
            throw new InvalidOperationException("User is already reading another article!");

        _isAlreadyReading = true;
        _stopwatch.Restart();
    }

    private void ReceivedStopMessage(StopedReadingMessage message)
    {
        if (!_isAlreadyReading)
            throw new InvalidOperationException("User was not reading any article!");

        _isAlreadyReading = false;
        _stopwatch.Stop();

        Context.ActorSelection("../../reporting").Tell(new ReportMessage(message.User, message.Article, _stopwatch.ElapsedMilliseconds));
    }
}


And last, but not least, here is the implementation of the Reporting Actor. It gets data from each individual User Actor and from the Blog Actor, calculates the time spent on each blog post, and counts the number of views of each blog post.

public class ReportActor : ReceiveActor
{
    private Dictionary<string, long> _articleTimeSpent;
    private Dictionary<string, int> _articleViews;

    public ReportActor()
    {
        _articleTimeSpent = new Dictionary<string, long>();
        _articleViews = new Dictionary<string, int>();

        Receive<ReportMessage>(message => ReceivedReportMessage(message));
        Receive<StartedReadingMessage>(message => IncreaseViewCounter(message));
    }

    private void ReceivedReportMessage(ReportMessage message)
    {
        long time;
        if (_articleTimeSpent.TryGetValue(message.Article, out time))
            _articleTimeSpent[message.Article] = time + message.Milliseconds;
        else
            _articleTimeSpent.Add(message.Article, message.Milliseconds);

        Console.WriteLine("User {0} was reading article {1} for {2} milliseconds.", message.User, message.Article, message.Milliseconds);
        Console.WriteLine("Article {0} was read for {1} milliseconds in total.", message.Article, _articleTimeSpent[message.Article]);
    }

    private void IncreaseViewCounter(StartedReadingMessage message)
    {
        int count;
        if (_articleViews.TryGetValue(message.Article, out count))
            _articleViews[message.Article] = count + 1;
        else
            _articleViews.Add(message.Article, 1);

        Console.WriteLine("Article {0} has {1} views", message.Article, _articleViews[message.Article]);
    }
}

This is how the result of the simulation looks:
Blog Reporting Actor Simulation


The actor model gives us a different way of solving problems. Once you get into the message-driven mindset, you'll find the actor model to be of great value when it comes to designing large-scale, service-oriented systems. On the other side, Akka.NET gives us a framework in which we can create these systems fairly easily. Here we covered just the basic uses of Akka.NET, but it has many more features that can help you.

If you need more info about the actor model, I recommend this video.
For more information about Akka.NET, you can visit the official site.

Read other posts from the author at Rubik’s Code.

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.


An empirical study on the correctness of formally verified distributed systems


An empirical study on the correctness of formally verified distributed systems Fonseca et al., EuroSys’17

“Is your distributed system bug free?”

“I formally verified it!”

“Yes, but is your distributed system bug free?”

There’s a really important discussion running through this paper – what does it take to write bug-free systems software? I have a real soft-spot for serious attempts to build software that actually works. Formally verified systems, and figuring out how to make formal verification accessible and composable are very important building blocks at the most rigorous end of the spectrum.

Fonseca et al. examine three state-of-the-art formally verified implementations of distributed systems: IronFleet, Chapar: Certified causally consistent distributed key-value stores, and Verdi. Does all that hard work on formal verification verify that they actually work in practice? No.

Through code review and testing, we found a total of 16 bugs, many of which produce serious consequences, including crashing servers, returning incorrect results to clients, and invalidating verification guarantees.

The interesting part here is the kinds of bugs they found, and why those bugs were able to exist despite the verification. Before you go all “see I told you formal verification wasn’t worth it” on me, the authors also look at distributed systems that were not formally verified, and the situation there is even worse. We have to be a little careful with our comparisons here though.

To find bugs in unverified (i.e., almost all) distributed systems, the authors sample bugs over a one year period, from the issue trackers of a number of systems:

These unverified systems are not research prototypes; they implement numerous and complex features, have been tested by innumerable users, and were built by large teams.

The unverified systems all contained protocol bugs, whereas none of the formally verified systems did. (Still, I’ve never met a user who, upon having a system crash on them and generate incorrect results, said “Oh well, at least it wasn’t a protocol bug” 😉 ).

Now, why do I say we have to be a little careful with our comparisons? The clue is in the previous quote – the unverified systems chosen “have been tested by innumerable users.” I.e., they’ve been used in the wild by lots of different people in lots of different environments, giving plenty of occasion for all sorts of weird conditions to occur and trip the software up. The formally verified ones have not been battle tested in the same way. And that’s interesting, because when you look at the bugs found in the formally verified systems, they relate to assumptions about the way the environment the system interacts with behaves. Assumptions that turn out not to hold all the time.

Bugs in formally verified systems! How can that be?

The bugs found by the team fall into three categories. By far the biggest group of bugs relate to assumptions about the behaviour of components that the formally verified system interacts with. These bugs manifest in the interface (or shim layer) between the verified and non-verified components.

These interface components typically consist of only a few hundred lines of source code, which represent a tiny fraction of the entire TCB (e.g., the OS and verifier). However, they capture important assumptions made by developers about the system; their correctness is vital to the assurances provided by verification and to the correct functioning of the system.

Two of the sixteen found bugs were in the specification of the systems analyzed: “incomplete or incorrect specification can prevent correct verification.” The team also found bugs in the verification tools themselves – causing the verifier to falsely report that a program passes verification checks for example! All of these verifier bugs were caused by functions that were not part of the core components of the verifier.

Let’s come back to those misplaced assumptions though. What’s most interesting about them, is that many of these assumptions (with the benefit of hindsight!) feel like things the designers should obviously have known about and thought about. For example:

And these are developers trying their very best to produce a formally verified and correct system. Which I think just goes to show how hard it is to keep on top of the mass of detail involved in doing so.

There were also a few bugs found which would always be tough to discover, such as subtle gotchas lurking in the libraries used by the system.

In total, 5 of 11 shim layer bugs related to communication:

Surprisingly, we concluded that extending verification efforts to provide strong formal guarantees on communication logic would prevent half of the bugs found in the shim layer, thereby significantly increasing the reliability of these systems. In particular, this result calls for composable, verified RPC libraries.

How can we build real-world “bug-free” distributed systems?

After discovering these gaps left by formal verification, the authors developed a toolchain called “PK,” which is able to catch 13 of the 16 bugs found. This includes:

  • Building in integrity checks for messages, and abstract state machines
  • Testing for liveness using timeout mechanisms
  • A file system and network fuzzer
  • Using negative testing by actively introducing bugs into the implementation and confirming that the specification can detect them during verification.
  • Proving additional specification properties (to help find specification bugs). “Proving properties about the specification or reusing specifications are two important ways to increase the confidence that they are correct.”
  • Implementing chaos-monkey style test cases for the verifier itself. “We believe the routine application to verifiers of general testing techniques (e.g., sanity checks, test-suites, and static analyzers) and the adoption of fail-safe designs should become establish practices.”

Also of interest in this category are Jepsen, Lineage-driven fault injection, Redundancy does not imply fault tolerance: analysis of distributed storage reactions to single errors and corruptions, and Uncovering bugs in distributed storage systems during testing (not in production!).

The answer is not to throw away attempts at formal verification (“we did not find any protocol-level bugs in any of the verified prototypes analyzed, despite such bugs being common even in mature unverified distributed systems“). Formal verification can bring real benefits to real systems (see e.g., Use of formal methods at Amazon Web Services). I was also delighted to see that Microsoft’s Cosmos DB team also made strong use of formal reasoning with TLA+:

“When we started out in 2010, we wanted to build a system – a lasting system. This was the database of the future for Microsoft… we try to apply as much rigor to our engineer team as we possible can. TLA+ has been wonderful in getting that level of rigor in a team of engineers to set the bar high for quality.” – CosmosDB interview with Dharma Shukla on TechCrunch

Instead we must recognise that even formal verification can leave gaps and hidden assumptions that need to be teased out and tested, using the full battery of testing techniques at our disposal. Building distributed systems is hard. But knowing that shouldn’t make us shy away from trying to do the right thing, instead it should make us redouble our efforts in our quest for correctness.

We conclude that verification, while beneficial, posits assumptions that must be tested, possibly with testing toolchains similar to the PK toolchain we developed.

Here are a few other related papers we’ve covered previously in The Morning Paper that I haven’t already worked into the prose above:


F# Weekly #22, 2017 with 2017 F# survey results


Welcome to F# Weekly,

A roundup of F# content from this past week:




F# vNext

Open source projects

New Releases

That’s all for now. Have a great week.

Previous F# Weekly edition – #21


Filed under: F# Weekly Tagged: News:F# Weekly


Spinnaker Orchestration


Author: Rob Fletcher

When the Spinnaker project first started more than two years ago we implemented Orca — Spinnaker’s orchestration engine µservice — using Spring Batch. It wasn’t an entirely unreasonable fit at the time. It gave us atomic, compartmentalized units of work (tasks in a Spinnaker pipeline), retry semantics, the ability to define branching and joining workflows, listeners that could get notified of progress and many other things we needed. However, in the long run, that choice—and our implementation on top of it—imposed a number of constraints.

For a long time some of the operational constraints of Orca have been a source of frustration and not something we were proud of or keen to advertise.


The most obvious constraint was that Orca was a stateful service—it pinned running pipelines to a single instance. Halting that instance, whether due to a failure or a normal red-black deploy, would unceremoniously stop the pipeline in its tracks with no easy way to continue it.

In addition, Orca locked a thread for the entire duration of a pipeline which, although typically minutes long, not infrequently runs for hours or even days. It did this even when the pipeline was doing nothing more than polling periodically for a change, waiting for a predefined duration, or awaiting manual judgment before continuing.

When deploying a new version of Orca we’d have to allow work to drain from the old server groups. Although we automated this process (monitoring instances until they were idle before shutting them down) it wasn’t uncommon for a handful of instances to be hanging around for days, each one draining one or two long running canary pipelines.

Because of the way we mapped pipelines to Spring Batch jobs we had to plan the entire execution in advance, which is very limiting. We were forced to jump through all kinds of hoops to build functionality like rolling push deployments on top of such a static workflow model. It was also very hard to later implement the ability for users to restart pipelines after a stage failed or to automatically restart pipelines dropped in the case of instance failure as the mapping of pipeline to Spring Batch job initially wasn’t idempotent.

As an aside, I should point out that most of the constraints we struggled with are not inherent limitations of Spring Batch. It’s very good at what it does. But it’s really not intended as a general-purpose workflow engine and certainly not designed with distributed processing in mind.

Sabrina’s Christmas Wish

Despite these issues, things hung together well enough. We were aware of the limitations and chafed against them, but they never bit us so badly that we prioritized tackling them over some of the new features we were working on. Although, as the engineer who implemented most of Orca in the first place, I was desperate to fix what I saw as being my own mess.

I finally got that chance when the volume of internal use at Netflix hit a point that we decided it was time to tackle our resiliency and reliability concerns. The fact that Orca is ticking over when running between 2000 and 5000 pipelines per day (peaking at over 400,000 individual task executions some days) is not too shabby. However, with Spinnaker existing as the control plane for almost the entire Netflix cloud and the project growing in use in the wider community we felt it was time to harden it and make sure resiliency was something we had real confidence in.

To that end, we recently rolled out a significant change we dubbed “Nü Orca”. I’d like to take some time to introduce the changes and what makes them such an improvement.

Brand New Couch

Instead of using Spring Batch to run pipelines, we decided to implement our own solution using a simple command queue. The queue is shared across all the instances in an Orca cluster. The queue API has push operations for immediate and delayed delivery and a pop operation with acknowledgment semantics. Any message that goes unacknowledged for more than a minute gets re-delivered. That way, if we lose an Orca instance that’s in the process of handling a message, that message is simply re-delivered and will get picked up by another instance.

Messages on the queue are simple commands such as “start execution”, “start stage”, “run task”, “complete stage”, “pause execution”, etc. Most represent desired state changes and decision points in the execution while “run task” represents the atomic units of work that pipelines break down into. The intention is that messages should be processed quickly — in the order of seconds at most when running tasks that talk to other services such as CloudDriver.

We use a worker class — QueueProcessor — to pop messages from the queue. It is invoked by Spring’s scheduler with a 10ms delay between polls. The worker’s only job is to hand each message off to the appropriate MessageHandler. Once a handler has processed a message without any uncaught exceptions, the worker acknowledges the message. The call to the handler and the acknowledgment of the message happen asynchronously using a dedicated thread pool so they do not delay the queue polling cycle.
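The queue contract described above can be sketched in memory: push with optional delay, pop, and acknowledge, where any message left unacknowledged past its deadline becomes visible again. This is illustrative C# with a logical clock, not Orca's actual Redis-backed implementation:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative in-memory sketch (not Orca's implementation) of a queue with
// acknowledgment semantics: a popped message that is never acked becomes
// visible again once its redelivery deadline passes.
public class AckQueue
{
    private readonly SortedList<long, Queue<string>> _pending = new SortedList<long, Queue<string>>();
    private readonly Dictionary<Guid, (string Message, long Deadline)> _inFlight = new Dictionary<Guid, (string, long)>();
    private readonly TimeSpan _ackTimeout;
    private long _now; // logical clock in milliseconds, moved forward via Advance()

    public AckQueue(TimeSpan ackTimeout) { _ackTimeout = ackTimeout; }

    // Immediate or delayed delivery, keyed by the time the message becomes due.
    public void Push(string message, TimeSpan delay = default(TimeSpan))
    {
        long due = _now + (long)delay.TotalMilliseconds;
        if (!_pending.TryGetValue(due, out var q)) _pending[due] = q = new Queue<string>();
        q.Enqueue(message);
    }

    public (Guid Id, string Message)? Pop()
    {
        // First, re-queue anything whose ack deadline has passed.
        foreach (var expired in _inFlight.Where(e => e.Value.Deadline <= _now).ToList())
        {
            _inFlight.Remove(expired.Key);
            Push(expired.Value.Message);
        }

        if (_pending.Count == 0 || _pending.Keys[0] > _now) return null;

        var queue = _pending.Values[0];
        var message = queue.Dequeue();
        if (queue.Count == 0) _pending.RemoveAt(0);

        // The message stays in flight until it is acknowledged.
        var id = Guid.NewGuid();
        _inFlight[id] = (message, _now + (long)_ackTimeout.TotalMilliseconds);
        return (id, message);
    }

    public void Ack(Guid id) => _inFlight.Remove(id);

    public void Advance(TimeSpan elapsed) => _now += (long)elapsed.TotalMilliseconds;
}
```

A real implementation would use wall-clock time and durable storage (Redis sorted sets are a natural fit for the delayed and in-flight sets), but the redelivery logic is the same: losing the instance that popped a message simply means another instance sees it again after the timeout.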

Message handlers can add further commands to the queue. For example:

  • StartStageHandler identifies the sequence of sub-stages and tasks and then sends StartStage or StartTask commands to set them running.
  • StartTaskHandler records that a task is running then queues a RunTask command.
  • RunTaskHandler executes a task once and then either queues the same RunTask command with a delay if the task is not complete (for example, a task polling until a server group reaches a desired number of healthy instances) or a CompleteTask if the execution can move on.
  • CompleteStageHandler figures out what downstream stages are able to run next and queues a StartStage message for each or a CompleteExecution message if everything is finished.

…and so on.

This design allows work to naturally spread across the Orca cluster. There’s no requirement for a queued message to be processed by any particular Orca instance. Because un-acknowledged messages are re-queued, we can tolerate instance failure and aggressively deploy new versions of the service without having to drain work from older server groups. We can even turn Chaos Monkey loose on Orca!

Message handlers can also emit events using Spring’s application event bus that keep other listeners informed of the progress of a pipeline. We use this for sending email / Slack notifications and triggering downstream pipelines — things Orca was doing already. We’ve also added a log of the activity on a particular pipeline since trying to track distributed work using server logs will be next to impossible. Processing of these pub/sub events does currently happen in-process (although on a different thread).

We have in-memory, Redis and SQS queue implementations working. Internally at Netflix we are using the Redis implementation but having SQS is a useful proof-of-concept and helps us ensure the queue API is not tied to any particular underlying implementation. We’re likely to look at using dyno-queues in the long term.

Why Redis? Partially because we’re using it already and don’t want to burden Spinnaker adopters with further infrastructure requirements. Mainly because Redis’ speed, simple transactions and flexible data structures give us the foundation to build a straightforward queue implementation. Queue data is ephemeral — there should be nothing left once a pipeline completes — so we don’t have concerns about long-term storage.

Fundamentally the queue and handler model is extremely simple. It maps better to how we want pipelines to run and it gives us flexibility rather than forcing us to implement cumbersome workarounds.

Yes And

Ad-hoc restarts are already proving significantly easier and more reliable. Pipelines can be restarted from any stage, successful or not, and simultaneous restarts of multiple branches are no problem.

We have also started to implement some operational capabilities such as rate limiting and traffic shaping. By simply proxying the queue implementation, we can implement mechanisms to back off excessive traffic from individual applications, prioritize in-flight work or urgent actions like rollbacks of edge services, or pre-emptively auto-scale the service to guarantee capacity for upcoming work.

Let’s Find Out

If you want to try out the queue based execution engine in your own pipelines, you can do so by setting queue.redis.enabled = true in orca-local.yml.

Then, to run a particular pipeline with the queue, set "executionEngine": "v3" in your pipeline config JSON.

To run ad-hoc tasks (e.g. the actions available under the “server group actions” menu), you can set the configuration flag orchestration.executionEngine to v3 in orca-local.yml.

Within Netflix, we are migrating pipelines across to the new workflow engine gradually. Right now, Nü Orca exists alongside the old Spring Batch implementation. Backward compatibility was a priority as we knew we didn’t want a “big bang” cut-over. None of the existing Stage or Task implementations have changed at all. We can configure pipelines individually to run on either the old or new “execution engine”.

As we gain confidence in the new engine and iron out the inevitable bugs, we’ll get more and more high-risk pipelines migrated. At some point soon I’ll be able to embark on the really fun part — deleting a ton of old code, kludgy workarounds and cruft.

After the Party

Once we’re no longer running anything on the old execution engine, there are a number of avenues that open up.

  • More efficient handling of long-running tasks.
  • Instead of looping, the rolling push deployment strategy can “lay track in front of itself” by adding tasks to the running pipeline.
  • Cancellation routines that run to back out changes made by certain stages if a pipeline is canceled or fails can be integrated into the execution model and surfaced in the Spinnaker UI.
  • The implementation of execution windows (where stages are prevented from running outside of certain times) can be simplified and made more flexible.
  • Determination of targets for stages that affect server groups can be done ad-hoc, giving us greater flexibility and reducing the risk of concurrent mutations to the cloud causing problems.
  • Using a state convergence model to keep an application’s cloud footprint in line with a desired state rather than simply running finite duration pipelines.

I’m sure there will be more possibilities.

I’m relieved to have had the opportunity to make Orca a better fit for Netflix’s distributed and fault-tolerant style of application. I’m excited about where we can go from here.

Spinnaker Orchestration was originally published in Netflix TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.


How to set up a React project without flipping tables


When I first started learning React, I went through the “Intro to React” tutorial by Facebook, which you can conveniently follow on Codepen. It all made sense to me and I felt like I was ready to write my own to-do list application, as you do when you’re learning a new JavaScript framework.

I installed some npm modules that sounded sensible, used Gulp as my task runner as I normally like doing, and… nothing worked. I ended up spending a whole afternoon trying to set up my project, and never got round to actually writing any code. Why was this so hard?

Based on anecdotal evidence from colleagues and friends, I know that I’m not the only one feeling this frustration. And I assume even the developers at Facebook must have experienced similar afternoons of banging their heads on their desks, because they have kindly created an open source React project starter kit.

I had a look through the repo, and it’s huge! It contains a lot of setup with dependencies that I had never heard of before. I also had a look through other React starter code and example repos, which left my head spinning from the sheer number of npm packages used. I’m not a specialist JavaScript developer, but I still like to understand what all the dependencies that I’m cloning into my project do. And ideally I would like to keep them to a minimum.

So, I decided to do some detective work in an attempt to find the most minimalist setup for a React project possible, and to really understand what each of the necessary dependencies does.

Because surely it can’t be that hard? Surely there must be a minimal setup with which you can write a React app?

The essentials

What we really need to get started with a React project are three ingredients: the React framework (surprise!), a tool that transpiles our React code into vanilla JavaScript, and a tool that tells the transpiler which files to look at and where to put the transpiled files.

ES5 or ES6?

Before we start going over our shopping list of npm packages, we need to make a decision about which version of JavaScript to use. ES6 is the new and improved version of JavaScript. However, it is still not understood by all browsers, which means that we need to transpile it into ES5, the older version of JavaScript. And for this job, we need an additional dependency. ES5 works with React, so in the spirit of having a minimal setup, it would make sense to stick with it.

However, it is much, much more convenient to use ES6 with React. Imports are easier, extending the React class is simpler, function declarations are shorter, and more. There’s a great blog post on the npmjs website that compares React with ES5 and ES6 using loads of examples that illustrate the advantages of using ES6. Another argument for using ES6 is that most of the documentation and examples you find online use ES6.

Shopping list


Let’s start with the most obvious dependency: React! But behold: React is not enough. There are two packages that you need for a React web application: react and react-dom. That was my first source of confusion. I just wanted to write some React—why do I need two dependencies? Turns out that these packages used to be combined, but react-dom was split out about a year ago. The reason is that React can be used for applications other than websites (React Native, anyone?), and rather than having one big package for all purposes, they were separated out so you can pick and choose exactly what you need.


So you require the two React packages in your code, and now you can use the React syntax and functionality. Yay! But somehow the React code needs to be transpiled to JavaScript code in order for browsers to understand it. There are some npm packages that we can use to deal with this task. But these packages need to be held together by something. They need to be told which files to work with, and they need to be configured. It’s kind of like musicians in an orchestra that need a conductor to help them play together well. In this case, the React and the ES6 transpilers are musicians and the conductor job will be done by a file bundler like webpack or Browserify.

A bundler can use lots of different plugins to transform our source code. You just need to specify which plugins you want to use, which files should be processed by those plugins, and where to put the final product after the processing is finished.

Most React tutorials suggest webpack, but I’ve also worked with Browserify. Those two bundlers have slightly different philosophies: Browserify came first and was built to run Node.js code in the browser. webpack was created later, and with the primary focus of managing static assets for the front-end. Browserify is more modular, while webpack comes with more features out of the box. Browserify is driven by convention, while webpack is a lot more flexible, which makes it a bit harder to learn. The feature that ultimately sold me on webpack is hot module replacement. It is used together with the very convenient webpack dev server, which watches your files for any changes and reloads the page automatically. With hot module replacement, it will only reload the section of a page that is affected by any changes to a module. The advantage of this over refreshing the whole page is that all the other components will keep their state. Double bonus! No more command + shift + R repetitive strain injury, plus you don't have to potentially perform lots of user actions to get all your components back into a particular state.

However, this blog post is about the most minimal setup, and you can definitely use webpack without the dev server and hot module reloading. So, I won't include these features into the setup for now.

As a side note—there is a bit of a difference between task runners such as Gulp and Grunt and bundlers like webpack and Browserify. Generally, task runners are more concerned with overall automation like building your project, getting it ready for deployment, or running your tests. And bundlers are for concatenating all your files and translating your fancy frameworks into vanilla ES5. There is quite a bit of overlap of responsibilities though. It is possible to use webpack in combination with Gulp or Grunt, but a lot of people just use npm scripts to complete the tasks that webpack doesn’t handle.


Next on the shopping list is the transpiler, a tool that can translate ES6 to browser-friendly ES5 code and React to vanilla JavaScript. Babel can help us out here. We can plug it into webpack as a loader using the npm package called babel-loader. But once again, the Babel loader is not enough. We have to explicitly install the peer dependencies. To get all the functionality we require, we need three additional npm packages: babel-core, babel-preset-es2015 and babel-preset-react.

But that is really all we need to write a React application!

Step-by-step guide to get started

Let’s get started with the setup! You need to have Node installed to run all these commands. I use Node Version Manager to manage my versions but it’s not a must.

Let’s initialise a new project:

npm init

Just hit enter to get all the default values filled in automatically. You can always change them later. The package.json file has now been created for us.
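If you accepted the defaults, the generated package.json will look roughly like this (the name is taken from your directory, so yours will differ):

```json
{
  "name": "my-react-app",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}
```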

And now comes a big, long command to install all the dependencies from our shopping list.

npm install --save-dev react react-dom webpack babel-loader babel-core babel-preset-es2015 babel-preset-react

If we look at our package.json file we can see that all these dependencies have now been added under devDependencies. And they have also been installed in your /node_modules/ directory.

The package.json file is a way for your application to remember all the dependencies it has. So you never need to commit your /node_modules/ directory because you can just run npm install, which will look at your package.json file to know which node modules it needs to install.

Next we need to create two folders—one for our source code and one for the processed version of the code that webpack will spit out. Let’s say the names for the folders are src/ and dist/ (i.e. source and distribution).

mkdir src dist

And now we need a file into which we can write our React code. It should live inside the src directory.

And let’s not forget the index.html, which does not need to be transpiled, so we can put it directly into the dist/ folder.

If you were following the commands, continue with:

touch src/app.js dist/index.html

Now let’s write some code! In the index.html, add the standard HTML boilerplate code and then, inside the body, add an element with an id and a script tag.


  <div id='app'></div>
  <script src="bundle.js"></script>

The script that we’re referencing here does not exist yet, but once we’ve got some React code, we will let webpack do its magic and automatically output a bundle.js file with our transpiled code.

And the React code will go into app.js.


import React from 'react'
import ReactDOM from 'react-dom'

class Main extends React.Component {
  render() {
    return (
      <h1>Hello World</h1>
    )
  }
}

const app = document.getElementById('app')
ReactDOM.render(<Main />, app)

And now let’s get to the pièce de résistance of our whole setup: the webpack config file!

touch webpack.config.js

As the bare minimum, it needs to export an object with three properties: entry, output, and module.

  • entry specifies the entry path to your application, which in our case is the app.js
  • output specifies the folder into which webpack will place the automatically generated file with the transpiled code
  • module holds the rules array, the list of loaders that webpack will run over matching files

Here is what the config file looks like:


const path = require('path')

const config = {
  entry: './src/app.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js'
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        loader: 'babel-loader',
        exclude: /node_modules/
      }
    ]
  }
}

module.exports = config

The test property in the loader object has nothing to do with setting up unit test frameworks. Its value is a regular expression that specifies which files the loader should be responsible for. So when webpack goes through all the files, it first “tests” each one against the pattern before applying the processing.

A note about the output path: You can hardcode the path, but it is better practice to use Node.js’s path module. The path module's default operation varies based on the operating system that you are using to run the code. Any differences between Windows and Mac will be handled by this module, so that's one less thing for us to worry about.

As the last step, we need to tell the Babel loader that we want to use it to compile ES6 to ES5 and React to vanilla JS. We need to create a .babelrc for that with the following specifications.

// .babelrc

{
  "presets": ["es2015", "react"]
}

The last step is to add the webpack command to our package.json. Find the scripts object and add another key-value pair under the existing test key.

"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1",
  'build': 'webpack'

To transpile and bundle the code, we use the command:

npm run build

And now we can open the index.html in the browser and see our “Hello World” headline. All done!

Bonus material

I really do recommend installing the webpack-dev-server. It is just so convenient! Let’s install webpack-dev-server as a development dependency:

npm install --save-dev webpack-dev-server

Now we can add the command to run the server to our package.json. Underneath the build command inside the scripts object, add another key-value pair. Let’s call the command watch.

"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1",
  "build": "webpack",
  "watch": "webpack-dev-server --content-base list"

To start the server, run

npm run watch

...and your code will run on localhost. Now make a change in your React Main module. See what I mean? Amazing!


.NET Framework May 2017 Cumulative Quality Update for Windows 10


We just released a new Cumulative Quality Update for the .NET Framework. This update is specific to Windows 10 Creators Update.

Previously released security and quality updates are included in this release.

These quality updates will be made available to other .NET Framework and Windows version combinations in Q3 2017. You can read the .NET Framework Monthly Rollups Explained to learn more about how the .NET Framework is updated.


This release contains the security improvements included in the .NET Framework May 2017 Security and Quality Rollup.

Quality and Reliability

The following improvements are included in this release.

Common Language Runtime

Issue 426799

Some applications using the managed C++ memcpy intrinsic could execute incorrectly: the JIT compiler, when optimizing code that contains a C++ memcpy intrinsic, could generate incorrect code.

Issue 426795

Some applications terminate with an A/V in clrjit.dll. The JIT compiler, when optimizing a block of unreachable code, could hit an A/V, which results in the unexpected early termination of the application.

Windows Presentation Framework

Issue 373366

If two WPF applications that target Side by Side (SxS) .NET versions (3.5 and 4.X) are loaded in the same process, issues can occur on touch/stylus-enabled machines. A common example of this is loading VSTO add-ins written in WPF. This is due to an issue with choosing the correct PenIMC.dll version for each application. This fix allows WPF to properly differentiate between the two DLLs and function correctly.

Getting the Update

The May 2017 Cumulative Quality Update is available via Windows Update, Windows Server Update Services and Microsoft Update Catalog.

Docker Images

The Windows ServerCore and .NET Framework images have not been updated for this release.

Downloading KBs from Microsoft Update Catalog

You can download patches from the table below. See .NET Framework Monthly Rollups Explained for an explanation on how to use this table to download patches from Microsoft Update Catalog.

Product Version              Cumulative Quality Update KB
Windows 10 Creators Update   Catalog
.NET Framework 4.7           4020102

Previous Monthly Rollups

The last few .NET Framework Monthly Rollups are listed below for your convenience:
