
Announcing Improved User Management for Terraform Cloud


Today we’re announcing improvements to the way you invite and manage users in Terraform Cloud. You can now invite new users to your organization by sending them an email from within the web UI, and the invited users can accept or decline that invite. You can also view a list of all users that are part of an organization, along with which teams they belong to, in a single, unified view. In addition, you can search for users in your organization by email address or username. You’ll find these new features on the new Users page under organization settings.

Inviting Users

To invite a user to Terraform Cloud, navigate to the Users page under organization settings and click "Invite a user". Then enter that user's email address and select the team(s) you would like them to join.

The user will be sent an email and be added to the "Invited" list in the Users table.

If you accidentally invited the wrong user or entered an incorrect email address, you can remove the user by clicking on the edit menu. Doing so will invalidate the incorrect invitation.

Users who click through from their email invitations will be shown all their pending invitations when they log into Terraform Cloud and will have the option to accept or decline them.

The invitation workflow described above is consistent for all users, regardless of whether they have Terraform Cloud accounts already or are new and must create accounts.

User Management

Users that belong to multiple organizations can choose which organization they would like to view. The organization list is also reachable by clicking the Terraform logo at the top left of the screen.

To get a quick count of the total number of users in the system (which is particularly useful if you're on our paid tiers, which are priced based on the number of users), just head over to the Users page.

From that page, you can also easily search for a specific user if you have a large list of users or easily remove a user from your organization. Removing a user from all teams in an organization will revoke all access to that organization for that user.

Conclusion

We are excited by the interest in and adoption of Terraform Cloud since the launch in September. The new user management features are borne out of the feedback we have received from many of you and are all available in the Free Tier of Terraform Cloud. To create additional teams, you will need to upgrade to the Team tier of Terraform Cloud. You can sign up for a Terraform Cloud account here and upgrade to the paid tiers from within the product itself.


Systemd-homed Looks Like It Will Be Merged Soon For systemd 245

Announced back in September at the All Systems Go event in Berlin was systemd-homed as a new effort to improve home directory handling. Systemd-homed wants to make it easier to migrate home directories, ensure all user data is self-contained, unify user-password and encryption handling, and provide other modern takes on home/user directory functionality. That code is expected to soon land in systemd...

Debugging AccessDenied in AWS IAM

botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

Ugh… that looks like it could be the start of a two-hour or a two-week-long goose chase.

Understanding why access was denied and implementing a secure solution can be complicated. Sometimes it’s not even clear where to start and what to do when you get stuck.

Here’s how I usually approach debugging AWS access control problems, a specialized form of The Debugging Rules:

  • Read logs, guess, and check by using the application
  • CloudTrail
  • Finer-grained test events
  • Policy Simulator
  • Tool to test the app activity in a quick, repeatable fashion

I used this approach to dive into and climb out of a deep access control rabbit hole of cross-account access involving: IAM, Lambda, S3 bucket policy, and KMS encryption key policy.

Secure Inbox Pattern

My use case is nailing down the security for a ‘Secure Inbox’ pattern, where:

  1. a service in Account A does work on behalf of a customer
  2. the service must deliver data to a customer-managed S3 bucket in Account B
  3. the data must be encrypted with a customer-managed KMS key in Account B

I’ll share details of how to implement this pattern soon, but wanted to share insights of the debugging process while they are still fresh.

The critical API actions are s3:PutObject to an 'internal' S3 bucket managed by the service and s3:CopyObject to deliver the object to the customer. Both actions use the customer-managed key to encrypt the customer's data and keep the customer in control of it.

Read logs, guess, and check by using the application

Let's start with the fact that I'm primarily in application development mode. Secondarily, I'm creating some CloudFormation templates that customers will be able to use to configure resources in their accounts.

My mindset isn’t “Oh, I’m looking forward to digging deep into IAM, S3, and KMS policy!”

So, I try to solve the problem using what’s at hand: reading the AWS SDK (boto3) and S3 API docs, AWS security policy docs, S3 API responses, and application log messages logged into CloudWatch Logs.

I also thought I started off pretty close to the target.

The s3:PutObject action to internal storage is simple enough:

# _s3_client is a boto3.client('s3')
response = self._s3_client.put_object(ACL='private',
                                      ServerSideEncryption='aws:kms',
                                      SSEKMSKeyId=kms_encryption_key_id,
                                      Bucket=bucket_name,
                                      Key=key,
                                      Body=body_bytes)

The 'obvious' part is to specify server-side encryption with aws:kms and to pass the customer's KMS encryption key ARN in the S3 PUT API call.

AWS KMS provides customer-managed encryption keys and an API. The really neat thing about the KMS API is that you can allow use of:

  • particular API actions like kms:Encrypt and kms:GenerateDataKey
  • for particular encryption keys
  • for particular AWS principals: IAM roles and users or entire AWS accounts

But… what you don't see, and what isn't documented directly in the s3:PutObject API docs, is the set of permissions the KMS key policy must grant in order for the PUT to succeed. To be fair, the KMS policy generated by the AWS wizard when you provision a key would allow the necessary actions, but I think that policy is a bit broad.

Further, from the application perspective, if you go ahead and catch the ClientError from the s3 client's put_object method and print out the error response, you'll still only see something like:

Unexpected error putting object into bucket: { 
  'Error': {
    'Code': 'AccessDenied',
    'Message': 'Access Denied'
  },
... snip ...
}

Still nothing more than AccessDenied. AWS doesn’t give detailed information about how to debug permissions problems in API responses.
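For reference, the kind of catch-and-log wrapper I mean is only a few lines. This is a sketch that reuses the variable names from the put_object snippet above; the logging details are up to you:

import boto3
from botocore.exceptions import ClientError

s3_client = boto3.client('s3')

try:
    s3_client.put_object(ACL='private',
                         ServerSideEncryption='aws:kms',
                         SSEKMSKeyId=kms_encryption_key_id,
                         Bucket=bucket_name,
                         Key=key,
                         Body=body_bytes)
except ClientError as error:
    # error.response['Error'] carries only the generic code and message,
    # e.g. {'Code': 'AccessDenied', 'Message': 'Access Denied'} -- no hint
    # about which policy denied the request.
    print(f"Unexpected error putting object into bucket: {error.response}")
    raise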

This is really important to understand if you’re a security or platform engineer — application engineers often cannot debug permissions problems from their application code, even if they want to.

We need a better tool for understanding the system. The next tool I use is CloudTrail.

CloudTrail

CloudTrail is an AWS service that provides an audit log of important events that occur in your account. The logs, called trails, record most AWS API usage including important request parameters and the principal (user or role) they were executed with.

If you have CloudTrail enabled in your account (you definitely should) and access to view the trail, you may be able to find valuable clues as to why access was denied, and which object it was denied to.
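If you would rather query from code than click through the console, CloudTrail's LookupEvents API can pull recent management events for a given event name. Here's a sketch; the event name, region, and time window are assumptions for this scenario:

import json
from datetime import datetime, timedelta

import boto3

cloudtrail = boto3.client('cloudtrail', region_name='us-east-1')

# Look for recent KMS GenerateDataKey calls and print any that were denied.
response = cloudtrail.lookup_events(
    LookupAttributes=[{'AttributeKey': 'EventName', 'AttributeValue': 'GenerateDataKey'}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    MaxResults=50,
)

for event in response['Events']:
    detail = json.loads(event['CloudTrailEvent'])  # the full event as a JSON string
    if detail.get('errorCode') == 'AccessDenied':
        print(detail['userIdentity']['arn'], detail['errorMessage'])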

Here's an example of a CloudTrail event in the service account (Account A) that told me why I couldn't store an object in S3.

Getting closer! I found an event with an Error Code of AccessDenied. The skuenzli user was denied access to the kms:GenerateDataKey API. Now, I'm running through this example from my laptop with nearly admin permissions, so I know I have access to invoke kms:GenerateDataKey. The real issue is hidden inside the errorMessage of the event:

{
    "eventVersion": "1.05",
    "userIdentity": {
        "type": "IAMUser",
        "principalId": "AIDAJREII7F7Q2K7QMCLE",
        "arn": "arn:aws:iam::account_A:user/skuenzli",
        "accountId": "account_A",
        "accessKeyId": "ASIAJVOBWCCQR3OTPSUA",
        "userName": "skuenzli",
        "sessionContext": {
            "sessionIssuer": {},
            "webIdFederationData": {},
            "attributes": {
                "mfaAuthenticated": "false",
                "creationDate": "2019-12-03T15:55:35Z"
            }
        },
        "invokedBy": "AWS Internal"
    },
    "eventTime": "2019-12-03T15:55:35Z",
    "eventSource": "kms.amazonaws.com",
    "eventName": "GenerateDataKey",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "AWS Internal",
    "userAgent": "AWS Internal",
    "errorCode": "AccessDenied",
    "errorMessage": "User: arn:aws:iam::account_A:user/skuenzli is not authorized to perform: kms:GenerateDataKey on resource: arn:aws:kms:us-east-1:account_B:key/e9d04e90-8148-45fe-9a75-411650eea80f",
    "requestParameters": null,
    "responseElements": null,
    "requestID": "722abc0e-77c2-42e0-8448-4f0469420f3a",
    "eventID": "92edcefe-6d12-4357-85f4-20f709f3e413",
    "readOnly": true,
    "eventType": "AwsApiCall",
    "recipientAccountId": "account_A"
}

Aha:

"User: arn:aws:iam::account_A:user/skuenzli is not authorized to perform: kms:GenerateDataKey on resource: arn:aws:kms:us-east-1:account_B:key/e9d04e90-8148-45fe-9a75-411650eea80f"

I’m not permitted to invoke kms:GenerateDataKey with Account B’s encryption key. This is what I was really being denied access to.

Note that the s3:PutObject action invoked kms:GenerateDataKey on my behalf. s3:PutObject will do the same for kms:Encrypt.

Time to update the KMS encryption key policy.
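For concreteness, here is a hedged sketch of what that key policy update could look like with boto3, run with credentials in Account B. The key ARN, principal ARN, and statement Sid are placeholders, not the exact policy from this project:

import json

import boto3

kms = boto3.client('kms', region_name='us-east-1')

# Placeholders -- substitute the real customer-managed key and service principal.
KEY_ID = 'arn:aws:kms:us-east-1:ACCOUNT_B_ID:key/e9d04e90-8148-45fe-9a75-411650eea80f'
SERVICE_PRINCIPAL = 'arn:aws:iam::ACCOUNT_A_ID:user/skuenzli'

statement = {
    'Sid': 'AllowServiceAccountUseOfTheKey',
    'Effect': 'Allow',
    'Principal': {'AWS': SERVICE_PRINCIPAL},
    # The KMS actions that s3:PutObject exercises when encrypting with this key.
    'Action': ['kms:Encrypt', 'kms:GenerateDataKey'],
    'Resource': '*',
}

# put_key_policy replaces the whole policy document, so merge the new
# statement into the existing policy rather than overwriting it.
policy = json.loads(kms.get_key_policy(KeyId=KEY_ID, PolicyName='default')['Policy'])
policy['Statement'].append(statement)
kms.put_key_policy(KeyId=KEY_ID, PolicyName='default', Policy=json.dumps(policy))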

It's also time for a new tactic: finer-grained test events.

Finer-grained test events

You may start off investigating and debugging access problems like this through the application.

But if this process is expensive, consider creating ‘fine-grained’ integration tests or Lambda test events that isolate one aspect of the integration. My application normally takes several minutes to run, which is certainly expensive to me. So it was time to focus in on the access control problem.

In the Secure Inbox use case, there are two integrations at work:

  1. store the object in s3
  2. copy the object

As part of the normal development process, I had created a number of integration tests that verified the expected behavior of key processes, including the two above. Those tests were, and still are, very useful.

So why was I having trouble?

Well, those tests were all operating with resources and roles within a single AWS account I use for development. And cross-account access is very different.

The full cross-account test setup was planned and near the top of the backlog, but we hadn’t done it yet. A realistic multiple account setup is critical to testing cross-account scenarios accurately and quickly. So, we knocked that out and got back on track.

Once you have a quick and easy way to 'make it fail,' you can iterate and learn much more quickly.
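As an illustration, a finer-grained check can be as small as a single test that attempts the cross-account, KMS-encrypted put and nothing else. The bucket name, key ARN, and object key below are hypothetical:

import boto3
import pytest
from botocore.exceptions import ClientError

# Hypothetical customer-managed resources in Account B.
CUSTOMER_BUCKET = 'customer-inbox-bucket'
CUSTOMER_KMS_KEY_ARN = 'arn:aws:kms:us-east-1:ACCOUNT_B_ID:key/EXAMPLE-KEY-ID'


def test_put_object_with_customer_managed_key():
    """Isolate one integration: an encrypted put using the customer's KMS key."""
    s3 = boto3.client('s3')
    try:
        s3.put_object(ACL='private',
                      ServerSideEncryption='aws:kms',
                      SSEKMSKeyId=CUSTOMER_KMS_KEY_ARN,
                      Bucket=CUSTOMER_BUCKET,
                      Key='configuration-check/test-object',
                      Body=b'secure inbox configuration check')
    except ClientError as error:
        pytest.fail(f"put_object was denied: {error.response['Error']}")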

Policy simulator

Possibly the quickest IAM testing tool of all is the IAM policy simulator, which helps you zero in on the right IAM policy. No application deployments needed!

With the policy simulator you can simulate AWS API actions with all of the contextual information we’ve been talking about here:

  • the actual IAM user or role
  • current or proposed policies
  • one or more API actions, like s3:PutObject
  • specific resources: buckets, KMS keys and their policies

The simulator will tell you if an action is allowed and tell you which policy allowed it. The simulator also provides basic diagnostic information about why an action was not permitted.

That said, the simulator is a little clunky to use. You may find this tutorial on Testing an S3 policy using the IAM simulator a helpful introduction to the mechanics.
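The simulator is also available through the IAM API, which makes the check scriptable and repeatable. Here's a sketch using simulate_principal_policy, with a hypothetical user ARN and bucket; note that it evaluates the principal's own policies, so resource policies living in another account still need the CloudTrail digging described above:

import boto3

iam = boto3.client('iam')

# Hypothetical principal and resource under test.
results = iam.simulate_principal_policy(
    PolicySourceArn='arn:aws:iam::ACCOUNT_A_ID:user/skuenzli',
    ActionNames=['s3:PutObject'],
    ResourceArns=['arn:aws:s3:::customer-inbox-bucket/*'],
)

for result in results['EvaluationResults']:
    # EvalDecision is 'allowed', 'implicitDeny', or 'explicitDeny'.
    print(result['EvalActionName'], result['EvalDecision'])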

If your application and access control problems are bounded by a single account, these debugging tools and approaches may be sufficient.

If you’re integrating applications across AWS accounts, I think you’ll need a cross-account integration check.

Cross-Account Integration Check

If you have a cross-account integration scenario, I recommend a quick check of that integration.

Once you think you have the plethora of resources and policies in place and are using the right principals, it’s time to verify that it all works together.

Combine the fine-grained test events and actions into a function or end-to-end functional test that checks all of the cross-account integration in one go.

I created a quick-and-focused customer_configuration_check Lambda function that exercises the minimal path through all the integration to create and deliver a test (report) object. This function can be invoked with test events in AWS or locally and executes in less than 10 seconds. This tool’s super quick and accurate feedback has proved extremely valuable in application development.
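A heavily simplified sketch of the shape of such a check function follows; it is not the production implementation, and the bucket names, key ARN, and object key are placeholders:

import boto3

s3 = boto3.client('s3')

# Placeholders for the service's internal bucket and the customer's inbox.
INTERNAL_BUCKET = 'service-internal-bucket'
CUSTOMER_BUCKET = 'customer-inbox-bucket'
CUSTOMER_KMS_KEY_ARN = 'arn:aws:kms:us-east-1:ACCOUNT_B_ID:key/EXAMPLE-KEY-ID'
TEST_KEY = 'configuration-check/test-report'


def handler(event, context):
    """Exercise the minimal path: encrypted put to internal storage, then delivery."""
    s3.put_object(ACL='private',
                  ServerSideEncryption='aws:kms',
                  SSEKMSKeyId=CUSTOMER_KMS_KEY_ARN,
                  Bucket=INTERNAL_BUCKET,
                  Key=TEST_KEY,
                  Body=b'configuration check')

    s3.copy_object(CopySource={'Bucket': INTERNAL_BUCKET, 'Key': TEST_KEY},
                   Bucket=CUSTOMER_BUCKET,
                   Key=TEST_KEY,
                   ACL='bucket-owner-full-control',
                   ServerSideEncryption='aws:kms',
                   SSEKMSKeyId=CUSTOMER_KMS_KEY_ARN)

    return {'status': 'ok'}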

This customer_configuration_check function will have lasting value to our customers. We will use this tool to check customers’ configurations as we onboard them in addition to developing and testing new policy configurations.

This is one of the ways you “DevOps” in a Serverless world. Through this IAM debugging exercise we’ve shortened feedback loops, improved daily work, and repositioned to deliver value to customers quicker and with greater reliability.

Stephen

#NoDrama


Dew Drop – December 6, 2019 (#3087)


Top Links

Web & Cloud Development

XAML, UWP & Xamarin

Visual Studio & .NET

Design, Methodology & Testing

Mobile, IoT & Game Development

Podcasts, Screencasts & Videos

Community & Events

Database

Miscellaneous

More Link Collections

The Geek Shelf

Learn Visual Basic: 2019 Edition (Philip Conrod & Lou Tylee) – Referral Link


Benchmarking spreadsheet systems


Benchmarking spreadsheet systems, Rahman et al., preprint

A recent Twitter thread drew my attention to this pre-print paper. When spreadsheets were originally conceived, data and formulas were input by hand, so everything operated at human scale. Increasingly we’re dealing with larger and larger datasets — for example, data imported via csv files — and spreadsheets are creaking. I’m certainly familiar with the sinking feeling on realising I’ve accidentally asked a spreadsheet to open a file with tens of thousands of rows, and that my computer is now going to be locked up for an age. Rahman et al. construct a set of benchmarks to try to understand what might be going on under the covers in Microsoft Excel, Google Sheets, and LibreOffice Calc.

Spreadsheets claim to support pretty large datasets these days – e.g. five million cells for Google Sheets, and even more than that for Excel. But in practice, they struggle at sizes well below this.

With increasing data sizes… spreadsheets have started to break down to the point of being unusable, displaying a number of scalability problems. They often freeze during computation, and are unable to import datasets well below the size limits posed by current spreadsheet systems.

While a database is barely getting started at 20,000 rows, a spreadsheet could be hanging. What learnings from the database community could help improve spreadsheet performance?

Two benchmark suites help generate insights into this question:

  • The Basic Complexity Testing (BCT) suite assesses the time complexity of basic operations over a range of data sizes. The goal is to understand whether, for example, response time is constant, linear, or worse. Given this information, it’s possible to figure out when a spreadsheet ceases to be interactive, defined as failing to respond within 500ms.
  • The Optimization Opportunities Testing (OOT) suite is carefully designed to probe whether common database techniques such as indexes and incremental updates are employed, and hence to spot opportunities for improvement.

The TL;DR summary is that spreadsheets could be a lot better than they are today when operating on larger datasets.

First, we need some spreadsheets

The seed spreadsheet for the analysis in the paper was a weather data spreadsheet containing 50,000 rows and 17 columns (repeating the experiments with other typical spreadsheet datasets did not yield any new insights). First the seed spreadsheet was scaled up 10x to yield 500,000 rows, and then variations were created with different sampling rates (row counts) and different ratios of formula-cells to simple value cells.

Basic complexity testing

The basic spreadsheet operations, summarised in a table in the paper, are divided into three groups: load, update, and query.

Simple data loading is linear in spreadsheet size for Excel and Calc. Google Sheets implements lazy loading for value cells that are outside of the current viewport giving a more constant load time, but doesn’t do so for formula-cells.

With formula-value datasets, Excel, Calc, and Google Sheets fail to meet the interactivity barrier at just 6000, 150(!) and 150 rows respectively.

In a similar vein, sorting causes problems on very small datasets (less than 10K rows):

Sorting triggers formula recomputation that is often unnecessary and can take an unusually long time.

Conditional formatting breaks at around 80K rows.

For query operations the authors study filtering (selecting matching rows), pivoting, aggregation (e.g. count) and lookups (VLOOKUP).

Filtering takes a suspiciously long time on formula-value datasets in Excel, violating interactivity at 40K rows, possibly due to formula recomputation. The other systems avoid this recomputation, but are slower than Excel for value-only datasets.

With pivot tables there’s a surprise result with Calc being 6x faster than Excel (and 15x faster than Google Sheets). However, Calc (and Google Sheets) both do comparatively poorly on aggregation.

For VLOOKUPs both Calc and Google Sheets seem to implement the equivalent of a full table scan (not even stopping early once the value has been found!). Excel is more efficient if the data is sorted.
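To see why sorted data helps, compare a linear scan (what a naive VLOOKUP amounts to) with a binary search over a sorted key column. A toy sketch in Python, not how any of these spreadsheets actually implement lookup:

import bisect

keys = list(range(0, 1_000_000, 2))   # a sorted lookup column
values = [k * 10 for k in keys]

def lookup_linear(target):
    # Scan every row until the key matches: O(n), a full table scan.
    for k, v in zip(keys, values):
        if k == target:
            return v
    return None

def lookup_sorted(target):
    # Binary search over the sorted keys: O(log n).
    i = bisect.bisect_left(keys, target)
    if i < len(keys) and keys[i] == target:
        return values[i]
    return None

assert lookup_linear(123456) == lookup_sorted(123456) == 1234560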

All told, it's a sorry scorecard. A summary table in the paper shows how close each spreadsheet can get to its respective documented scalability limit for the different kinds of basic operations. Finally, a metric that penalises marketing departments for exaggeration!

Optimisation opportunities testing

The optimisation opportunities benchmarks are designed to probe whether or not spreadsheets maintain and use indexes (no evidence was found to suggest that they do); whether they employ a columnar data layout to improve computation performance (they don’t); whether they exploit shared computation opportunities through common sub-expressions in formulas (they don’t); whether they can detect and avoid entirely redundant computation in duplicate formulae (they don’t); and whether or not they take advantage of incremental update opportunities. Can you guess? They don’t.

Towards faster spreadsheets

Many ideas from databases could potentially be employed to make spreadsheets more scalable. One of the easiest to adopt is incremental update for aggregation operations (e.g. the SUM of a column can be updated based on the previous sum and a delta). More aggressively, a database backend could be used behind a spreadsheet interface, translating formulae into SQL queries. Going further, approximate query processing could also help meet interactivity targets.
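To make the incremental-update idea concrete, here is a toy sketch: rather than re-summing the whole column after every edit, keep a running total and apply the delta.

values = [10.0, 20.0, 30.0]
running_sum = sum(values)   # computed once, O(n)

def update_cell(index, new_value):
    # Apply an edit and update the aggregate incrementally: O(1) per edit.
    global running_sum
    running_sum += new_value - values[index]
    values[index] = new_value

update_cell(1, 25.0)
assert running_sum == sum(values) == 65.0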

Overall, there is a plethora of interesting and challenging research directions in making spreadsheet systems more effective at handling large datasets. We believe our evaluation and the resulting insights can benefit spreadsheet system development in the future, and also provide a starting point for database researchers to contribute to the emergent discipline of spreadsheet computation optimization.




[NEW TALK] Automated Testing for Terraform, Docker, Packer, Kubernetes, and More


I’m happy to share with you the video and slides from my QCon talk on how to test infrastructure code! This talk is a step-by-step, live-coding class on how to write automated tests for infrastructure code, including the code you write for use with tools such as Terraform, Kubernetes, Docker, and Packer. Topics include unit tests, integration tests, end-to-end tests, test parallelism, retries, error handling, static analysis, and more.

You can find the video and slides on InfoQ:

Automated Testing for Terraform, Docker, Packer, Kubernetes, and More

