20 Apr 2017 » Designer's Guide to Android by Jessica Moon and Yash Prabhu

Part 1 of Android Design Series

This document was originally written for our design team at DramaFever.

There are a few terms and concepts to be aware of before you start designing an Android app, and in this guide we will go over the basics. Keep in mind that this is just a general overview, so continue to bring any follow-up questions or concerns to your Android developer.

A. Getting Started

0. Density

Android’s unit of measurement is the density-independent pixel, or dp. These units scale uniformly across the many different screen densities; at a screen density of 160 dpi, 1dp equals 1px. To convert a pixel measurement to dp, use the following formula:

dp = (width in pixels * 160) / screen density
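
For example, a 540px-wide element on a screen with a density of 360 dpi works out to (540 × 160) / 360 = 240dp. If it helps to see the arithmetic as code, here is a tiny sketch of the conversion in Go (the function name and sample values are ours, purely for illustration):

package main

import "fmt"

// pxToDp converts a pixel measurement into density-independent pixels
// for a screen of the given density (in dots per inch).
func pxToDp(px, densityDpi float64) float64 {
  return px * 160 / densityDpi
}

func main() {
  // A 540px-wide element on a 360 dpi screen works out to 240dp.
  fmt.Println(pxToDp(540, 360)) // prints 240
}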

Screen Density

High-density screens have more pixels per inch than low-density ones. As a result, UI elements of a fixed pixel size (such as buttons) appear physically larger on low-density screens and smaller on high-density screens. In an mdpi layout, 1 pixel = 1 dp. Designing at this lower resolution requires less pixel nudging, so make sure you always design at mdpi. The ratios are as follows:

mdpi:hdpi:xhdpi:xxhdpi:xxxhdpi :: 2:3:4:6:8 (or 1:1.5:2:3:4)

For example, a 48dp element renders at 48px at mdpi, 72px at hdpi, 96px at xhdpi, 144px at xxhdpi, and 192px at xxxhdpi.

Screen Size

Physical size of the device measured across its diagonal.

Font

Font sizes are measured not in dp but in sp - scale-independent pixels. As the Android design guidelines state, “The primary difference between sp and dp is that sp preserves a user’s font settings. Users who have larger text settings for accessibility will see the font size matched to their text size preferences.” It is recommended not to use sizes below 12sp - the Material Design typography guidelines list 12sp as the minimum size, used for captions, in Typography Styles.

In Android, we export assets at several different sizes. This can be automated - we will cover that in Part 2. For now it’s something to keep in mind.

1. Grid

These are all excerpts from the Material guidelines, which you can read in more detail in Metrics & Keylines. We’ve extracted some key points for the purpose of understanding the basics.

  • All components align to an 8dp square baseline grid for mobile, tablet, and desktop.
  • Iconography in toolbars aligns to a 4dp square baseline grid.
  • Type aligns to a 4dp baseline grid.

The responsive grid focuses on consistent margin and gutter widths, rather than column width. Material design margins and columns follow an 8dp square baseline grid. Margins and gutters can be 8, 16, 24, or 40dp wide.

Margins and gutters don’t need to be equal. For example, it’s acceptable to use 40dp margins and 24dp gutters in the same layout.

More specific guidelines for keyline spacing can be found in Keylines & Spacing.

Note: If you use Sketch, the layout grid can be set by going to View > Canvas > Layout Settings. Once values are set, click the “center” button in the Offset section to center the grid on your artboard.

2. Breakpoints

There are too many devices to account for individually - but defining breakpoints, the points at which the layout should adjust, helps our developers and helps our designers visualize how these designs look on different screen sizes and orientations.

The Material design guidelines provide a list of common Android devices with their screen sizes, densities and resolutions at https://material.io/devices/. Using this, our development team has identified 2 breakpoints per device, which comes to 6 breakpoints total, for what we define as small (S), medium (M), and large (L) layouts. We have established 6 different designs to support a range of Android devices. For us, 320dp is the default if the 720dp and 1024dp rules are not satisfied. Note that this is different from how Android defines small, normal, large and extra-large screen sizes in Screen Sizes.

[Diagram: the small, medium, and large layouts at 360dp, 720dp, and 1024dp widths]

You can create a template for each of these and annotate it wherever specific changes should happen to a particular component (card, button, text size, etc). Also include the device equivalents that your team frequently tests on. It’s important to remember that you are looking at a single layout, and that does not account for the thousands of devices out there!

The more information we can provide, the more accurately your designs translate into the build, with fewer tickets for tiny adjustments and a more efficient process overall.

3. Terminology

Android uses XML to define the UI elements on a layout. UI elements generally have a parent container such as a LinearLayout or RelativeLayout. We can define a UI element’s size in three ways: wrap_content, match_parent, or an exact number. This is important to understand, as these properties are used to fit an element across different devices, screen sizes and orientations.

match_parent

When we set the layout width of a UI element to match_parent, the element will fill the entire width of the parent layout minus the parent’s padding.

Note: If the parent has padding, that space is not included. When we create layout.xml, a RelativeLayout is the default parent View with android:layout_width="match_parent" and android:layout_height="match_parent", i.e. it occupies the complete width and height of the mobile screen.

Also note that padding is applied to all sides if we use android:padding="xdp". If we want it applied to only one or more sides, we can use the following attributes, where x is the size in density-independent pixels:

android:paddingBottom="xdp"
android:paddingLeft="xdp"
android:paddingRight="xdp"
android:paddingTop="xdp"

wrap_content

If we set the layout width or height to wrap_content, the element will use just enough space to enclose its contents, plus the padding.

Now, let’s look at an example of how a button’s size is defined using these properties and how match_parent and wrap_content come into play.

Here’s the relevant code in XML:

<Button
   android:text="Title"
   android:textSize="20sp"
   android:layout_centerInParent="true"
   android:layout_height="wrap_content"/>

Let’s visualize this by adding a line of code: android:layout_width="match_parent"

[Screenshots: match_parent button at 360dp width, in portrait and landscape]

Notice that the button takes up the entire width of the parent and maintains the width even when the orientation changes from portrait to landscape.

Let’s visualize what happens when we change layout_width to wrap_content by adding this line of code: android:layout_width="wrap_content"

[Screenshots: wrap_content button at 360dp width, in portrait and landscape]

Notice that the button wraps around its text and maintains that width irrespective of the orientation.

B. Takeaways

We hope this helped you get started on designing an Android app. Please keep in mind that this is only a primer. It’s important to continue to research and dig into what you aren’t sure of, and to always stay in communication with your developer. Stay tuned on our blog for Part 2 of our series!


08 Feb 2016 » Logging in Go by Paddy Foran

At DramaFever, we have logging requirements for our services, largely so we make life not-terrible for our Ops Team. We want to be able to configure where a log writes to, configure what gets output from a log, and so on. For Python services, these requirements can all be met just by logging the error; it’ll automatically use the configuration and get sent to our error reporting system, Sentry. However, things weren’t as smooth for our Go services: they had the extra task of manually sending the error to Sentry after every log output. That seemed like something we could fix.

Designing

To ensure the strategy would create useful output that included all of our Ops team’s requirements, I worked with Tim, our Director of Operations at the time.

The first step was to establish the reasoning behind our current log levels.

  • Debug is to be used by developers only. Ops should never see these messages.
  • Info is to be used to surface relevant information to Ops while the service is running but things are going according to plan. Most often, this means version information, ports listening on, etc.
  • Warn is to be used when something goes wrong but we can gracefully degrade. The request didn’t fail, but we should probably know something didn’t go the way we had hoped.
  • Error is to be used only when something went wrong that could not gracefully degrade. If the response code isn’t 400 or above, it’s not worthy of an error level message.

Beyond clarifying the semantics, we also hammered out some usability concerns. Timestamps that don’t use / or the ISO 8601 T separator are easier for Unix tools to process. Keeping the fixed-width parts of the format (the timestamp and log level) at the beginning of the line, before things like the filename of the line that raised the log, also helps when filtering or sorting.
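
For example, a line following those rules might look something like this (the exact format here is just an illustration, not the library’s literal output):

2016-02-08 14:32:07 [INFO] server.go:87: listening on :8080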

Once we established that our output would make the Ops team happy and productive (which are always good goals for your Ops team!), I set about actually building the thing.

Building

The work done with the Ops team paid off: I had a clear understanding of what needed to be built, and knew what the output needed to look like. I had some good thoughts on how to get there. Building the actual logger wasn’t that hard. Create a type that has an io.Writer, a mutex, and the level you want to log at:

type Logger struct {
  level Level
  out io.Writer
  lock *sync.Mutex
}

type Level string

When logging, we’ll obtain a lock on the mutex, write to out, and release the lock. The lock helps us keep concurrency safety on a single file, and modeling as an io.Writer instead of an os.File helps us support writing to stdout or a buffer (when testing) or really anything else. Interfaces are great.

Then it’s a simple matter of defining helper methods to write to out. We want our typical Debug, Debugf, Info, Infof, etc. helpers that will write the message to the io.Writer only if it fits the level configured in the Logger. So if our Logger has a Level of info, Debug and Debugf will not print anything.

For the actual output, I cribbed from the standard library pretty heavily. We didn’t need or want the configurability of the output offered by the standard library, so our code is more concise. We also wanted to support helper functions; sometimes a service has its own logging requirements or helper functions, and we didn’t want those to swallow the line number of the true call that raised the log. So we included a call depth property on our Logger, and ways to set it, so the Logger looks at the call-that-called-it when deciding which file and line to attribute a log entry to.
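
As a rough sketch of what one of those helpers might look like (this is our illustration, not the library’s exact internals - the levelOrder map and the calldepth field on the Logger are assumptions added for the example):

import (
  "fmt"
  "path/filepath"
  "runtime"
  "time"
)

// levelOrder gives the levels an assumed ordering so they can be compared.
var levelOrder = map[Level]int{"debug": 0, "info": 1, "warn": 2, "error": 3}

// Infof writes a formatted info-level entry, annotated with the file and line
// of the caller, but only if the Logger's configured level allows it.
func (l *Logger) Infof(format string, args ...interface{}) {
  if levelOrder[l.level] > levelOrder["info"] {
    return // the configured level is above info, so drop the entry
  }
  // calldepth lets wrapping helpers attribute the entry to their caller.
  _, file, line, ok := runtime.Caller(l.calldepth)
  if !ok {
    file, line = "???", 0
  }
  l.lock.Lock()
  defer l.lock.Unlock()
  fmt.Fprintf(l.out, "%s [INFO] %s:%d: %s\n",
    time.Now().UTC().Format("2006-01-02 15:04:05"),
    filepath.Base(file), line, fmt.Sprintf(format, args...))
}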

Sentry

But of course, we wanted to do more than write to a file, or we would have used one of the numerous logging libraries available to us. The main feature of this library was supposed to be Sentry integration. To achieve that, we added an optional sentry property to our Logger type, containing a *raven.Client that we could then use to send log entries to Sentry. We had to add some helper functions, like AddTags and AddMeta to the Logger, so we could associate some metadata with log entries within Sentry. We also needed to add helper functions to set the package prefixes in Sentry that help determine whether an error is from our code or one of our dependencies, as well as the release metadata that helps us understand which release(s) an error is present in. Those are all pretty straightforward: set a property, send it along with the log entry when reporting the error.

It’s reporting the error that’s actually interesting.

To do that, I had to have a helper method that sent some data to Sentry. Then I could just call the helper method within Warn, Warnf, Error, and Errorf. Right? Sort of.

Sentry isn’t as simple as “I’ll record a line of text”. It has a more complex, deep understanding of the data it’s working with. It can display HTTP requests in useful ways. It gathers stack traces for errors. Things like that. But problematically, there isn’t always an error type variable available when we reach an error condition. And some errors aren’t related to an HTTP request.

So I needed to figure out how to include those things sensibly, but not require them.

Our logging functions have a signature like this:

func Errorf(format string, args ...interface{})
func Error(args ...interface{})

You’ll recognise these as similar to the fmt package’s functions. But by using type assertions, we can check whether those interfaces are *http.Request or error instances, and take further action. For example, when it’s an error, we capture a stack trace and log it as an exception in Sentry. When it’s an *http.Request we log it as a request in Sentry, with all the assorted metadata that comes with that.
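
Here is a rough sketch of that inspection step (simplified for illustration; captureException and captureRequest are hypothetical helpers standing in for the calls into the raven client):

// inspectArgs walks the args passed to Warn/Warnf/Error/Errorf and pulls out
// anything Sentry can display with more structure: errors get a stack trace
// and are reported as exceptions, and *http.Request values are reported as
// requests with their metadata. The args are still formatted into the log
// line as normal afterwards.
func (l *Logger) inspectArgs(args ...interface{}) {
  for _, arg := range args {
    switch v := arg.(type) {
    case error:
      l.captureException(v) // hypothetical helper wrapping the raven client
    case *http.Request:
      l.captureRequest(v) // hypothetical helper wrapping the raven client
    }
  }
}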

But calling is still simple and natural:

func HandleRequest(w http.ResponseWriter, r *http.Request) {
  err := doAThing()
  if err != nil {
    log.Errorf("Error doing a thing for %s: %+v", r, err)
  }
}

Context

There is, unfortunately, a downside: any meta information you want to include with the Sentry error will get written to your log file, and it’s sometimes not incredibly helpful to have a log file filled with the exploded output of an HTTP request on every line. In theory, the AddMeta helper exists to fill this need:

err := doAThing()
if err != nil {
  log.AddMeta(r).Errorf("Error doing a thing: %+v", err)
}

But that’s not as nice, and still isn’t the developer experience we’re aiming for.

Our solution was a middleware pattern that leans heavily on context.Context. We use helpers to embed our Logger instances in the context.Context, then helper methods to retrieve them. These helpers are built into the logging package itself, so everyone’s using the same set/get code for Logger instances. This has the nice side effect of allowing our middleware to be aware of the Loggers. So we can, for example, write the following:

type ContextHandler func(context.Context, http.ResponseWriter, *http.Request)

func SentryRequestMiddleware(h ContextHandler) ContextHandler {
  return func(ctx context.Context, w http.ResponseWriter, r *http.Request) {
    logger := logging.LogFromContext(ctx).AddMeta(r)
    ctx = logging.SaveToContext(logger, ctx)
    h(ctx, w, r)
  }
}

Then every context.Context-aware handler we wrap in that SentryRequestMiddleware function will automatically associate the *http.Request that was being processed with any errors raised while processing that request.

The ability to pass our Logger instance around using context.Context has proven invaluable for having reusable code that still outputs useful logs.

Copying

We’re able to pass the Logger around using context.Context and assign request-specific variables to it because we’re using copying heavily in the Logger API. We purposefully and defensively copy a lot when using the Logger. You may have noticed above that AddMeta returns a Logger, instead of mutating the Logger in place; that’s because we wanted to be clear about the behaviour of the Logger in relation to the context.Context:

Any change you make to a Logger will only be reflected in calls that are passed that new instance. The old instance will continue to behave as before.

So, for example:

log1 := logging.LogFromContext(ctx)
log2 := log1.AddMeta(r)

log1.Error("Something bad happened")
log2.Warn("Everything's not shiny")

The Sentry error generated from log1 above will not have the *http.Request associated with it. But the error generated from log2 will. This allows us to make Logger instances that are specific to users or requests without fear of accidentally stepping all over another request’s data.
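
A minimal sketch of how that copy-on-modify behaviour can work (assuming, for illustration, a meta slice field on the Logger; the real field layout may differ):

// AddMeta returns a copy of the Logger with the extra metadata attached.
// The receiver is left untouched, so existing references behave as before.
func (l *Logger) AddMeta(meta ...interface{}) *Logger {
  copied := *l // shallow copy; the shared *sync.Mutex still guards the same writer
  // Build a fresh slice so the copy can't reach back into the original's backing array.
  copied.meta = append(append([]interface{}{}, l.meta...), meta...)
  return &copied
}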

Open Source

We’re pretty happy with the things our logging package allows us to do. And because we love open source (and thought it was silly to pay for this repo to be private), we’ve released it under an MIT license. You can find it on our GitHub. Issues and pull requests are welcome, but we use it as a tool for our specific needs, first and foremost. If you have different needs, hard forks are encouraged! We also make no promises about backwards compatibility on the master branch. We use vendoring or version pinning internally to manage this, and encourage you to do the same if you’re using this library.


05 Feb 2016 » How Healthy Is Your Team? by Victoria Marinucci

What would you learn about a team if you gave it a setting that encouraged honest observation and constructive discussion of its current health? That’s what a Team Health Check can do: reveal these insights and spark discussion.

Health Check, you say? The very mention might make a “tense” team cringe and a “happy” team shrug. The latter has its problems figured out, after all.

But in a dynamic environment like the one at DramaFever, where priorities change week by week, team situations are ever-changing as well. Some team developments are obvious, others subtle; some are positive, while other shifts are problematic. Sometimes a change affects one person on a team rather than the whole. You get the picture: a company change begets inter-team shifts with a variety of consequences. With so many possibilities (and busy teams), how can identification and resolution happen productively? There is no easy magic, but there is a family of exercises, known as Team Health Checks, that sets the tone for efficiently spotting these problems. All it requires is an hour of a team’s time on a quarterly basis, a good facilitator and some willing participants.

This all sounds fine and dandy, but the real test is giving it a go. That’s exactly what a few teams at DramaFever have done, and the results were interesting.

Here are some examples.

Experiment Run #1: Project Team

This quarter, the Project Team has been experimenting with Health Checks - specifically a Barometer Health Check, which enables a team to identify its strengths, weaknesses and hazy areas, and to feel empowered to discuss them in a candid yet blameless environment.

The Barometer Health Check identifies 16 crucial team characteristics (trust, collaboration and mutual respect, to name a few). Attendees have a set of cards for each one, including a positive (green) statement about that characteristic and a negative (red) version of that statement. A facilitator is present to run the exercise without adding context (that is the team’s job), tactfully guide the conversation, and keep participants on topic and within the timebox. The team votes on the health of each characteristic, an average score for each characteristic (and for the team overall) is determined, and, most importantly, a closing discussion is held about what the team discovers.

When the Project Team put this into practice (I facilitated the Health Check), I have to admit the results weren’t what I expected. I viewed my team as one of those aforementioned “happy” and engaged teams - healthy!

The results of the Check were otherwise. We received a 77%! Say what?

Though many pain points were due to being a newly established team, it was still a surprising score. Holistically, we did build up our communication lines and honesty as a result of this meeting. It also established monthly team-building exercises to improve our score, in addition to another Health Check in Q1 to see how we do!

Experiment Run #2: The Gophers

I also facilitated a Health Check with our Gopher team, and the reviews were quite good, except for one item: my tracking of votes per characteristic via sticky note was distracting. (Clearly, someone just wants sticky-note duty!)

All in all, the Check resulted in some open conversation and tangible action items.

Wrapping Up

All things said, understanding the health of your team is imperative, especially as big changes develop in a company. It’s not always simple to pull that information out of certain people, but methods like Health Checks provide an empowering environment to explore exactly that, with a simple, direct exercise and a considerate timebox.


27 Jan 2016 » Stop Running Out of Memory by Chris Agocs

Motivation

When I started here at DramaFever, I inherited a little Go service that resizes and crops images dynamically for display across a variety of devices. Nothing too complex or too critical a path, but still, a nice feature to have. You give the service the URL of an image and some transformations, and it applies those transformations to the image and returns the transformed image to you. It’s useful for dynamically cropping and scaling images for mobile / tablet / desktop devices. It’s a quiet service that hums along without bothering anyone. That is, until… well, read the error log:

fatal error: runtime: out of memory

During periods of high usage, my image service would eat up all the memory on the machine and crash. Fortunately, it’s Dockerized, so recovery is pretty easy – the container just restarts and processing picks back up. The magic of load balancing keeps it from affecting too many people. Only the users whose requests the service was processing when it bounced get their requests dropped on the floor. That’s not super terrible, but we can still do better.

We came up with a deceptively simple solution: just don’t run out of memory. If the service is about to run out of memory and it gets another request, it just doesn’t handle that request.

The Plan

In order to do this, we need to know two things:

  • The amount of memory we’re using
  • The amount of memory available to us

Then we need to make a determination. If we’re using, say, 90% of the memory available to us, we simply respond with a 500 status code. We keep doing that until our garbage collector has had a chance to run and our memory usage drops.

Allocated Memory

Surprisingly, how much memory we’re using is the easier of the two to find. The Go team generously provides a runtime package that contains a MemStats struct and a ReadMemStats func. ReadMemStats populates the MemStats struct; we then examine MemStats.Alloc to see the number of bytes allocated and not yet freed.

It is well worth noting that ReadMemStats is a stop-the-world event, so you should be careful not to call it too frequently (where defining “too frequently” is left as an exercise to the reader).
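
For reference, the check itself is tiny (a minimal sketch):

import "runtime"

// currentAlloc returns the number of heap bytes currently allocated and not
// yet freed. ReadMemStats stops the world, so call this sparingly.
func currentAlloc() uint64 {
  var m runtime.MemStats
  runtime.ReadMemStats(&m)
  return m.Alloc
}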

Available Memory

However, determining how much memory we have available to us is a completely different matter. Go gives us the (poorly documented) syscall package, which provides an Rlimit struct and a Getrlimit func. The value stored in Rlimit.Max is the same as the value you would get by running $ ulimit -m from your command line, telling us the maximum resident set size (the chunk of memory that can be allocated by a process).

Unfortunately, “unlimited” is a perfectly valid value for ulimit -m, in which case our Rlimit.Max is 18,446,744,073,709,551,615. In that case we need to know how many bytes of memory are in the machine itself. The straightforward but tedious steps to do this (sketched in code after the list) are:

  • Open /proc/meminfo
  • Read the contents of /proc/meminfo
  • Close /proc/meminfo
  • Match the line with the string “MemTotal:”
  • Split that line on the space character
  • Get the second value from the end (this will be the total installed memory in kB)
  • Convert that string to an unsigned 64-bit integer
  • Multiply that value by 1024.
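
Here’s roughly what those steps look like in Go (a sketch with only minimal error handling):

import (
  "errors"
  "io/ioutil"
  "strconv"
  "strings"
)

// totalSystemMemory reads /proc/meminfo and returns the total installed
// memory in bytes.
func totalSystemMemory() (uint64, error) {
  // Open, read, and close /proc/meminfo.
  contents, err := ioutil.ReadFile("/proc/meminfo")
  if err != nil {
    return 0, err
  }
  for _, line := range strings.Split(string(contents), "\n") {
    // Match the line with the string "MemTotal:".
    if !strings.HasPrefix(line, "MemTotal:") {
      continue
    }
    // Split the line on whitespace; the second field from the end is the
    // total installed memory in kB (the last field is the "kB" unit).
    fields := strings.Fields(line)
    kb, err := strconv.ParseUint(fields[len(fields)-2], 10, 64)
    if err != nil {
      return 0, err
    }
    return kb * 1024, nil // convert kB to bytes
  }
  return 0, errors.New("MemTotal not found in /proc/meminfo")
}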

Of course, that could go wrong in a bunch of different ways. Let’s ignore them all for the sake of brevity.

So, if we’re lucky, we now know how much memory is installed in the machine (let’s also assume that the OS and other processes take up a small percentage of that, and that our microservice can eat the lion’s share), and the maximum amount of memory the OS will allow us to allocate. Our available memory is the smaller of those two numbers: we’ll OOM if we try to allocate more memory than that.

In Summary

When we start our program, we determine our available memory. At a large granularity (so as to avoid stopping the world with unnecessary frequency), we determine our allocated memory. When our allocated memory gets within a fixed percentage (we used 90%) of our available memory, we stop processing new HTTP requests until some old requests finish and our garbage collector has had a chance to run.
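
To tie it together, here is a rough sketch of that gate (an illustration built from the pieces above, not memMinder’s actual API): a background loop samples allocated memory at a coarse interval, and an HTTP middleware returns a 500 while we’re over the threshold. You would start the watcher with go watchMemory(available, 30*time.Second) and wrap your handlers in memoryGate.

import (
  "net/http"
  "runtime"
  "sync/atomic"
  "time"
)

// overBudget is 1 while allocated memory exceeds the threshold.
var overBudget int32

// watchMemory polls allocated memory at a coarse interval (ReadMemStats stops
// the world) and flips the overBudget flag when we cross 90% of available.
func watchMemory(available uint64, interval time.Duration) {
  threshold := available / 10 * 9
  for range time.Tick(interval) {
    var m runtime.MemStats
    runtime.ReadMemStats(&m)
    if m.Alloc > threshold {
      atomic.StoreInt32(&overBudget, 1)
    } else {
      atomic.StoreInt32(&overBudget, 0)
    }
  }
}

// memoryGate refuses new requests with a 500 while we are over budget, giving
// in-flight requests and the garbage collector a chance to bring usage down.
func memoryGate(next http.Handler) http.Handler {
  return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    if atomic.LoadInt32(&overBudget) == 1 {
      http.Error(w, "temporarily over memory budget", http.StatusInternalServerError)
      return
    }
    next.ServeHTTP(w, r)
  })
}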

That’s a huge pain in the butt, so we wrote a library to do it for us: https://github.com/DramaFever/memMinder