Testing Time

March 6, 2013

Ever had problems testing a class that used the static methods of System.DateTime to check times?

It’s a common problem, and tight coupling to this admittedly handy class makes a lot of code hard to test.

Your first inclination may be to wrap a class around System.DateTime and inject it into every class that needs to know about the time, but the time is such a common thing to check against that you’ll have to inject it all over the place. Sure, with a proper IoC container that may not seem too bad, and it’s better than going directly against System.DateTime – but today Michael Smith showed me a better way at the NNUG Vestfold meeting.

What you do is create a static class – he called it SystemClock – with a GetTime method that returns the current time. How does that help? It’ll just make your tests depend on a static class, you might say. But the stroke of genius is to give it a second method, SetTime, which returns an IDisposable. Once SetTime has been called, SystemClock stops delegating to System.DateTime.Now and starts returning the set time. When Dispose is called, SystemClock goes back to delegating to System.DateTime.Now! You can even implement a stack to make the usings nestable!

You change all your usages of DateTime.Now to SystemClock.GetTime, and in your tests you just set up the SystemClock to return the time you want inside a using block – and Bob’s your uncle!

Here’s my simple implementation of the SystemClock class, based on Michael’s description (note that the static state is shared, so this is not safe for tests that run in parallel):

using System;
using System.Collections.Generic;

namespace SystemClock
{
    public static class SystemClock
    {
        // Times pushed by SetTime; the top of the stack overrides the real clock.
        static readonly Stack<DateTime> _setTimes = new Stack<DateTime>();
        static readonly Popper _disposable = new Popper();

        public static DateTime GetTime()
        {
            // Return the most recently set time, or the real time if none is set.
            return _setTimes.Count > 0
                       ? _setTimes.Peek()
                       : DateTime.Now;
        }

        public static IDisposable SetTime(DateTime setTo)
        {
            // Freeze the clock at setTo until the returned IDisposable is disposed.
            _setTimes.Push(setTo);
            return _disposable;
        }

        private class Popper : IDisposable
        {
            public void Dispose()
            {
                // Drop the current set time, reverting to the previous one
                // (or to DateTime.Now when the stack is empty).
                _setTimes.Pop();
            }
        }
    }
}

And here are some specs to show it in use:

using System;
using Machine.Specifications;

namespace SystemClock.Specs
{
    [Subject(typeof (SystemClock))]
    public class when_retrieving_an_unset_time
    {
        Establish the_expected_time = () =>
        {
            expected_time = DateTime.Now;
            within_a_second = new TimeSpan(0, 0, 0, 1);
        };

        Because the_time_is_gotten = () => current_time = SystemClock.GetTime();

        It should_return_the_expected_current_time =
            () => current_time.ShouldBeCloseTo(expected_time, within_a_second);

        static DateTime current_time;
        static DateTime expected_time;
        static TimeSpan within_a_second;
    }

    [Subject(typeof (SystemClock))]
    public class when_retrieving_a_set_time
    {
        Establish the_expected_times = () =>
        {
            expected_time_within_the_using
                = new DateTime(2000, 1, 1, 0, 13, 45);
            expected_time_inside_the_nested_using
                = new DateTime(1990, 1, 1, 14, 45, 58);
            expected_time_outside_the_using = DateTime.Now;

            within_a_second = new TimeSpan(0, 0, 0, 1);
        };

        Because the_time_is_gotten_outside_and_inside_a_set_time_block = () =>
        {
            time_before_the_using = SystemClock.GetTime();
            using (SystemClock.SetTime(expected_time_within_the_using))
            {
                time_within_the_using = SystemClock.GetTime();
                using (SystemClock.SetTime(expected_time_inside_the_nested_using))
                {
                    time_within_the_nested_using = SystemClock.GetTime();
                }
                time_within_the_using_after_inner_nesting = SystemClock.GetTime();
            }
            time_after_the_using = SystemClock.GetTime();
        };

        It should_return_the_current_time_before_the_set_time_block =
            () => time_before_the_using
                .ShouldBeCloseTo(expected_time_outside_the_using, within_a_second);

        It should_return_the_expected_time_within_the_using =
            () => time_within_the_using
                .ShouldEqual(expected_time_within_the_using);

        It should_return_the_expected_time_within_the_nested_using =
            () => time_within_the_nested_using
                .ShouldEqual(expected_time_inside_the_nested_using);

        It should_return_the_expected_time_within_the_using_after_the_inner_nesting =
            () => time_within_the_using_after_inner_nesting
                .ShouldEqual(expected_time_within_the_using);

        It should_return_the_current_time_after_the_set_time_block =
            () => time_after_the_using
                .ShouldBeCloseTo(expected_time_outside_the_using, within_a_second);

        static DateTime time_within_the_using;
        static DateTime expected_time_within_the_using;
        static TimeSpan within_a_second;
        static DateTime expected_time_outside_the_using;
        static DateTime time_before_the_using;
        static DateTime time_after_the_using;
        static DateTime expected_time_inside_the_nested_using;
        static DateTime time_within_the_nested_using;
        static DateTime time_within_the_using_after_inner_nesting;
    }
}

I hope this helps you in your testing!


Legs

March 5, 2013

A: We’re going to have to build a robot.
B: What?
A: We are here (points at a crude map) – we have to go here (points at another point on the map). We’ll build a robot to get there.
B: Cool, robots are awesome! Do you know anything about robots? I’ve never built one, but I’ve seen them on TV.
A: Me too, but I know the cool ones’ve got legs – and legs move them around – so we’ll build one to get us from here to there.
B: I’m sold, let’s do it!
C: Hey, you know, legs are kinda inefficient.
A: Huh? Where did you come from? What do you mean inefficient? I have legs, and they take me everywhere no problem.
C: Yeah, but think about it – every time you move one forward the other one gets left behind. You have to pull it back underneath you before you can even start gaining any distance in the next step.
B: Yeah, that’s true, but if we build really long legs we can make a lot of progress in few steps, and long-legged animals are fast, so we’ll get there fast.
A: Fast is good, and that’ll mean we’ll have fewer inefficient back-step-thingies – right?
C: Really?
B: Yeah, it’s obvious, longer steps will be efficienter!
C: Are you sure? It sounds so right, but feels wrong somehow.
A: Sure we’re sure – I’ve seen pictures of animals with long legs, and they look really fast.
B: And graceful, they’re graceful.
C: OK, just a few, long steps’ll get us over there. Let’s build the robot!
A: Yeah!
B: Awesome!

Later

C: Well, that didn’t go too well, did it?
B: Who’d have imagined – it seemed like such a great plan.
A: Yeah, it’s the guys who drew the map’s fault – we can’t be faulted for getting the wrong target.
B: Damned well can’t!
C: The map we got didn’t really fit the terrain very well at all.
A: You’re right! We never even saw that pond, it wasn’t marked off at all!
C: No, but they did say they wanted to get from here to there, not that they knew all the terrain in-between.
B: There you go! Stands to reason, we can’t get the robot to walk across unknown terrain with no knowledge of it! It’s ludicrous!
A: Yeah, we’ll have to do more research the next time – on the terrain and whatnot – ponds and stuff.
B: Silly, really, placing a pond just in the middle of a big, flat field.
C: So – what do we do now?
A: Well, they want us to move from here over there now.
B: Any ponds in between here and there?
A: No, no ponds in between – they’re quite sure. They’ve used a lot of money and gotten someone who knows the area – and he assures us it’s very dry.
B: Excellent!
C: No mountains either?
A: Ah, good point – I’ll check … No, he tells me it’s reasonably flat and very dry, and a bit hot.
B: Is hot a problem?
C: No, I don’t think so – we’ll just add a bit of cooling to the robot.
A: Terrific! Now, let’s get the robot turned around, add the cooling and be off!

Later

B: Well, did you tell them that’s not what they asked for?
A: Yes, I did, and they say they know – but we’re still in a bad place, and complaining won’t help. We’ll just have to change our direction.
B: But, the robot was never designed to change direction once we’ve gotten it going – that’s the whole point of the long legs! We have long legs and know the terrain, we’ll get there quickly and safely!
C: We’ll get to the wrong place quickly, you mean.
B: It’s not the wrong place, it’s exactly where they told us to go!
A: I know, but that was before we knew how hot it was!
C: And, there’s no water anywhere. Our cooling-system runs on water.
B: Argh! Well, just don’t let them blame us for this – we only did what we were told.
A: Just turn this thing around as soon as possible – we have to get out of this desert-thingy and over to the new there.
B: Yeah, yeah, I’ll try to shut it down. I just hope we can get it started back up, it’s never been designed for this, you know!
A: I know, but if we keep going it won’t be able to get us back out of this desert – and then how are we going to cope?
B: Very well, but it’s not what they specified.
C: We know. I’m sure they’re very sorry. Now shut it down before we end up even deeper in this desiccation!

Later

A: Well done, lads, we’re out of it. It seems we might be able to continue on after all.
B: I’m not sure I want to.
A: What do you mean?
B: We built a great, efficient robot – with long, graceful legs – and they let us down time and time again.
A: Yeah, they said they were sorry about that one.
B: Not sorry enough, and now they want us to take them over to that other place? How can we trust that place won’t be a hell-hole like the last one?
A: It’s bound to be better than this dismal place. At least there’s some water here.
B: Have we at least got some good knowledge of the terrain we’re crossing this time?
A: No, we used too much money last time and can’t afford another local to tell us about the route we’re going.
C: Are we sure that place we’re going is a good place to be this time, then?
A: Well, they think it might be better than this one, and they’re pretty confident it’s not as bad as that hot place.
B: But they said they were sure about the last place! It took us days to get the robot turned around last time, and they were complaining all the time. I don’t want to have to do that again, it’s horrid.
C: We’ll make the legs shorter.
B: What?
A: What are you on about?
C: Think about it – if we make the legs shorter we don’t take as long steps, and it’ll be much easier to turn around.
B: But, the long, graceful legs are the point of this robot!
A: And, they’re so efficient!
C: I’ve been thinking – with shorter legs we get a closer look at where we’ll be in one step, and we can try to steer away from those bad places before we hit them. And if they decide we need to go some other place in the middle of the trip, it’ll be a much shorter turn-around.
B: But short steps mean we’ll be having more times when one leg is trailing behind and has to be pulled back in under us.
C: Yeah, about that – I’m not sure the longer steps are more efficient. With the long legs we have to pull the trailing leg an awfully long distance before it starts adding some distance to our trip.
A: So, you’re saying long legs aren’t more efficient?
B: I’m not sure about this.
C: I think the long legs might be more efficient – if we know where we’re going to step and where we’re going. But I’m not sure we do, and I’m very sure we want to be able to turn around when they change their mind about where to go.
A: But it’ll take forever to build new legs! It took us weeks last time just to make schematics for the long ones.
B: Not necessarily – it took that long because they had to be very long. It’s actually much easier to make short ones, and now that we’ve made long ones once we know some of the things not to do.
C: It’ll take some time to retrofit the robot, for sure, but we’ll be able to do it in a little while. We might not go as fast as the long legs could’ve if we’d ever gotten them up to full speed – but with these we might actually get to top speed from time to time.
B: And we can steer the robot.
A: You know what we should do? We should make a window on the front – so we can see where we’re going!
B: That’s a great idea! Let’s do that!

Later

A: Who’d have thought this was where we’d end up?
B: Without your window we never would have noticed this place.
A: And, without your new, shorter legs we never would have been able to change direction mid-trip like that.
C: Hey, you guys – I’ve got this idea.
B: What is it now?
C: Ever heard of this thing called a wheel?

Categories: Uncategorized

Looking for Trouble…

November 5, 2012

Refactoring is the process of improving the quality of code without altering the behaviour of the system, and it’s an important part of a programmer’s job. Most good developers try to follow the boy-scout principle when working with code – attempting to leave it in a better state than they found it. As an ongoing practice this helps battle the entropy in large systems, but when working with large legacy systems it can be helpful to take a more measured approach to refactoring.

Why metrics

In attempting to answer the question “What parts of the system should we work to improve through refactoring?” we can use code metrics to identify areas that would benefit from some tender love and attention. This is a good first step, but it’s even better if we can identify not only which files are in a sorry state, but which are regular targets of meddling. Large and complex files that rarely change might be hard to work with, but if they’re never changed it indicates that they either work or are no longer in much use – our time is therefore better spent on files that we are changing constantly. Files that are touched often by the developers are probably either parts that deal with quickly-changing business functionality (which is good) or a sign of poor architecture leading to tightly-coupled code that has to be changed whenever something else is changed (which is very bad).

Inspiration

Michael Feathers’ Getting Empirical about Refactoring post, and his talks on the subject available on the internet, served as inspiration for this particular delve into a legacy system. We use git here at Komplett, so the history of file changes is readily available. Corey Haines’ blog post on Turbulence, measuring the turbulent nature of your code, was the inspiration for the shell scripts that extract the churn metrics from the git log. I’ve slightly altered the regex and outputs, but the gist is much the same.

Code History

File churn is a metric on the commit history: simply the number of times a particular file has been part of a commit. It identifies regularly-touched files and, conversely, files that are no longer touched. This gives us the main metric on file history, and it’s easily extracted from git with the following command:

git log --all -M -C --name-only | grep -E '^(Projects)/.*\.[^//]+$' |
    sort | uniq -c | sort |
    awk 'BEGIN {print "file,count"} {print $2 $3 $4 $5 "," $1}' > file_churn.csv

The result is a file named “file_churn.csv” listing each file matching the regex along with the number of commits it has been part of. This was a solution on Windows with folders containing spaces, so I had to print several fields in the final awk to piece the filename back together (the spaces themselves are lost in this version).

I was also interested in how the file churn aggregated across files – that is, how many files had been committed only once, twice, and so on. This is also easily extracted from the git log with a single line (the power of a real shell!):

git log --all -M -C --name-only | grep -E '^(Projects)/.*\.[^//]+$' |
    sort | uniq -c | sort | awk '{print $1}' | uniq -c | sort |
    awk 'BEGIN { print "churn_count,frequency"} { print $2 "," $1}' > churn_frequency.csv

This command results in a file “churn_frequency.csv” which lists each churn count and the number of files with that churn count. Try it yourself on your repository (but remember to change the regex)!

Importing the file churn into Calc resulted in a long list of filenames and their churn:

File Churn Sheet

Plotting the file-churn we easily see the expected hockey-stick graph, with a few files with massive numbers of commits, but most files having very few commits, and most having just one.

 

File Churn Chart

The aggregated view shows that by far most files have only 1 commit, and fewer and fewer have n number of commits as n increases:

File Churn Frequency Chart

 

Complexity

Many posts have been written on code complexity and what it actually means. I think complexity measures are interesting tools, but they rarely tell the whole story about a code base. For this exercise I needed a simple numeric complexity for the files I’d extracted the churn for, so I used SourceMonitor to analyze the C# code in this legacy solution. The nice things about this tool are that it’s fast, easy to use, and exports cleanly to a CSV file I can work with. It also has some very nice checkpoint functionality and graphing that I didn’t use (although the user interface is quite ugly). Regrettably this analysis only considers C# code in .cs files, and quite a lot of this system is written in XSLT, CSS and ASPX files. These are therefore not part of the complexity analysis, which is a shame – but they are part of the churn metric.

Once I’d imported the CSV into Calc it looked like this:

Source Monitor Sheet

Massaging the Numbers

I imported the .csv files with the file churn and the SourceMonitor analysis into LibreOffice Calc, my spreadsheet editor of choice; everything described here is easily reproduced in Microsoft Excel or any other spreadsheet editor, I imagine. First I washed the file names so they followed the same conventions in both sheets (removing any pre- or postfixes and making sure the slashes pointed the same way). That left me with two sheets – one with the file names and their churn, the other with all the information from SourceMonitor.
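That washing step can be sketched in the shell as well. The prefix and file names below are purely illustrative – you’d adjust them to whatever conventions your two exports actually use:

```shell
# Rewrite backslashes to forward slashes and strip a leading folder prefix
# so both CSVs use the same file-name conventions.
# ("Projects/" and the file names are illustrative -- adjust to your data.)
sed -e 's|\\|/|g' -e 's|^Projects/||' source_monitor.csv > source_monitor_clean.csv
```

The same two substitutions can be run over the churn CSV if its paths differ in the other direction.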

Making the sheet that merges the two views of the code base was the most time-consuming task by far, but I finally got it to work. If you want to do this yourself, the formulas for columns A through D are (for row 2):

  • File Name: =Source_Monitor.A2
    I use the Source_Monitor sheet as the basis, as not all files in File_Churn have an entry in Source_Monitor.
  • Churn: =VLOOKUP(A2;File_Churn.$A$2:$D$12040;2;1)
    You’d need to change the range here and select the correct column (2 in my case). The final parameter 1 assumes File_Churn is sorted on the file name; use 0 for an exact match.
  • Max Complexity: =Source_Monitor.L2
    Might have to be changed if Max Complexity is not in column L for you.
  • Average Complexity: =Source_Monitor.P2
    Might have to be changed if Average Complexity is not in column P for you.

All this resulted in a sheet like this:

Complexity Analysis Sheet

It should be eminently possible to automate this mostly manual analysis step, and it’s probably something I will do the next time I want to look at these numbers – anything worth doing twice is worth automating. It is nice to be able to access the raw numbers and do ad-hoc analysis on them, though.
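As a first stab at that automation, the VLOOKUP step can be done in the shell with `join`. A sketch, assuming the churn is in file_churn.csv (file,count) and the SourceMonitor numbers have been exported to a hypothetical complexity.csv (file,max_complexity,avg_complexity):

```shell
# Join churn counts with complexity numbers on the file-name column.
# Both inputs must be sorted on the join field first.
sort -t, -k1,1 file_churn.csv > churn_sorted.csv
sort -t, -k1,1 complexity.csv > complexity_sorted.csv
join -t, -1 1 -2 1 -o 0,1.2,2.2,2.3 \
    churn_sorted.csv complexity_sorted.csv > merged.csv
```

Note that files present in only one of the inputs are dropped; `join -a 1` would keep the churn-only rows if you want them.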

What Did We Find?

Plotting both file churn and complexity on the same graph gives us some obvious targets for refactoring. Generally we want low complexity and low churn, as this indicates a class with a single responsibility and a low rate of change.

Files with a high complexity and a low churn rate are those large, difficult classes that either just work or are no longer in use. If they keep working and the business needs do not indicate any changes, we leave those alone if we can – our time is better spent elsewhere.

Classes with high churn and low complexity are typically either data classes containing lots of data but little functionality, or formerly troublesome classes that have since been dealt with (by reducing complexity – the accumulated churn never goes away, though). These are potential targets for refactoring and deserve a closer look.

Finally we have the classes with high complexity and high churn – typically core functionality or tightly-coupled classes that need to change every time something else must change. Identifying these is valuable, and this is where we will focus our refactoring attention.

Michael Feathers (again) has a nice post called Data Mining your VCS on how to read these charts.

Churn Max Complexity Chart

 

It might look like we have a nice clumping down in the good quadrant here, but actually we don’t. Complexity should really never be above 15, and preferably below 10. It’s hard to give a target for churn, but any file that’s been part of over 20 commits should probably be flagged as potentially problematic. I’d like to see a graph of the churn over time, to see whether all the churn on a particular file happened a long time in the past, making the file appear higher-churn than it presently is.
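Those rough thresholds are easy to turn into a quick filter once the merged numbers are exported back to CSV. A sketch, assuming a hypothetical merged.csv whose columns are the file name, churn and max complexity:

```shell
# List files that break both thresholds: more than 20 commits
# and a max complexity above 15. NR > 1 skips the header row.
awk -F, 'NR > 1 && $2 > 20 && $3 > 15 { print $1 }' merged.csv > hotspots.txt
```

The resulting hotspots.txt is the short list of prime refactoring candidates worth a manual look.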

Looking at the average rather than the max complexity, we see a somewhat similar pattern. This is partly obscured by an outlier with a very high average complexity. Closer inspection revealed that the class had only one method, and therefore a high average (as that was a complex method).

Churn Average Complexity Chart

Summary

Looking at both the churn of the files and the complexity of code can help us focus our attention on potential targets of refactoring. It gives a high-level view of what classes should be looked at in closer detail, and importantly which classes can be considered stable and thus lower-value when considering refactoring.

High-churn files should be looked at as potentially too tightly coupled, but it might be acceptable in some cases. High-churn/high-complexity files should be considered as prime targets for refactoring to increase future velocity and system confidence.

Low-churn/high-complexity files, while not optimal, may be acceptable given limited resources to spend on cleaning the code.

When evaluating the complexity/churn it is important to dig deeper into the classes under scrutiny. There may be good reasons why the churn is high, and what seems like an acceptable complexity (relative to other classes in the system) may still be too high.

Usage Statistics for September 2012

October 16, 2012

Welcome to the usage statistics for September 2012. We have not traditionally published statistics for our e-commerce sites, so this is a new experience for us.

We will focus on two of our brands here, as they nicely capture two subtly different markets.

Our first subject will be komplett.no, which is our largest brand and one of the largest e-commerce sites in Norway. The site is popular among nerds, geeks and technical people (my kind of people). This is also reflected in the browser distribution we see:

As you can see Chrome is the clear winner here with over 38% of the visits coming from it. Second and third place go to IE and Firefox, with about 20% and 17% each, and Safari is close behind with a respectable 15%. More on that later.

Let us look at the second subject, MPX, which is a smaller site in pure numbers, and with quite a different distribution of browsers:

As you can see the story is very different here, possibly on account of the less geeky customer-base. IE reigns with almost 30% of the visits, and Chrome comes in second with almost 27%. Firefox at 16%, almost matching its performance on Komplett, is beaten by Safari with an impressive 18% of the visits.

When developing for the web it’s important to know which browsers we have to support, but that is not the whole story. Different versions of different browsers can vary substantially in which features they support, so we’ll dive a bit deeper into which versions of the big browsers we’re seeing. By the way, these numbers are aggregated on both of our subject brands. First off, Internet Explorer:

Obviously IE7’s days as a major browser seem numbered, at about 3% – about time, one might say, for a browser from 2006. Its big brother, IE8 from 2009, is still a large part of our reality with 34%. IE9, released in 2011, has a solid lead with 63%, and IE10, which is still only available as a platform preview for Windows 7 or on Windows 8, is barely noticeable at less than 1%.

Chrome is a different story, as it updates itself regularly by default. On account of this the major versions we see in our breakdown are more indicative of the releases of Chrome that were available this September. For all intents and purposes our development against the Chrome browser always considers the latest version of that browser. Version 22 was released the last week of September, and this is probably why v21 dominates with 75% of the logged visits.

Firefox also tries to update itself, but less aggressively so than Chrome. In this September breakdown we see that most users (59%) are already on v15.0.1, which was released in early September, while 17% are still on the August-release v15. There is even a sizeable group of stragglers hanging on to 14.0.1, which came out in the middle of July and quite a few on even older versions (9%). Apparently the browser-breakdown in Firefox-land is more eclectic than on Chrome, but still much more recent than on Internet Explorer.

When it comes to Safari we still see a breakdown by version, but more interesting is the breakdown by operating system. Safari is used on both Mac and iOS, which gives an interesting view of how many Mac/iOS users browse our sites on their touch devices versus their Macs. I’ve coloured the iOS users black, the Mac users purple, and the very few Windows users blue.

It is obvious that iOS dominates the Safari usage, with about 60% of the Safari users. Windows usage of Safari is under 1%. Let’s break down on which operating-systems our visitors use. The usage numbers here are quite similar on both brands, so we’ll look at the aggregated numbers:

It’s still a Windows-world out there, but significantly iOS is now on par with Mac. Android is still about half of the numbers iOS can deliver.

These were our numbers from a high-level view this September, and we’ll continue to publish numbers on the browsers and OS’es we see as time goes on.

I hope these numbers were interesting to you! This is just one look into the e-commerce audience in Norway, but these are some of the larger actors in that market.

Categories: Statistics

Recommended: “Evolving E-commerce Checkout” by Luke Wroblewski

Luke Wroblewski (https://twitter.com/#!/lukew) just posted a must-read overview of the state of checkout forms in e-commerce. What works, what doesn’t, where are we going?

He points directly at a painful point in the Komplett checkout process:

Complicated shipping policies and calculations that have to be done by the customer, on the other hand, are likely to make conversion rates suffer.

And he shares some great and eye-opening statistics, confirming my gut feelings:

In 2004, 60% of all online retailers considered free shipping to be their most successful marketing tool. In 2011, a record 92.5% of online retailers are expected to offer free shipping promotions.

45% of all customers for a major Web retailer had multiple registrations and 75% of people who tried to login to an existing account by recovering their password never completed their purchase.

I highly recommend you read the entire article here:

http://www.lukew.com/ff/entry.asp?1579

Meanwhile, I’ll get back to those freight calculations… 😉

Categories: Design, Development

Winter is Coming, time for NDC videos!

As of 23:09 UT (http://en.wikipedia.org/wiki/Solstice) tonight, winter officially begins here in the Northern Hemisphere. And what better way to spend it than watching the best videos from this year’s NDC?

I’ve rounded up the NDC recommendations from the Komplett Web Team guys who attended this year, with links directly to the newly published videos.

The Highs

  • Aral Balkan – A Happy Grain of Sand (keynote).
    Should be mandatory for anyone working on anything to be used by humans. Brilliant!
    https://vimeo.com/43524962
  • Dan North – Decisions, Decisions.
    Challenge the established truths and best practises; don’t just do TDD or CI because someone said so – THINK for yourself.
    https://vimeo.com/43536417
  • Fred George – MicroService Architecture.
    Build lots of small cooperating services. The importance of versioning and “hard shell, soft inside”.
    NO VIDEO YET 😦 (http://ndcoslo.oktaset.com/t-4865)
  • Venkat Subramaniam – Rediscovering JavaScript.
    Highly recommended!
    https://vimeo.com/43612882
  • Roy Osherove – The Software Team Leader Manifesto.
    Practical insights into leading technical teams. Very good talk.
    https://vimeo.com/43612918
  • Venkat Subramaniam – Design Patterns for .NET Programmers.
    Very good! Especially the “Execute Around Method” pattern.
    https://vimeo.com/43612851
  • Anders Norås – Learn how to build a modern browser application using Backbone.js and ServiceStack.
    https://vimeo.com/43603509
  • Gojko Adzic – Reinventing software quality.
    People have misunderstood BDD! Good talk.
    https://vimeo.com/43612920
  • Udi Dahan – Commands, Queries, and Consistency.
    An aggregate root is a “consistency boundary”. Worth seeing!
    https://vimeo.com/43612850
  • Billy Hollis – Creating User Experiences: Unlocking the Invisible Cage.
    Use your imagination, don’t limit yourself. Take risks! Everyone should see this.
    (See all Billy Hollis videos, actually: http://ndcoslo.oktaset.com/p-360)
    https://vimeo.com/43624502
  • Dan North – Patterns of Effective Delivery.
    If your team is right and the system you are working on is right (chaotic), breaking all conventions is OK. See this to get a different perspective on software development than the current consensus.
    https://vimeo.com/43659070
  • Lea Verou – CSS in the 4th dimension: Not your daddy’s CSS animations.
    https://vimeo.com/43624462
  • Richard Campbell – Ten Web Performance Tuning Tricks in 60 Minutes.
    Performance can be calculated; tuning IIS; perf tools in VS. Good!
    https://vimeo.com/43659068
  • Damian Edwards – SignalR: Awesome in Real-Time with ASP.NET.
    PURE MAGIC, MUST SEE!
    https://vimeo.com/43659069
  • Hadi Hariri – Developers: The Prima Donnas of the 21st Century.
    We can’t act as prima donnas! Get a refreshing kick in the behind and see yourself in a new light.
    https://vimeo.com/43672296
  • Jimmy Nilsson – An architecture remake.
    Jimmy shares real-world experience from a large project at Statoil: how they went from “each developer in a silo” to a truly domain-driven solution.
    https://vimeo.com/43676875
  • Billy Hollis – Creating UX with Story Boarding.
    Eye-opening, at least compared to our current design process.
    https://vimeo.com/43690684
  • Fred George – Programmer Anarchy.
    Taking agile to the extreme! Free the developers to do what’s best for the business by themselves.
    https://vimeo.com/43690647
  • Christian Johansen – Pure JavaScript.
    Highly advanced, using functional techniques in JavaScript.
    https://vimeo.com/43808808
  • Jimmy Nilsson – The era of tiny.
    https://vimeo.com/43808771
  • Venkat Subramaniam – Caring about Code Quality.
    https://vimeo.com/43808772
Categories: Coding, Design, Development, Process

Even the warehouse robots are geeks at Komplett…


L33t!

Categories: Fun