Tuesday 31 July 2018

Tools and why we use them

Jumping on a new sexy tool/fad are we?


We constantly need to reevaluate the state of play: tooling, practices and languages are constantly changing around us. Assuming that what we thought was good yesterday is still good can be a massive downfall, and in all honesty, there's a pretty good chance it was never good in the first place.

I can definitely see a trend: generally, when we think something is good we overdo it. Next thing you know it's a best practice and we do it everywhere, not just where it makes sense.

I remember switching to dependency injection and suddenly deciding it should be in every class; everything should be injected. Instead of deciding case by case to use it where we needed to test, we just did it everywhere. That leaves us not using our brains, just running on autopilot. Perhaps we should always fall back to thinking about what things are actually good for.

I also saw an interesting talk lately on why we shouldn't always use microservices. The main point was that we should start off with a single application and move to microservices when they would benefit the project. Though I completely agree with this, it can be hard to move to microservices when you haven't done them before, so perhaps mandating a practice for a specific application from the start is OK if one of the main goals is to learn the practice.

So my point really is to identify the benefit of a tool or practice, and to be careful in this identification. I would say architectural, i.e. SOLID-style, reasoning is not good enough to justify a tool on its own. For example, if you say that dependency injection is for decoupling dependencies, can we please ask what the purpose of the decoupling is? I would say dependency injection is for allowing us to inject stubs (not mocks!! :/) at test time, so how about we just use it where we want to inject test things. Also, with some restructuring, just having the consumer pass in the dependencies is often much easier than using a dependency container.
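
For example, a sketch in JavaScript (all names invented) of having the consumer pass the dependency in directly, no container required:

```javascript
// Hypothetical example: the consumer hands the dependency in,
// so a test can hand in a stub just as easily.
function createOrderNotifier(sendEmail) {
  return {
    notify(order) {
      return sendEmail(order.customerEmail, `Order ${order.id} shipped`);
    },
  };
}

// Production code would pass the real thing:
//   const notifier = createOrderNotifier(realSendEmail);

// A test passes a stub and inspects what it was called with.
const calls = [];
const notifier = createOrderNotifier((to, msg) => calls.push([to, msg]));
notifier.notify({ id: 7, customerEmail: "a@b.com" });
console.log(calls[0]); // [ 'a@b.com', 'Order 7 shipped' ]
```

No framework, no container, and the stub is just a plain function.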

Maybe next time we are trying something we should also try a couple of manual alternatives and evaluate the differences to help us avoid practices that we can do without.

Monday 30 July 2018

Closures

Closures are great! When I first started coding in JavaScript I wondered how you manage to hide data in your objects... The answer: closures!!

Let's take a normal class (oh thanks ES6!)
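
For example (a hypothetical counter, kept deliberately small):

```javascript
// An ES6 class. Note that `count` is reachable from outside the object.
class Counter {
  constructor(start) {
    this.count = start;
  }
  increment() {
    this.count += 1;
    return this.count;
  }
}

const counter = new Counter(1);
counter.increment();
console.log(counter.count); // 2 — nothing stops outside code reading or writing this
```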

Now how would we do the same with a closure?
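
Something like this (the same hypothetical counter, as a factory function):

```javascript
// A factory function: `count` lives in the closure and cannot be
// touched from outside; only the returned methods can see it.
function createCounter(start) {
  let count = start;
  return {
    increment() {
      count += 1;
      return count;
    },
    current() {
      return count;
    },
  };
}

const counter = createCounter(1);
counter.increment();
console.log(counter.current()); // 2
console.log(counter.count);     // undefined — the data is hidden
```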

The factory function stores all the information inside its closure when it is executed, so all passed-in parameters are available to any functions on the created object. It also makes them inaccessible to anything on the outside. Elegant and simple, no extra syntax required.

After using this pattern for a while in some applications, I noticed that we were often returning an object with only a single function attached to it. In these cases, is it even necessary to have the object?
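
A sketch of what I mean, with a made-up example:

```javascript
// When the object would only ever expose one method,
// just return the function itself.
function createGreeter(name) {
  return () => `Hello ${name}!`;
}

const greet = createGreeter("Ada");
console.log(greet()); // Hello Ada!
```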

In this example, we just return a function that can do the work. This is simpler, and it also helps ensure the abstraction performs only a single task.

Sunday 29 July 2018

Test First Pipeline

So we've gone from testing after the code has been merged to master to testing before it is committed, and it's great. But I think we can go further. What I suggest is that we test before we develop: much like TDD, testers and the PO would design UI/acceptance tests before the feature is coded. Then the tests are run before the code is merged; much like TDD, this helps guarantee the code is testable before it is created.

This would also require stronger collaboration between developers, testers and business analysts, hopefully resulting in a good working environment focused on quality. I think this mentality would also be good for teams: always thinking about how something is going to be tested and its impact on quality, as many development issues arise because one role isn't sufficiently supporting the others.

The whole team can also decide on the scope that requires testing if there is to be any manual testing, which parts are likely to break, etc.

Friday 27 July 2018

Solo Programming

Solo coding is a new practice where a developer sits down, puts their headphones in and works completely on their own. This allows for concentration and individual focus levels just not achievable in group working! It means the team can output more work! Every developer can maximise their output!


Some developers dislike pair programming, and that's not a mark against them. There are a lot of benefits to mobbing or pair programming. But I have days when it's just nice to put my headphones in and get on with stuff. Maybe this is bad? I mean, now I've gone and added less-reviewed code that only I understand.

The first step is to make sure that the developers understand the benefits of shared work. It may seem slower at first than solo work, but after some practice the work starts to speed up and you become far more efficient working together. Also, it is about the best way to share knowledge and reduce risk as a team. When a single person writes or works in an area, they are the only person who knows that area, meaning others will misunderstand it and potentially cause issues when working there. Knowledge gaps can lead to failures, and to big problems when people take time off.

I would suggest that to start we timebox the pairing activity; it's a completely new way of working and it will take some time to get used to. Also, you must remember to strictly follow the rules: pair programming or mobbing is not just sitting around the computer working together, there is a strict format to follow. One person should be on the keyboard (the driver) and the other(s) should be navigating, meaning the person on the keyboard follows instructions from the navigator(s) rather than just getting on with the work and explaining. The navigator(s) should be telling the driver what to do; they are in control at this point. Follow this with extreme discipline! Also, the positions should swap regularly; I recommend 5-10 minutes, then swap.

So give it a go, trying to follow the rules properly. After a while you can adjust the numbers if you want, and start extending the time spent pairing. I still have mixed feelings about how much I enjoy group work, but I can clearly see the benefit, so finding a good balance of doing it regularly feels important. Also, I read that some companies now try to vet out people who don't like pairing during the interview process...

Thursday 26 July 2018

Test Software Like AI

It seems to me that in the future the role of an application developer in many markets will revolve much less around describing the functionality of an application in code and much more around training AI systems. This splits the developer's role into two main areas: creating the training data, and testing that the AI is performing tasks correctly. As the solutions the AI comes up with are often hard to understand and reason about, we may not even bother. If we can test it well enough, we just test that it achieves all of its goals.

So why don't we start testing applications in this way now? It's just an extension of where we are naturally going with Test-Driven Development. We write tests from a black-box point of view that confirm the application, or significant subsystems, function as desired. If we test well enough, the tests become much more valuable than the application itself in many ways: the application can be completely rewritten and we can guarantee that it still works in as many cases as possible.

It seems very likely that as AI allows us to develop systems much faster, and with much less understanding of how the system actually functions, the role of dev-testers will become the main job in a development team. These engineers will still code, building tools to help them test the applications, as I believe it is very hard to make a generic test framework for all applications that functions as well as specifically designed tooling that takes the domain into account.

The machines are coming, let's make sure they do what we want :)


Wednesday 25 July 2018

Asking Questions

The Goal is a really great book; not only do you get the lean approach to running a business, but the approach the mentor character takes to teaching is one that seems really effective. He always answers questions with more questions, forcing the main character to really think about the answers to his own problems.


Upon further research, this approach appears to be based on the Socratic method, named for Socrates, who was born in 470 BC. So why have we adopted it so little? I guess it is because it is very hard to retrain yourself to ask questions instead of giving answers; maybe there is something satisfying in appearing smart because you know the answer. But there is surely more value in asking the correct questions: the learner will start to develop better analytical patterns to follow when they have questions, and it should free up the teacher's time, as much less explanation is required.

But what questions should you ask as the teacher? One question suggested by a colleague of mine, for when a junior team member approaches them, is to ask what they have tried so far; this seems a good starting point. From here, do we ask a question that highlights the area they should be looking at, or a question that gets them to think about the different areas the problem could be in? I would suggest the latter, as surely the goal of teaching is to make the student independent of the teacher.


Time to read some Plato, I think!

Tuesday 24 July 2018

Review Apps

Testing is very important; it helps us to increase stability. But in an iteration we often build up a load of work in our master/dev branch and then have a stabilisation period. Even when the testers are keeping up well, we are always slightly unstable: anything just added is not tested, and often not looked at by the PO/BA who requested it until a demo, or until it is released. If the test team is struggling to keep up, what we end up with is a wave of instability, often leading to iterations focused more on bug fixing.

Surely one of the points of agile is to be able to release at any point; having a couple of development iterations and then a stabilisation one is kind of like having one really big iteration. If I remember correctly, the idea is that the product is releasable every iteration at a minimum.

So how do we solve this? Review apps! We started this practice when we moved to Docker using GitLab. Docker allowed us to commission environments much faster, so we could deploy the application much more easily. Every pull request that gets created can be tested by a tester and reviewed by the person who asked for the change before it is merged, which significantly helps to increase the stability of the app.
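
As a sketch (not our actual config — the deploy/teardown scripts and URL pattern below are placeholders), a GitLab review app is basically a deploy job with a dynamic environment per branch:

```yaml
# Hypothetical .gitlab-ci.yml fragment, names invented for illustration.
deploy_review:
  stage: deploy
  script:
    - ./deploy.sh "review-$CI_COMMIT_REF_SLUG"
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://$CI_COMMIT_REF_SLUG.review.example.com
    on_stop: stop_review
  only:
    - branches
  except:
    - master

stop_review:
  stage: deploy
  script:
    - ./teardown.sh "review-$CI_COMMIT_REF_SLUG"
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  when: manual
  only:
    - branches
  except:
    - master
```

GitLab then shows the environment link right on the merge request, so the tester and PO can click straight through to the deployed branch.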

In my thinking, it can be achieved without Docker by just having an environment for each person of interest. For example, a tester can have their own environment, deploy to it the builds that happen automatically on every branch, and then add a tag saying that they approve the change.

There can be some issues, due to things like needing a lot of data in the system to test, or perhaps database migrations. These can be stumbling blocks, but there are ways around them: good seed data, or tools built to quickly push test data into the system. It may seem like this could take a lot of time, but in my opinion it is worth the effort.

Going forward, we are going to look at adding automated smoke testing to the review apps as well. If every pull request is tested, smoke tested and reviewed by the person asking for it, this should lead to an extremely stable and releasable master/dev branch, as well as helping to guarantee we are building what was originally asked for.

Monday 23 July 2018

No More Iterations

Once you start doing continuous delivery, what is the value in doing iterations? Surely one of the reasons we go for smaller iterations is to allow work to be reprioritised every week. So once we deliver whenever a story is complete, can we not just allow work to be changed straight away?

The way I see it, the PO is in charge of the backlog and can change work whenever they want, just not the work someone is currently working on. You can still track work on a fixed timescale if you want velocity, and do demos and retrospectives on a fixed timescale. We can merge planning and backlog grooming/refinement into a single session where we estimate and review work; this can be on a fixed schedule, or arranged ad hoc by the PO when a session is required.

Hopefully this gives us greater flexibility to respond to change and go from plan to reality. There might be some difficulty getting your tooling to work like this, but you can still set a time period in it, I guess... It would be nice to have a tool with an in-progress area, where a completed story moves into the completed log after a certain amount of time has passed.

Sunday 22 July 2018

Pull Request Guide

Review for logic errors

Have they considered nulls in likely cases? Does the way it uses and interacts with other parts of the system make sense? If they have used patterns, do they make sense, and are there any they should have used?

Review for architectural issues

Is the separation of code done well? Think about SOLID, and coupling and cohesion. Often at this point it's worth just asking them about their thoughts and approach. Just make sure they have thought about it and their thinking is sound.

Review for over engineering

Have they added things that aren't necessary? We are all guilty of this, and they may not need to remove them as the work has already been done; you might keep them if they don't cause too many problems. Just let them know for the future: PRs are as much for training as they are for code review.

Review for readability

Readability is how easy the code is to read, not whether it conforms to your internal version of what is well styled. Does the code have confusing knots, i.e. code that is too compressed and hard to comprehend? Not whether the line breaks are consistent. This one is hard to call; my main suggestion would be to care less about style than you think is important, as it adds very little, and there is a difference between style and readability.

Saturday 21 July 2018

Story Dependency Chains

I'm sure I read somewhere that user stories are supposed to be independent of each other. We eventually took this and changed it to: they should be independent of each other within an iteration, so that they do not block other work in the iteration.

But surely embracing the dependency of stories gives us a much better estimate of how long it will take before a story is done. We essentially have to look at two metrics. First, the capacity of the team, i.e. velocity: taking into account holiday etc., how much work can we do in the time frame? Second, how long the longest dependent chain will take to be done. If there are 3 months of dependent-chain time and only 2 months of work, it will still take 3 months.
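
As a toy calculation (numbers invented):

```javascript
// Delivery is bounded by whichever is larger: the work divided across
// the team, or the longest chain of stories that must run in sequence.
const totalEffortMonths = 2;   // remaining work / team velocity
const longestChainMonths = 3;  // dependent stories that can't be parallelised

const shipEstimate = Math.max(totalEffortMonths, longestChainMonths);
console.log(shipEstimate); // 3 — the chain dominates; extra capacity can't help
```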

Another useful measurement could be expansion: how much do our iterations normally expand due to bugs and other important last-minute work? It's going to happen; there are always things that pop into the iteration. So rather than pretending it won't happen, let's create a metric from it, and we can try to reduce it as much as possible. We can easily estimate bugs or any added stories after they have been done.

So the time something will ship in is essentially the amount of effort it and its dependent stories will take, plus an allowance for our average expansion. This also relies on the stories in question being constantly active, so either plan them into iterations correctly or don't do iterations at all.
These measurements can hopefully be improved, though; by pairing/mobbing dependent-chain work we may be able to decrease the time it takes to get the work done.


Friday 20 July 2018

Stateless Microservices

By making our microservices stateless we make them more testable and scalable. A stateless service must take some input data and return an output, given that it doesn't store any state. This is easier to test, as we can essentially write the data that we pass in and check what comes back for each test case.

Where does my state go?
Most applications will likely still need a large amount of data in a storage mechanism (i.e. a document database or SQL), but if we can move these interactions to the start and end of our service chains, it should enable us to keep as much functionality as possible in easily testable services.

- Load the data from the store (this service is simple, as all it does is load)
- Send it to a service to do the work (this service is easily tested)
- Save any data required (again, simple)
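
A rough sketch in JavaScript of that shape, with made-up names:

```javascript
// Hypothetical sketch: keep the logic pure, push IO to the edges.

// The "work" step: pure, trivially testable with plain data.
function applyDiscount(order, percent) {
  return { ...order, total: order.total * (1 - percent / 100) };
}

// The edges: thin wrappers around storage (`db` is whatever store you use).
async function handleRequest(db, orderId) {
  const order = await db.load(orderId);         // 1. load
  const discounted = applyDiscount(order, 10);  // 2. pure work
  await db.save(discounted);                    // 3. save
  return discounted;
}

console.log(applyDiscount({ id: 1, total: 200 }, 10).total); // 180
```

All the interesting logic sits in `applyDiscount`, which needs no database to test.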

Microservices enable us to work in new ways
  • When the application is split up, different services can take advantage of different programming languages/frameworks and storage mechanisms.
  • For larger projects, teams can work on separate services and release independently of each other.
  • Because the application is made up of multiple pieces, we can run them on different hardware and scale out by running more instances of individual services.

Wednesday 18 July 2018

Delivering Value

Sometimes it can be difficult to determine the value we are adding to a project; it is very easy to get our heads trapped at the abstraction level we work at. For example, as a developer I have previously invested lots of time in reformatting code bases to new standards. At the time I believed this was adding lots of value to the project, because I was thinking about the project the way a developer thinks about a project. I believe this is a stage many developers go through in their quest to be great at their job.

If we look at a project and try to determine value, we quickly get to sales as value: that is the only real value that can be delivered. If you are holding up delivery of a product, you'd better be damn sure you have a good reason, as you are costing the company money. This can be very hard to see, with the layers between you writing code and the person collecting the cheque from the customer so disconnected. Unless we put effort into thinking about what adds value, it is very possible that we are all delivering lots of things of low or no value.


It can also be hard, when trying to shift to this mindset, to see where long-term value delivery fits in. It's already hard to measure the value of adding a feature to a piece of software; how do we even begin to comprehend the value of spending a day writing documentation for other developers, or helping to train colleagues?

There is loads of great advice out there on development techniques: mobbing, pairing, TDD, XP and so on. But how can we measure the difference they make to our organisation? What metrics should we use? The only thing I can think of is to use story points as a starting point. Velocity could be a good metric, but story points are hard and not directly tied to value... perhaps we should measure the value of a story/bug at the same time as we estimate the effort required?


Tuesday 17 July 2018

Bottlenecks

Lately I've been listening to "The Goal" by Eliyahu M. Goldratt, which is a great novel about managing a business. In summary: bottlenecks control how much output a business has, and output generates money.

I've been playing with a couple of theories about how this could relate to software development. Firstly, the bottlenecks could be roles within the team. For example, if a lot of work is waiting to be tested, then testing could be a bottleneck. I've heard about using WIP limits to manage this: you might say only two pieces of work can be in the test column of your kanban board at a time, and if the limit is hit, people need to help the tester move work out of the column. This may seem counterproductive, as developers would be faster at developing, but if there is a limit on what the team can finish, it doesn't matter how many in-progress stories there are; they aren't done.

Work in progress, or inventory in factories, is often seen as having value, but the book, and many other things I have read, describe this work as a liability rather than an asset. You can see how this applies in development: work in progress must have the latest changes merged into it, it can include increased technical debt, and it can get in the way of the team moving through other, more important work.

One of my other ideas is that the bottlenecks might be individual stories themselves. Often I come across stories that take longer with individual team members than they should; maybe these stories are problematic for that team member due to a lack of domain or technical knowledge, or maybe they have just hit a problem that is causing them to procrastinate somewhat. How about we track the average time for a story (or story per member), and if a story goes over the average, identify it as a candidate for pairing/mobbing? This could help us patch weak spots in the development process.


Monday 16 July 2018

Mob Programming

Mobbing is when a whole team sits around one computer, similar to pair programming, and all code together! Like pair programming, this practice has many advantages, and even though it seems like having everyone at the same computer would slow you down, it can actually speed you up, as there is a lot more knowledge sharing and focus on the task at hand.

Mobbing is not five people watching and one person coding; as with pairing, the people who aren't at the keyboard (driving) should be navigating: making the decisions and reviewing the work being done.


Mobbing can be useful for complex tasks and knowledge sharing, making sure that all the people required to solve a problem are sitting around the computer. This should mean no waiting time when people have questions.

Mobbing can also be used for learning: grouping together to pick up a new technology or practice. Think about it: the whole team will have a shared understanding, rather than one person being a single point of knowledge and failure.

Best of all, mobbing makes sure that the work being done is the best the team can do, with every member's skills being used, and any problems one member has should be spotted by the others.

Sunday 15 July 2018

Inspiring Creativity

Hey Moron!

People are awesome! Even you!

Why do you think Google offers 20% of work time to work on whatever you want? So they can steal all the awesome ideas? Well, maybe, but that's beside the point! It also really helps to inspire people's creativity. While it's all good to hire and train your people to be the most super awesome workers ever, if they aren't inspired to work through creativity, pride and self-motivation, you're probably wasting a lot of output. Imagine if you could make all your workers increase output by 5%! In a 100-person company that's like hiring 5 more people! You do the math!

While I don't think there is a recipe for making this happen, there is a guaranteed way to stop it happening! So let's start from there! Here's what not to do:

Be over controlling! (Micromanaging)

Have no trust in people to do the right thing!

Discourage open discussion!

So next time your colleague suggests an idea, encourage them. Yeah, I know they're a moron and it's a terrible idea! But play it through with them and they will either realise it themselves, or it might become more than you could have imagined =]

Read a Freaking Book
Peopleware - Tom DeMarco & Timothy Lister

Work Rules! - Laszlo Bock

Saturday 14 July 2018

Avoiding Mocks

Some projects seem to encourage the use of mocks. Why not? They're really powerful, right?! "I can do all kinds of stuff with them!" Personally I find mocks to be at the extreme end of the scale; while they are a really powerful and useful tool, I rarely need them. And why add complexity when it's not required? Even the best mock setups are pretty complex.

Most of the time you can avoid mocks by restructuring your code so that the dependencies are executed outside of the place where the testable logic occurs. The easiest illustration of this is database calls: load the data and save the data externally to any manipulation or calculation logic. This means the construct that accepts the data and returns some other data can be tested by passing in variables and checking what is returned.
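
For example (names invented), overdue-invoice logic can be a pure function, with the IO left to the caller:

```javascript
// Pure logic: takes plain data, returns plain data. No repository to mock.
function markOverdue(invoices, today) {
  return invoices.map((inv) =>
    inv.dueDate < today ? { ...inv, overdue: true } : inv
  );
}

// The caller does the IO around it, e.g.:
//   const invoices = await db.loadInvoices();
//   await db.saveInvoices(markOverdue(invoices, todayString));

// The test needs no mocks at all:
const result = markOverdue(
  [{ id: 1, dueDate: "2018-07-01" }, { id: 2, dueDate: "2018-08-01" }],
  "2018-07-13"
);
console.log(result[0].overdue, result[1].overdue); // true undefined
```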

Perhaps the logic needs to call another service to let it know when something happens; realistically, I can't say that avoiding mocks will work in every case. But you should definitely ask yourself if you can move the calling code into another place that isn't as testable. Perhaps the service could return the message to send to the other service. Do we really get a lot of value out of testing that we passed what we guess is the right thing to the network abstraction? Would this not be better covered by an integration test against the real service?

Sometimes things get to a point where we do need to use mocks, but I think people often don't spend enough time trying to avoid them and make simple input/output tests.

Another example: say I have a method that adds data to a database, and a method that gets data from a database. I could mock the DB and interaction-test that the right-looking things were passed, but then I'm really testing that it's implemented the way I implemented it. To me, checking that the code I wrote works the way I wrote it adds nowhere near as much value as testing that it does what it's supposed to do, no matter how it does it. In this example we could easily call the add method, then the get method, and check that the correct information is returned. We can set the database to in-memory mode, or use a memory-based abstraction around the database. If your testing is really stringent, you should probably also have an integration test on top of this that checks the real database functionality works as expected.
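
A sketch of that style of test, with a plain Map standing in for the database (all names made up):

```javascript
// The store only needs something with set/get, so an in-memory Map
// works as the "database" abstraction in tests.
function createUserStore(db) {
  return {
    add(user) { db.set(user.id, user); },
    get(id) { return db.get(id); },
  };
}

const store = createUserStore(new Map());
store.add({ id: 42, name: "Grace" });
console.log(store.get(42).name); // Grace
```

The test exercises add-then-get behaviour, not the shape of the calls made to the database.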

Friday 13 July 2018

Reasoning with SOLID Principles: Dependency Inversion

Dependency Inversion

The dependency inversion principle is the final tool in our SOLID toolbox: basically, depend on interfaces rather than on other classes. This way the implementation can be switched out, as our objects are less coupled together. This little pattern can be very useful when you just plain want to switch the way a dependency is referenced for more menial reasons. Perhaps you want to reference a class that lives in a higher package than you; why not have the low package define an interface and the high one implement it?

Dependency inversion is also closely related to dependency injection. They are not the same thing, nor done for exactly the same reason, but dependency injection makes use of the dependency inversion principle to let you specify a set of tools you require, allowing the dependency system to select, at run time, the things that fill your needs. This is very useful for testing.

My main warning about this principle is very similar to the open/closed principle: try to avoid overuse. You don't need to start with this pattern; if a dependency needs to be inverted, it will become clear over time. Refactor to patterns rather than starting with them, and you often end up with simpler code.

Thursday 12 July 2018

Reasoning with SOLID Principles: Interface Segregation

Interface Segregation

The interface segregation principle is pretty sound: make interfaces small, so that a client or user of the interface only needs to implement or use the methods it cares about. This is pretty good thinking; generally, small cohesive things are better.

But in the long run, does this not just lead towards duck typing? I guess that is then an extreme implementation of the idea altogether. Yes, we lose the niceness of knowing that if A exists on the interface so does B, but hey, there's a compromise to everything. And in that case you don't need to split interfaces up just because one client only wants to implement a part.

Yeah, I went a bit off track with this one; I'm on a break at a conference, and I really don't disagree much with the principle :)

Wednesday 11 July 2018

Reasoning with SOLID Principles: Liskov Substitution

Liskov Substitution

Liskov is perhaps the most law-like SOLID principle: every subclass should be usable as its base. I can't find an example where I don't think this makes sense; probably more importantly, now I wonder if I should be using inheritance at all?

The complex class taxonomies that made this relevant now seem a thing of the past, and while inheritance still has its uses, I feel it was definitely overused at one time. Better to compose objects of each other than to make them each other; looser coupling is implied in that relationship. I.e. a car has 4 wheels, rather than is a four-wheeled-vehicle object.

This may seem a trivial difference, but it really comes into its own when single inheritance is enforced. Composition is very take-and-choose-what-you-want, whereas inheritance implies a much deeper relationship: a class cannot be both a four-wheeled vehicle and a two-doored vehicle when it comes to base classes, whereas an object can have four wheels and two doors.
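
A quick sketch of the composition side (made-up example):

```javascript
// Composition: a car *has* wheels and doors, so it can mix both freely,
// something single inheritance ("is a four-wheeled vehicle") can't do.
function createCar() {
  return {
    wheels: Array.from({ length: 4 }, () => ({ pressure: 32 })),
    doors: Array.from({ length: 2 }, () => ({ open: false })),
  };
}

const car = createCar();
console.log(car.wheels.length, car.doors.length); // 4 2
```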

So, Liskov: good! But be careful with those large, complex inheritance designs, as inheritance is an extremely tight form of coupling!

Tuesday 10 July 2018

Reasoning with SOLID Principles: Open Closed

Open Closed Principle

This principle is really great: basically, set up your code so that new features are extensions to the existing code, not changes to it. Although I find it really useful, I would recommend exercising caution. It is very easy to overuse this principle and end up with a lot of unnecessary code, ready to handle perceived future changes.

Identify the areas that are susceptible to change and then refactor them to the open/closed principle. As a rule of thumb: the first time, just write the code in the simplest way possible; when a new feature is required, add an if statement to incorporate the change; by the third or fourth time, you should really be thinking about restructuring the code to accommodate future changes.
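
As a hypothetical sketch of that third-or-fourth-time refactor, moving from an if chain to a map of small strategies:

```javascript
// Shipping rules as a lookup of small functions (names invented).
const shippingRules = {
  standard: () => 5,
  express: () => 15,
  free: () => 0,
};

function shippingCost(kind, total) {
  const rule = shippingRules[kind];
  if (!rule) throw new Error(`unknown shipping kind: ${kind}`);
  return rule(total);
}

// Extension = adding an entry, not editing shippingCost:
shippingRules.bulk = (total) => (total > 100 ? 0 : 20);

console.log(shippingCost("express", 50)); // 15
console.log(shippingCost("bulk", 150));   // 0
```

`shippingCost` is now closed to modification but open to extension: new rules get added, existing code stays untouched.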

There are also times when it can be applied straight away. It seems to me a lot of good architecture is identifying which parts of the system are likely to change and planning for this in your design. This does not mean you need to spend loads of time identifying these parts beforehand and designing them in; refactor your design as they naturally appear in the course of development.

So this principle is yet another great thinking tool that should be applied with balance, but when in doubt: Keep It Simple, Stupid.

Monday 9 July 2018

Reasoning with SOLID Principles: Single Responsibility

The SOLID principles are a great tool to help you learn object-oriented principles, but after trying to apply them for quite some time, I think there are definite boundaries to when and where they should be applied.

I'll break this into parts! here is part 1!

Single Responsibility Principle

Single responsibility is a great tool for quickly noticing when you've got too much stuff in your stuff. If it's obvious that there are two very different responsibilities in an object, it can be worth separating them to make things clearer. The issue you hit with this is that you have to be careful about what abstraction level you are reasoning at when considering it.

Say I have an Order object. My order object contains things that are order related, perhaps an ID for the order and some methods to update the order and send the order. But if someone ended up adding a method that draws an alert to the UI, that would stand out.
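Something like this is what I have in mind (all names made up for illustration):

```javascript
// a hypothetical Order object, almost everything here relates to the order...
class Order {
  constructor (id) {
    this.id = id
    this.items = []
  }

  update (item) {
    this.items.push(item)
  }

  send () {
    // submit the order to the back end
  }

  // ...except this, drawing UI alerts is clearly a different responsibility
  draw_ui_alert (message) {
    // render an alert somewhere in the UI
  }
}
```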

I wouldn't normally name things like this, I'm just trying to make it clear that all the methods apart from draw_ui_alert clearly relate to the order.

Perhaps I add a method to print the order. To start with, this method is small and just outputs the order id; this makes sense within the SRP, right? The responsibility of the object is to manage the order, and the responsibility of the print function is to print the order. We can see that if we were to make the print function also add a new item to the order, that would be a violation of the principle. But what about when the print method grows very large because it also includes a lot of code that describes how printing works, not necessarily related to the printing of the order?

Are there two responsibilities there? Sounds like it, but when that code was smaller it didn't feel like there were, so surely the principle has to be used in conjunction with balancing the size of things. So now we look at our print method: the first line initialises a printer object... well, that's a single responsibility depending on the abstraction level we are thinking about; being responsible for adding a and b together could be a single responsibility. I'm not hating on the principle, it's just that it sounds like a simple rule but in practice is much more a great tool for reasoning about whether something does too much, or how to split it when it is too large.

I guess for quite a few of the principles that is the key, knowing when to apply them and how. But also just using them as thought tools :D

Sunday 8 July 2018

Git Bisect Scripting!

So sometimes you want to find out in which commit a bug was added. Even if you have tried git bisect, it's a pretty manual thing and doesn't seem too cool. Let's just go over the basics for anyone who hasn't used it before.

Bisect basically allows you to do a binary search over a git history to find when a bug was introduced. To start you will need a known bad commit (normally the latest, or a released version) and a known good commit, probably the commit the feature was added in when it was working all well and good :)

First enter bisect mode
git bisect start [BAD] [GOOD]

you can use all the usual allowed stuff (branch, commit#, HEAD~2, whatever)

Then manually check if the commit is good or bad and report!
git bisect good
git bisect bad 

Also if the project does not build or something you can skip one!
git bisect skip

Then when it tells you what you are looking for you can exit with:
git bisect reset


Right, let's automate it! You can run any command that you can run in the shell and have it return the state to bisect.

Exit codes
We just exit the script with the following numbers to tell bisect what state the commit is in:

  • GOOD - 0
  • BAD - 1
  • SKIP - 125

I'll make a little noddy function to test
module.exports = function add (a, b) {
  return a + b
}
I then add a few commits and break it along the way.

for this example I put my testing script outside the main git repo so that the checkouts won't have a problem.

try {
  const add = require('../bisect_script/')

  const result = add(2, 2)

  if (result === 4) {
    process.exit(0) // GOOD
  } else {
    process.exit(1) // BAD
  }
} catch (err) {
  process.exit(125) // SKIP, e.g. the commit doesn't build
}

Then we can run the script against bisect with
git bisect start HEAD HEAD~5
git bisect run node ../bisect_test_Script/test.js


cce8f28154071789a33a8b101cd11dc6bae2cf33 is the first bad commit
commit cce8f28154071789a33a8b101cd11dc6bae2cf33
Author: Robert Gill <envman@gmail.com>
Date:   Sun Jul 8 16:48:30 2018 +0100

    more logging: broke it

:100644 100644 e1dbfd4a103f424f065510204f9ae5ff80db1625 5c87270f0517cf407c4d486796899be2d87bc124 M index.js
bisect run success
Just make sure you do git bisect reset after to get back to the starting point.

Code Example (2 repos subject and test script)

You may need to update paths in bisect run and the test script to make it work depending on how/where you clone.

To Git! From TFS!

So I found this blog post from a few years ago that I never posted and rolled it in glitter, maybe it's useful... also I'm probably going to write some more posts on git soon, so why not give some history...

This is my attempt to help out people who are thinking about migrating to git from TFS. I believe git has many advantages over TFS, but I have seen many people struggle (and complain) when using it for the first couple of weeks. To put this in context, I have used git for 6+ years but was helping the rest of the company I am working for move over to git.

Git for TFS Users

If you just use your source control for checking in and getting latest, git is probably going to add some confusion to your workflow. Visual Studio tries to hide the extra steps that are going on under the covers, which is fine when things are going well but will probably lead to you making mistakes because you don't fully understand what is happening when you perform source control operations.

You're probably going to hate git

Git is not perfect; it's complicated and has a horrible learning curve. Here's a site that might help. You might think that you can just switch to git and off you go (how difficult can it be eh?). Try it... go on...

Basic Cycle Differences

TFS:
  • Get Latest
  • Check in

Git:
  • Fetch
  • Merge
  • Commit
  • Push


Git is a distributed version control system, so it does not necessarily have to have a central repository, but it can handle this setup and is probably most used in this way. It does allow for connecting to multiple "remotes", which means you could push directly between users or set up more complicated systems.

Source Control Usage

When you first start using source control the purpose is quite simple: let me share my code with the people I am working with and track the changes in a way that I can understand what happened when two people make changes in the same place. So you need a few basic operations:
  • Push my code in
  • Get other peoples code
  • Merge when bad times happen =[
Nowadays I find myself wanting quite a lot more than I used to
  • Quick and easy branching
  • Ability to merge locally?
  • Private areas for subsets of team to work on same code
  • Have my source control help me to find where defects were introduced
  • Ways to track previous versions so that they can be patched and managed
When comparing git to TFS it seems like I can do all of this in TFS, it just doesn't seem to be the way it wants me to use it. It's hard to explain, but creating branches seems like a lightweight, trivial task with git; I create them all the time and throw them away. With TFS they seem big and clunky...

Source Control as a Tool

There is so much more that you can do with source control than just check in and checkout files.
  • Marking previous versions so that you can bugfix
  • Working with subsets of your team on features without affecting the whole team
  • Managing check ins via code reviews
  • Search through history to find out where errors came from

Tips for changing to git

  • Make an effort to learn the differences and what is going on under the covers
  • Have someone on standby to fix things when they go wrong
  • Practice with a test repository before moving over


When you checkout in git, the contents of the working directory are changed to whatever commit you are checking out. Maybe you are checking out the v0.1 branch; once this command is run the contents of the working directory will be whatever commit the v0.1 branch is pointing to.
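As a concrete sketch (the repo and branch names are made up, and it builds a throwaway repo so it is safe to run anywhere):

```shell
# build a tiny repo with a v0.1 branch pointing at an older commit
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email demo@example.com && git config user.name demo
git commit -q --allow-empty -m "first"
git branch v0.1                 # v0.1 points at the current commit
git commit -q --allow-empty -m "second"

# checkout moves the working directory to whatever v0.1 points at
git checkout -q v0.1
git log --oneline -1            # shows the "first" commit
```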


Branching is where git really comes into its own; it's the flexibility and ease of its branching that allows for all the cool workflows that really make git so powerful.

Branching in git is different from TFS. In TFS you branch a folder and essentially have two versions of that folder with similar contents. In git you branch the working directory, so you can only see one branch at a time (unless you clone the repo twice).
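To see how cheap git branches are, you can make and delete one in a throwaway repo (names made up for illustration):

```shell
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email demo@example.com && git config user.name demo
git commit -q --allow-empty -m "start"
main=$(git symbolic-ref --short HEAD)   # master or main, depending on your git

git branch scratch        # creating a branch just writes a tiny ref file
git checkout -q scratch   # the one working directory now shows that branch
git checkout -q "$main"
git branch -d scratch     # and throwing it away is just as cheap
git branch                # only the original branch remains
```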


  • Visual Studio - now has pretty decent git support
  • Source Tree
  • Git Kraken
  • Command Line - my personal preference; so much tooling now has a good CLI and requires me to hit the terminal: docker, k8s, git, node



Friday 6 July 2018

Git Abuse: Making a git repository without git


I'm doing this in the OSX terminal, so it will probably work on OSX/Linux, less likely in command prompt. Use CMDER to be happy...

First we create an empty folder to house our working directory
mkdir gitdemo
cd gitdemo
Then create the .git folder that stores our copy of the repository
mkdir .git
cd .git

Now we start creating some git files, the HEAD file contains a string that says what the HEAD currently points to, HEAD is a pointer to the currently checked out location.
echo "ref: refs/heads/master" > HEAD

Then we create the objects folder
mkdir objects

Then we create the refs folder with the heads folder inside of it
mkdir refs
cd refs
mkdir heads

You can now use this as a working git repository!
Make sure you change back to the gitdemo directory
cd ../..
echo "console.log('hello')" > index.js
git add -A
git commit -m "initial commit"

If you've done everything correctly this should work!

Now if you look inside of the .git folder you can see that git has started adding more things to the objects folder, and it has created ./.git/refs/heads/master

cat ./.git/refs/heads/master

This should output a commit hash, so we see that this basically says master is on this commit.

I wonder if we can create a branch by just copying this file...
cp ./.git/refs/heads/master ./.git/refs/heads/mybranch
git branch

Now displays
* master
  mybranch

But then could we check this out by changing the HEAD file? I mean it won't update the working directory, but as they are both pointing at the same commit this should be fine?

echo "ref: refs/heads/mybranch" > ./.git/HEAD
git branch

Now displays
  master
* mybranch

Hopefully this starts to give you an understanding of how git's refs/branches and HEAD work on disk.

Thursday 5 July 2018

Working with Git Submodules

Git submodules enable you to have repositories inside of each other, this can be a useful mechanism to share code between projects.

Adding a submodule to a project
git submodule add [url] (you can use git submodule add [url] [foldername] to specify the folder)

Where [url] is the location of the git repository.

git add -A
git commit -m "Add submodules"
(then push if you want to share!)

This will add a reference to a specific commit to the project.

Downloading submodules in a repository that already has them setup
Clone the repository as normal
git clone [url]

Then init/update the submodules
git submodule init
git submodule update

If you haven't already cloned you can do
git clone --recurse-submodules [url]

Updating the submodule
cd into the modules folder
cd mysubmodule

Then use normal git operations, i.e. pull
git pull

then cd back to the main repository and commit the update
git add -A
git commit -m "updated submodule"

Reset submodule to the commit stored in the parent
git submodule update

This will checkout the specific commit that is stored in the parent repository.

Changes in the submodule
When in the submodule folder you can make changes to the module's repository using normal git commands. Just make sure you push, then add/commit in the parent repository, so that everyone else gets the changes when they do submodule update.

Automatically updating submodules on git pull
You can get git to automatically recurse into submodules on commands like pull by adding the following setting to git's global config (drop --global to set it per repository)

git config --global submodule.recurse true

Tuesday 3 July 2018

Javascript packages in JSCore on iOS

Unfortunately, most javascript packages are designed to work either in a browser or in node.js using the CommonJS module loader. When working in JavaScriptCore you aren't really using either of these.

Loading the Scripts
Download the scripts from NPM or via a CDN/Github. Ideally you want it in a single file as this is going to be much easier for you to load.

Browser packages
A browser package will normally check to see if it is running in a browser by seeing if the window object exists, and will often add its output to the window object. Duplicating this can be pretty simple: just evaluate a script that says var window = {} before running the package's script.
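As a sketch, with a made-up package standing in for whatever you downloaded, the whole trick looks like this:

```javascript
// evaluate this first so the package thinks it's in a browser
var window = {};

// a toy browser-style package (made up), it hangs its API off window
(function () {
  if (typeof window !== 'undefined') {
    window.greeter = { greet: function (name) { return 'hello ' + name } }
  }
})();

// we can now reach the package through our fake window object
console.log(window.greeter.greet('JSCore'))  // hello JSCore
```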

CommonJS/Node packages
Node packages use the CommonJS module pattern; they often check to see if they are running on a platform that supports this by checking for the existence of the module and module.exports objects. You should be able to replicate this by adding var module = {}; var exports = module.exports = {};

You will run into further problems trying to import multiple files that use the CommonJS module system, as this system uses a function called require() to load packages from disk.

CommonJS in Javascript Core
In theory you could implement a version of CommonJS within JSCore by adding a require method that loads and caches the contents of separate files.

Monday 2 July 2018


Blockchain is the technology that backs bitcoin and other cryptocurrencies, allowing for a peer to peer financial system. OK, so it works for money, but what else would you like to be able to make a peer to peer system for? Blockchain technology can be used and adapted to help solve many problems.


In a centralised system the server is the trusted third party, so when you send money to another person the server stores the record of how much money you have. We trust that server to tell us how much we have and to make sure others don't abuse the system. So we don't need to trust everyone that uses the system, just the server.

In a peer to peer system we know that any node we connect to could be untrustworthy, but somehow we have to validate what is true within a system where multiple nodes could be trying to abuse it.


The solution has several parts:

  • Group updates together into blocks, with each block having a single parent.
  • Make every node validate every update (block) based on the system's rules.
  • Make it random (ish) which node creates the next block.
  • Make it difficult to produce a block so that people cannot rewrite history.

Each set of updates or transactions is grouped together into a block, with each block referencing the block that came before it. This way, from the latest block you can trace back the history of all updates.

Making it difficult
To make it difficult to create a block we introduce a problem that must be solved before a block can be produced. We make the problem hard to solve but easy to verify, like opening a combination lock without knowing the code: spinning each dial through all possible positions takes a long time, but it is easy for someone else to confirm that you opened it. To do this we take the data of the block and generate a SHA-256 hash of it, then see if it matches some criteria, e.g. it starts with 0000; if it doesn't, we increment a number inside of the block and try again. This is essentially what bitcoin miners are doing: performing millions of hash calculations.

Making it Random (ish)
Because of the nature of hash calculations, some people's potential blocks will be solved much faster than other people's. This means that even if you control much of the processing power of the network, you should still get beaten sometimes.

Handling network Lag
Sometimes different parts of the network can create different blocks at the same time, causing there to be two valid but differing histories. We consolidate these by having nodes build new blocks on top of the block they saw first; over time it is unlikely that the differing branches will gain more blocks at exactly the same rate. So we resolve the difference by saying the truth is the chain with the longest history, or if the histories are the same length, temporarily assuming it to be the one you saw first.
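That consensus rule is tiny when written down (a sketch, where a chain is just an array of blocks):

```javascript
// keep the longer history, on a tie stick with the chain we already have
// (i.e. the one we saw first)
function resolve (current, incoming) {
  return incoming.length > current.length ? incoming : current
}

console.log(resolve(['a', 'b'], ['a', 'c', 'd']))  // the longer incoming chain wins
console.log(resolve(['a', 'b'], ['a', 'c']))       // a tie, keep what we saw first
```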

Basic Networking
Each node connects to other nodes via TCP and broadcasts anything it would like to add to the network (completed blocks, transactions); it also validates and rebroadcasts any data that it receives, so most nodes will know about a transaction before it is validated. Transactions are often considered valid after a certain number of confirmations (blocks added after the transaction's block), because the longer the chain after it, the more unlikely it is that a competing block will overtake it.