fightcodegame.com is a game where you use your JavaScript skills to code a robot that goes into an arena against other robots.

It’s a project for the Github Game-Off 2012 contest. It’s a different kind of hacking contest, in that you have a month instead of a day, two, or a week.

It might seem like a minor detail, but it makes all the difference. At least it did for our team. We worked it like we would any other iterative project at our day jobs.

The team is composed of myself, Cezar Sá, Guilherme Souza, Fábio Costa and Rafael Carício as devs. The incredible designer Marcela Abbade did the layout for our product.

A small team set out to build a product that we didn’t actually believe could be built in such a small timeframe.

TL;DR

fightcodegame.com was developed iteratively and uses JavaScript for both client and server.

We run Node.js for its server-side components, and the engine code is EXACTLY the same whether it runs in the browser or on the server.

This was the first time I actually benefitted from running the same JavaScript on the server and the client, so I found it interesting enough to share. The code can be found at https://github.com/timehome/game-off-2012.

How did you come up with the idea for it?

As all such ideas begin: at a table with friends. Someone said: “What if we did an arena where robots fought each other and you did the robot coding with JavaScript?”.

We all laughed like “yeah like that’s possible”, but the idea started growing on us. A couple months later and here we are.

Some Stats

Before I start I’d like to share some stats with you, because I’m actually very impressed with them.

At the time of this post, we have more than 1500 fights in the database.

Since we launched the website at 4pm on the 28th of November (the date of the first fight in the database), that means roughly 14 fights an hour. That’s A LOT more than we expected.

We also have more than 200 people registered and more than 150 robots created. Amazing, right?

What REALLY got me, though, is our stats in Google Analytics:

  • About 1000 visits in just a couple days;
  • About 500 unique visitors;
  • More than 11 thousand page views;
  • An incredible average page duration of 11 minutes;
  • Bounce rate of 30%, which means that most people entering the website navigate through it.

I gotta say that again – 11 thousand page views. Even if that counts our own page views, it’s still amazing in a couple days.

You said iterative?

Yep, we worked VERY iteratively. First, we didn’t have anything. Then we had the engine. And then we had the animation.

We spent the whole month of the project iterating and improving over the previous iteration.

And you know what? Iterative development CAN be fun. People spoil all the fun with metrics, meetings and other useless bureaucracy.

What about the game?

Well, the game is pretty simple. Build your robot, we’ll run your code against the other person’s code and see who wins. The easiest way to learn is to go there and play a little bit.

What’s really interesting about it is how we calculate things and how we render the fight in the browser.

The first peculiarity about our game is that its engine needs to be 100% deterministic, since we can’t have different results on the server and in the browser.

What that means is that we DO NOT store the rounds of the fight. We calculate them on the server based solely on the code of both robots and some initial info (like the robots’ positions).

The engine is 100% JavaScript. That helps us A LOT in running it in both the server and the client. We used CoffeeScript for it and were very positively surprised. If you are wondering what the engine looks like, go take a look at it.
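To give a sense of what determinism requires, here’s a minimal sketch (my illustration, not fightcode’s actual code) of a seeded random number generator: any randomness in the engine has to come from something like this, seeded with the fight’s initial info, so the server and the browser replay exactly the same sequence.

```javascript
// Hypothetical sketch: a seeded linear congruential generator.
// Math.random() would break determinism, because the server and the
// browser would each draw different numbers.
function createRng(seed) {
  let state = seed >>> 0;
  return function next() {
    // Numerical Recipes LCG constants
    state = (Math.imul(state, 1664525) + 1013904223) >>> 0;
    return state / 4294967296; // a float in [0, 1)
  };
}

const serverRng = createRng(42);
const browserRng = createRng(42);
// Seeded identically, both sides produce the identical sequence.
```

Given the same seed, every call returns the same number on both ends, which is what lets the browser re-run the fight instead of downloading stored rounds.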

The Engine

fightcodegame.com’s engine is a turn-based engine. It might seem realtime in the browser, but it’s not. I’ll cover how we do its animation below.

The engine loops over the robots’ code, logging everything until one of the following happens:

  • Only one main robot (not a clone) is left alive;
  • The fight times out (after a fixed number of rounds), in which case it’s a draw.

The really interesting part of the engine is its code: collision detection, line of sight and a lot of interesting stuff in there. The end result is an array of rounds that gets passed to the interface to be animated.
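As a rough illustration (the names and data shapes here are mine, not the engine’s), the loop boils down to something like this:

```javascript
// Hypothetical sketch of the turn loop: run each robot's code once per
// round, snapshot the state for the animation, and stop when at most
// one robot is left standing or the round limit is hit.
function runFight(robots, maxRounds) {
  const rounds = [];
  for (let round = 0; round < maxRounds; round++) {
    const alive = robots.filter((robot) => robot.life > 0);
    for (const robot of alive) {
      robot.act(); // the player's code decides what to do this round
    }
    rounds.push(alive.map((robot) => ({ id: robot.id, life: robot.life })));
    if (alive.length <= 1) break; // a winner, or a draw by timeout
  }
  return rounds; // the array of rounds the browser animates
}
```

The key design point is that the engine never touches the screen: it only produces data, which is what makes it runnable on the server and in the browser alike.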

The code for the engine can be found at the engine.coffee file.

The Animation

The animation part was tricky for me, since I didn’t have much experience with requestAnimationFrame. I couldn’t be more pleased. It was so easy to make an animation that actually renders the same across different computers.

Other than animating, we calculate the fight (run the engine) in WebWorkers so that we can keep the UI responsive even when hardcore CPU cycles are being demanded. If you haven’t tried them, I seriously advise you to play with this amazing browser technology.

The fight goes like this:

  1. WebWorkers calculate the entire fight;
  2. We animate the resulting rounds with requestAnimationFrame.

The requestAnimationFrame method takes a callback function as an argument. The callback receives a timestamp that represents the time at which the repaint you requested is scheduled to occur.

What this means is that you can easily find out how many milliseconds of animation you have to run in this “step”.

In our case, we just subtract the last processed timestamp from the current one to find out how many rounds should be rendered by the browser.
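The bookkeeping looks roughly like this (ROUNDS_PER_MS is an assumed tuning constant, not fightcode’s actual value):

```javascript
// Hypothetical sketch: given the timestamp requestAnimationFrame hands
// to the callback, figure out how many engine rounds to draw this frame.
const ROUNDS_PER_MS = 0.06; // e.g. roughly 60 rounds per second

function roundsToRender(lastTimestamp, currentTimestamp) {
  const elapsedMs = currentTimestamp - lastTimestamp;
  return Math.floor(elapsedMs * ROUNDS_PER_MS);
}

// In the callback you'd render that many rounds, remember the current
// timestamp, and call requestAnimationFrame again for the next frame.
```

Because the round count is derived from elapsed time rather than frame count, a slow machine simply skips ahead and the fight plays back at the same speed everywhere.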

The code for the main portion of the animation is in the animation.coffee file.

What’s next?

I have an amazing robot and I want more! What’s next?

Glad you asked. I’m not sure what’s next. If you feel that there’s something missing, please feel free to create an issue at our github repository and we’ll consider it for our next release.

Right now we can’t change anything. We have to wait until the judges try the app for themselves.

Hope you guys have as much fun as we did.

My new map/reduce engine project, r³, got a lot of attention last week and before that on Twitter, Facebook and even Hacker News.

So I decided to write a sample project demoing the usage of r³.

The problem

I had to find an interesting, yet simple problem to show in this demo. Since I am a huge fan of github, I decided that I would show each committer’s percentage of commits in a given repository.

GitHub has a VERY nice API that you can use to retrieve a myriad of information on your own repositories or on other people’s repositories (provided they are public).

You just have to access https://api.github.com/repos/mirrors/linux/commits?per_page=100&top=master to get the first 100 commits in the linux kernel repository. The resulting document comes with a Link header that specifies where the next 100 commits can be found.

The Input Stream

Cool! So my map/reduce operation should operate on top of all commits for a given project. That means that in my input stream I just need to capture all those commits and return them.

I just built a simple crawler that keeps looking for the next page of commits until it can’t find one.

To save myself some time and bandwidth, it also stores those commits in a temp folder as a means of caching them.

The code:

#!/usr/bin/python
# -*- coding: utf-8 -*-

from os.path import exists, join, dirname
from urlparse import urlparse
import os
import sys
import urllib2

from ujson import loads

CACHE_PATH = '/tmp/r3-gh-cache'

class Stream:
    job_type = 'percentage'
    group_size = 10

    def process(self, app, arguments):
        if not exists(CACHE_PATH):
            os.makedirs(CACHE_PATH)
        user = arguments['user'][0]
        repo = arguments['repo'][0]

        return get_repo_commits(user, repo)

def get_repo_commits(user, repo):
    next_url = 'https://api.github.com/repos/%s/%s/commits?per_page=100' % (user, repo)
    commits = []
    index = 0

    while next_url:
        index += 1
        content, next_url = get_url_content(next_url, index)
        json = loads(content)
        for item in json:
            commits.append(item)

    return commits

def get_url_content(url, index):
    parts = urlparse(url)

    url_path = join(parts.path.lstrip('/'), parts.query.replace('&', '/').replace('=', '_'))
    cache_path = join(CACHE_PATH, url_path, 'contents.json')
    next_path = join(CACHE_PATH, url_path, 'next.json')

    if exists(cache_path) and exists(next_path):
        print "%d - %s found in cache!" % (index, url)
        with open(cache_path) as cache_file:
            with open(next_path) as next_file:
                return cache_file.read(), next_file.read()

    print "%d - getting %s..." % (index, url)
    req = urllib2.Request(url)
    response = urllib2.urlopen(req)

    contents = response.read()
    print "%d - storing in cache" % index

    if not exists(dirname(cache_path)):
        os.makedirs(dirname(cache_path))

    with open(cache_path, 'w') as cache_file:
        cache_file.write(contents)

    next_url = None
    if 'link' in response.headers:
        link = response.headers['link']
        if 'next' in link:
            next_url = link.split(',')[0].split(';')[0][1:-1]

    if next_url is not None:
        with open(next_path, 'w') as next_file:
            next_file.write(next_url)

    return contents, next_url

This stream is very simple. All it does is get all commits for a given project (using the arguments user and repo) and return them as a stream for r³.

The mapper

Now that we have all the commits for the given project, it can’t get any simpler. We’ll just separate the commits per committer like this:

#!/usr/bin/python
# -*- coding: utf-8 -*-

from r3.worker.mapper import Mapper

class CommitsPercentageMapper(Mapper):
    job_type = 'percentage'

    def map(self, commits):
        return list(self.split_commits(commits))

    def split_commits(self, commits):
        for commit in commits:
            commit = commit['commit']
            yield commit['author']['name'], 1
That emits a (committer name, 1) pair for each commit in the project.

All that’s left is to reduce this to a coherent value.

The reducer

The reducer just iterates through all committers and assigns percentages:

#!/usr/bin/python
# -*- coding: utf-8 -*-

from collections import defaultdict

class Reducer:
    job_type = 'percentage'

    def reduce(self, app, items):
        commits_per_user = defaultdict(int)
        total_commits = 0

        for commit in items:
            for user_data in commit:
                login = user_data[0]
                frequency = user_data[1]
                commits_per_user[login] += frequency
                total_commits += frequency

        percentages = {}
        for login, frequency in commits_per_user.iteritems():
            percentages[login] = round(float(frequency) / float(total_commits) * 100, 2)

        ordered_percentages = sorted(percentages.iteritems(), key=lambda item: -1 * item[1])
        return {
            'total_commits': total_commits,
            'commit_percentages': [{ 'user': item[0], 'percentage': item[1], 'commits': commits_per_user[item[0]] } for item in ordered_percentages]
        }

Getting it all together

Now it’s time to put everything we’ve done together and start looking at some famous repositories.

In order to make this easier, I set up a repository on github that has everything in place.

Just clone it, type make run and the server will be running.

WARNING: The make run command will install some python packages. If you don’t want them to be installed system-wide, create a virtualenv before running the command.

Interesting Trivia

I ran r3-gh against some famous repositories and got some interesting information. Be advised that the number of commits does not reflect code committed and/or effort spent, since some people commit more often than others. This is meant simply as trivia and as a way of demoing r³.

That said, let’s take a look at the rails repository (total of 25974 commits):

Now let’s see how django is distributed among committers (total of 12403 commits):

And finally the linux kernel (total of 63226 commits):

It’s worth noting that I excluded every committer that had less than 1% of the commits (less than 0.5% for the linux kernel), so the percentages are a little off.

Conclusion

It is pretty simple to get r³ to do some cool calculations for us. I got the whole sample done in a very short amount of time. It took me more time to write this post than to make r³ calculate the committer percentages.

Hope you guys come up with some interesting stuff to calculate as well.

People who know me are well aware that I love Open Source with all my heart. I have more than 50 open-source repositories in my github account. Some are maintained, some are not.

This post is not to talk about me, though. It is to talk about how freaking incredibly awesome Open Source is and how people will surprise you every time.

tl;dr

Thumbor is a much better project because of the MANY MANY contributions we have received from the community.

I can’t stress enough how incredibly fortunate we are that there are so many VERY SMART people out there willing to contribute back to our project.

That’s why Open-Source will win every time against proprietary software. Because of the people. Keep reading if you want to know more about our story.

The Project

Thumbor is an open-source image operations server. Don’t let this simple description fool you. IT IS powerful. It does INCREDIBLE stuff that saves our company a boatload of money.

When we started the project, the development team decided on open-sourcing the project.

It is general purpose enough so as not to require any of our internal information and/or business details to leak.

This decision comes with some trade-offs that seem very negative at first. A couple of questions came up:

  • How are we going to change the way Thumbor stores images? Do we need to fork the project to have our “company version” of it?
  • How do we load images from our domains only? (Repeat the first question’s proposed solution and rinse)
  • How do we stop attackers from overloading our servers with requests for images of different sizes?
  • How do we stop competitors from using our software to their advantage?
  • How is this any valuable compared to using a proprietary solution (given we have the money to buy it)?

As you can see, there were many reasons people came up with NOT to open-source the project.

We decided we would tackle each of those problems when their time came.

The Team

I want to give a brief description of the team behind Thumbor just to clarify why we decided to open-source it even in the face of so many questions.

First, there’s Fábio Costa. He’s a kick-ass developer, committer of the MooTools project and a great colleague. He’s also a BIG supporter of the Open-Source philosophy.

Rafael Carício is also a big-time supporter of Open-Source projects, being a committer on Pyvows and many other open-source projects. Recently he spent two days just fixing issues with the default Python interpreter. Pretty awesome if you ask me.

Then, there’s Cezar Sá. Again, an avid Open-Source supporter. He’s a committer on Rubinius, an alternative implementation of the Ruby language. He’s the guy behind Thumbor’s filter architecture.

The Decision

If we were going to open-source Thumbor, we needed to make sure it was as extensible as possible.

Every single part of Thumbor needed to be easily switchable for a different part with the same contract.

This kind of architecture is not simple to build, so we came up with parts that would be general enough so you can start using Thumbor right away.

We also needed to come up with a system to stop people from exploiting Thumbor to generate an infinite number of images and thus overload the server. We came up with encrypted URLs. We don’t believe in security by obscurity either, meaning that even if the software were closed source, people would exploit it.

The company we work for, globo.com, has many, many images (millions) and many users (nearly 5B page views/mo). So we had to make sure Thumbor was up to the task. So we fine tuned it.

The Premises

Ok, so what were our premises for Thumbor?

  1. We need everything to be extensible, so we also need to come up with reasonable implementations of the extensible parts;
  2. We need Thumbor to be safe, so we must stick to secure by convention, meaning that if you don’t change a thing, Thumbor is secure;
  3. We need Thumbor to be fast so it can handle many operations per second without requiring expensive clusters.

I’m intentionally skipping the main premise, which is that we want Thumbor to be the best software at cropping images; that’s what drove us to build it in the first place.

Skip a couple of months into the future…

Ok, we have the first version done! Let’s go live with it.

So we fire up our servers and Thumbor is a go. We notice it’s a little slow, but hey, it’s doing its job and we started with a small set of users.

Then the unexpected happens!

Community-created issues start popping up! And then they start coming in with CODE ATTACHED.

Now let’s stop for a moment and analyse this. There are MANY companies out there that charge A LOT of money for testing services.

We have FREE skilled testers in our project now. People who are proactively testing it for us and reporting back their findings.

Not only that, they are fixing our software for us and giving us back the code with NO STRINGS ATTACHED.

Let me say this again: these people, highly skilled individuals, all of them WITH JOBS, are working for free on a project they did not start.

This is humans at their best if you ask me!

Extreme Makeover

Remember I said that we’d implemented all the extensible parts and security?

That’s another incredible aspect of Open-Source Software: people READ your code.

People read ours. They found MANY, MANY things to improve/change/add/remove. We are grateful for every single one of them.

The project would not be as good as it is for our users if it wasn’t for the people that are contributing.

Why do I say that Thumbor underwent an Extreme Makeover? Because if you look at the first version that we released and at how Thumbor is right now, there’s no way you would say it’s the same software.

Through contributions we improved storage, loading, graphics engines, security (A LOT), performance (A LOT) and our software practices.

That’s actually one part of the process of developing open source software that is very humbling. People pay more attention to software practices like testing and continuous integration when they are trying to get their patches accepted.

And they call on you when you are slipping on your side of the fence. And we got called! And we listened. All of us came out of the process better at our craft.

The Conclusion

Thumbor has already paid for itself many times. It is so useful to us that we don’t care if our competitors use it, as long as the community keeps improving it.

As for buying proprietary software, I haven’t found a single product that does the same as Thumbor, and even if we did, we’d never get this level of creativity, support and diversity from any single company.

This means that if we have to choose again between open or closed source, I think we’ll stay with open source every single time.

HUGE MEGA THANKS WITH RAINBOWS AND UNICORNS

I think I did stress in this post how much I appreciate all the contributions, but I still feel obligated to thank you guys. Your contributions have been incredible and are all INVALUABLE.

So sincere thanks to (in no particular order):

2010 in review

Posted: 2-1-11 in Uncategorized

The stats helper monkeys at WordPress.com mulled over how this blog did in 2010, and here’s a high level summary of its overall blog health:

Healthy blog!

The Blog-Health-o-Meter™ reads Fresher than ever.

Crunchy numbers

Featured image

A helper monkey made this abstract painting, inspired by your stats.

A Boeing 747-400 passenger jet can hold 416 passengers. This blog was viewed about 1,500 times in 2010. That’s about 4 full 747s.

In 2010, there were 17 new posts, not bad for the first year!

The busiest day of the year was May 26th with 104 views. The most popular post that day was 4×4 Dojo Technique.

Where did they come from?

The top referring sites in 2010 were manicprogrammer.com, twitter.com, heynemann.github.com, flickr.com, and thedevelopersconference.com.br.

Some visitors came searching, mostly for django compressor, bernardo heynemann, django-compressor, django js compressor, and storage module “compressor.storage” does not define a “appsavvycompressorfilestorage” class.

Attractions in 2010

These are the posts and pages that got the most views in 2010.

  1. 4×4 Dojo Technique (May 2010) – 3 comments
  2. Django Compressor – Minify/Reduce Requests (June 2010)
  3. Deming – System of Profound Knowledge and Key Principles (July 2010) – 3 comments
  4. Dream Team – Part I – The People (July 2010) – 4 comments
  5. About (May 2010)

Introduction

Jidoka, also known as “intelligent automation” or “automation with a human touch”, is lean’s way of automating repetitive tasks.

This time we join the INews team as they try to define how far to go with automating repetitive processes.

Autonomation

John – Hey guys! How was Christmas?
Jane – Pretty good! Yours?
John – Really cool. What about the rest of you?
All – It was great!
Christian – We’ve been talking a lot about lean concepts and I’m not that familiar with lean methodology. Whenever I don’t know something I yearn to learn it. I’ve spent all my free time in the last weeks studying it. One thing that comes over and over is autonomation. That is a kick-ass concept!
Jane – Autonomation?
John – Yeah Jane. Autonomation means automation with intelligence.
Jane – What do you mean “with intelligence”?
John – Well, machines lack intelligence, right? That’s why it’s called artificial intelligence. So automation with intelligence means automation with humans involved. It means automating to become more efficient. It means automating well-known repetitive tasks.
Jane – Oh. I see.
Susan – I think we do that already, right? Our build is automated, for one. Oh! Our tests are automated as well! Hmm… I see your point! We could build and test our app ourselves. We just automated it so we are more effective. We didn’t replace ourselves with a machine. We are using it to help us!
John – Exactly. Still, I think we are not aggressive enough with autonomation. Susan, when we finish stories, what do you do to help us accept them?
Susan – I verify the results versus my mock screens to see that you got the proper sizes, margins, etc.
John – And that’s pretty repetitive, isn’t it? That’s something we could come up with a creative way of automating. Joseph, you perform a lot of exploratory testing as well don’t you?
Joseph – Yes I do, but how can you automate exploratory testing, which is by definition human?
John – Hmm… We can’t automate exploratory testing. What we can do is automate the tests you perform every time. We could come up with some strategy to record the tests you do and automate those. This way, every time you did exploratory testing we would end up with a richer testing suite.
Joseph – I see. Well, I guess we could be more aggressive about autonomation.
Christian – So it seems like a team value, doesn’t it? Automating things to improve our effectiveness.
Joseph – Indeed it does, Christian. Indeed it does.

Conclusion

There’s a big emphasis on not automating things with the intent of replacing humans. The goal of jidoka is to make humans more effective and aid them in detecting problems early and often.

Whenever something can be automated to improve the team’s capacity to respond to change, it should be. The automation should not happen before the actual way of doing things is well-known to the people involved. This is paramount so the automation has the proper goal (as outlined above).

Introduction

MI7 is my new pet project. I grew tired of the other mocking/stubbing/spying engines in Python that never did quite what I expected them to.

I’ve been TDDing for a while now in .Net, Ruby, Python and JavaScript. I’ve got my fair share of experience, so I figured I’d give my 2¢ on this issue.

You can check the project at https://github.com/heynemann/mi7/wiki. It’s got a nice tutorial and is currently in 0.1.1 alpha release.

I’ve got a lot of work to do to make it my main tool for test support, but I’m going to get there. Without further delay, let’s get to it.

Why another Spying Engine?

If you check the MI7 wiki you’ll see that I don’t have anything against any single python test support engine. I just haven’t found one that suits my needs and those needs ONLY. IMHO they all do too much. I want a simple, straightforward, fun to use spying engine.

Don’t get me wrong, but I do not believe in mocking in Python. Or stubbing for that matter. Both are akin to dependency injection, IMHO. It just isn’t pythonic.

Python has been around for a while. In that time, a certain modus operandi has developed. This MO has never included injecting your dependencies around. I figure that’s why the mocking/stubbing/spying tools feel so weird to me.

With MI7 I’m trying to interfere as little as possible with your code. Production code should be optimized to be production code, not changed to accommodate your poor testing tools. In the Ruby community they try HARD to make tests and code as clear as possible, not to make code work according to tools. I’m trying to get some of that.

The last reason for doing MI7 is to have some fun, and I’m trying to bake as much of that fun as possible into the library, with the spy agency metaphor. Hope you enjoy it as much as I am.

Test Sample

Ok, so I’ll write a test with MI7. It’s pretty simple:

from controllers import MyController
from models import User
# new_mission, agent and agents are provided by MI7 itself
# (see the project wiki for the exact import)

@new_mission
@agent.spy(User)
def test_user_is_authenticated():
    agents.User.intercept('is_authenticated') \
               .returns(True)
    agents.User.intercept('username') \
               .as_attribute('Bernardo')
    ctrl = MyController()
    result = ctrl.index()
    assert result == "Welcome Bernardo"

So what’s happening here? I’m telling MI7 to keep an eye on the User model, wherever it may be used. Then I’m instructing the User agent (agents get their code-name from the target they are spying on) to intercept calls to is_authenticated and username and return my values.

Now the controller code:

from models import User

class MyController(object):
    def index(self):
        user = User()
        if user.is_authenticated():
            return "Welcome %s" % user.username
        return "Unauthorized"

As you can see, there's not a single line of code in that controller that says "I'm testable". It's just plain old python coding.

Current Status

Currently MI7 supports intercepting modules and classes and telling agents to intercept methods and attributes and to raise exceptions.

Impersonation (stubbing) may come next. Assertions are definitely coming, like checking what an agent has seen and such.

Conclusion

I’ll keep going with MI7 development as much as I can, because I believe the Python community needs better testing tools and I’m willing to put extra effort into this.

Introduction

The INews team has reached an important milestone. Four of the team’s values are defined and understood.

This time they are talking about a very controversial topic: product ownership.

Who owns it?

John – Hey! How are you all today?
All – Good!
John – I was reading an awesome article last night. It was so good I felt like calling you guys immediately.
Susan – What was it about?
John – About product ownership. The idea here is that you push product decisions as close to the people working with the product as possible. The company is responsible for setting the context that allows those people to decide.
Jake – What do you mean by context?
John – The company is responsible for setting strategies and broad scope goals, as well as providing the team with whatever other intel they might need – financial, internal, market share, marketing, you name it. If the team needs that information, they should get it.
Christian – Ok, I totally agree with that. I’m still wondering about the ownership part, though.
Susan – At the last event I attended, Wackile – agile for the wacky – I saw a brilliant presentation about how projects are killing agile initiatives. Projects are tricky beasts. They have a start, a finish and a handover. At first there’s nothing wrong with that, except there’s no incentive whatsoever for developers to make long-term decisions. They’ll be long gone by the time their decisions affect the product.
John – I see. Well, Chris, what I meant by ownership is that we as a team should be the ones deciding where the product should go. Not someone with no context about the intricacies of the product. What Susan just pointed out reinforces the need for the people working with the product to feel part of it.
Jane – I couldn’t agree more. As an experience designer I get to decide quite a few things about the product. Sometimes, though, I wish the team had more freedom to choose their own path.
Joseph – I’ll get Daniel here as I believe he’ll be able to tell us whether this value is aligned with the company’s values.

Ok, you guys don’t know Danny, but he’s a great CIO at Acme. He really fights for his teams, in order to provide them with the best possible work environment.

Daniel – What’s up guys? What can I do to help you?
John – Hey Danny. Thanks for joining us on such short notice. The thing here is we decided as a team that one of our values is that we want to own the product in the lean sense that we get to make product decisions…
Daniel – While the company sets the context, right?
Joseph – Right.
Daniel – Perfect. No problems with me. I’ll get our CEO buy-in. As a start is there any intel I can help you with?
Susan – Hi Danny. Actually, there is: other news companies’ market share in mobile news delivery.
Daniel – I’ll get you guys that info asap. Now I gotta run. See you all. Take care.
All – See ya!

“What a great guy!”, I think to myself.

Joseph – I guess we just got another team value: we own the product and we’ll take care of it thinking about the long run.
Me – IMHO this was the best meeting so far. Danny is the best.
Joseph – That he is, Bernardo. That he is.

Conclusion

The people who are more qualified to make important product decisions are the same ones working with it on a daily basis. They know all of its intricacies and constraints.

Why risk having someone that does not fully understand the issue deal with it?

Yet, most companies keep pulling decisions up in their hierarchies, trying to protect their products from the poor judgement of their employees.

Not trusting the people doing the work to make decisions results in shallow decisions and a lack of commitment by the people working with the product. Short-term decisions are made, the product evolves in unintended (and bad) ways and eventually people want to get out of the product team. At some point a major redesign and rebuild is needed.

Have you guys ever seen this?