I was chatting over drinks with a buddy of mine (All Things Hadoop, aka Joe Stein) the other day, and we agreed that we were both annoyed with open source frameworks that seemed to be built by people who had never written applications using those frameworks, and sometimes by people who seemed never to have developed applications at all. I've been both an application developer and a framework developer, and I can say without question that the worst job I've ever done with a codebase was on a framework that I never used and didn't originate myself. Why does this happen? I'm a good developer, but I'm not immune to the common pitfalls of framework/library development.
Pitfall 1: Never running a feature in a real application
I think this is a very common problem with frameworks developed by people who aren't actively using them. You think of a cool feature, or maybe a user asks you for one, and you spec it out and implement it. You hopefully write some good unit and integration tests, and everything seems to work. But of course, you neglected to test things like what happens when the whole system is rebooted and the state of the feature changes. With certain kinds of features especially, you can build them half right and have them silently fail for a long time before anyone notices. Quotas in ZooKeeper are an excellent example of this: a monitoring feature that worked until the quota was written to a snapshot, and that didn't seem to be used by any of the maintainers of the project. (cf this not very descriptive jira)
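To make that concrete, here's a self-contained toy sketching the kind of test that catches this class of bug. None of this is ZooKeeper's actual code; the Server class and its snapshot format are invented for illustration. The point is the shape of the test: exercise the feature, force its state through the persistence path with a full restart, and assert it still works.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy "server" that keeps quotas in memory and snapshots them on shutdown.
public class QuotaRestartDemo {
    static class Server {
        final Map<String, Integer> quotas = new HashMap<>();
        final Path snapshot;

        Server(Path snapshot) throws Exception {
            this.snapshot = snapshot;
            // Restore state on boot -- the code path that's easy to get
            // half right and never exercise in tests.
            for (String line : Files.readAllLines(snapshot)) {
                String[] parts = line.split("=", 2);
                quotas.put(parts[0], Integer.parseInt(parts[1]));
            }
        }

        void setQuota(String path, int bytes) { quotas.put(path, bytes); }
        Integer getQuota(String path) { return quotas.get(path); }

        void shutdown() throws Exception {
            List<String> lines = new ArrayList<>();
            quotas.forEach((k, v) -> lines.add(k + "=" + v));
            Files.write(snapshot, lines);
        }
    }

    public static void main(String[] args) throws Exception {
        Path snap = Files.createTempFile("snapshot", ".txt");
        Server first = new Server(snap);
        first.setQuota("/app", 1024);
        first.shutdown();                      // whole-system "reboot"

        Server second = new Server(snap);      // state reloaded from disk
        if (!Integer.valueOf(1024).equals(second.getQuota("/app"))) {
            throw new AssertionError("quota lost across restart");
        }
        System.out.println("quota survived restart: " + second.getQuota("/app"));
    }
}
```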
Pitfall 2: Never having to test application code that uses this framework
I'm hitting this a bit in my usage of the Play framework. It's a framework that does have a lot of testing features built into it, but... the developers neglected to implement Filterable in their JUnit runner, so you can't run a single test out of a class in your IDE. I submitted a fix for this a few weeks ago that has been withering on the vine, despite the fact that this is an incredibly annoying thing to overlook and a trivial thing to fix. The framework also doesn't support changing the HTTP port on the command line when running tests automatically. Why would you ever need to? Well, you might happen to have a code base with several active branches in development that are all being automatically tested, as I do right now. The framework developers may never get bitten by this, but it's definitely an annoyance for an application developer using the framework.
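For context, here's roughly what the missing piece looks like. This is a hedged sketch, not Play's actual runner: MyFrameworkRunner and its setup are hypothetical, but Filterable, Filter, and BlockJUnit4ClassRunner are the real JUnit 4 API. Implementing filter() is what lets an IDE run a single method out of a test class.

```java
import org.junit.runner.Description;
import org.junit.runner.Runner;
import org.junit.runner.manipulation.Filter;
import org.junit.runner.manipulation.Filterable;
import org.junit.runner.manipulation.NoTestsRemainException;
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.InitializationError;

// Hypothetical framework runner that wraps the standard JUnit 4 runner.
public class MyFrameworkRunner extends Runner implements Filterable {
    private final BlockJUnit4ClassRunner delegate;

    public MyFrameworkRunner(Class<?> testClass) throws InitializationError {
        // Imagine framework bootstrapping (starting the app, etc.) here.
        this.delegate = new BlockJUnit4ClassRunner(testClass);
    }

    @Override
    public Description getDescription() {
        return delegate.getDescription();
    }

    @Override
    public void run(RunNotifier notifier) {
        delegate.run(notifier);
    }

    // The piece that's easy to forget: when you click "run this one test",
    // the IDE hands the runner a Filter matching just that method. Without
    // this, the IDE has no way to narrow the run, so the whole class runs.
    @Override
    public void filter(Filter filter) throws NoTestsRemainException {
        filter.apply(delegate);
    }
}
```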
Pitfall 3: Throwing in everything and the kitchen sink
I recently saw a retweet asking why the hell Guava would add an EventBus feature. Does that really belong in a collections framework? When your whole life is the framework you're developing, sometimes no feature seems too small or too unrelated. Unfortunately, putting in too much for the sake of completeness can make your code harder for application developers to fully grasp. If your library has several subtly different variations of a method, each with a slightly different argument list, so that I have to check the javadoc and stop to think every time I use it, I'm likely to use it less, or just find one way to do it and always do it that way. I will reject, and have rejected, libraries on the basis of being overly feature-laden. I don't always want or need complexity, and I'd frequently rather work around a small missing element than spend my life searching for exactly the method I want to call.
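To be concrete about what that sprawl looks like from the call site, here's a hypothetical interface, invented for illustration and not taken from Guava or any real library:

```java
import java.util.Map;
import java.util.concurrent.TimeUnit;

// Hypothetical API, invented to illustrate overload sprawl: five subtly
// different ways to do one thing, each forcing a trip to the javadoc.
public interface OverloadedCacheBuilder<K, V> {
    Map<K, V> build();
    Map<K, V> build(int maxSize);
    Map<K, V> build(int maxSize, long ttlMillis);
    Map<K, V> build(int maxSize, long ttl, TimeUnit unit);
    Map<K, V> build(int maxSize, long ttl, TimeUnit unit, boolean softValues);
    // Which one did I want again? Most users pick one and never look back.
}
```

One clearly named method with a small options object would carry the same power with far less for the application developer to memorize.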
Pitfall 4: Making your library difficult to read and debug through
When you coat everything in layer upon layer of indirection, reflection, deeply nested interface hierarchies, and painful call graphs, it's hard for your users to figure out what the hell is actually going to happen, and painful to debug through the code when something goes wrong. I can't possibly be the only developer who learns libraries half by reading the documentation, and half by just calling the method that seems right and reading through the code when it doesn't work. This is largely why I absolutely despise fluent-style development. When it is done perfectly and just works (as is perhaps the case with something like Mockito), it's verbose but acceptable. When it's used in a place where there are lots of links in the chain where something could go wrong, it is an absolute nightmare to read and debug. I'm already keeping the call stack of my own application in my head; please make your library as easy as possible for me to add to that mental load.
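To illustrate, here's a toy fluent builder invented for this post (not any real library). The debugging pain comes from the fact that every link returns this, so a mistake in one link only surfaces at the terminal call, far from the code that caused it.

```java
import java.util.Arrays;
import java.util.List;

public class FluentChainDemo {
    // Hypothetical fluent query builder, sketched to show the failure mode.
    static class Query {
        private String table, predicate, order;
        Query from(String t) { table = t; return this; }
        Query where(String p) { predicate = p; return this; }
        Query orderBy(String o) { order = o; return this; }
        List<String> fetch() {
            // All validation happens here, at the end of the chain; the
            // stack trace blames fetch(), not the link that was wrong.
            if (table == null) throw new IllegalStateException("no table set");
            return Arrays.asList(table, predicate, order);
        }
    }

    public static void main(String[] args) {
        // Reads nicely when it works...
        System.out.println(
            new Query().from("users").where("age > 21").orderBy("name").fetch());
        // ...but forget from() and the resulting exception points at fetch(),
        // three calls away from the actual omission.
        new Query().where("age > 21").fetch();
    }
}
```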
The best way to avoid most of these pitfalls is to have at least one person on your framework team who actually uses the framework you're developing for something else. Barring that, listen to your users carefully. When they are confused about what to call, frustrated by the difficulty of debugging, or complaining about how hard it is to test code that uses your framework, these aren't problems to treat lightly. Remember, your framework succeeds or fails based not on its own internal merits, but on how many people actually use it to develop other code. Application developers are a framework developer's best friend.