Devoxx Poland 2018

[Photo above shows Mr. Jakub Nabrdalik during his presentation about TDD in the main conference room.]

About conference

Thanks to my employer CyberVadis I was able to attend the Devoxx Poland 2018 conference. The conference took place in Cracow (Poland) and lasted three days (20 - 22 June).

Overall my experience with the conference was very good. Things that I particularly liked were: good coffee, tasty muffins, and a beautiful and very comfortable venue (Cracow ICE).

Things that the organisers could have done better: on the first day of the conference it was difficult to find the right rooms (especially room no. 5). Rooms should be marked in a more visible way. The quality of the food served during lunches was only mediocre; I expected something better. The conference offered a lot of presentations, yet the number of presentations on advanced topics was limited. I expected to see more advanced stuff there.

The conference keynote was amazing. The speaker, Mr. Brian Christian, told us about applying theoretical computer science to everyday problems. The presentation was based on Mr. Christian's book Algorithms to Live By. I have already ordered my own copy ;)

My notes from presentations

Below you can find a digitized version of the notes that I took during the presentations. Remember that they are my personal notes, mostly created for my colleagues who couldn't attend the conference. Use them at your own risk.

State or events? Which shall I keep? by Jakub Pilimon

  • You can find most of the topics addressed by Mr. Jakub in his blog post Event Storming and Spring with a Splash of DDD.
  • ORMs can introduce accidental complexity into our applications. Example: loading a list of child objects when we only need to check the number of objects (we are just calling .size() on the list) can cause performance bottlenecks when we are using an ORM to persist our aggregates.
  • Persisting aggregate state as a list of events is not only simpler but also much more aligned with DDD design.
  • You should prefer returning domain events from an aggregate instead of publishing them (no more EventPublishers in aggregates). For example:
public class MyAggregate {
    private final List<DomainEvent> pendingEvents = new ArrayList<>();
    public List<DomainEvent> getPendingEvents() {
        return pendingEvents;
    }
    public void flushEvents() {
        pendingEvents.clear();
    }
    public void performOperation() {
        // do some stuff
        NameChangedEvent event = 
           new NameChangedEvent("old-name", "new-name");

        pendingEvents.add(event);
        applyEvent(event);
    }
    // ...
}

public class MyAggregateRepository {
    public MyAggregate load(UUID id) {
        List<DomainEvent> events = eventStore.loadEventsById(id);
        MyAggregate aggr = MyAggregate.recreateFrom(id, events);
        return aggr;
    }
    public void save(MyAggregate aggr) {
        eventStore.appendEvents(aggr.getId(), aggr.getPendingEvents());
        aggr.flushEvents();
    }
}

// later usage:
var aggr = repo.load(id);
aggr.performOperation();
repo.save(aggr);

This method of returning events from the aggregate root is nothing new. Variations of this approach appeared as early as 2013, for example here.

  • (My question after the talk) You should not confuse domain events used to store aggregate state with integration events. Generally it is a bad practice to publish the events used to persist an aggregate to other systems, mostly because you may expose state that should be private to the aggregate.

  • (My question after the talk) How to deal with GDPR when using an immutable event store? You should try to keep sensitive data outside of domain events. Later I checked that this approach was described here.

As an alternative approach you may use encryption as described here. But for some reason I find this solution ugly.

  • (Someone else’s question) It is fine to use SQL databases to store domain events (at least at the beginning). So if you want to start your adventure with event sourcing, use your old, tried-and-true SQL DB!

Overall it was a good presentation, but mostly directed at beginners.

You can find Jakub Pilimon's blog and Twitter here:

From availability and reliability to chaos engineering. Why breaking things should be practised by Adrian Hornsby

  • Jesse Robbins is one of the fathers of Chaos Engineering (see this video from 2011).
  • Generally we break things in production to build confidence that we can quickly fix real problems.

Introducing Chaos Engineering into an organisation:

  • Start small. Only break things in production that you are sure are able to survive your “experiments”.
  • First make your application resilient then test that resiliency. Do NOT do Chaos Engineering experiments in production that you are sure will kill your app.
  • Test not only infrastructure but also people. If, say, Bob solves 90% of problems in production, check what will happen if other team members must solve such problems without Bob.

My note: I cannot resist calling this technique a “bus monkey”.

  • Remember: chaos doesn’t cause problems, it reveals them.
  • Areas to test: people, applications, network and data, infrastructure

Increasing resiliency:

  • Availability is measured in “nines”, e.g. four nines is 99.99%, six nines is 99.9999% of the time during which the application works.
  • Four nines is an industry standard (as of 2018).
  • The easiest way to increase availability is to create multiple copies of the least reliable resources. If, for example, you have a single server with 90% availability, having two such servers working in parallel creates a system with 99% availability (both servers must fail at the same time: 1 - 0.1 × 0.1 = 0.99).
  • When you are hosting your apps in the cloud remember to use multiple availability zones (AWS lingo).
  • Use auto-scaling.
  • Follow infrastructure as code approach to achieve testable infrastructure.
  • Your infrastructure should be immutable. Never update servers; always create new instances and remove the old ones.
  • Practice rolling deployments. After a fresh deploy of an application do NOT kill the old servers immediately. Allow them to stay around for a while as a standby; in case your new app version does not work, they will allow you to quickly roll back your changes.
  • Never put all your databases on the same server. Use sharding to spread data across many databases.
  • Prefer messaging to HTTP requests; message queues are more resilient. Send commands instead of issuing HTTP requests.
  • When using HTTP remember to use a circuit breaker library and an exponential backoff algorithm (see the sketch after this list).
  • Remember about DNS (DNS failover, smart load balancing).
  • Transient state is state that is constantly changing, like webpage view counters or ad click counters. Do not put transient state into the database; instead use a specialized solution like Redis to keep it.
  • Use async UI: do not wait for an operation to finish, display a notification to the user when the operation succeeds or fails. See: Stop Getting In My Way! — Non-blocking UX
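
Below is a minimal sketch of the exponential backoff idea mentioned above. It is my own illustration, not code from the talk, and all names (BackoffRetry, callRemoteService) are hypothetical; in a real project you would rather reach for a circuit breaker library such as Hystrix or Resilience4j.

import java.util.concurrent.Callable;

// A toy illustration of retrying a remote call with exponential backoff.
public class BackoffRetry {

    public static <T> T callWithBackoff(Callable<T> operation,
                                        int maxAttempts,
                                        long initialDelayMillis) throws Exception {
        long delay = initialDelayMillis;
        for (int attempt = 1; ; attempt++) {
            try {
                return operation.call();
            } catch (Exception e) {
                if (attempt >= maxAttempts) {
                    throw e; // give up after the last attempt
                }
                Thread.sleep(delay); // back off before the next attempt
                delay *= 2;          // double the delay after every failure
            }
        }
    }

    // Example usage (callRemoteService() is a made-up method):
    // String response = callWithBackoff(() -> callRemoteService(), 5, 100);
}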

Chaos Engineering in Practice:

  • Chaos Engineering is like a fire drill. You are doing it so that people will be prepared for a real fire and will not panic.
  • Without exercises, people will get scared when dealing with real issues in production.
  • How to start? Start by defining the steady state of your application. In other words, define when you assume that your application is working. For example, when you have an online shop, you may assume that your application is working when people click the “Buy” button a certain number of times per hour. This will be the steady state of your application that you will monitor during chaos engineering experiments to make sure that the application is still working.
  • Generally you should use a business metric to define the steady state. Measure things that bring you money! (See the sketch after this list.)
  • Define experiments. What if our DB server stops working? What if we have a huge spike in traffic?
  • Only try to break things that you are 100% sure should not break (otherwise make your application resilient first).
  • During chaos experiments always have an emergency stop button so that you can stop your experiment at any time.
  • Start small! (yeah again, looks like this is very important)
  • Experiment with canary deployments, e.g. run experiments on 1% of total traffic.
  • Quantify results. How much time elapsed from the failure to the detection of that failure? How much time did it take to fix it? Is our monitoring working properly? Does the notification (pager duty) system work correctly?
  • Never blame a single person; instead concentrate on things that you can improve.
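
To make the steady-state idea a bit more concrete, here is a minimal sketch (my own, with hypothetical names and numbers, not code from the talk): the business metric from the online shop example (“Buy” clicks per hour) is compared against an expected lower bound during a chaos experiment.

// A toy steady-state check for a chaos experiment. The point is that
// "the application works" is defined by a business metric, not by CPU graphs.
public class SteadyStateCheck {

    private final double expectedBuysPerHour;
    private final double tolerance; // e.g. 0.2 means we accept a 20% drop

    public SteadyStateCheck(double expectedBuysPerHour, double tolerance) {
        this.expectedBuysPerHour = expectedBuysPerHour;
        this.tolerance = tolerance;
    }

    // Returns true while the measured metric stays within the accepted band;
    // if it returns false, hit the emergency stop button and end the experiment.
    public boolean isSteady(double measuredBuysPerHour) {
        return measuredBuysPerHour >= expectedBuysPerHour * (1.0 - tolerance);
    }
}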

General tips:

  • During post mortems use the 5 Whys rule. Why did it break? Because of X. Why did X happen? Because of Y. Why did Y happen? … (repeat 5 times)
  • Be patient.
  • Be aware of cultural blocks (especially among business people). Generally do not tell business people that you want to break your application. Remember: chaos reveals problems, it does not cause them.

Overall this was a very good presentation with a bit of AWS advertisement.

You can find Adrian Hornsby's Twitter here:

Through the valley of darkness, the road to microservices by Dominik Boszko

  • Ask yourself if you get any of the benefits that microservice architecture promised you.
  • Signs that you are working with a distributed monolith:
    • A change in one microservice propagates to others, e.g. I return an additional field from a REST endpoint in service A, but my application will start working only after I recompile service B.
    • There is too much communication going on between teams.
    • A change in service A that is managed by my team requires approval from some other team.
  • When working with microservices the Don’t Repeat Yourself rule is considered harmful. Microservices should be as independent from each other as possible.
  • Avoid coupling between microservices. Do NOT create libraries with shared REST DTOs or events that will be used by many services. Instead introduce contracts (use tools like Swagger). Each service should contain its own copy of consumed DTOs and events. These DTOs should contain only the fields that the service needs (see the sketch after this list).
  • Be wary of Core, Platform, Commons, etc. libraries that are used by every microservice. Microservices should be independent of each other. In the best-case scenario you will only share security- and logging-related code.
  • Shared libraries are bad because of transitive dependencies. For example, team A adds a dependency on SuperXmlParser-v2.0 to the core library. This breaks service B, maintained by team B, which is using the SuperXmlParser-v1.0 library.
  • When using a microservice architecture the role of the software architect changes. The architect can no longer enforce his decisions on teams. Instead, teams are self-organizing and cross-functional, and they ultimately decide on the design and architecture of the services that they own.
  • The software architect should concentrate on high-level concerns like security, integration with other applications and best practices. Think BIG PICTURE!
  • Software architects should strive to avoid micromanagement (especially when their only experience is with monolithic application development).
  • Do not be afraid to use different technologies and architectural styles for different microservices (this is one of the selling points of the microservice architecture after all).
  • Teams should be aligned with bounded contexts. Teams should have a deep knowledge of their domains and are therefore best equipped to make architectural decisions. This is the best usage of the top talent that you have hired.
  • Avoid nanoservices (a service that does the job of a single method).
  • Do not rush into microservices. Consider whether you really need the benefits that this approach can offer you. Keep in mind the additional complexity that comes with this approach.
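
To illustrate the “own copy of consumed DTOs” advice, here is a minimal sketch of my own (all names are hypothetical, not from the talk): the invoicing service keeps its own copy of the OrderCreated event with only the two fields it actually needs and simply ignores everything else the orders service publishes.

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import java.math.BigDecimal;

// This class lives inside the invoicing service; it is deliberately NOT taken
// from a shared "contracts" library. Extra fields sent by the producer are
// ignored, so the orders service can evolve its event independently.
@JsonIgnoreProperties(ignoreUnknown = true)
public class OrderCreated {

    private String orderId;
    private BigDecimal totalAmount;

    public String getOrderId() { return orderId; }
    public void setOrderId(String orderId) { this.orderId = orderId; }

    public BigDecimal getTotalAmount() { return totalAmount; }
    public void setTotalAmount(BigDecimal totalAmount) { this.totalAmount = totalAmount; }
}

The contract (e.g. a Swagger/OpenAPI description) still documents the full event; each consumer just maps the subset it cares about.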

This was one of the best presentations that I saw at Devoxx. It was based on the personal experiences of Mr. Dominik. Good job!

How to impress your boss and your customer in a modern software development company by Wojciech Seliga

  • A modern software house is decentralized. People communicate with each other. Programmers talk to the client. The client can decide who should get a raise!
  • A few more attributes of a modern software company: short feedback loops, constant improvement (thanks to feedback), autonomous teams, decentralized decision making, individual impact (everyone feels that their job is important and that they contribute significantly to the success of the company).
  • In other words: the customer becomes the new boss.
  • How to impress your new boss? Deliver great stuff!
  • Check out the How Google Works book. It uses the term smart creatives to describe modern knowledge workers. Smart creatives are not only tech savvy but also business savvy. They are open, passionate persons who strive to learn something new every day. (My note: this sounds like some old school hackers to me.)
  • Smart creatives like to work on interesting problems. They also like to work with interesting people. They often have interesting and colorful lives.
  • Negativity destroys people. It also destroys people’s brains - so please don’t be negative.
  • Check out the Pragmatic Thinking and Learning book.
  • (My note: more motivational $#@!, I have nothing against it but I think this is not based on any scientific studies. It is just pop-psychology stuff, so believe it or not.) Success is 10% talent, 90% hard work. Hard work beats talent when talent fails to work hard.
  • Anyone who stops learning is already OLD (my note: I had better learn something from this presentation).
  • Growth mindset. Praise for effort, not for results (my note: more pop-psychology).

Questions for a job interview that can test if a person has a growth mindset and is a good candidate to become a smart creative:

  • What have you learned in your current job?
  • How do you decide what to learn?
  • What did you learn last month?
  • What do you do to stay up to date with tech?
  • What trends in technology did you miss?
  • What challenges do you expect at your new job?
  • How do you measure yourself and your progress?

Who is a senior developer?

  • The primary role of a senior developer is to teach and mentor others.
  • Senior developer should be a great example to others.
  • Seniors should strive to build a better environment for developers (no blaming each other, better dev processes, better and more friendly code reviews, etc.)
  • Being a senior developer is not about doing the same things as a mid-level developer, but faster.

More job interview questions:

  • What could you teach me in 5 minutes?
  • What do you do now differently than in the last 5 years?
  • How did you impact other people?

Back to main topic again:

  • Be the weakest person in the room (so that you will learn quickly).
  • Being the best person in the room is bad for your professional development. You may not learn as much as you can.
  • 1 year in a fast-paced environment is often better than 5+ years of experience in an average company.

More job interview questions:

  • What is your best professional achievement?
  • What is your top strength?
  • What is your weakest point as a professional software developer?
  • What are you doing about it?
  • Why did you choose software development as your career?
  • What are you passionate about?

Back to main topic again:

  • Developers should be ready to take responsibility for the product they build.
  • Developers that are already responsible may take ownership of their product.
  • Passion is very important for smart creatives. Passion gives you intrinsic motivation. Curiosity will force you to learn new things. Desire to change the world will make it happen.
  • A modern software house is sliced vertically. Every team owns part of the product and can do support, maintenance and development. Also each team can talk directly with the client and make its own decisions.

Overall it was an interesting and good presentation but I have a feeling that I just missed the point.

Mr. Wojciech's Twitter.

Docs in the self-documenting world by Wojtek Erbetowski

(my note: I was late so I missed a lot of interesting stuff here)

  • Apply UX techniques like personas and user journey mapping to the documentation.
  • Do not put code snippets into documentation. Instead link to a real test or program source code. Linking to test source code is especially advised; this way the documentation will always be up to date.
  • A lot of tools like AsciiDoc can already do this.
  • Writing documentation should be a pleasure for developers.
  • Example: a library of React components was created as a React application. For each component a short description, the real working component, example usage source code and a list of available properties are displayed.
  • This library is actually generated from the source code of the components, so it doesn’t need to be maintained by developers and always stays up to date.
  • Always think for whom you are creating the documentation.

Improving your Test Driven Development in 45 minutes by Jakub Nabrdalik

Common problems with tests:

  • Too low level tests:
    • We have a test for every single class in our project.
    • Tests know too much about the code that they are testing. Even minor refactorings of a class API result in broken tests.
  • Too high level tests:
    • We only have integration tests.
    • Tests are slow. Generally speaking, integration tests that use even an in-memory DB like H2 are too slow for the TDD cycle (people lose flow).
  • To have an effective TDD cycle we should be able to run a single unit test in less than 1 second.

The solution:

  • Group your code into modules. A module API changes more slowly than a class API. Test only the module's public API.
  • A single module contains all application layers e.g. REST controllers, services, command handlers, repositories, database access etc. In other words modules are vertical slices of functionality.
  • A module should be like a microservice.
  • Integration tests are slow. Use integration tests only to test crucial paths through your application. Test things that bring you money.
  • It is OK to have many assertions in a single integration test.
  • A module should expose a ModuleConfiguration class that can create a ready-to-use module that uses fakes to interact with other modules and IO (see the sketch after this list).
  • Prefer fakes to mocks and stubs. Use in-memory repositories. Avoid doing IO in unit tests at all costs.
  • Test only behaviour of the module using its public API.
  • Use mocks only for interaction with other modules.
  • Check out this repo: https://github.com/olivergierke/sos
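
Here is how I imagine such a ModuleConfiguration could look. This is only a sketch with hypothetical names (Order, OrdersFacade, OrderRepository), not code from the talk: the test wiring uses an in-memory fake repository, so unit tests exercise the whole module through its public API without doing any IO.

import java.util.Map;
import java.util.Optional;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// The repository is a port of the module; production code would provide a
// database-backed implementation, tests use the in-memory fake below.
interface OrderRepository {
    void save(Order order);
    Optional<Order> findById(UUID id);
}

// Fake, not mock: a real, fully working implementation that keeps data in memory.
class InMemoryOrderRepository implements OrderRepository {
    private final Map<UUID, Order> store = new ConcurrentHashMap<>();
    public void save(Order order) { store.put(order.getId(), order); }
    public Optional<Order> findById(UUID id) { return Optional.ofNullable(store.get(id)); }
}

class Order {
    private final UUID id;
    Order(UUID id) { this.id = id; }
    UUID getId() { return id; }
}

// Public API of the module; tests talk only to this facade.
class OrdersFacade {
    private final OrderRepository repository;
    OrdersFacade(OrderRepository repository) { this.repository = repository; }

    public UUID placeOrder() {
        Order order = new Order(UUID.randomUUID());
        repository.save(order);
        return order.getId();
    }
}

// The configuration class assembles a ready-to-use module.
class OrdersModuleConfiguration {
    // wiring used by unit tests: the same module logic, but no IO
    public OrdersFacade inMemoryOrdersFacade() {
        return new OrdersFacade(new InMemoryOrderRepository());
    }
    // a production variant would accept a real, database-backed OrderRepository
    public OrdersFacade ordersFacade(OrderRepository productionRepository) {
        return new OrdersFacade(productionRepository);
    }
}

A unit test then simply calls new OrdersModuleConfiguration().inMemoryOrdersFacade() and verifies the module's behaviour through its public API.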

Towards better tests:

  • Use business names (ubiquitous language) in the tests (and in test names too).
  • Hide unnecessary information from test code (you should only show information that is relevant for this particular test).
  • For each module create utility classes that provide realistic test data (see the sketch after this list).
  • Create a DSL for your tests.
  • Prefer code to frameworks like Cucumber (business people don’t use it anyway).
  • Spock is a very good testing framework for Java.
  • You may use a shared immutable (read-only) DB with test data to test e.g. repositories; this way you can speed up your integration tests.
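
As an example of such a test-data utility, here is a minimal sketch of my own in plain Java (hypothetical names; in a Spock code base it would be Groovy): only the details relevant to the scenario are spelled out in the test, everything else gets a realistic default.

// A tiny test-data builder: tests override only what matters for the scenario.
public class CustomerTestData {

    private String name = "John Doe";   // realistic defaults
    private String country = "PL";
    private boolean premium = false;

    public static CustomerTestData aCustomer() {
        return new CustomerTestData();
    }

    public CustomerTestData named(String name) { this.name = name; return this; }
    public CustomerTestData from(String country) { this.country = country; return this; }
    public CustomerTestData premium() { this.premium = true; return this; }

    public Customer build() {
        return new Customer(name, country, premium);
    }
}

class Customer {
    private final String name;
    private final String country;
    private final boolean premium;

    Customer(String name, String country, boolean premium) {
        this.name = name;
        this.country = country;
        this.premium = premium;
    }
    // getters omitted for brevity
}

In a test you would then write something like aCustomer().premium().from("DE").build(), and the irrelevant details disappear from the test code.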

Mr. Jakub Nabrdalik's Twitter and blog:

If you want to learn more about Mr. Jakub's way of doing modules, please see his demo project on GitHub.

Modules or Microservices? by Sander Mak

For this presentation I will not provide notes. Instead I am going to express my views on arguments used by Mr. Sander.

Mr. Sander believes that developing applications using a microservice architecture is much more difficult than developing monolithic software. He also believes that a microservice architecture moves the so-called “wall of unmaintainability” further away from us (we may add more features to the application before it becomes unmaintainable in terms of cost). I fully agree with both of these statements.

Now the things that I disagree with. Mr. Sander proposes a modular architecture (a monolith split into modules) as a silver bullet that will allow us to reap most of the benefits of microservices while still preserving the simplicity of development. Unfortunately, some disadvantages of this approach were not mentioned in his presentation, like:

  • Slow builds and testing. Slow deploys. (It is still a monolith, just with better code organization.)
  • A minor error in a single unimportant module can break the entire application. For example, large numbers of unclosed files, memory leaks or stack overflow exceptions can (depending on the tech stack used) bring down the entire application.
  • Implicit shared state between modules like current working directory, PID etc.
  • A modular application will usually use a single database technology and a single database server (possibly with read replicas). As writes will be served by a single server instance, this may cause severe performance problems in the future.
  • The entire application must be written in the same technology (JVM, .NET, node.js).
  • Usually in monolithic applications people have a kind of Core or Platform (or Commons or whatever) library that contains utils and shared components. These kinds of libraries have a tendency to grow uncontrollably, and over time they become difficult to change (because any change to them requires a lot of refactoring across several modules). Also, the quality of these libraries is often substandard (they are usually not actively maintained once they provide the functionality needed by their authors; the “if it works, don’t change it” approach).

I must admit that nowadays it is much more difficult to develop microservices than a monolith. But I believe this is caused mostly by a lack of good frameworks and tools. For example, I can imagine that in the future we will have some standard API for logging and monitoring offered by all relevant cloud providers. From my point of view the Spring Boot framework looks very promising. Also I believe that we have only just started doing cloud computing (it is still a very young and immature technology) and we should see a lot of improvements in the area of distributed system development.

To sum up: using modules is always a good idea. Depending on your requirements, sometimes you will want to build a modular monolith, sometimes a bunch of microservices. Context is always king.

Buzz

  • GraphQL
  • Chaos Engineering
  • A/B Testing
  • Canary releases
  • Infrastructure as Code
