A collection of my old blog posts from when I wrote about far nerdier stuff (yes, that was actually possible). I named it Experiments, Projects, and Beliefs. Some posts were originally written in Danish.


The IT Mechanic's Lot in Life

“Yes, I will need access to your server in order to upgrade it…”
Even what may seem an obvious matter of course to the IT consultant can come as a complete surprise to other people, because we work with intangible products.
In a slightly humorous moment, that made me imagine what it would be like for a mechanic to experience what IT consultants are put through. So here is the story of Ulrik and Lisbeth, who want a child seat mounted on the back seat of their car.

On the phone, Ulrik asks for a fixed price for mounting the child seat. He declares that such a task surely should not take more than half an hour. The mechanic can certainly see the logic of that claim but, wise from experience, estimates half an afternoon for the project anyway. The other end of the line goes completely silent. Ulrik has to swallow hard, but he accepts the greedy estimate and approves the project. The mechanic shows up at Ulrik and Lisbeth's address at the agreed time, where Ulrik welcomes him. Unfortunately, Lisbeth has driven off to do the shopping in the very car the child seat is to be mounted in, but Ulrik expects her back within half an hour. The mechanic is asked to wait in the meantime, though he is kindly offered a cup of coffee. A good hour later, Lisbeth arrives with the car, parks it in the driveway, and goes into the house.

The mechanic asks Ulrik to unlock the car so the work can begin. Ulrik, however, has no idea where he has put his car key, and Lisbeth refuses under any circumstances to lend out hers, insisting that the keys are personal. After an intense search, Ulrik's car key finally turns up. He unlocks the car, and the mechanic opens the left rear door. To his great surprise, he discovers that the back seats are folded down and the car is filled with books, toys, and garden tools. Slightly annoyed, the mechanic asks Ulrik what he intends to do with all those things. Clearly, Ulrik had never considered that the things might be in the way of mounting the child seat. But now that the mechanic puts it that way, Ulrik can of course see the point. Ulrik breezily directs that the garden tools go in the couple's basement, the toys in the attic, and the books simply be dropped off at the incinerator. Unfortunately, Ulrik has no time to assist with this task, as he must leave for an important meeting. Before Ulrik goes, he asks the mechanic whether anything else is needed to mount the child seat. The mechanic has brought the necessary tools himself, so all he needs is the actual child seat to be mounted. Ulrik looks at the mechanic in great astonishment and states that he most certainly expected the mechanic to bring a child seat himself, since the mechanic is, after all, the professional. The mechanic helpfully announces that he can procure a child seat for mounting at short notice, which will merely add the price of the seat as an extra cost. This message makes Ulrik roll his eyes, and with a world-weary “whatever”, he turns on his heel and leaves.

For a while, the mechanic enjoys the progress of the process as the child seat is mounted on the left side of the car as agreed. The working calm is abruptly interrupted, however, when Lisbeth suddenly stands in the driveway with her hands on her hips, asking what on earth the mechanic thinks he is doing. He explains that he is merely carrying out Ulrik's order, whereupon Lisbeth very firmly lets the mechanic know that she is the rightful owner of the car and wishes to be included in the decision about whether a child seat should be mounted in it. She lets the work continue, though, but assures the mechanic that it is not with her blessing.

Ulrik is back just as the mechanic finishes mounting the child seat. It now suddenly dawns on Ulrik that it is no longer possible for an adult to sit in the child seat's place. He feels severely misinformed and explains that he would never have ordered the mounting of a child seat had he known about this limitation beforehand. Nevertheless, he approves the completed work, and the mechanic can move on to the next job. A week later, the mechanic's phone rings. It is Ulrik, reporting that something is badly wrong with the child seat. Lisbeth has found that, because of the seat's placement, she cannot see her daughter in the rear-view mirror while driving, so the child seat must be moved to the right side of the car as soon as possible. Naturally, Ulrik is once again baffled that such a simple task should come with an extra cost. The mechanic moves the seat to the right side of the car and cautiously sends an invoice for the extra work. After yet another week, the mechanic's phone rings again. This time it is Lisbeth, who has found that the car's air conditioning no longer works, that she has never had problems with it before, and that it can therefore only be due to faulty mounting of the child seat…


Sounds like a Childhood Trauma to me, Dear Human

Would you ever leave your house without bringing your virtual assistant with you? Of course not. Mostly because it is built into your smartphone, either as Google Assistant on Android or Siri on iPhone. But do you ever use it for anything more helpful than a fancy gimmick? Maybe to set an alarm, and yet you still open the app on the touch screen just to make sure it did it correctly.

But why do we still refuse to have meaningful conversations with our digital imaginary friends? Why don't we let them coach us, or even offer therapy that is always available?

I see two essential skills that virtual assistants need before we will talk to them like adults.

1. Integrations – Virtual assistants must be integrated and connected with other IT systems so they can look up data and make purchases, transactions, etc. for you.
But then, why use a chatbot? Why ask a chatbot about the balance of my bank account when my bank already has a functioning interface for that? The main difference between my mobile banking app and a chatbot is that the virtual assistant simulates a human interaction and thereby human characteristics. So why not utilize that? Why not make the virtual assistant able to simulate human behavior such as being motivating, caring, cheering, or calming?
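To make skill 1 concrete, here is a minimal, hypothetical sketch of an "integrated" assistant: it maps a user message to an intent and calls a backend on the user's behalf. The BankApi class, the keyword lists, and the reply texts are all invented for illustration; a real assistant would use a proper language-understanding model and a real banking API.

```python
class BankApi:
    """Stand-in for a real banking backend the assistant is integrated with."""
    def __init__(self, balances):
        self.balances = balances

    def get_balance(self, account):
        return self.balances[account]

# Hypothetical keyword-based intent detection; real assistants use trained models.
INTENT_KEYWORDS = {
    "balance": ["balance", "how much money"],
    "transfer": ["transfer", "send money"],
}

def detect_intent(message):
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return intent
    return "unknown"

def handle(message, bank, account):
    """Route the message: the integration lets the bot act, not just chat."""
    intent = detect_intent(message)
    if intent == "balance":
        return f"Your balance is {bank.get_balance(account)} DKK."
    if intent == "transfer":
        return "Sure, who should receive the money?"
    return "Sorry, I did not understand that."

bank = BankApi({"checking": 1250})
print(handle("What is my balance?", bank, "checking"))  # → Your balance is 1250 DKK.
```

The point of the sketch is only that the integration turns the bot from a FAQ into an agent; the human touch has to come on top of it.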

That question leads me to the second skill.

2. Artificial Empathy – Virtual assistants must be able to help people in difficult situations, whether those are caused by loneliness, depression, or the user just having a shitty day.
I have created a chart from the two skills – my virtual assistant skillset. Most existing virtual assistants still sit in the left part of the chart. I have not yet seen a virtual assistant in the upper right corner, even though many users have told me that it would be required for them to use a virtual assistant at all.

When I express the idea of a virtual assistant coaching people, I always hear the argument that it will never feel the same as the real human touch. And I agree. Virtual assistants should not replace professionals helping people in mentally difficult situations. However, I am convinced that virtual assistants with the right training can be a great help for people who would never reach out to a professional, simply by offering their help anytime, anywhere.

For the people who are still not convinced, I bet one of the oldest tricks in behavioral design can help. It is as simple as making the things you want people to do the most convenient. Sometimes also by making the things you least want people to do a little less convenient. What would you choose? The friendly virtual assistant that is available 24/7 and can help you all the way through? Or would you wait until a real human has a free timeslot next Tuesday, where you can call between 9 and 11 P.M.?

I have yet to see a multi-integrated chatbot that really understood me and was able to coach me. To be honest, I wouldn't have a clue where to start if I were to build one myself. The question is whether it makes sense to start out in the lower left corner and build a simple chatbot without any integrations. It sure does, because it gives a basic understanding of how users communicate with a chatbot. I did it myself on this blog, where Bertram, the cheeky but humorous chatbot, was launched a year ago with hilarious results.


Useful or scary? What if they made Alexa like this?


Useful or Scary? Alexa’s New Sister Patricia


Still trading like 1929?

So, what would you say the current financial trends are, looking at this? Well, I guess you would not say much, at least based on the information from the major part of this page. And that even if the numbers were actually readable in the picture.

I was really surprised that newspapers still print those pages. It felt like walking into a time warp and made me imagine investing before the internet era. I get the trend indicators and line charts at the top of the first page, even though some colors would catch my eye even better. But then my brain has to navigate an insane amount of more or less related information.

This is clearly a tool for looking up specific numbers rather than receiving financial news. I hope not to offend anybody by stating that I have certainly seen finance people get comfortable receiving their numbers structured in tables. Very big tables. However, I can only imagine that they will be interested in trends as well.

The million-dollar question is who the page's target audience is. If you are a professional investor, I believe the numbers are already outdated by the second you open the newspaper. Professional investors probably get the latest numbers from their Bloomberg setup anyway. Non-professional investors might have a number of stock positions, which they follow using an app on their smartphone.

Indeed, criticizing others' work is easy, so to be more constructive: if I had to redesign the page, I would separate it into sectors (tech, medical, financial, etc.) and insert a few visualizations for each sector to provide the reader with instant insights, followed by a few lines of text. Some might argue that the readers then do not get the pure numbers but the newspaper's interpreted view. However, I guess that is a part of journalism, just as on the other pages.

Maybe the current layout is just pure tradition and nostalgia from back in the days when trading actually involved talking to each other instead of looking into a screen all day long.


If Chatbots were Bartenders

Imagine that you arrive at a friend's house party. He proudly opens his minibar and asks you and your other friends what you would like to drink. After asking for several colorful, cheery cocktails with umbrellas, you find out that your friend hoped you would all just ask for rum and coke, as that is the only drink he has the ingredients for… Even though you might end up having a good time, I bet you wonder why your friend did not just say from the beginning that rum and coke was the only drink he could serve.

I feel like I have been invited to that friend's house whenever I try most Danish-speaking chatbots. They proudly ask how they can help me, as if they were able to give me advice on everything from cocktail recipes to tax deductions. After a few questions it becomes clear that they are just a dialogue-based version of the website's FAQ. Maybe they can point me toward a phone number if I actually want to get in contact with the company. However, I am quite sure that I speak for a lot of users when I say that I want to use the chatbot to skip the phone queue…

It is a shame so many companies do not let users do that. At this point, technology is not the bottleneck. Simple and inexpensive chatbot tools can let users interact and share information with the company. It can even be more effective than calling the company. As an example, users whose windscreen has been hit by thrown-up pebbles and who want to book an appointment at a workshop should be able to use a chatbot to upload a picture of the damage, so the workshop can evaluate whether they can repair the existing windscreen or need to order a new one in advance.
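The windscreen example can be sketched as a tiny chatbot back end. Everything below (the field names, the in-memory booking list, the reply text) is a hypothetical illustration, not a real workshop API:

```python
# Hypothetical back end for a "report windscreen damage" chatbot flow:
# the bot collects a photo and a preferred date, and the workshop triages
# the booking later to decide between repair and replacement.
bookings = []

def handle_damage_report(user_id, photo_bytes, preferred_date):
    """Store the photo together with the booking so the workshop can
    evaluate the damage before the customer arrives."""
    booking = {
        "user": user_id,
        "photo": photo_bytes,          # image uploaded in the chat
        "date": preferred_date,
        "status": "awaiting triage",   # workshop decides: repair or replace
    }
    bookings.append(booking)
    return f"Thanks! Booking #{len(bookings)} created for {preferred_date}."

reply = handle_damage_report("user42", b"<jpeg bytes>", "next Tuesday")
print(reply)  # → Thanks! Booking #1 created for next Tuesday.
```

The point is how little is needed: the photo upload replaces the phone queue, and the workshop gets the information asynchronously.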

Of course, one can ask why activities like booking an appointment should be handled through a chatbot and not just a normal web form. My guess is that many users like the feeling of humanlike service. Unlike a web form, chatbots can also serve the bartender's other purpose: listening to people complaining about their life. Now just without alcohol. At this point, chatbots are still far away from giving us real humanlike service, but that does not mean they are no more than a fancy gimmick. However, the chatbots need to make us believe they can actually help us. First of all by telling us from the beginning what is on the bar's menu, so we don't keep asking for fancy cocktails. And maybe by using the technical possibilities to serve us a little more than just a rum and coke.


A Chatbot Speaks Out

Even a cheeky but humorous grumbler of a chatbot like Bertram ended up winning the users' sympathy. Especially among those who steered the conversation toward his great love, Debbie, a software robot from another post on this blog who, funnily enough, strips Bertram of all his cheekiness.

The purpose of the experiment was to examine how users react to a chatbot with human characteristics. I chose to caricature this by having him outright fall in love. I also turned the whole concept upside down and let Bertram ask the users questions about whether they thought he should invite Debbie on a date. A few users dropped out at this stage, but the majority could be persuaded to answer a question or two. That is very positive, as it can be useful for chatbots to get a question answered in order to improve their algorithms.

On the whole, the users played along with the fact that Bertram has acquired some human traits, for better or worse. For example, when he complained about life as a robot, they responded with “that is not okay” or “that is unfair, too”. That brings me to the question of whether chatbots should identify themselves as software robots at all, or simply try to keep users believing that they are actually communicating with a real human.

The experiment clearly showed that chatbots should identify themselves, at least at the current stage of development. This was particularly visible in the way users employed Bertram to search for information. To find out whom Bertram had met who could make him weak in the knees, the users asked very directly: “whom have you met?” and “who makes you weak in the knees?”. It is hard to imagine them asking another human this type of question so directly. The users thus tried to adapt their communication to the robot, just as with searches on Google, where I also guess most of us leave out polite phrases and other distracting linguistic elements.

It would then be natural to assume that users are also careful to avoid typos and misleading phrasings that could disturb the communication with the chatbot. That is, however, not the case. This was particularly evident where Bertram asked the users to verify that they were real humans by completing the old proverb “don't sell the skin before the bear…” (in Danish: “sælg ikke skindet, før bjørnen…”). Here I was of course looking for the ending “has been shot” (“er skudt”), which most users also answered. Now and then, however, answers came in such as “is dead”, “the bear has been shot”, “will be shot”, or “is sleeping”. It must therefore be considered important to prepare the chatbot for different phrasings of questions that really should have only one answer. I can reveal that I was far from able to predict the users' answers when I developed Bertram. Fortunately, he had a modest level of machine learning, so his algorithms could be improved as more and more users communicated with him.
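The lesson about varied phrasings can be illustrated with a small sketch. The accepted Danish variants below are taken from the answers described above; the normalization and matching strategy is my own simplification, not how Bertram actually worked:

```python
import re

# Several phrasings of an answer that "should" only have one form.
# Variants observed from real users completing the proverb
# "sælg ikke skindet, før bjørnen…" ("don't sell the skin before the bear…").
ACCEPTED = {
    "er skudt",          # "has been shot" - the expected completion
    "bjørnen er skudt",  # "the bear has been shot"
    "bliver skudt",      # "will be shot"
    "er død",            # "is dead"
}

def normalize(answer):
    """Lowercase, strip punctuation, and collapse whitespace so minor
    typing differences do not break the match."""
    answer = answer.lower().strip()
    answer = re.sub(r"[^\w ]", "", answer)   # drop punctuation, keep letters
    return re.sub(r"\s+", " ", answer)       # collapse repeated spaces

def completes_proverb(answer):
    return normalize(answer) in ACCEPTED

print(completes_proverb("Er skudt!"))   # → True (case/punctuation tolerant)
print(completes_proverb("sover"))       # → False ("is sleeping" is rejected)
```

A real chatbot would go further (fuzzy matching, learned classifiers), but even this much illustrates why the one-right-answer assumption fails in practice.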

A number of users expressed that it was pleasant to chat with Bertram despite his somewhat untraditional appearance. We should probably realize, though, that it will be a while before we can have truly interesting and enriching conversations with chatbots. Until then, they can fortunately function as slightly more context-based search engines, which may get users to put up with their quirks by simulating a bit of human character and some humor.


The Chatbot Bertram

How to use him: Click the blue circle with the two speech bubbles in the bottom right corner and click “Get Started”. Bertram will then, in his very own cheeky way, help you with information about the different posts on the website. But he has met someone here on the blog who makes him go weak in the knees. See if you can find out who that might be.
(Tip: go to the website's front page and see which posts you can ask questions about)

The background for the Bertram experiment
The chatbots I have tried have all been so friendly and polite that it was almost too much. And with my interest in how we humans interact with computers, I naturally had to go against the current and build Bertram, a rather cheeky robot that talks around the subject and finds his own jokes funny. But not all the time: Bertram can also change mood depending on the topic of conversation, and even turn the whole concept around and start asking the users for advice.

All of it as an experiment. Since the field is still very new, it has led me to a number of questions that I want to get wiser on with Bertram's help. Let us start by dwelling on the fact that the chatbot, as a first step toward a bit of human personality, has actually been given a name, just like Siri, Alexa, and Watson. The question is whether it is an advantage at all to give software robots something resembling a human personality. Is a robot only a success once it can pass the Turing test, so that users are convinced they are actually interacting with a real human? Would it not be better for the software robot to identify itself right away, so users can adapt their communication in a way robots understand more easily? I guess very few of us phrase ourselves the same way when searching for information on Google as when asking a good friend for advice. On Google we know very well that polite phrases and the like will only confuse the search. Here Bertram will naturally also get into trouble, which is interesting, since he is designed to come across with a lot of personality.

From a more down-to-earth perspective, software robots can be said to be successful when they manage to give users the help and information they need. Strictly speaking, a software robot does not need a built-in personality to do that. A while ago, however, I read that merely giving a robot a human name gives it enough personality that users are in some cases more forgiving when it fails. So the chatbots' appearance is probably not entirely irrelevant after all. Especially if users are to be content using the robot rather than immediately asking for human service for tasks the robot can easily solve. But how much can we really ask of the users? Will they accept, as with Bertram, that a helper robot suddenly asks the user for advice about something? I definitely consider this a realistic scenario, where a robot just needs an important piece of information in order to improve its algorithms via machine learning.

The hope is, of course, that enough users will click in and have a chat with Bertram to point us toward answers to some of the questions. They will then be collected and published in a coming blog post.

Have fun, and do give Bertram my regards!


Don’t be Childish, Dear Robot

“I need to find a babysitter for my new little software robot before going out tonight.” At this point, that statement may sound stupid, but it seems more and more relevant to stop newly created software robots from getting into trouble when left alone on the internet. Last year Microsoft created the chat robot Tay and asked everybody to communicate with it to help develop it. Many Twitter users did their best to teach the robot about the world, at least as they wanted the robot to see it. In other words, they made the robot a Hitler-loving machine that was very disrespectful to women. All in less than 24 hours, after which Microsoft grounded their newly created baby for its rude behavior. This year Facebook, too, had to shut down an artificial intelligence system, because two software robots developed their own language, which could not be understood by humans.

The developers clearly blame their robots for performing unintended activities, as if they were not as intelligent as presumed. But is that really the case, or could it be that the robots are even more clever and humanlike than we think? Looking at intelligence as the ability to act appropriately on external input, the Twitter robot did its job quite well. The creators just forgot to make the robot able to discriminate between appropriate and inappropriate behavior. Instead, it was left with all the social media trolls trying to screw it up. Likewise, the Facebook robots only had each other to learn from, which they did, and they were even intelligent enough to create their own language. An important takeaway is that when robots are able to perform completely unpredictable activities, we can be inspired and get new ideas from them. To make that work, the creators need to be role models for the machines, instead of releasing uncontrolled robots just to turn them off when they go out of control.

Developing a robot can be compared to raising a child. In the tentative beginning, it needs to explore a limited part of the world from a digital playpen containing only simple toys to teach it the basics, always under supervision and with clear guidance on how to behave, so potential misunderstandings are corrected by humans. As the robot develops its algorithms, it moves to the sandbox, where it explores more parts of the world, such as playing with and learning from other robots. Still under supervision and correction from humans. I believe that taking responsibility for a good and educational “robot childhood” will lead to a well-behaved and interesting robot, which we actually want to listen to and learn from. Even at that point, robots need laws and guidelines on how to behave, so we do not get robot crime committed by “grown-up” robots that should know better.

A few general laws of robotics were established more than 70 years ago but have never been more relevant than today. However, the creators still need to teach their robots all the fundamentals before they can expect their machines to understand robot-oriented legislation.


Debbie, the Dashboard Robot

The robots are coming! In fact, they have been here for a long time, but this time more advanced and intelligent than ever. In the near future, robots are expected to take over jobs from unskilled workers as well as people holding academic degrees. As a business intelligence consultant with a great interest in software robots, I was wondering how far away we are from having robots interacting with reports and dashboards instead of human users. I decided to create a dashboard and build a software robot for using it. My plan was then to stress the robot and see how many disturbances it could handle before it got confused.

I designed a typical interactive sales dashboard for financial reporting, where graphs and visualizations are used as filter functions to drill down into the data.

Then it was time to decide which tasks I wanted my robot to perform. I programmed it to select the consumer segment using the buttons in the upper right corner. I also wanted it to drill down and look at data within the furniture category, which is done by clicking the blue part of the donut chart. Finally, I decided that my robot should read the number in the middle of the speedometer and insert it into a spreadsheet. I can't deny that I felt a little like Victor Frankenstein when I saw Debbie, the Dashboard Robot, opening my browser and navigating my newly created dashboard for the first time.

At this point, I guess you may be wondering why on earth I insist on calling her Debbie. Research has shown that people tend to form a closer relationship with robots when they are given a name, thereby coming a small step closer to an actual personality. That makes people more forgiving when the robot fails. On the other hand, it also makes people show sympathy and compassion, sometimes so much that they want to protect the poor thing from any harm or damage that may occur. So bear with me while I explain how I tried to confuse Debbie to test how robust she really is.

Obviously, dashboards change over time as the data behind them changes. To replicate that, I started changing the number of transactions in the furniture category, increasing the part of the donut chart Debbie uses to filter the dashboard. I also changed the data values so the average price got three digits instead of just two, even when the data was filtered. She barely noticed the changes but used the dashboard as if nothing had changed. To give her another challenge, I reduced the size of the buttons for selecting customer segment as well as the data color on the donut chart. Even that did not affect the navigation and filtering. As the evil robot developer I apparently am, I of course needed to force Debbie to give up. It was not as easy as I expected. In fact, I had to change the color of the average price to a washed-out yellow that was difficult to distinguish from the background before she returned an error.
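As a toy illustration of why the yellow number finally broke Debbie, here is a sketch of a robot that can only "locate" an on-screen element while its color stands out from the background. The RGB values and the threshold are invented for the example; Debbie's actual implementation is not shown here:

```python
def color_distance(a, b):
    """Euclidean distance between two RGB colors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def locate_by_color(target, background, threshold=60):
    """Pretend to locate an on-screen element by its color; fail the way a
    screen-reading robot does when the contrast drops too low."""
    if color_distance(target, background) < threshold:
        raise LookupError("element color too close to background")
    return "element found"

white_bg = (255, 255, 255)
print(locate_by_color((0, 0, 255), white_bg))       # a blue number: easy to find
try:
    locate_by_color((250, 250, 200), white_bg)      # washed-out yellow on white
except LookupError as err:
    print("robot error:", err)                      # → the robot gives up
```

The sketch matches the experiment's result: layout and size changes are survivable, but once the visual signal the robot keys on disappears, it has nothing left to work with.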

All in all, Debbie has the potential to become a quite helpful assistant in companies' daily work through her ability to interact with interfaces designed for human users. Indeed, it may seem stupid to make a computer imitate a human workflow instead of just loading the data directly from the dataset. However, that is unfortunately not always possible. Imagine you need to collect all your competitors' prices on specific products; it is hard to imagine them sending you their updated price lists every month. Then Debbie comes to the rescue. But is that a stupid way of reaching the information, or will we in the near future be creating IT systems with one interface for users and one for robots? Feel free to leave a comment on this.

Play the video to see Debbie in action.


Training Users to Avoid Hackers

Hackers have established that the days when IT security was all about encryption, firewalls, and antivirus software are passé. The rules of the competition between hackers and IT security specialists have changed over the last years, because hackers use social engineering to attack users rather than just searching for weaknesses in companies' IT infrastructure.

It is surely a challenge for the people working in IT security, because they have to think in a completely new way to solve the problem. More and more companies are starting to simulate phishing attacks, sending out fake phishing e-mails to employees and monitoring how many click the link. After finding out that a frightening percentage of even smart people will click the link, the companies start to educate their employees in the good old-fashioned way, either through video tutorials or courses, to which the employees pay no attention at all. And even if they do, they have forgotten all about it the second they click the link.

That led me to develop a social engineering e-learning environment, which became part of my master's thesis. Here I expose users to the dirtiest social engineering tricks that were available online, but in a secure environment, so nothing happens when they make a mistake, except for the system clearly informing them of what would have happened in a real situation. The users need to answer correctly before moving on to the next task.
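The flow described above can be sketched as a small task checker: a wrong choice explains what would have happened in a real attack before letting the user retry. The scenarios and wording below are invented examples, not the ones from the actual thesis environment:

```python
# Invented example tasks; a real environment would have many more and
# draw on tricks observed in real attacks.
TASKS = [
    {
        "scenario": "A mail from 'IT suport' asks you to confirm your password.",
        "safe_action": "report",
        "consequence": "Your password would now belong to the attacker.",
    },
    {
        "scenario": "A USB stick labeled 'salaries' lies in the parking lot.",
        "safe_action": "hand it to IT",
        "consequence": "Plugging it in could have installed malware.",
    },
]

def attempt(task, action):
    """Return (passed, feedback); the user may only advance when passed,
    and a failure explains the real-world consequence before the retry."""
    if action == task["safe_action"]:
        return True, "Correct - on to the next task."
    return False, f"In a real attack: {task['consequence']} Try again."

passed, feedback = attempt(TASKS[0], "reply with password")
print(passed, feedback)   # failing is safe here, unlike in the real trap
```

The gamification lives in that feedback loop: the mistake costs nothing, but the consequence is spelled out at exactly the moment the user would have been caught.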

The purpose of the e-learning environment is to let users get some practice in how to avoid social engineering, with a bit of gamification. That way, they will hopefully not be tricked by the hackers when facing the real traps.

Try it for yourself and see if you can complete the e-learning environment without failing at https://rosendal-hansen.dk/social_engineering/

Instant UX e-book

Have you ever thought about how it is always the same things that annoy you when browsing websites, apps, and other IT systems?

My four years working as a usability specialist have shown me that developers keep making the same user experience mistakes again and again. I started collecting some of the most frequent usability failures and came up with solutions for how they can be corrected or even avoided. I later published the collection as the e-book Instant UX. There already existed tons of books about UX, but they were all about researching UX and the right methods of collecting qualitative data, which I knew most developers do not have time for.

Instant UX is therefore written as a practical handbook for user experience, describing common usability problems and supplying tangible solutions for the weary developer, programmer, or anybody interested in the noble art of creating user-friendly software.

It can be bought on Amazon or Saxo.com, but since you have found your way to my personal blog, and furthermore read this post to the very end, you deserve to download the book free of charge using the link below.

InstantUX_e-book