{ "version": "https://jsonfeed.org/version/1", "title": "Henri Bergius", "description": "Hacker and an occasional adventurer. Author of Create.js and NoFlo, founder of Flowhub UG. Decoupling software, one piece at a time. This blog tells the story of that.", "home_page_url": "https://bergie.iki.fi", "icon": "https://bergie.iki.fi/style/img/mdpi/bergie_wetplate.jpg", "feed_url": "https://bergie.iki.fi/blog/feed.json", "expired": false, "items": [ { "id": "https://bergie.iki.fi/blog/fbp-ai-human-collaboration/", "url": "https://bergie.iki.fi/blog/fbp-ai-human-collaboration/", "title": "Flow-Based Programming, a way for AI and humans to develop together", "content_html": "
I think by now everybody reading this will have seen how the new generation of Large Language Models like ChatGPT are able to produce somewhat useful code. Like any advance in software development—from IDEs to high-level languages—this has generated some discussion on the future employment prospects in our field.
\n\nThis made me think about how these new tools could fit the world of Flow-Based Programming, a software development technique I’ve been involved with for quite a while. In Flow-Based Programming there is a very strict boundary between reusable “library code” (called Components) and the “application logic” (called the Graph).
\n\nHere’s what the late J. Paul Morrison wrote on the subject in his seminal work, Flow-Based Programming: A New Approach to Application Development (2010):
\n\n\n\n\nJust as in the preparation and consumption of food there are the two roles of cook and diner, in FBP application development there are two distinct roles: the component builder and the component user or application designer.
\n
\n\n\n…The application designer builds applications using already existing components, or where satisfactory ones do not exist s/he will specify a new component, and then see about getting it built.
\n
Remembering that passage made me wonder, could I get one of the LLMs to produce useful NoFlo components? Armed with New Bing, I set out to explore.
\n\n\n\nThe first attempt was specifying a pretty simple component:
\n\n\n\nThat actually looks quite reasonable! I also tried asking New Bing to make the component less verbose, as well as generating TypeScript and CoffeeScript variants of the same. All seemed to produce workable things! Sure, there might be some tidying to do, but this could remove a lot of the tedium of component creation.
\n\nIn addition to this trivial math component I was able to generate some that call external REST APIs and the like. Bing was even able to switch between HTTP libraries as requested.
\n\nWhat was even cooler was that it actually suggested asking it how to test the component. Doing as I was told, the result was quite astonishing:
\n\n\n\nThat’s fbp-spec! The declarative testing tool we came up with! Definitely the nicest way to test NoFlo (or any other FBP framework) components.
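\n\nFor context, an fbp-spec suite is declarative YAML along these lines. This is a sketch from memory, so the exact field names should be checked against the fbp-spec documentation:

```yaml
topic: math/Multiply
name: Multiply component
cases:
- name: multiplying two numbers
  assertion: should send their product
  inputs:
    multiplicand: 6
    multiplier: 7
  expect:
    product:
      equals: 42
```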
\n\nBased on my results, you’ll definitely want to check the generated components and tests before running them. But what you get out is not bad at all.
\n\nI of course also tried to get Bing to produce NoFlo graphs for me. This is where it stumbled quite a bit. Interestingly, the results were better in the fbp language than in the JSON graph format. But maybe that only reinforces the idea that the sweet spot is AI writing the components, with a human creating the graphs that run them.
\n\n\n\nAs I’m not working at the moment, I don’t have a current use case for this way of collaborating. But I believe this could be a huge productivity booster for any (and especially Flow-Based) application development, and expect to try it in whatever my next gig ends up being.
\n\nIllustrations: MidJourney, from prompt Robot software developer working with a software architect. Floating flowcharts in the background
\n", "date_published": "2023-03-20T00:00:00+00:00", "image": "https://d2vqpl3tx84ay5.cloudfront.net/28a14660-c698-11ed-8b42-09bd596b6d87Robot%20software.png", "author": { "name": "Henri Bergius", "url": "https://bergie.iki.fi/about" } }, { "id": "https://bergie.iki.fi/blog/electronic-logbook/", "url": "https://bergie.iki.fi/blog/electronic-logbook/", "title": "Keeping a semi-automatic electronic ship's logbook", "content_html": "Maintaining a proper ship’s logbook is something that most boats should do, for practical, as well as legal and traditional reasons. The logbook can serve as a record of proper maintenance and operation of the vessel, which is potentially useful when selling the boat or handling an insurance claim. It can be a fun record of journeys made to look back to. And it can be a crucial aid for getting home if the ship’s electronics or GNSS get disrupted.
\n\nLike probably most operators of a small boat, on Lille Ø our logbook practices have been quite varied. We’ve been good at recording engine maintenance, as well as keeping the traditional navigation log while offshore. But in the more hectic pace of coastal cruising or daysailing this has often fallen by the wayside. As a result, a lot of the events and history of the boat have gone unrecorded.
\n\nTo remedy this I’ve developed signalk-logbook, a semi-automatic electronic logbook for vessels running the Signal K marine data server.
\n\nThis allows logbook entries to be produced both manually and automatically. They can be viewed and edited using any web-capable device on board, meaning that you can write a log entry on your phone, and later analyse and print the entries on your laptop.
\n\nSignal K is a marine data server that has integrations with almost any relevant marine electronics system. If you have an older NMEA0183 or Seatalk system, Signal K can communicate with it. Same with NMEA2000. If you already have your navigational data on the boat WiFi, Signal K can use and enrich it.
\n\nThis means that by making the logbook a Signal K plugin, I didn’t have to do any work to make it work with existing boat systems. Signal K even provides a user interface framework.
\n\nThis means that to make the electronic logbook happen, I only had to produce some plugin JavaScript, and then build a user interface. As I don’t do front-end development that frequently, this gave me a chance to dive into modern React with hooks for the first time. What better to do after being laid off?
\n\nSignal K also has very good integration with Influx and Grafana. These can record vessel telemetry in a high resolution. So why bother with a logbook on the side? In my view, a separate logbook is still valuable for storing the comments and observations not available in a marine sensor network. It can also be a lot more durable and archivable than a time series database. On Lille Ø we run both.
\n\nThe signalk-logbook comes with a reasonably simple web-based user interface that is integrated in the Signal K administration UI. You can find it under Web apps → Logbook.
The primary view is a timeline: a sort of “Twitter for your boat” view that allows quick browsing of entries on both desktop and mobile.
\n\n\n\nThere is also the more traditional tabular view, best utilized on bigger screens:
\n\n\n\nWhile the system can produce a lot of the entries automatically, it is also easy to create manual entries:
\n\n\n\nThese entries can also include weather observations. Those using celestial navigation can also record manual fixes with these entries! Entries can be categorized to separate things like navigational entries from radio or maintenance logs.
\n\nIf you have the sailsconfiguration plugin installed, you can also log sail changes in a machine-readable format:
\n\n\n\nSince the log format is machine readable, the map view allows browsing entries spatially:
\n\n\n\nThe big benefits of an electronic logbook are automation and availability. The logbook can create entries by itself based on what’s happening with the vessel telemetry. You can read and create log entries anywhere on the boat, using the electronic devices you carry with you. Off-vessel backups are also both possible, and quite easy, assuming that the vessel has a reasonably constant Internet connection.
\n\nWith paper logbooks, the main benefit is that they’re fully independent of the vessel’s electronic systems. In case of power failure, you can still see the last recorded position, heading, etc. They are also a lot more durable in the sense that paper logbooks from centuries ago are still fully readable. Though obviously that carries a strong survivorship bias: I would guess the vast majority of logbooks, especially on smaller non-commercial vessels, don’t survive more than a couple of years.
\n\nSo, how to benefit from the positive aspects of electronic logbooks, while reducing the negatives when compared to paper? Here are some ideas:
\n\nIn addition to providing a web-based user interface, signalk-logbook provides a REST API. This allows software developers to create new integrations with the logbook. For example, these could include:
\n\nTo utilize this electronic logbook, you need a working installation of Signal K on your boat. The common way to do this is by having a Raspberry Pi powered by the boat’s electrical system and connected to the various on-board instruments.
\n\nThere are some nice solutions for this:
\n\nYou can of course also do a more custom setup, like we did on our old boat, Curiosity.
\n\nFor the actual software setup, marinepi-provisioning gives a nice Ansible playbook for getting everything going. Bareboat Necessities is a “Marine OS for Raspberry Pi” that comes with everything included.
\n\nIf you have a Victron GX device (for example Cerbo GX), you can also install Signal K on that.
\n\nOnce Signal K is running, just look up signalk-logbook in the Signal K app store. You’ll also want to install the signalk-autostate and sailsconfiguration plugins to enable some of the automations. Then just restart Signal K, log in, and start logging!
\n", "date_published": "2023-03-06T00:00:00+00:00", "image": "https://d2vqpl3tx84ay5.cloudfront.net/logbook-logbook.png", "author": { "name": "Henri Bergius", "url": "https://bergie.iki.fi/about" } }, { "id": "https://bergie.iki.fi/blog/baltic-shakedown-cruise/", "url": "https://bergie.iki.fi/blog/baltic-shakedown-cruise/", "title": "Shakedown cruise on the Baltic Sea", "content_html": "Just in time for a new cruising season to start, the story of our 2021 Baltic shakedown cruise is now online.
\n\n\n\n
This was a 666NM trip that we did on our new-to-us Amigo 40 cruising boat in August-September 2021. Apart from engine trouble in the beginning, this was a very enjoyable little adventure on the coasts of Sweden and Bornholm.
\n\n\n\nThe trip even earned us the first prize in the cruising log contest of our sailing club:
\n\n\n\n\n", "date_published": "2022-04-07T00:00:00+00:00", "image": "https://d2vqpl3tx84ay5.cloudfront.net/20210903_140900.jpg", "author": { "name": "Henri Bergius", "url": "https://bergie.iki.fi/about" } }, { "id": "https://bergie.iki.fi/blog/signalk-boat-iot/", "url": "https://bergie.iki.fi/blog/signalk-boat-iot/", "title": "Cruising sailboat electronics setup with Signal K", "content_html": "I haven’t mentioned this on the blog earlier, but at the end of 2018 we bought a small cruising sailboat. After some looking, we went with a Van de Stadt designed Oceaan 25, a Dutch pocket cruiser from the early 1980s. S/Y Curiosity is an affordable and comfortable boat for cruising with 2-4 people, but one that also needed major maintenance work.
\n\n\n\nThe refit has so far included osmosis repair, some fixes to the standing rigging, engine maintenance, and many structural improvements. But this post will focus on the electronics and navigation aspects of the project.
\n\nWhen we got it, the boat’s electrics setup was quite barebones. There was a small lead-acid battery, charged only when running the outboard. Light control was pretty much all-or-nothing: either the interior and navigation lights were all on, or everything was off. Everything was wired with 80s-spec components, using energy-inefficient lightbulbs.
\n\nLooking at the state of the setup, it was also unclear when the electrics had last been used for anything other than starting the engine.
\n\nBefore going further with the electronics setup, all of this would have to be rebuilt. We made a plan, and scheduled two weekends in summer 2019 for rewiring and upgrading the electricity setup of the boat.
\n\nThe first step was to test all existing wiring with a multimeter, and to label and document all of it. Surprisingly, there were only a couple of bad connections from the main distribution panel to consumers, so for the most part we decided to reuse that wiring, just with a modern terminal block setup.
\n\n\n\nFor the most part we used a Dymo label printer, with the labels covered with transparent heat shrink.
\n\nWe replaced the old main control panel with a modern one with the capability to power different parts of the boat separately, and added some 12V and USB sockets next to it.
\n\n\n\nAll internal lighting was replaced with energy-efficient LEDs, and we added the option of using red lights all through the cabin for preserving night vision. A car charger was added to the system for easier battery charging while in harbour.
\n\nWith this, we had a workable lighting and power setup for overnight sailing. But next obvious step will be to increase the range of our boat.
\n\nFor that, we’re adding a solar panel. We already have most parts for the setup, but are still waiting for the customized NOA mounting hardware to arrive. And of course the current COVID-19 curfews need to lift before we can install it.
\n\nUntil we have actual data from our Victron MPPT charge controller, I’ve run some simulations using NASA’s insolation data for Berlin on how much the panel ought to increase our cruising range.
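\n\nThe shape of such a simulation is simple arithmetic: panel wattage times insolation hours times system losses, set against the daily load. The sketch below shows the idea with made-up placeholder figures; the panel size, loads, and efficiency are illustrative, not our actual numbers:

```javascript
// Back-of-the-envelope daily solar yield estimate (all figures illustrative)
function dailyYieldWh(panelWatts, sunHours, systemEfficiency = 0.75) {
  return panelWatts * sunHours * systemEfficiency;
}

// How long a battery bank lasts when solar covers part of the daily load
function daysOfAutonomy(batteryWh, dailyLoadWh, dailySolarWh) {
  const netDraw = dailyLoadWh - dailySolarWh;
  if (netDraw <= 0) return Infinity; // solar covers the full load
  return batteryWh / netDraw;
}
```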
\n\n\n\nThe basis for boat navigation is still the combination of a clock, a compass, and a paper chart (as well as a sextant on the open ocean). However, most modern cruising boats utilize some electrical tools to aid the process of running the boat. These typically come in the form of a chartplotter and a set of sensors to get things like GPS position, speed, and the water depth.
\n\nCommercial marine navigation equipment is a bit like computer networking in the 90s - everything is expensive, and you pretty much have to buy the whole kit from a single vendor to make it work. Standards like NMEA 0183 exist, but “embrace and extend” is typical vendor behaviour.
\n\nBeing open source hackerspace people, that was obviously not the way we wanted to do things. Instead of getting locked into an expensive proprietary single-vendor marine instrumentation setup, we decided to roll our own using off-the-shelf IoT components. To serve as the heart of the system, we picked Signal K.
\n\nSignal K is first of all a specification on how marine instruments can exchange data. It also has an open source implementation in Node.js. This allows piping in data from all of the relevant marine data buses, as well as setting up custom data providers. Signal K then harmonizes the data, and makes it available both via modern web APIs, and in traditional NMEA formats. This enables instruments like chartplotters also to utilize the Signal K enriched data.
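\n\nThe data exchange is based on “delta” messages listing path/value pairs, as streamed from the server’s /signalk/v1/stream WebSocket endpoint. As a small sketch of my own (the delta shape follows the Signal K specification), here is a helper that flattens a delta into a lookup map:

```javascript
// Flatten a Signal K delta message into a simple { path: value } map
function deltaToMap(delta) {
  const out = {};
  for (const update of delta.updates || []) {
    for (const { path, value } of update.values || []) {
      out[path] = value;
    }
  }
  return out;
}

// Example delta, shaped like what the server streams over WebSocket
const delta = {
  context: 'vessels.self',
  updates: [{
    timestamp: '2020-03-27T12:00:00Z',
    values: [
      { path: 'navigation.speedOverGround', value: 3.2 },
      { path: 'environment.depth.belowTransducer', value: 7.5 },
    ],
  }],
};
```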
\n\nWe’re running Signal K on a Raspberry Pi 3B+ powered by the boat battery. With a GPS dongle, this was already enough to give some basic navigation capabilities like charts and anchor watch. We also added a WiFi hotspot with a LTE uplink to the boat.
\n\n\n\nTo make the system robust, installation is automated via Ansible, and easy to reproduce. Our boat GitHub repo also has the needed functionality to run a clone of our boat’s setup on our laptops via Docker, which is great when developing new features.
\n\nSignal K has a very active developer community, which has been great for figuring out how to extend the capabilities of our system.
\n\nWe’re using regular tablets for navigation. The main chartplotter is a cheap old waterproof Samsung Galaxy Tab Active 8.0 tablet that can show both the Freeboard web-based chartplotter with OpenSeaMap charts, and run the Navionics Boating app to display commercial charts. Navionics is also able to receive some Signal K data over the boat WiFi to show things like AIS targets, and to utilize the boat GPS.
\n\n\n\nAs a backup we have our personal smartphones and tablets.
\n\n\n\nInside the cabin we also have an e-ink screen showing the primary statistics relevant to the current boat state.
\n\n\n\nMonitoring air pressure changes is important for dealing with the weather. For this, we added a cheap barometer-temperature-humidity sensor module wired to the Raspberry Pi, driven with the Signal K BME280 plugin. With this we were able to get all of this information from our cabin into Signal K.
\n\nHowever, there was more environmental information we wanted to get. For instance, the outdoor temperature, the humidity in our foul weather gear locker, and the temperature of our icebox. For these we found the Ruuvi tags produced by a Finnish startup. These are small weatherproofed Bluetooth environmental sensors that can run for years with a coin cell battery.
\n\n\n\nWith Ruuvi tags and the Signal K Ruuvi tag plugin we were able to bring a rich set of environmental data from all around the boat into our dashboards.
\n\nLike every cruising boat, we spend quite a lot of nights at anchor. One important safety measure with a shorthanded crew is to run an automated anchor watch. This monitors the boat’s distance to the anchor, and raises an alarm if we start dragging.
\n\nFor this one, we’re using the Signal K anchor alarm plugin. We added a Bluetooth speaker to get these alarms in an audible way.
\n\nTo make starting and stopping the anchor watch easier, I utilized a simple Bluetooth remote camera shutter button together with some scripts. This way the person dropping the anchor can also start the anchor watch immediately from the bow.
\n\n\n\nThe Automatic Identification System (AIS) is a radio protocol used by most bigger vessels to tell others about their course and position. It can be used for collision avoidance. Having an active transponder on a small boat like Curiosity is a bit expensive, but we decided we’d at least want to see commercial traffic in our chartplotter in order to navigate safely.
\n\nFor this we bought an RTL-SDR USB stick that can tune into the AIS frequency, and with the rtl_ais software, receive and forward all AIS data into Signal K.
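\n\nFor the curious, AIS payloads inside !AIVDM sentences use a 6-bit ASCII armoring that is easy to start decoding by hand. The sketch below extracts just the message type from a payload; full AIS decoding involves much more, and rtl_ais plus Signal K handle all of that for us:

```javascript
// Decode one 6-bit armored AIS payload character:
// subtract 48, and subtract a further 8 if the result is above 40
function sixBit(ch) {
  let v = ch.charCodeAt(0) - 48;
  if (v > 40) v -= 8;
  return v;
}

// The first 6 payload bits (i.e. the first character) are the message type
function messageType(payload) {
  return sixBit(payload[0]);
}

// A payload starting with '1' is a Class A position report (type 1)
```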
\n\n\n\nThis setup is still quite new, so we haven’t been able to test it live yet. But it should allow us to see all nearby bigger ships in our chartplotter in realtime, assuming that we have a good-enough antenna.
\n\nAll together this is quite a lot of hardware. To house all of it, we built a custom backing plate with 3D-printed brackets to hold the various components. The whole setup is called Voronoi-1 onboard computer. This is a setup that should be easy to duplicate on any small sailing vessel.
\n\n\n\nThe total cost so far for the full boat navigation setup has been around 600€, which is less than just a commercial chartplotter would cost. And the system we have is both easy to extend, and to fix even on the go. And we get a set of capabilities that would normally require a whole suite of proprietary parts to put together.
\n\nWe of course have plenty of ideas on what to do next to improve the navigation setup. Here are some projects we’ll likely tackle over the coming year:
\n\nIf you have ideas for suitable components or projects, please get in touch!
\n\nHuge thanks to both the Signal K and Hackerfleet communities and the Curiosity crew for making all this happen.
\n\nNow we just wait for the curfews to lift so that we can get back to sailing!
\n\n\n", "date_published": "2020-03-27T00:00:00+00:00", "image": "https://d2vqpl3tx84ay5.cloudfront.net/curiosity-voronoi.jpg", "author": { "name": "Henri Bergius", "url": "https://bergie.iki.fi/about" } }, { "id": "https://bergie.iki.fi/blog/cbase-35c3-flowhub/", "url": "https://bergie.iki.fi/blog/cbase-35c3-flowhub/", "title": "Building c-base @ 35C3 with Flowhub", "content_html": "The 35th Chaos Communication Congress is now over, and it is time to write about how we built the software side of the c-base assembly there.
\n\nThe Chaos Communication Congress is a major fixture of the European security and free software scene, with thousands of attendees. As always, the “mother of all hackerspaces” had a big presence there, with a custom booth that we spent nearly two weeks constructing.
\n\n\n\nThis year’s theme was “Refreshing Memories”, and accordingly we brought various elements of the history of the c-base space station to the event. On the hardware side we had things like a scale model of the c-base antenna, as well as vintage arcade machines and various artifacts from over the years.
\n\nWith software, we utilized the existing IoT infrastructure at c-base to control lights, sound, and drive videos and other information to a set of information displays. All of course powered by Flowhub.
\n\nThis was a quite full-stack development effort, involving microcontroller firmware programming, server-side NoFlo and MsgFlo development, and front-end infoscreen web design. We also did quite a bit of devopsing with Travis CI, Docker, and docker-compose.
\n\nThe first step in bringing c-base’s IoT setup to the Congress was to prepare a “portable” version of the environment: an MQTT broker, MsgFlo, some components, and a graph with any on-premise c-base hardware or service dependencies removed. As this was for a CCC event, we decided to call it c3-flo (in comparison to the c-flo that we run at c-base).
\n\nWe already have a quite nice setup where our various systems get built and tested on Travis, and uploaded to Docker Hub’s cbase namespace. Some repositories weren’t yet integrated, and so the first step was to Dockerize them.
\n\nTo make the local setup simple to manage, we decided to go with a single docker-compose environment that would start all systems needed. This would be easy to run on any x86 machine, and provide us with a quite comprehensive set of features from the IoT parts to NASA’s Open MCT dashboard.
\n\nOf course we kept adding to the system throughout 35C3, but in the end the graph looked like the following:
\n\n\n\nTo make our setup more portable, we decided to bring a local instance of the “c-base botnet” WiFi network to the Congress. This way all of our IoT devices could work at 35C3 with the exact same firmware and networking setup as they do at c-base.
\n\nNormally Congress doesn’t recommend running your own access point. But there are guidelines available on how to do it properly when needed. As it happens, out of this year’s 558 unofficial access points, the c-base one was the only one conforming to the guidelines (commentary around the 25 minute mark).
\n\n\n\nLike any station, c-base has a set of info screens showing various announcements, timelines, and statistics. These are built with Raspberry Pi 3s running Chrome in Kiosk Mode, with a single-page webapp that connects to our MsgFlo infrastructure over WebSockets with msgflo-browser.
\n\nEach screen has a customized rotation of different pages to show, and we can send URLs to announce events like members arriving to c-base or a space launch livestream via MQTT.
\n\n\n\nFor 35C3 we built a new set of pages tailed for the Congress experience:
\n\nThe highlight of the whole assembly was a re-enactment of the c-base crash from billions of years ago. Triggered by a dropped bottle of space soda, this was an experience incorporating video, lights, and audio that we ran several times on each day of the conference.
\n\n\n\nThe c-base crash animation was managed by a NoFlo graph integrated into our MsgFlo setup with the standard noflo-runtime-msgflo tool. With this we could trigger the “crash” with an MQTT message (sent by a physical button), and run a timed sequence of actions on lights, a sound system, and our info screens.
\n\n\n\nThere were some new components that we had to build for this purpose. The most important was a Timeline component that was upstreamed as part of the noflo-tween animation library.
\n\n\n\nWith this you can define a multi-tracked timeline as JSON or YAML, with actions triggered on each track on their appropriate second. With MsgFlo this meant we could send timed commands to different devices and create a coordinated experience.
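\n\nAs an illustration, such a multi-track timeline could be declared and queried like this. The track and action field names below are my own sketch, not necessarily the exact noflo-tween schema:

```javascript
// A timeline with one track per device group; `at` is seconds from start
const timeline = {
  tracks: [
    { name: 'screens', actions: [{ at: 0, command: 'show-video' }, { at: 34, command: 'countdown' }] },
    { name: 'lights', actions: [{ at: 12, command: 'red-alert' }, { at: 60, command: 'blackout' }] },
  ],
};

// Collect every action that fires in the half-open interval [from, to)
function actionsBetween(tl, from, to) {
  const due = [];
  for (const track of tl.tracks) {
    for (const action of track.actions) {
      if (action.at >= from && action.at < to) {
        due.push({ track: track.name, ...action });
      }
    }
  }
  return due.sort((a, b) => a.at - b.at);
}
```

A scheduler ticking through the timeline would call actionsBetween for each elapsed interval and emit the due commands as MsgFlo packets.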
\n\nFor example, our animation started by showing a short video on all info screens. When the bottle fell in the video, we triggered the appropriate soundtrack, and switched the lights through various animation modes. After the video ended, we switched to a “countdown to crash” screen, and turned all lights to a red alert mode.
\n\nAfter the crash happened, everything went dark for a few seconds, before the c-base assembly was returned into its normal state.
\n\nAll LED strips we used at 35C3 were run using the McLighting firmware. By default it allows switching between different light modes with a simple WebSocket API.
\n\nFor our requirements, we wanted the capability to send new commands to the lights with minimal latency, and to restore the lights at the end to whatever mode they had before the crash started.
\n\n\n\nThe component is available in noflo-mclighting. The only thing you need is running the NoFlo graph in the same network as the LED strips, and to send the WebSocket addresses of your LED strips to the component. After that you can control them with normal NoFlo packets.
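\n\nAs a small example of the kind of packets involved: McLighting accepts short text commands over its WebSocket, such as a #rrggbb color command. The helper below builds one; treat the exact command strings as an assumption and verify them against the McLighting documentation:

```javascript
// Build a McLighting-style '#rrggbb' color command from RGB components
function colorCommand(r, g, b) {
  const hex = (n) => Math.max(0, Math.min(255, n)).toString(16).padStart(2, '0');
  return `#${hex(r)}${hex(g)}${hex(b)}`;
}

// Sending it would look like this (needs a WebSocket client, e.g. the 'ws' package):
// const ws = new WebSocket('ws://led-strip.local:81');
// ws.on('open', () => ws.send(colorCommand(255, 0, 0))); // red alert
```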
\n\nThe whole setup took a couple of days to get right, especially regarding timings and tweaking the light modes. But, it was great! You can see a video of it below:
\n\n\n\nAnd if you’re interested in experimenting with this stuff, check out the “portable c-base IoT setup” at https://github.com/c-base/c3-flo.
\n", "date_published": "2019-01-05T00:00:00+00:00", "image": "https://d2vqpl3tx84ay5.cloudfront.net/c-base-assembly-35c3.JPG", "author": { "name": "Henri Bergius", "url": "https://bergie.iki.fi/about" } }, { "id": "https://bergie.iki.fi/blog/docker-developer-shell/", "url": "https://bergie.iki.fi/blog/docker-developer-shell/", "title": "Managing a developer shell with Docker", "content_html": "When I’m not in Flowhub-land, I’m used to developing software in a quite customized command line based development environment. Like for many, the cornerstones of this for me are vim and tmux.
\n\nAs customization increases, it becomes important to have a way to manage that and distribute it across the different computers. For years, I’ve used a dotfiles repository on GitHub together with GNU Stow for this.
\n\nHowever, this still means I have to install all the software and tools before I can have my environment up and running.
\n\nDocker is a tool for building and running software in a containerized fashion. Recently Tiago gave me the inspiration to use Docker not only for distributing production software, but also for actually running my development environment.
\n\nTaking ideas from his setup, I extended my existing dotfiles into a reusable developer shell container.
\n\nWith this, I only need Docker installed on a machine, and then I’m two commands away from having my normal development environment:
\n\n$ docker volume create workstation\n$ docker run -v ~/Projects:/projects -v workstation:/root -v ~/.ssh:/keys --name workstation --rm -it bergie/shell\n
Here’s how it looks in action:
\n\n\n\nOnce I update my Docker setup (for example to install or upgrade some tool), I can get the latest version on a machine with:
\n\n$ docker pull bergie/shell\n
At least in theory this should give me a fully identical working environment regardless of the host machine. Linux VPS, a MacBook, or a Windows machine should all be able to run this. And soon, this should also work out of the box on Chromebooks.
\n\nThe basics are pretty simple. I already had a repository for my dotfiles, so I only needed to write a Dockerfile to install and set up all my software.
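\n\nThe Dockerfile boils down to installing the tools and activating the dotfiles. Something along these lines; the package list is an example, not the exact contents of the bergie/shell image:

```dockerfile
# Example developer-shell image; package selection is illustrative
FROM debian:stable-slim

RUN apt-get update \
 && apt-get install -y git tmux vim stow openssh-client \
 && rm -rf /var/lib/apt/lists/*

# Bring in the dotfiles and activate them with GNU Stow
COPY . /root/dotfiles
WORKDIR /root/dotfiles
RUN stow vim tmux

# Work happens under the mounted /projects volume
WORKDIR /projects
CMD ["tmux"]
```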
\n\nTo make things even easier, I configured Travis so that every time I push some change to the dotfiles repository, it will create and publish a new container image.
\n\nSo far this setup seems to work pretty well. However, here are some ideas for further improvements:
\n\nIf you have ideas on how to best implement the above, please get in touch.
\n", "date_published": "2018-04-19T00:00:00+00:00", "image": "https://d2vqpl3tx84ay5.cloudfront.net/vim-developer-shell-docker.png", "author": { "name": "Henri Bergius", "url": "https://bergie.iki.fi/about" } }, { "id": "https://bergie.iki.fi/blog/microflo-particulate-sensors/", "url": "https://bergie.iki.fi/blog/microflo-particulate-sensors/", "title": "MicroFlo and IoT: measuring air quality", "content_html": "Fine particulate matter is a serious issue in many cities around the world. In Europe, it is estimated to cause 400,000 premature deaths per year. The European Union has published standards on the matter, and has warned several countries that have not been able to stay within the safe limits.
\n\n\n\n\nGermany saw the highest number of deaths attributable to all air pollution sources, at 80,767. It was followed by the United Kingdom (64,351) and France (63,798). These are also the most populated countries in Europe. (source: DW)
\n
The associated health issues don’t come cheap: 20 billion euros per year on health costs alone.
\n\n\n\n\n“To reduce this figure we need member states to comply with the emissions limits which they have agreed to,” Schinas said. “If this is not the case the Commission as guardian of the (founding EU) treaty will have to take appropriate action,” he added. (source: phys.org)
\n
One part of solving this issue is better data. Government-run measurement stations are quite sparse, and — in some countries — their published results can be unreliable. To solve this, Open Knowledge Foundation Germany started the luftdaten.info project to crowdsource air pollution data around the world.
\n\n\n\nLast saturday we hosted a luftdaten.info workshop at c-base, and used the opportunity to build and deploy some particulate matter sensors. While luftdaten.info has a great build guide and we used their parts list, we decided to go with a custom firmware built with MicroFlo and integrated with the existing IoT network at c-base.
\n\n\n\nMicroFlo is a flow-based programming runtime targeting microcontrollers. Just like NoFlo graphs run inside a browser or Node.js, the MicroFlo graphs run on an Arduino or other compatible device. The result of a MicroFlo build is a firmware that can be flashed on a microcontroller, and which can be live-programmed using tools like Flowhub.
\n\nESP8266 is an Arduino-compatible microcontroller with integrated WiFi chip. This means any sensors or actuators on the device can easily connect to other systems, like we do with lots of different sensors already at c-base.
\n\n\n\nMicroFlo recently added a feature where Wifi-enabled MicroFlo devices can automatically connect with a MQTT message queue and expose their in/outports as queues there. This makes MicroFlo on an ESP8266 a fully-qualified MsgFlo participant.
\n\nWe wanted to build a firmware that would periodically read both the DHT22 temperature and humidity sensor, and the SDS011 fine particulate sensor, even out the readings with a running median, and then send the values out at a specified interval. MicroFlo’s core library already provided most of the building blocks, but we had to write custom components for dealing with the sensor hardware.
\n\nThankfully Arduino libraries existed for both sensors, and this was just a matter of wrapping those in the MicroFlo component interface.
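\n\nThe running-median smoothing itself is a small algorithm: keep a sliding window of the last N readings and report the median of the sorted window, so a single spiky reading can’t dominate. The real component is MicroFlo C++, but the idea can be sketched in JavaScript:

```javascript
// Returns a function that accepts readings and yields the running median
function runningMedian(windowSize) {
  const window = [];
  return (reading) => {
    window.push(reading);
    if (window.length > windowSize) window.shift(); // drop the oldest reading
    const sorted = [...window].sort((a, b) => a - b);
    const mid = Math.floor(sorted.length / 2);
    return sorted.length % 2
      ? sorted[mid]
      : (sorted[mid - 1] + sorted[mid]) / 2;
  };
}
```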
\n\nAfter the components were done, we could build the firmware as a Flowhub graph:
\n\n\n\nTo verify the build we enabled Travis CI where we build the firmware both against the MicroFlo Arduino and Linux targets. The Arduino one is there to verify that the build works with all the required libraries, and the Linux build we can use for test automation with fbp-spec.
\n\nTo flash the actual devices you need the Arduino IDE and Node.js. Then use MicroFlo to generate the .ino
file, and flash that to the device with the IDE. WiFi and MQTT settings can be tweaked in the secrets.h and config.h files.
The recommended weatherproofing solution for these sensors is quite straightforward: place the hardware in a piece of drainage pipe with the ends turned downwards.
\n\nSince we had two sensors, we decided to install one in the patio, and the other in the c-base main hall:
\n\n\n\nOnce the sensor devices had been flashed, they became available in our MsgFlo setup and could be connected with other systems:
\n\n\n\nIn our case, we wanted to do two things with the data:
\n\nThe first one was just a matter of adding a couple of configuration lines to our OpenMCT server. For the latter, I built a simple Python component.
\n\nOur sensors have been collecting data for a couple of days now. The public data can be seen in the madavi service:
\n\n\n\nWe’ve submitted our sensor for inclusion in the luftdaten.info database, and hopefully soon there will be another covered area in the Berlin air quality map:
\n\n\n\nIf you’d like to build your own air quality sensor, the instructions on luftdaten.info are pretty comprehensive. Get the parts from your local electronics store or AliExpress, connect them together, flash the firmware, and be part of the public effort to track and improve air quality!
\n\nOur MicroFlo firmware is a great alternative if you want to do further analysis of the data yourself, or simply want to get the data on MQTT.
\n", "date_published": "2018-02-26T00:00:00+00:00", "image": "https://d2vqpl3tx84ay5.cloudfront.net/luftdaten-sensor-berlin.png", "author": { "name": "Henri Bergius", "url": "https://bergie.iki.fi/about" } }, { "id": "https://bergie.iki.fi/blog/ascomponent/", "url": "https://bergie.iki.fi/blog/ascomponent/", "title": "asComponent: turn any JavaScript function into a NoFlo component", "content_html": "Version 1.1 of NoFlo shipped this week with a new convenient way to write components. With the noflo.asComponent
helper you can turn any JavaScript function into a well-behaved NoFlo component with minimal boilerplate.
Usage of noflo.asComponent
is quite simple:
const noflo = require('noflo');\nexports.getComponent = () => noflo.asComponent(Math.random);\n
In this case we have a function that doesn’t take arguments. We detect this, and produce a component with a single “bang” port for invoking the function:
\n\n\n\nYou can also amend the component with helpful information like a textual description and an icon:
\n\nconst noflo = require('noflo');\nexports.getComponent = () => noflo.asComponent(Math.random, {\n description: 'Generate a random number',\n icon: 'random',\n});\n
The example above was with a function that does not take any arguments. With functions that accept arguments, each of them becomes an input port.
\n\nconst noflo = require('noflo');\n\nfunction findItemsWithId(items, id) {\n return items.filter((item) => item.id === id);\n}\n\nexports.getComponent = () => noflo.asComponent(findItemsWithId);\n
The function will be called when both input ports have a packet available.
\n\nThe asComponent
helper handles three types of functions:
- Synchronous functions: the return value gets sent to out
. Thrown errors get sent to error
- Promise-returning functions: the resolved value gets sent to out
, rejected promises to error
- Node.js style callback functions: an err
argument passed to the callback gets sent to error
, the result gets sent to out
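The dispatch this performs can be illustrated in plain JavaScript. This is a simplified sketch of the idea, not NoFlo's actual implementation; in particular, detecting a callback by comparing the function's declared arity with the supplied arguments is an assumption made for this example:

```javascript
// Simplified sketch of the dispatch asComponent performs.
// onOut stands in for the "out" port, onError for the "error" port.
function callAsComponent(fn, args, onOut, onError) {
  // Assume a Node.js-style callback if the function declares
  // one more parameter than we have input values for.
  if (fn.length === args.length + 1) {
    fn(...args, (err, result) => {
      if (err) { onError(err); return; }
      onOut(result);
    });
    return;
  }
  let result;
  try {
    result = fn(...args); // synchronous call
  } catch (e) {
    onError(e); // thrown errors go to the error port
    return;
  }
  if (result && typeof result.then === 'function') {
    // Promise: resolved value to out, rejection to error
    result.then(onOut, onError);
    return;
  }
  onOut(result); // plain return value goes to out
}
```

In NoFlo itself the results flow to the component's out and error ports rather than plain callbacks, but the branching is the same.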
With this, it is quite easy to write wrappers for asynchronous operations. For example, to call an external REST API with the Fetch API:
\n\nconst noflo = require('noflo');\n\nfunction getFlowhubStats() {\n return fetch('https://api.flowhub.io/stats')\n .then((result) => result.json());\n}\n\nexports.getComponent = () => noflo.asComponent(getFlowhubStats);\n
Now that you have this component, it is quick to build a graph utilizing it (open in Flowhub):
\n\n\n\nHere we get the BODY element of the browser runtime. When that has been loaded, we trigger the fetch component above. If the request succeeds, we process it through a string template to write a quick report to the page. If it fails, we grab the error message and write that.
\n\nThe default location for a NoFlo component is components/ComponentName.js
inside your project folder. Add your new components to this folder, and NoFlo will be able to run them.
If you’re using Flowhub, you can also write the components in the integrated code editor, and they will be sent to the runtime.
\n\nWe’ve already updated the hosted NoFlo browser runtime to 1.1, so you can get started with this new component API right away.
\n\nIn many ways, asComponent is the inverse of the asCallback embedding feature we introduced a year ago: asComponent
turns a regular JavaScript function into a NoFlo component; asCallback
turns a NoFlo component (or graph) into a regular JavaScript function.
If you need to work with more complex firing patterns, like combining streams or having control ports, you can of course still write regular Process API components.
\n\nThe regular component API is quite a bit more verbose, but at the same time gives you full access to NoFlo APIs for dealing with manually controlled preconditions, state management, and creating generators.
\n\nHowever, thinking about the hundreds of NoFlo components out there, most of them could be written much more simply with asComponent
. This will hopefully make the process of developing NoFlo programs a lot more straightforward.
Read more in the NoFlo component documentation and the asComponent API docs.
\n", "date_published": "2018-02-23T00:00:00+00:00", "image": "https://d2vqpl3tx84ay5.cloudfront.net/ascomponent-fetch-graph.png", "author": { "name": "Henri Bergius", "url": "https://bergie.iki.fi/about" } }, { "id": "https://bergie.iki.fi/blog/big-iot/", "url": "https://bergie.iki.fi/blog/big-iot/", "title": "Publish your data on the BIG IoT marketplace", "content_html": "When building IoT systems, it is often useful to have access to data from the outside world to amend the information your sensors give you. For example, indoor temperature and energy usage measurements will be a lot more useful if there is information on the outside weather to correlate with.
\n\nThanks to the open data movement, there are many data sets available. However, many of these are hard to discover or available in obscure formats.
\n\nBIG IoT is an EU-funded research project to make datasets easier to share and discover between organizations. It defines a common semantic standard for how datasets are served, and provides a centralized marketplace for discovering and subscribing to data offerings.
\n\nAs an example, if you’re building a car navigation application, you can use BIG IoT to get access to multiple providers of routing services, traffic delay information, or parking spots. If a dataset comes online in a new city, it’ll automatically work with your application. No need for contract negotiations, just a query to find matching providers on-demand.
\n\nLast summer Flowhub was one of the companies accepted into the first BIG IoT open call. Through it, we received some funding to make it possible to publish data from Flowhub and NoFlo on the marketplace. In this video I’m talking about the project:
\n\n\n\nIn the project we built three things:
\n\nWhile it is easy enough to use the BIG IoT Java library to publish datasets, the Flowhub integration we built makes it even easier. You need your data source available on a message queue, a web API, or maybe a timeseries database. And then you need NoFlo and the flowhub-bigiot-bridge library.
\n\nThe basic building block is the Provider component. This creates a Node.js application server to serve your datasets, and registers them to the BIG IoT marketplace.
\n\n\n\nWhat you need to do is describe your data offering. For this, you can use the CreateOffering component. You can use IIPs to categorize the data, and then a set of CreateDatatype components to describe the input and output structure your offering uses.
\n\n\n\nFinally, the request
and response
ports of the Provider need to be hooked to your data source. The request outport will send packets with whatever input data your subscribers provided, and you need to send the resulting output data to the response port.
For real-world deployment, the Flowhub BIG IoT bridge repository also includes examples on how to test your offerings, and how to build and deploy them with Docker.
\n\nHere’s what a full setup with two different parking datasets looks like:
\n\n\n\nIf you’re participating in the Bosch Connected World hackathon in Berlin next week, we’ll be there with the BIG IoT team to help projects utilize the BIG IoT datasets.
\n\nThis project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No 688038.
\n", "date_published": "2018-02-12T00:00:00+00:00", "image": "https://d2vqpl3tx84ay5.cloudfront.net/noflo-bigiot-parking-provider.png", "author": { "name": "Henri Bergius", "url": "https://bergie.iki.fi/about" } }, { "id": "https://bergie.iki.fi/blog/blog-2017-edition/", "url": "https://bergie.iki.fi/blog/blog-2017-edition/", "title": "My blog, the 2017 edition", "content_html": "I guess every five years is a good cadence for blog redesigns. This year’s edition started as a rewrite of the technical implementation, but I ended up also updating the visuals. Here I’ll go through the design goals, and how I met them.
\n\nThis year the web has been strongly turning towards encryption. While my site doesn’t contain any interactive elements, using HTTPS still makes it harder for malicious parties to track and modify the contents people read.
\n\nFor the past five years, my blog has been hosted on GitHub Pages. While otherwise that has been a pretty robust solution, they sadly don’t support SSL for custom domains. A common workaround would be to utilize Cloudflare as a HTTPS proxy, but that only works if you let them manage your domain. Since bergie.iki.fi
is a subdomain, that was off the cards.
Instead, what I did was turn towards Amazon Web Services. I used Amazon Certificate Manager with my iki subdomain to get an SSL certificate, and utilized Travis CI to build the Jekyll site and upload to S3.
\n\nFrom there the site updates are served using Amazon CloudFront CDN, routed using Route53.
\n\nWith this, I only need to push new changes to this site’s GitHub repository, and robots will take care of the rest, from producing the HTML pages to distributing them via a global content delivery network.
\n\nAnd, I get the friendly green lock icon.
\n\n\n\nI moved the site from Midgard CMS to the Jekyll static site generator in 2012. At that point, images were stored in the same GitHub repository alongside the textual contents.
\n\nHowever, the sheer volume of pictures accumulated on this site over the years made the repository quite unwieldy, and so I moved them to Amazon S3 couple of years ago.
\n\nThis made working with different sizes of images a bit more unwieldy, as I’d have to produce the different variants locally and upload them separately.
\n\nNow, with the new redesign I built an Amazon Lambda function to resize images on-demand. My solution is implemented in NoFlo, roughly following the ideas from this tutorial but utilizing the excellent noflo-sharp library.
\n\nThis is a topic I should write about in more detail, but it turns out NoFlo works really well with Amazon Lambda. You can use any Node.js NoFlo graph there by simply wrapping it using the asCallback embedding API.
\n\nThe end result is that I only need to upload original size images to S3 using some tool (NoFlo, s3cmd, AWS console, or the nice DropShare app), and I can get different sizes by tweaking the URL.
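As an illustration, tweaking the URL could look like this. The width-as-path-prefix scheme shown here is hypothetical; the actual Lambda's URL convention may differ:

```javascript
// Build a resized-image URL from the original S3/CloudFront URL.
// Hypothetical scheme: the desired width becomes a path prefix.
function resizedUrl(originalUrl, width) {
  const url = new URL(originalUrl);
  url.pathname = `/${width}${url.pathname}`;
  return url.toString();
}
```

The Lambda behind such a scheme would parse the width back out of the request path, fetch the original from S3, and return the scaled variant.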
\n\nI could have gone with ImgFlo, but right now I need only rescaling, and running the whole GIMP engine felt like overkill.
\n\nAfter the technical side of the blog revamp was done, I turned towards the design aspects. I wanted more color, and also to benefit from the features of the modern web. This meant that performance-hindering things like Bootstrap, jQuery, and Google Fonts were out, since nowadays you can do pretty nice sites with pure CSS alone.
\n\nIn addition to the better CDN setup, the redesign improved the site’s PageSpeed score. And I think it looks pretty good.
\n\nHere’s the front page:
\n\n\n\nFor reference, here is how the 2012 edition looked like:
\n\n\n\nI also spent a bit of time to make sure the site looks nice on both smartphones and tablets, since those are the devices most people use to browse the web these days.
\n\nHere is how the site looks like on different devices, courtesy of Am I Responsive
\n\n\n\n\n\nThis site has over 1000 articles, and it is easy to lost in those volumes. To make it easier to discover content, I implemented a related posts feature.
\n\nI originally wanted to use Jekyll’s Latent Semantic Indexing feature, but with this amount of content that simply blows up.
\n\nInstead, I ended up building my own hacky implementation based on categorization and similar keywords in posts using Liquid templates. This makes full site builds a bit slow, but the results seem quite good:
\n\n\n\nWhile most people probably discover content now via Twitter or Facebook (both of which I occasionally share my things in, in addition to places like Reddit or Hacker News as needed), RSS is still the underpinning of receiving blog updates.
\n\nFor this, the site is available as both:
\n\n\n\nFeel free to add one of them to the news aggregator of your choice!
\n\nI also supply /now page for current activities, inspired by the NowNowNow movement. Here is how Derek Sivers described the idea:
\n\n\n\n\nPeople often ask me what I’m doing now.
\n
\n\n\nEach time I would type out a reply, describing where I’m at, what I’m focused on, and what I’m not.
\n
\n\n\nSo earlier this year I added a /now page to my site: https://sivers.org/now
\n
\n\n\nA simple link. Easy to remember. Easy to type.
\n
\n\n\nIt’s a nice reminder for myself, when I’m feeling unfocused. A public declaration of priorities.
\n
I’ve been running this site since 1997. Here is what I’ve written about some of the previous redesigns:
\n\nI hope you enjoy the new design! Let me know what you think.
\n\n", "date_published": "2017-12-14T00:00:00+00:00", "image": "https://d2vqpl3tx84ay5.cloudfront.net/4JPl6gVy4jLmTF7iN.jpg", "author": { "name": "Henri Bergius", "url": "https://bergie.iki.fi/about" } } ] }