Can QML become the next standard for web UI?

Recently, I came across a few articles [1, 2, 3] comparing Qt’s modern UI framework – Qt Quick and its declarative language QML – to HTML, with QML coming out as a clear winner in several categories:

  • speed of learning,
  • ease of use,
  • performance,
  • cross-platform compatibility.

Although QML is not a substitute for HTML – they were designed with different goals in mind – I think QML would make a great web technology.

No matter which client framework is currently in fashion, building a UI on top of HTML means using the wrong abstraction for the wrong purpose. HTML was originally developed as a hypertext markup language: its primary function was to semantically structure and link text documents. Presentation was largely left to the interpretation of the user agent (the browser). Over time, more presentation control mechanisms were added. Today’s mix of HTML, CSS, the DOM and JS scripting is a weird blend of data markup, presentation and runtime environment. With every browser implementing its own subset, in its own way, with its own quirks, the requirement to create documents that look and behave consistently across browsers and platforms quickly becomes untenable.

With HTML 5, visual consistency could be achieved more easily: instead of manipulating the DOM, we can write our own presentation code using the canvas element and its imperative graphics API. We can go even more low-level with WebGL. In this case, HTML is not really in use anymore – it serves only as a container in which the canvas element is embedded. Whether we write our own rendering code or use a canvas based library, it’s hard to get it right. More often than not, canvas-heavy websites generate excessively high CPU load.

In contrast, QML was designed from the ground up for modern, fluid, data-driven UIs. At its core is a declarative, component-based language with dynamic data bindings and a powerful animation and state management system. A QML document describes a tree of visual (and non-visual) elements that form a scene. With data bindings as a core concept, the separation of data from presentation is trivial and encouraged. The scene is controlled by any combination of the following mechanisms:

  • a data model hooked into the scene using data bindings,
  • reactions to events from user input, sensors, location and other APIs,
  • JavaScript code for imperative programming.
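A minimal sketch of how these mechanisms look in practice (element and property names follow standard Qt Quick; the scene itself is my own illustration, not taken from the cited articles):

```qml
import QtQuick 2.0

Rectangle {
    width: 240; height: 120
    // Declarative data binding: re-evaluated whenever clickArea.pressed changes.
    color: clickArea.pressed ? "steelblue" : "lightgray"

    Text {
        anchors.centerIn: parent
        // Binding to a custom property; updates automatically on every click.
        text: "Clicked " + clickArea.clicks + " times"
    }

    MouseArea {
        id: clickArea
        property int clicks: 0
        anchors.fill: parent
        onClicked: clicks++   // imperative JavaScript reacting to user input
    }
}
```

Note that nothing here manipulates the scene imperatively except the one-line event handler; everything else is expressed as bindings that the runtime keeps up to date.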

A QML document defines a scene completely and precisely. You can think of it as pure presentation. Nothing is left to interpretation for the runtime. It will look and behave identically on all platforms and devices. Due to the declarative nature of the language, it is also easy to visualize what the scene will look like by reading the source code.

Under the hood, QML uses a scene graph engine implemented on top of whatever low-level graphics API is available on the platform: currently OpenGL, and OpenGL ES on embedded platforms, with Vulkan and D3D12 backends in the works. The engine takes advantage of modern programmable graphics hardware and is heavily optimized and CPU efficient.

It is obvious that QML and Qt Quick would be a great fit for the web. I wish browsers already supported them as a standard. The big question is: what is the chance of the big browsers implementing QML? Unlike HTML, which is an open standard, QML (although open source) is a proprietary technology owned and developed by The Qt Company. It would probably have to be developed into an open standard, in partnership with the major browser vendors. I don’t know if this is going to happen anytime soon, or ever. Realizing they are missing out on the biggest platform – the world wide web – The Qt Company might want to invest in this direction. It would be a win for everybody, and most certainly for web developers.



Native Windows build jobs on Jenkins

This may not be the most frequent use case, but the Jenkins CI server is perfectly capable of running native C/C++ build jobs on Windows – that is, build jobs that use the native platform’s tools, i.e. Visual Studio or possibly another C/C++ compiler suite.

From the user’s perspective, building is a straightforward activity:

  • Launch the Visual Studio Command Prompt, a.k.a. vcvarsall.bat.
  • Navigate to your source and invoke MSBuild on the solution,
  • or nmake if you are hardcore and use Makefiles.
    • If you are using a third-party build system such as CMake or Qt’s qmake, you first run that to generate the Makefile.

This translates pretty well into a Freestyle Jenkins build job. You could put all the above-mentioned steps into a single Windows batch command build step. But you may prefer one of Jenkins’ plugins for your build system of choice, as these provide a nicer interface than a plain batch file, sometimes offer more options, and allow crazy build scenarios*.

The trouble with Jenkins build plugins is that they don’t provide a way to set up the environment for the native build tools, i.e. they don’t call vcvarsall.bat. And you cannot just add a pre-build step that calls vcvarsall.bat: that would only set up the environment inside the pre-build step. Since each build step starts with a fresh environment, the main build step would be unaffected. One option is to run vcvarsall.bat for the logged-in user and also run Jenkins under this user, but that would be severely limiting. What if you want to run one 32-bit build job and another 64-bit job? This approach will also not work if you run Jenkins as a service.

Fortunately, there is a simple way to apply the effect of vcvarsall.bat to the whole build job. After all, vcvarsall.bat only sets some environment variables – a whole lot of them. This neat little trick uses the EnvInject Jenkins plugin to record the environment variables set by vcvarsall.bat (and any other environment setup scripts you may use) and apply them to the whole build job.

  • Check the “Prepare an environment for the run” option.
  • In the “Script content” field, enter something like this:

    "C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" x86
    set > C:\jenkins\workspace\

  • The last line “set > …” saves all the environment variables set by the previous scripts to the given file.
  • Enter this file name into the “Properties File Path” field.
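For a 64-bit build, the script content might look like the following sketch. The properties file name env.properties is my own choice, not something mandated by the plugin, and I am assuming the WORKSPACE variable is already defined when the EnvInject script runs:

```
rem Set up the 64-bit VC++ toolchain, then dump the resulting environment.
"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" amd64
set > %WORKSPACE%\env.properties
```

You would then enter %WORKSPACE%\env.properties into the “Properties File Path” field.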

And voila, the build environment is set for the entire build job.

*) In my specific case, which I admit is a little bit crazy, I use Maven as the topmost build tool. Maven provides metadata such as the product version. Maven then invokes Ant’s build.xml, which describes the build steps in concrete terms. Ant invokes qmake, then nmake, optionally other tools such as Doxygen, and finally zips the output into a neat self-contained package. The resulting zip package is attached as a Maven artifact and deployed to a company Nexus server. Additionally, unit tests are run (also as a Maven build phase) and the test results are published and visualized by Jenkins. This build workflow has been adapted from the Java world and works surprisingly well in the C++ world too.



OneDrive for Business as Unsynced Storage or Backup

If you happen to have an Office 365 subscription, you may have noticed that it comes with 1 TB (1024 GB) of OneDrive for Business cloud storage. If you are thinking that this may be a nice place to offload larger files or backups, then read on…

There are several ways OneDrive can be used. The most obvious and convenient way is to copy files into the OneDrive folder on your machine and let them sync to the cloud in the background. However, this does not serve our purpose very well, because a local copy stays on the local machine. Furthermore, if we have linked other machines to the same account, these files will be duplicated on those machines as well. What we actually want to do in this article is upload files to the cloud and remove them from the local machine.

Notice the two OneDrive folders. The first one is the default that comes with the Microsoft Account (if you use one to log into your Windows). The second one is installed with the Office 365 suite and used with your Office 365 for Business organization account. These two accounts are not connected in any way and lead to completely separate and disconnected universes that have nothing in common – well, except the common creator and provider of both services. Let me put it another way: the Microsoft Account is the key to all things Windows, including Windows Store apps. The Office 365 account is the key to all things Office. The sooner you realize the distinction, the more head scratching it will save you.

OneDrive and OneDrive for Business.

Alternatively, you can upload individual files of up to 2 GB in size using the web interface. However, changes made through the web interface are also automatically synced to all linked machines. On Dropbox (or, for that matter, the consumer flavor of OneDrive) you can select which individual subdirectories will or will not be synced. With OneDrive for Business, either the whole OneDrive directory syncs, or nothing does. Until Microsoft improves the OneDrive for Business client, I will show how to create an unsynchronized storage space on OneDrive for Business.

What is a Document Library

The Business flavor of OneDrive is built on top of a technology called SharePoint. As an Office 365 Business subscriber, you also have access to something called a SharePoint Site. On your Site, OneDrive content is stored in what is called a Document Library. The underlying technology is so complex, you would need a certified MS consultant to explain it to you. But the point is, we can have multiple Document Libraries, and each library can be synced separately. Your default Document Library, called “Documents”, is created for you, and this is the default OneDrive location that will be synced when you set up OneDrive for Business on your machine. To manage your libraries, sign in to your Office 365 Portal, navigate to OneDrive and choose Site contents under the “gear” icon in the toolbar.


Adding a new Document Library is easy – click add an app, select Document Library, enter a name for the new library (I called mine “BigData”) and click Create. Once you have closed your browser, navigating back to your new OneDrive storage is a bit tricky. Clicking the OneDrive button in the main menu will take you to your default “Documents” library; you’ll have to go through the “gear” icon -> Site contents -> your library. The new library will not be synced to any machines. In the next section I describe various ways to use this new library.

Option 1 – Browser

Once you are logged in to your Office 365 Portal, the most straightforward way to use your new Document Library is directly through the browser. Navigate to your library as described above and upload, download and manage the content directly from the browser.

This has the advantage that once you upload a file, you can delete it from your machine and the file stays in the cloud. Uploading large files through the browser is not very convenient, though: the upload breaks if you close the browser or your connection is interrupted.

Option 2 – Sync the Library

You can sync the new library to a directory on your machine, just like the default OneDrive library. You’ll need the URL of the new library: navigate to your library, select Library Settings in the LIBRARY ribbon toolbar, and copy the URL presented at the top of the page (not the URL in the browser address bar). Now right-click the blue OneDrive for Business system tray icon and select Sync a new library. Paste the URL in the dialog and press Sync Now. A new cloud-synchronized directory will be created on your hard drive.

This is the classic sync scenario. Everything copied to the synchronized directory is duplicated in the cloud and everything in the cloud is copied back to this directory. This way, files from the cloud are also available offline.

Option 3 – Map Network Drive

The new document library can also be mapped as a network drive. A good step-by-step guide is here.
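At its core, the mapping boils down to a net use command against the library’s URL. The URL below is entirely made up for illustration; substitute the URL of your own library (the one shown on its Library Settings page):

```
net use O: "https://contoso-my.sharepoint.com/personal/joe_contoso_com/BigData" /persistent:yes
```

The drive letter O: is likewise an arbitrary choice; /persistent:yes re-creates the mapping after a reboot.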

This way, your files stored in the cloud will be accessible through the file system (the mapped drive letter) but without duplicating them on your local hard drive. Files on the cloud are only available as long as you are online.

However, I found that copying new files to the mapped drive temporarily makes a copy of those files on the local hard drive. The copy remains there until the files are finished uploading to the cloud.

Option 4 – Specialized 3rd Party Cloud Storage Client

3rd party cloud storage clients (e.g. CloudBerry) can provide many more features, such as end-to-end encryption, resumable uploads and downloads, multiple cloud storage backends, and workarounds for various limitations of those backends.

Unfortunately, I am not aware of any solid 3rd party client software for the SharePoint-based OneDrive for Business. I would be grateful for any suggestions.


OneDrive for Business has a sharing feature, which works best if you are sharing with other Office 365 users. Anonymous public sharing using a web link is limited to individual files, i.e. for each file you want to share you have to obtain a separate share link. You can’t send someone a public link to a whole directory.

C++ Member Initializer List Fallacy a.k.a. Wreorder

There is no question that C++ is an extremely complex programming language with a lot of traps that can catch even the most hardened coders by surprise.

One of my favorite language features that can bring a headache is the constructor member initializer list. Personally, I try not to use this feature unless necessary. But if you need to initialize a member of a reference type, or a member of a class type using a non-default constructor, it is your only choice. And that is when you open your code to a category of easy-to-introduce, hard-to-spot bugs.

Have you ever changed the order of class members during routine refactoring? Touched only the header file and didn’t bother to look into the .cpp? Then you probably didn’t know about this.

Consider the following demonstration:

[gist /]

Looks perfectly normal, right? Especially if the class declaration and the constructor implementation were separated into a header file and a .cpp source file. But the order in which you write member initializers is completely irrelevant; only the member declaration order matters.

Now, that is actually pretty logical and consistent. Looking at a class definition, one would expect that members are initialized in the order in which they are declared. But one tends to forget this rule when looking at a constructor implementation and its initializer list, especially if it is in another file. Also consider destruction: members are destroyed in the reverse order of their construction. If two constructors constructed the class members in two different orders, in which order should the destructor destroy them?

In the example code above, the comp.a member is initialized from an undefined value, which makes it effectively an uninitialized variable.

I don’t understand why C++ allows writing member initializers in a different order, especially when it leads to such nasty bugs. In my opinion, this should be a hard compile error. Even worse, compilers are completely silent about it – at least by default.

Clang and g++ will only emit a warning when run with the -Wreorder or -Wall option.

The Visual C++ compiler (cl) is completely silent. It does not notice this bug even at the highest warning levels /W4 and /Wall, not even with the venerated /analyze option!


Do not rely on your favorite compiler. Push your code through as many compilers as you can, and strive to get it free of warnings at the highest warning levels on all of them. Add a static code analyzer to your arsenal – preferably not just one, but as many as you can get your hands on. C++ developers who are locked to a single platform are at an inherent disadvantage, because they may be deprived of quality tools that are available on other platforms.

Speed up C++ build times by means of parallel compilation

Everyone who has worked on a fair-sized C/C++ project surely knows these scenarios: sometimes it’s unavoidable to introduce a new #define or declaration (or change an existing one) that is used nearly everywhere, and when you hit the ‘Build’ button next time, you’ll end up recompiling nearly the whole thing. Or you just came to the office, updated your working copy and want to start the day with a clean build.

The complexity of the C++ language, in combination with the preprocessor, makes compilation orders of magnitude slower compared to modern languages such as C#. Precompiled headers help a bit, but they are not a solution to a problem inherent in the language itself, merely an optimization. There are coding practices that help a lot, not only in making robust and maintainable software, but also in improving build times. They go along the lines of “minimize dependencies between modules” or “#include only what you use directly”. There are also tools that visualize #include trees and help you identify hot spots. These are all clever tricks, which I may discuss later. However, this article is about raw, brute force :) You just got a new, powerful, N-core workstation? Well, let’s get those cores busy…

C++ translation units (.cpp files) are independent during the compilation phase and are indeed compiled in isolation. Therefore, the speed of compilation scales almost linearly with the number of processors. Most IDEs and build tools nowadays come with an option to enable parallel compilation, yet this option is almost never enabled by default. I will show you how to enable parallel compilation in the build systems with which I have some experience:

  • Makefiles (Linux and Windows)
  • Qt’s Qtcreator IDE (Linux and Windows)
  • MS Visual Studio, MSBuild

Makefiles – gnu make

Telling the make program to compile in parallel could not be simpler. Just specify the -j N (or --jobs=N) option when calling make, where N is the number of jobs you want make to run in parallel. A good choice is to use the number of CPU cores as N. Warning: if you use -j but do not specify N, make will not limit the number of parallel jobs at all and will spawn as many as it can. This is neither efficient nor desirable.
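As a self-contained illustration (the directory, Makefile and targets are all made up), nproc reports the number of CPU cores on Linux:

```shell
# Create a toy Makefile with two independent targets...
mkdir -p /tmp/pardemo && cd /tmp/pardemo
printf 'all: a b\na:\n\ttouch a\nb:\n\ttouch b\n' > Makefile

# ...and let make build them in parallel, one job per CPU core.
make -j"$(nproc)"
```

On a real project you would simply run make -j"$(nproc)" in the build directory.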

Makefiles on Windows – nmake, jom

On Windows, Visual Studio comes with its own version of the make program called nmake.exe. Nmake does not know the -j option and can’t run parallel jobs. Luckily, thanks to the Qt community, there is an open source program called “jom”, which is compatible with nmake and adds the -j option. You can download the source and binaries from here. Installation is very simple: just extract the .zip file anywhere and optionally add it to %PATH%. Use it like you would use nmake.

Qt’s Qtcreator

First, let me say that Qtcreator is a very promising cross-platform IDE for (not only Qt) C++ projects, completely free and open source. Not surprisingly, Qtcreator first uses Qt’s qmake build tool to generate a Makefile from a .pro project description file. Then it simply runs make on the generated Makefile. Qtcreator allows you to pass additional arguments to the build commands: under Projects -> Build Settings -> Build Steps -> Make -> Make arguments you can specify the -j N option.

Project build settings in Qtcreator on Linux.

Qtcreator on Windows

If you use Qtcreator on Windows, the story is almost the same, with only minor differences. On Windows, Qtcreator uses the MinGW32 build toolchain. Unfortunately, due to a bug in the way MinGW’s make works on Windows and the way Qt’s qmake generates Makefiles, the -j option doesn’t work. The reason why, and various workarounds, are described in this discussion. One easy way is to override mingw32-make.exe and use jom.exe instead.

Project build settings in Qtcreator on Windows.

MS Visual Studio, MSBuild

Not surprisingly, the Visual Studio/C++ IDE uses a completely different build system than the GNU toolchain: MSBuild (formerly VCBuild). If you only work within the IDE and do not wander into the command-line world very often, you have probably never even bumped into this tool. Yet it is invoked behind the scenes whenever you press the build button. In short, the process is as follows: Visual Studio keeps the list of project source files and the compiler and linker options in a .vc(x)proj file. At the start of each build, the MSBuild tool crunches the .vcxproj file and outputs a list of commands for invoking the compiler, the linker and any other tools involved in the build process.

The MS Visual C++ compiler (cl) can compile multiple source files in parallel if you tell it to using the /MP switch. It will then spawn as many parallel processes as there are CPU cores installed in the system. You can set this option conveniently from the IDE: Project -> Properties -> Configuration Properties -> C/C++ -> General -> Multi-processor Compilation: Yes (/MP). The option is saved into the .vcxproj file, so multi-process compilation will be used regardless of whether you build from the IDE or from the command line.
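For reference, inside the .vcxproj file the IDE setting ends up as a fragment along these lines (as I understand the MSBuild schema; the surrounding project file is omitted):

```xml
<ItemDefinitionGroup>
  <ClCompile>
    <MultiProcessorCompilation>true</MultiProcessorCompilation>
  </ClCompile>
</ItemDefinitionGroup>
```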

Enable parallel compilation for a MSVC project.

Multiple simultaneous builds

In Visual Studio, you can go even a little further and tell the IDE to build multiple projects in parallel. To enable this, go to Tools -> Options -> Projects and Solutions -> Build and Run and set the maximum number of parallel project builds. When building a solution from the command line, pass the /maxcpucount[:n] option to MSBuild. This can be useful if your solution consists of many small, independent projects. If your solution contains just a single big project, or a couple of them, you’ll probably do best with the /MP option alone.
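From the command line this looks as follows (the solution name is made up; /m is the short form of /maxcpucount):

```
msbuild MySolution.sln /maxcpucount:4
```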

Setting maximum number of parallel builds.

In closing

Modern machines come with a lot of horsepower, and the number of CPU cores keeps increasing. Why not leverage this and turn your workspace builds from a lunch break into “only” a coffee break? Parallel compilation speeds up the build process almost linearly with the number of CPU cores.

However, compilation is only one part of the story; then there’s linking. It is not uncommon for a project that compiles in seconds to take minutes to link. I will point you to some articles on how to speed up linking in my next post.

Ditching Facebook, moving to Twitter

A month ago, I had an interesting talk with a friend of mine who refuses to join Facebook or any other “social” media for that matter. In fact, I have a couple of friends outside any virtual friendship circles, but they are certainly a tiny minority. Until then, I was a devoted fan of FB (previously Orkut, Unister, Tagged and maybe some more), having a fairly large network of “friends”, being tagged in many photos, checking other people’s status many times every day. Perhaps, you could even say, I suffered from a slight addiction.

My friend convinced me that with FB I get no real value added to my life. On the contrary, I got:

  • a network of people (contacts) I don’t really care about,
  • my time consumed by constantly reading and watching pointless content posted by all those people,
  • irrational fear of losing touch with people with whom I would not want to get in touch anyway,
  • fear of posting really interesting personal stuff, because the whole thing got too public.

I came to the resolution to close my FB account. In the end, it was the ‘waste of time for nothing in return’ argument that convinced me. After all, I consider myself somewhat of a follower of zen principles – not in the religious sense, but more in the ‘gain by losing’ sense. As a result, I got a lot less distracted over the day and a lot more focused on important things.

I don’t want to just bash FB and similar media. There is great and proven marketing and promotional potential in there, and I can also see the appeal to teenagers. But for a grown-up guy who is not selling anything, there is little to nothing to gain. However, as a starting freelance IT pro, I felt that I should maintain my online visibility. I’ve had my LinkedIn profile for a while, and later added a blog and a personal web/portfolio page. The blog (even with my sparse posting habits) proved to be useful for starting conversations about some of my side projects. Most recently, I opened a Twitter account. So far, I have found this combination of internet tools powerful, yet not obtrusive, serving me only when I have something to say.

So what shall I be tweeting about? I guess, like most people, semi-random short thoughts. I like to share them when I learn something interesting and eye-opening, mainly from the area of my expertise: software engineering, coding practices, C++, graphics programming and lately the Qt framework. I will also tweet about project updates, like the QOF for Visual Studio and possibly others. I really like the 140-character limit, because it makes you think twice before you post.

What do you think; does FB (or Google+ or any other) give you something that is worth the while?

Quick Open File, now available for VS 2010 Beta 2

Those of you who have embarked on the Visual Studio 2010 Beta 2 train surely miss my Quick Open File plugin :) Well, good news for you: here it is.

It was not as straightforward to port it over to VS 2010 as I first thought it would be. The new VS IDE is now WPF-based, but my plugin is Forms-based. The experience could best be described as half a day of trial and error, struggling to implement undocumented interfaces. Well, I guess that’s part of the beta experience…

Anyway, here it is and I hope you’ll like it. You can get the plugin at the Visual Studio Gallery or at my site. Or better yet, open Visual Studio, go to Extension Manager, click Online Gallery and type “Quick Open File” into the search box. This way you can install it directly from the IDE.

Quick Open File for Visual Studio – minor update

I found out that people come to my homepage mainly to download the Quick Open File plugin for Visual Studio 2008. In fact, there have been over 700 downloads since April 2009, when I first released it. This makes me quite happy, because I’ve finally created something people find useful :)

As the name suggests, it’s a little utility for Visual Studio 2008 that lets you find and open any file anywhere in the solution, no matter how deeply it is buried in the project structure. You just press Ctrl+K, Ctrl+O (of course, you can customize the shortcut key), type a few letters from the file name and hit Enter. And voila, your file is on the screen.

Quick Open File plugin window.

Today I released version 1.1 which adds the option to open the selected file in any other associated editor. The behavior is as follows:

  • Pressing Enter will open the selected file in the default editor Visual Studio has associated with the file type.
  • Pressing Shift+Enter will open the “Open With” dialog first where you can select in which editor to open the file.

You can find the new version of the plugin at Visual Studio Gallery, or directly at my site.

Finally on the web

Hi there!

As some of you may know, I am trying to pursue a freelance career for a change. So I decided to build myself a website where I can present my work in a convenient way. I’m happy that I finally managed to put it into a state I am not completely ashamed of :) I tried to summarize my project portfolio, which consists of some of the more interesting school projects and other private software projects. The most interesting things (my diploma thesis, my work-in-progress 3D engine) are yet to come…

You can also find a picture gallery there, where I mainly put my nature photos. I’ve used many of those photos as desktop backgrounds. Maybe some of them will inspire you too :)