NetBSD support for Intel Kernel Mode Setting
A few versions ago, Intel started releasing drivers for their graphics cards which rely on the kernel to switch between graphics modes (a new development from the Linux/FLOSS world). This eases, for example, the transition between framebuffer consoles and X11.
However, since the NetBSD kernel (as of version 6.0) doesn't yet support setting graphics modes, these newer Intel drivers don't work under current NetBSD releases.
This is where our hero, Grégoire Sutre, comes into play. He started a project on GitHub to port DRM/GEM from OpenBSD to NetBSD; OpenBSD had paid a developer to implement it for them under a BSD license.
Testing it on the stable release
A while ago, I replaced my old Thinkpad T61 with a T520. Unfortunately, this meant I also had to switch from NetBSD to Debian GNU/Linux because NetBSD wouldn't run on the T520 and I didn't have the time to change that. Also, at that time, neither of the BSDs supported kernel mode setting.
A few days ago, prompted by the announcement of the CONFIG_VT deprecation under Linux, I decided to make another attempt at getting NetBSD to run on the T520 and stumbled across Grégoire's work. Taking his changes and applying them to NetBSD 6.0 was a bit awkward, because they were made for NetBSD-current and had to be modified first. Nonetheless, I managed to apply them relatively cleanly.
The rest is quickly told: I built a release with the changes and applied it to my NetBSD system. There, everything worked. I had Intel graphics.
So I decided to upload a patch to my FTP server. In addition to that, I also built the release sets and an installation ISO image and uploaded them. You can find everything on ftp.bsdprojects.net under NetBSD-6.0-drmgem-20130203.
How to apply the patch to NetBSD-6
The distribution method Grégoire chose was a bit awkward to use (as described above), so I decided to create two patches (one for src, one for xsrc) and distribute those instead for NetBSD-6. The rest of the procedure is pretty straightforward. First, fetch the patches:
% ftp http://ftp.bsdprojects.net/pub/bsdprojects/NetBSD/NetBSD-6.0-drmgem-20130203/netbsd6-drmgem-src-20130203.diff.gz
% ftp http://ftp.bsdprojects.net/pub/bsdprojects/NetBSD/NetBSD-6.0-drmgem-20130203/netbsd6-drmgem-xsrc-20130203.diff.gz
Then, get the source code from CVS.
% cvs -d firstname.lastname@example.org:/cvsroot co -P -rnetbsd-6 src
% cvs -d email@example.com:/cvsroot co -P -rnetbsd-6 xsrc
Apply the patches:
% (cd xsrc && zcat ../netbsd6-drmgem-xsrc-20130203.diff.gz | patch -p0)
% cd src && zcat ../netbsd6-drmgem-src-20130203.diff.gz | patch -p0
… and build the tools and release:
% ./build.sh -j 9 -x tools
% ./build.sh -j 9 -x release
% ./build.sh -j 9 -x install=/
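For reference, here is roughly what the build.sh arguments used above do. This is a sketch from memory; consult ./build.sh -h and the BUILDING document in src for the authoritative description:

```
# -j 9       run up to nine build jobs in parallel
# -x         also build X11 from the xsrc tree next to src
#            (needed here, since part of the drmgem changes live in xsrc)
# tools      build the host tools and cross toolchain first
# release    build a complete release: kernel, sets and install images
# install=/  unpack the freshly built sets over the running system
#            (this step needs to be run as root)
```

The tools step has to come first, because release and install=/ are built with the cross toolchain it produces.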
Then ensure that all your pkgsrc packages are linked against the X.Org release installed from base. Things might work if you link the clients against X.Org from pkgsrc, but the pkgsrc X server certainly won't work. And if you already use X.Org from base anyway, why not use it for everything?
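In pkgsrc terms, linking against the base X.Org means selecting the native X11 type in /etc/mk.conf before building your packages (a minimal sketch; X11_TYPE is a standard pkgsrc variable, but double-check it against the pkgsrc guide for your version):

```
# /etc/mk.conf: build pkgsrc packages against the X.Org
# shipped in the NetBSD base system
X11_TYPE=native
```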
Right now the drmgem code is in a state where Intel kernel mode setting works. However, all the DRM modules have been deactivated in the X.Org source, because they don't seem to work yet. So the patch cannot be committed as it is: for everybody who's not using Intel, it would be a step backwards.
So this means that the patch needs some brushing up. But it's good, solid work and will hopefully be ready to be committed into the source base some day soon.
What's left to say: Thank you, Grégoire Sutre, for your good work! If we can help you somehow, please let us know.
The Apple Experiment: Conclusions
At this point I've used the iPhone continuously as my main phone for a whole month. I've made serious attempts to replicate all the workflows I used on my Android phone, with varying results.
Holding it Wrong
The first thing you'll notice is that data transfers appear to be really slow over GSM most of the time. It's ok for reading Twitter in the app, but if you open a web site, it can take several minutes before you finally have at least the text in front of you. Under the same conditions, and on the same carrier, the Android phone could load the same web site in a matter of seconds (still slow, but it's mobile, so fair enough).
There's the old joke that people are simply holding the iPhone wrong. I think it was Steve Jobs who came up with this joke when he was still alive. Either way, I tried various ways of holding the phone, including upside down, and nothing would improve the page loading speed.
To add to the pain, the iPhone still interprets touchscreen presses which arrive while the screen is dimmed (to announce the impending screen lock). So if you tap somewhere on the screen to keep it awake while a page is loading, it suddenly follows some not-yet-displayed link and you'll never see the page you wanted to go to.
An additional annoyance is the switch for turning sounds on and off. It is generally a good idea; however, your pocket will flip it constantly, so you end up with sounds switching on and off like mad.
Which brings us to the general point that the iPhone hardware is incredibly fragile. Android devices appear relatively sturdy with their gorilla glass. If you drop them by accident, they aren't usually damaged. If you drop an iPhone on the floor, the glass will typically be shattered, and worse effects may occur.
Multi-singletasking in Mind
One of the biggest points you will notice quite quickly is that there is an enormous lack of integration of the different apps. Imagine for example that you want to share a link to something. You have Twitter, Google+, Delicious, Soup (well ok, they don't have an iPhone client), mail, chat, etc. on the phone. However, there is no common sharing dialog like in Android. Every App has to integrate all those programs itself in its own sharing dialog. This means that you can only share to whatever the App writer was aware of.
Likewise, there are no URL namespaces. If you get a Google Docs link in mail (not GMail, which tries to work around this), it will be opened in the browser. YouTube links? Open them in the browser. Google+ links? Open the browser, too. It would be much more valuable to use the dedicated apps for those purposes instead so people can use the service more efficiently, especially given the painfully slow page loads.
To address this problem, app implementors have written the most useless workarounds. If you click on a link in the Twitter app, a new embedded browser is launched inside Twitter, because Twitter doesn't want to lose everything that was currently open. That makes sense for Twitter, but not for the system as a whole, especially since that embedded browser lacks some controls and is really awkward to use, not least because you now have two back buttons. And you can't switch to a different tab from the main browser instance, because it is not the browser.
Another issue is copying and pasting. Just like in the early Linux days, it works part of the time, and sometimes you get inexplicable results. Some apps don't seem to care and simply don't offer copying and pasting at all. I would have expected this to work in anything implemented after 1993.
To add insult to injury, apps which don't get the focus for a while are quit. This is quite annoying when you use a Jabber client on the phone, because you have to get it back into the foreground every couple of minutes to prevent it from quitting and being disconnected. As a workaround, many Jabber clients send you push notifications a minute or two before they're terminated. But that's nothing more than an ugly, annoying hack and far from the nice integration of Jabber clients as background tasks in Android.
Notify … but about what?
Notifications ("push messages") are another issue where the current solution is unbearable. It appears that every app has its own notification process which cannot communicate with the main process. This goes even as far as adding a counter to the app icon. For example, you have 2 Twitter notifications. They are displayed on your lock screen, although truncated. You unlock the screen and find that the Twitter icon has a small "2" beside it, indicating that two unread notifications have been received.
Then you open Twitter and you don't see anything at all. It doesn't go to the replies tab, because apparently the app doesn't know you want that. You open the replies tab and realize it doesn't have your reply yet, because it hasn't been reloaded since the message arrived. Given that Twitter ran in the background, that's kind of logical, but it isn't helpful and not a way in which I would want to implement my apps. And it doesn't just affect Twitter: it's everywhere! TweetDeck has it, Mail has it, GMail has it, even the App Store has it. If we're fetching all that data for the notification, why can't we just have it in the app as well? What kind of notifications are those? Especially given the poor data transfer rates of the iPhone, you really don't want to wait for all your replies to be downloaded again.
Even worse, it's quite difficult to actually follow notifications because when you click on one, all the others tend to go away. So you will know that something happened but you have no way of following up without looking through your phone and installed Apps. Why?!
And since I mentioned the icon: there is no reasonable way to sort a list of generic things other than alphabetically. The phone knows the language of the user and thus the sorting alphabet to use. Yet all apps appear in the order in which they were installed. That's fine if you just installed something, but in a few days you won't remember whether you installed SecureChat before or after pterm. So either let users categorize the icons into desktops, with grouping functions, all by themselves, or don't assume anything and just order the icons alphabetically. I know that SecureChat comes after pterm in the Latin alphabet.
If you buy an iPhone, you must be rich
Another really big minus is the pricing of iPhone apps. On Android, you get a great variety of good apps for free. For example, you have ConnectBot, a decent SSH client which someone implemented. People like to share stuff for Android for free. And the average Android app in the market costs CHF 1.99, so not terribly expensive.
On the iPhone, the general idea appears to be "You paid a lot of money for your phone, so you can pay a lot of money for your apps". The most reasonable SSH client appears to be pTerm, which costs CHF 5.-. It's merely a port of PuTTY to the iPhone, so it's based on Open Source software, yet you pay more for it than for a loaf of bread.
The regular iPhone port of the RealVNC viewer is sold for CHF 10.-. It's even twice as expensive as the already-expensive SSH client. It costs more than a loaf of bread and a decent piece of cheese. Nagios clients cost between CHF 15.- and 20.-. A client for a web interface which lets you view fields and click buttons.
In this respect the so-called "Genius", a function in the App Store which advertises apps you could buy, becomes even more ridiculous.
Welcome to the iCloud, where everything
And then there's the iCloud. I already mentioned all the fun I had trying to create an account there. Once I had my account, I couldn't import my calendar from anywhere. Because why would you want to do that? Now that you have an iPhone, you can make totally new, more shiny friends!
Then I tried exporting an iCal file from a web site and importing it into iCloud. The phone didn't really know what an .ics file is supposed to be, so I tried the web interface. It's full of fancy features for creating calendar events, but the one thing it cannot do is import calendars. The data is siloed in the iCloud; no communication with the outside is permitted, no matter which way.
What went well
There are two things iPhones are really good at. The first is good support for customer apps by companies. For example, Crédit Suisse has so far only released their online banking app for iPhones, not for Android. The same goes for the German bank Deutsche Bank.
(On the other hand, small indie apps like Soup tend to be more widely available on Android.)
The other thing is podcasts. Apple has had a lot of time to implement a good podcast app, and so far there appears to be no good equivalent for Android. There are some podcast apps, some of which even work ok. But none of the ones I tested has the comfort of the Apple Podcast app at the moment.
It is possible to use the iPhone as one's primary device for a period of time. However, the discomfort of doing so and the various annoyances suggest it is not a good idea. I was extremely happy when I could finally pick up my Galaxy Nexus and use it again. All in all, the iPhone feels like bad phone hardware from 2005 mixed with an operating system from 1993, which is not a very pleasant experience.
Especially given the high price, required involvement (owning and maintaining a Mac, buying MacOS upgrades, buying an iPhone, buying apps, etc.) and the high risk of damaging the phone, it seems a rather questionable investment.
In my opinion, the iPhone needs a couple of years to come to the same level that other phones already have. The entire operating system needs to be better integrated (like Linux desktops, for example). The hardware needs a revamp and needs to catch up with recent developments like gorilla glass and covered switches, or more sturdy hardware in general.
Gender Liberalism: what got us here won't get us there
A frequently promoted strategy for tackling the issue of world hunger is liberalism. Just like any other -ism, it takes an idea to its extreme. And the idea is that you basically just need to make everything in the market equal, and equal opportunities for everybody are the result. So far though, there appear to be major gaps between the ideals and the actual results: people are still starving as you're reading this, on this very planet.
The same principle is frequently promoted as a solution to the problem of gender equality. Basically, if everybody agrees that it's ok to hire women into technical jobs, we will no longer have a huge bias in this job area.
There are many reasons why this doesn't work, and none of them is biologically inherent to our species.
Education for failure
Education of women is a vast topic and basically starts with a child's first breath, because that's precisely the moment when we start learning new things, regardless of our gender. From birth, the child will observe its environment very closely and draw conclusions. Those are based not only on its own family and the things people say, but also on their actions. For example, if all the women around the kid never pick up a hammer to build something themselves, and all the men do, the kid concludes that hammers are for men. This gets even worse if Daddy keeps taking hammers, or for that matter keyboards, out of Mommy's hands and does things for her. It leaves a very bad impression; don't do it, for the sake of your kid's future.
In the life of an average girl, there are more aspects preventing a more informed relationship with technology, though. For example, there are all these people telling girls that technology is not meant for them or that they wouldn't be good at it. (Fatal misinformation; various women I know are extremely skilled at building things.) Even worse, a woman's cognitive processes are set up to expect failure in those situations by people telling her that she is going to fail anyway, and especially by people saying "I told you you wouldn't make it!"
Everybody fails. Girls do, boys do, hermaphrodites do and whoever else you could possibly think of does it too. Failure is normal. Failure gives you an opportunity to analyze what you did and improve it. And improvement is good, after all. This is the message which needs to be carried whenever somebody fails, regardless of the topic and other questions like gender.
There are other ways parents set their girls up to have less of a chance to succeed in technology. A frequently observed pattern is to deprive them of opportunities to learn. We all know of various cases where a boy was playing Doom 3, World of Warcraft, or whatever computer game you can imagine, all day on his own computer, and would just be left alone. Likewise, I know of many cases where girls were using the family computer (because they never received their own) to try and write programs or install server software to run their own test server, and were told after two hours to get away from the computer.
"You are spending way too much time in front of that computer! Go play with your friends or take a walk!"
These stories are from girls who actually made it as far as to write a program of their own or run their own web server on localhost. It's hard to imagine how many girls never get there because they don't have enough time in front of the computer to figure it out before they get frustrated. Or because they're not allowed to run their own software on the family computer. Sure, some of this also hits boys, but excessive use of computers by boys is more widely accepted.
Peer Pressure away from what Matters
The time in school is typically spent around a group of other girls who are already frustrated with technology and a group of boys and teachers who throw around the same old phrases which discourage involvement with technology. Such an environment, just like all of the previously mentioned environments, is toxic to any interest in technology.
There's not just the circumstance that no other member of the peer group will want to be involved in having fun with technology. Additionally, any involvement will be punished verbally ("What, you're playing with computers? Eww!"). And even boys who appreciate some involvement with technology frequently choose words which are more of an insult than a compliment ("You're doing this quite well for a woman").
If you have ever worked in different types of environments, you might have noticed the effect yourself. If you're surrounded by unmotivated people and people who aren't very skilled at what they're doing, they will slow you down too, and you will never get as much done as you usually would. Even worse, you will learn a lot less over the years, because the typical tasks are scaled down to the average mind in the team. So if you're the most intelligent person on the job, you're not very likely to grow (except perhaps in leadership skills).
On the other hand, if you surround yourself with people who are better than you at something, you will learn a whole lot from them, and your productivity and learning curve will appear to be boundless. You will start to feel like you've never seen the world so clearly.
This is, however, not the typical environment of a girl in school. Typically, she is surrounded by other girls who don't want any contact with technological challenges. So the mind suffers.
And then there's the problem that most boys aren't very well trained in not assuming leadership, and that teachers don't attempt to teach that skill either. So when people work in pairs at a computer or work bench, boys tend to take the keyboard or Dremel away from the girls, and generally take a more commanding role in the team. This means that a girl tends to get less to do and just watches the boy fulfill his task; she only gets the tasks the boy assigns to her. And typically, proper judgment isn't applied in those situations.
Life doesn't end with school. Eventually, girls become women and will start looking for a job. And there, part of the problem is that hiring for tech companies is frequently done by members of management who consider themselves technical. In tech startups, it is even done by technicians themselves. Thus, this area, too, is male dominated.
And now women are struck by the same problem foreigners are. Studies have shown that recruiters are significantly more likely to hire people who are like them, and in the case of male recruiters that means men. Unfortunately, this means that more men will become tech recruiters in the future, and it makes women less likely to find a tech job. This produces gaps in the CV which are filled either by unemployment or by non-tech jobs, where women have an easier time getting hired.
Unfortunately, the same recruiters will then hold this against the applicants. If they didn't spend all of their time in tech jobs, it will be assumed that their lives are unsteady, making them less likely to get the tech job. This means that women in technology get less tech experience through jobs on average. Add to that all the prejudice against women for having the capacity to become pregnant, which is another big reason why they don't get jobs. Or the prejudice that women are more prone to depression — who wouldn't get depressed with such terrible prospects?
No Heavy Administration
And even on the job itself there are problems. If a woman takes a job as a sysadmin, for example, it is not infrequent that her male colleagues are reluctant to assign her the heavier server-lifting work, because the server room is a male-dominated domain and muscles are involved. So the men alone carry the server into the server room, mount the rack slides, slide it in, hook it up and start the installation (unless that's automated away, which happens way too rarely, but that's another point).
So in many companies, only the menial tasks of clicking up 1000 similar users or making coffee are left to the women. And the effects are devastating: I've seen women with a diploma in computer science and a CCNA certificate working for an ISP who were making coffee and carrying files from office to office. Because only men were entering the server room. Ever. This of course means that even though these women are on the job, they are refused the privilege to gather experience.
This has even wider effects when it comes to upgrade training courses. Women who fell into the trap of being kept away from the real experience may be perceived by management and human resources as not yet having reached their full in-house learning potential. So they might not win the fight over the few free seats in that network management course, because a male colleague already has a lot of experience and is perceived as the superstar who's just the right network administrator.
Of course, none of the points mentioned above necessarily applies to all women. Some women get luckier and end up in a really great company where they can do good work and gather a lot of experience. I am happy for everyone for whom this is the case. The purpose of this article is to point out these effects and to outline their consequences.
What can we do better in hiring then?
Even though women are inherently as capable as men, the effects mentioned above have serious consequences. They mean that, unlike with a man, you cannot generally expect a woman interested in technology to have gathered a lot of experience at home and during childhood. There are simply reasons why some of them could not take advantage of their childhood to gather tech experience. The same is, by the way, true for men who grew up in extremely conservative families which banned all use of technology from their homes, and the like. So if in doubt, you should always treat applicants as if they haven't had that chance.
It also helps to be more lenient about the CV. Women might have gaps in their tech career, or might even not have worked at all for months. This might be because someone was looking at their application, just like you, and made some wrong decisions.
Another point is experience. Typically, if someone was working on a job for a longer time, you expect a certain level of experience from them. And you will check if the experience of the applicant matches your expectations. For example, a man who worked as a wind tunnel engineer for 5 years is expected to be able to make a lot of good estimations about aerodynamics, or to make good designs just from his good judgment.
Since however women in tech are frequently left with the menial tasks, this means that a lot of the time they never had the chance to gather as much on-the-job experience as you would expect from a man who has worked the same time on the same job. Be it because she wasn't allowed to operate the wind tunnel, or be it because her colleagues always took away her keyboard when something important happened.
So you cannot trust the regular rules for past experience and career development. Still, you have to find some metric to determine if the applicant is going to be a valuable resource to the company or not. After all, you don't want to just hire anybody. So what can you do to determine if the applicant has what you need?
The question you should ask yourself is one which should generally be asked more frequently in job interviews: "Does the applicant have the ability to learn what she needs on the job?"
Most of the time this is a very interesting question which is widely neglected. Most companies are different and run different applications or produce things in a different way. Experience can give you a lot of help in learning the ways of your new job, but it is in no way all you need. All these companies which throw out a list of 50 words the applicant must be familiar with forget that the company will probably have their very own framework built around PostgreSQL or something like that. It is much more important that you determine whether or not the applicant will ever learn to use your framework.
Good tech employees always learn on every job. It should be the biggest and most verified part of the job description. If people don't learn on their job, the job is evidently boring and the person should get a more suited one.
Please note that women's quotas aren't covered in this text because I have no idea about the topic. Whether or not they are a good idea, I hope that mankind will follow whichever path yields the better result.
You may have noticed that what is written above is quite controversial. It basically says that women need special treatment, must be nurtured and brought along into the jobs, and that you should keep your expectations lower. This typically raises the suspicion that women indeed aren't up to the job and aren't hired for their skills. And when they are hired, they have to wonder if they're really good enough or if they just got the job because of gender questions.
The answer is: if you manage to find an employee who can learn your ways quickly and understands what needs to be changed and how, you have a very good employee, regardless of gender. But right now we have this gap, and all the effects associated with it pull very forcefully to keep the gap open for as long as possible. In order to bridge this gap, some special treatment is required for some amount of time, until we truly work together in an environment which is free of prejudice and provides equal opportunities to members of any gender or non-gender.
Right now, we're unfortunately too far from that to just ignore the whole problem and wait for it to go away on its own.
The Apple Experiment: Lowering Expectations
After days of struggling with the account creation, I had to realize that I could simply not have an iCloud account. The Windows toolbar, which was my last, best hope for a clean iCloud account, could not create them, only log in to them. So I had to create an Apple ID based on my current Google Apps mail address. After opening Mail, I could then finally create my iCloud address, but the account was tied to my Google Apps mail account.
But at least I could finally start using the iPhone. I connected it to the wireless network (I was in a different place now), which worked just as nicely as you would expect. Except the keyboard is a bit quirky: unlike the Hacker's Keyboard on Android, it doesn't offer number keys or symbols without first switching to a different mode. If you have complicated passwords, this can be very time-consuming.
Come to the iTunes store (if you can)
I then went on to install various apps on the phone. The App Store suggested a number of useful apps from Apple which I should install by all means. In order to do that, though, I first had to sign my Apple ID up with the iTunes store. This procedure involved giving my home address, credit card and phone number, and verifying a mail I received in my Gmail account (rather than the iCloud account, so I had to go back to my laptop to confirm the account creation).
While the iTunes store asked for a backup mail address, it complained when I entered my Google Apps address because even though this address would be the backup for my iCloud account, it was still associated with the Apple ID, so I had to enter a third, distinct mail address to satisfy the iTunes store.
Unfortunately it took me some time to find my home phone number, and by the time I had it ready, the screen lock had kicked in. I unlocked the screen and found that going into the screen lock had closed the iTunes account creation wizard, and I was staring at the regular start page of the App Store. The advertisement for the useful Apple apps was gone for good; I couldn't find it anymore. So I decided to install an app and was asked to type in all the iTunes account details again.
Luckily, I made it on time this round so I didn't have to repeat the process. Now I was finally able to install the free DB Navigator app I required for my trip to Hamburg the next day. Then I tried to install the SBB app and was asked to come up with 3 security questions and their answers. All of the questions were completely useless and could be figured out easily by anyone with enough knowledge of my life. Like, what was the first rock concert you ever went to? Really?
Then I had to find the SBB app again and tell the App Store one more time to install it. Luckily, it didn't have any further questions and just fulfilled my request.
Big Podcast disappointment
I then found, installed and launched the Podcast app. Since I was going to Hamburg the next day, I wanted something to listen to while traveling. The Podcast app contained a slightly sorted list of completely random podcasts, but the collection was big enough that I could find some interesting ones by searching for a bit. I added them to my list, and one of them started playing immediately and rather loudly, which earned me some slightly embarrassing stares.
I then set the alarm clock and went to bed, attaching the phone to the charger for the first time since its recent repairs. The next morning, I was woken up by my esteemed phone at the right time. However, the battery was still at a relatively low level: it had not been charged overnight, again. It seems I will have to send the phone in for repairs another time.
I caught the bus to the station and boarded the train to Hamburg. I made sure to take my Sennheiser headphones with me on the train so that I could listen to some of the podcasts I had selected the previous day. When I looked through the list, all of the episodes were gray. It turned out that the default setting was not to download episodes ahead of time for you to listen to, but to let you do that manually. (At the same time, the default setting was to download them only over wifi, not over 3G when you're out somewhere.)
Unfortunately, I was in a foreign country (Germany), so I couldn't download the episodes from the train. Also, considering how full the 3G network in Zurich is during commuting times, I would expect most people to have difficulties doing that in the first place. I reconfigured the Podcast app and continued my trip without anything to listen to.
So I wonder, will I be able to get along with this phone? I haven't yet found any offline street maps for Germany, like OpenStreetMap (OSMAnd) for Android. Also, will I ever have a phone which works for more than a few days, until the battery is drained and cannot be recharged again?
Read more about this in the next episode…
The Apple Experiment: Day One
About a week ago, I received an iPhone 4S with 32GB of storage. I immediately spent another CHF 40.- to get a rubber bumper and a display protection foil for the phone, in order to avoid it being destroyed in an accident or in transit. A good iPhone user would possess such protectors, so I should have them too. They're ridiculously expensive though: CHF 20.- for the bumper and CHF 20.- for the display protection foil.
And up for repairs
The battery charge was relatively low, about 20%, when I received it, so I plugged the phone in to charge it. While the phone was running on external power, the battery wouldn't charge. As I carried it around, the power continued to drain. Eventually, it reached zero, but the phone still wouldn't charge, even after several hours on the charger. So I brought it in to be repaired.
After a week in repairs I received the phone back just today, configured to the Apple ID of an employee of the shop which fixed it. The phone firmware had been reset and I was promised the phone would charge again. I haven't yet had an opportunity to try though.
Now that I had the phone back, I had to reset it and configure it for my own use. The option for resetting the phone could be found easily in the settings menu. Then, I was greeted with the iPhone setup screen. I slid to unlock, selected language and country, then configured the wireless network. I enabled location services and configured the iPhone as new. Then I was asked for an Apple ID.
I didn't have one, so I chose to create a Free Apple ID. The first question I was presented with was the birth date. The date is selected using three adjustment wheels, which is highly unpleasant if you were born at the other end of the month and year, a long time ago. A combined adjustment wheel and number editor would certainly be a relief here.
Then I had to type in my name — no surprises here. However, the name was split up into first and last name, which doesn't work for all cultures. It's a bit surprising that Apple forces this specific name format although they should have gathered quite some experience in dealing with different cultures by now.
Either way, I do have a first and last name, so I entered them. I was then asked whether I wanted to enter a mail address or create an iCloud account. Going for the full Apple experience, I had to choose the iCloud account. After a quick question for the mail address, I received an error message: «Can't Create Apple ID: Your Apple ID could not be created because of a server error.» Clicking «Ok» on this message led back to the screen which asks whether you want to create an Apple ID or use an existing one.
Left without an option, I went to the Apple web site using Safari on my Mac Mini and was greeted with a very large video of Steve Jobs promoting various products, set to some kind of music. I couldn't find any other controls on the web site, so I patiently watched the video till the end, which wasn't exactly easy, since the buffer kept running out. At some point the video was over and started again.
Hovering the mouse randomly over the browser window, I discovered that there was a cross button in the upper left corner of the screen which would only appear when the mouse hovered over it. Clicking that button loaded the regular Apple web site. However, that site didn't show the slightest sign of Apple ID creation mechanisms.
The apple in the menu bar brought back the video of Steve Jobs, which wasn't particularly helpful. The other menu items (Mac, iTunes, etc.) didn't mention Apple IDs either. In the Apple Store, there was a menu which mentioned «Accounts» and contained a menu item named «Account Home Page». The following page was more centered around orders from the Apple Store (looking at order lists or modifying or canceling orders), but there was a link to a page for changing the mail address of an Apple ID account.
The following page asked me to log into my Apple ID and offered an option to create one. I had finally found it! But the «Create Apple ID» page only allowed to create an ID with an existing, external mail account. No mention of iCloud anywhere.
So I used Google to find the iCloud service, which was apparently located at icloud.com. The web site didn't offer to create an account directly though. It asked you to create the account from the Mac or from an iOS6 device. The alternative was to use some iCloud tool bar on Windows. Since however I didn't have Windows, I couldn't follow that route.
The MacOS way required opening the system settings dialog, which was supposed to contain an entry called iCloud. I couldn't find that entry though, and it turned out that MacOS version 10.7.4 was required. I didn't have that version at hand, so I abandoned that road as well.
I may try finding a Windows installation to attempt the third way, but right now it appears that I cannot pursue the road I wanted to take due to a server error. Does that mean that my experiment is already over?
Read more about this in the next episode…
The Apple Experiment
In order to know better what I'm talking about, I have started an experiment: I got myself an iPhone.
Now that your shock has faded, let me explain. First of all, it is not a new iPhone. I got an iPhone 4S for cheap from someone who had just received it back from repairs as a replacement device, so my phone is either new or refurbished, which is as good as new. I still have about a year's worth of warranty left.
The purpose of this experiment is for me to figure out and document what the life of an Apple user is like. So I'm not going to jailbreak my iPhone or use any other methods to make my life easier or to make the iPhone work more like I want it to. I want the full dose, and I want to try out every detail Apple is throwing at me. And I will try not to resort to my good old Galaxy Nexus for the period of the experiment, even when I want to.
And with that, wish me some luck on my challenge.
Every year on the first of August
Switzerland is celebrating the first of August again. For the 721st time in a row, Switzerland is aging one year. And for the fifth time, people received a letter from the conservative party (UDC).
Two years ago, on November 28, 2010, the people of Switzerland decided to adopt the UDC motion for compulsory deportation of «criminal» foreigners, that is, foreigners who violated the criminal law. Since then, the federal government has been trying to work out a way to implement this motion into law without violating any human rights and without trampling too much on the rights of foreigners.
Lack of any notion of proportion
This is a very hard problem. UDC wants the motion to be implemented as-is into Swiss law. This is however a clear violation of human rights, because it makes it extremely easy for everyone to kick any foreigner they don't like out of the country by alleging their involvement with a petty crime. The current Swiss law already covers the case where a foreigner commits serious violations of the criminal code. The extent to which the violations are serious has to be determined by a judge on a case-by-case basis. However, the motion would change this. Any crime, even a petty crime, would automatically lead to deportation. If this is put into context with the most recent attempts by the conservative forces all over the world to put anything they don't like into criminal law, the implications are exorbitant.
Think about ACTA. It was an attempt, supported by the Swiss institute of Intellectual Property, to put criminal sanctions on copyright violations. This means if you mess up a quote from a book in your publication, you don't only get to pay damages to the original author, but you also get automatically deported out of Switzerland and back into your home country.
There were similar attempts to put patent violations into criminal law. Note how extremely difficult it is nowadays to avoid running into patent violations when you develop any kind of products. If you implemented a web shop, for example, that would definitely get you deported.
Think about the cybercrime convention. If you use a media player to display DVDs you purchased on your laptop, that's a criminal offence (circumvention of copyright protection) and you will get deported.
Think about the hacker tools legislation. If you're a security researcher or a system administrator and you possess exploit code to do your daily job — definite deportation.
UDC still pushing
UDC however announced that, in their opinion, the Swiss government has been too slow in implementing their motion into law. Thus, they've sent out letters to every household in Switzerland (including the criminal foreigners and everyone else) asking for signatures for a new motion to implement the old motion as it was written down.
This is an even more difficult motion than the last one. A lot of time has to be devoted to making sure the new legislation will be in accordance with basic human rights and with international treaties Switzerland has signed in the past. It is also very important that this new legislation doesn't lead to mass deportations or a mass exodus of foreigners who bring a lot of money into the country and add a lot of expertise the small, largely rural nation of 7.6 million people just cannot offer all by itself. New laws take their time, and this one is so very precarious that it most definitely shouldn't be rushed.
But more than that, UDC knows that complex legal matters take more than 1.5 years. This suggests that their main intention behind pushing this is to get exactly the legislation they had written down in the original motion, before the council or the parliaments get a shot at merging it with their own ideas and making it «weaker» so it can actually work without the detrimental effects UDC had in mind when drafting it.
As UDC is pushing right now, there can only be three outcomes from this law: a mass exodus, mass deportations or mass naturalization. This would give UDC a better argument for discriminating against naturalized citizens with their initiative proposal to give them differently colored passports and take away some of their citizen rights.
The destructive desktop — Linux in trouble?
Linux on the desktop has come a long way. The Gnome and KDE communities have built themselves a big, very powerful set of tools to build on. And using these tools, they created an enormous amount of software for a large number of different purposes.
Then they discovered that there is a lack of formality in the RPC mechanisms available under UNIX-like operating systems. System V shared memory provides just shared memory and a little flow control, which is tedious. The sysvmsg API is still very inconvenient when communicating between several different processes, especially arbitrary, unrelated ones. Sockets work much better in that respect and have a well-defined API, but it is still relatively hard to exchange structured data over them.
UNIX offered the SUN RPC API, which was used to implement NFS, among other things. However, it was «just» an RPC implementation and not a real service-based middleware. Especially among universities, a rather complex method of doing RPC had become fashionable: CORBA.
At that time, the only available Open Source CORBA ORB was Mico. However, Mico still lacked some of the desired features and didn't support a lot of programming languages, so the Gnome developers decided to implement their own ORB, called ORBit.
The KDE people faced a very similar issue. However, instead of implementing all of CORBA, they developed a much smaller, more lightweight protocol called DCOP. DCOP was more tailored to be used for communicating between the different applications.
So the Gnome developers wanted to reduce the complexity of their protocol as well and started working on a protocol which was supposed to combine the advantages of DCOP and CORBA. The result was called the Desktop Bus (dbus) protocol. Instead of complete remote objects, it just offers remote interfaces with functions that can be called.
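For illustration, a dbus method call can be made from the shell with dbus-send; this example asks the bus daemon itself (whose service name, object path and interface are the standard ones shipped with dbus) for the list of names registered on the session bus:

```shell
# Call the ListNames method on the bus daemon and print the reply.
# Requires a running session bus (i.e. a desktop session).
dbus-send --session --dest=org.freedesktop.DBus \
  --type=method_call --print-reply \
  /org/freedesktop/DBus org.freedesktop.DBus.ListNames
```

The reply lists every well-known and unique name currently attached to the bus, which is also a quick way to see how many services a desktop session drags in.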
esd and PulseAudio
The sound system underwent a similar development. Initially, the operating system provided an API called the Open Sound System (OSS). It is based on a read-write device in /dev and a number of IOCTLs, and it is the same API found in the BSD and Solaris operating systems.
However, the Linux incarnation of OSS was a particularly simplistic one which only supported one sound channel at a time and only very rudimentary mixing. As a workaround, the community came up with a daemon which accepted sound samples and mixed them in software: the Enlightened Sound Daemon (esound). This daemon even acquired network capabilities, so people could stream whatever they wanted over the network to other computers and play it there without having to resort to systems like the Network Audio System (NAS). The KDE developers went even further and implemented their own sound server, aRts, with its own DCOP-like protocol called MCOP.
Over time, the Linux kernel developers came up with a new API to control the various details of the sound card, have many different volume settings and to be able to mix in hardware. It was called the Advanced Linux Sound Architecture (ALSA).
Then, Gnome and KDE developed APIs to abstract the uses of OSS, esound and ALSA: GStreamer for Gnome and Phonon for KDE. Since GStreamer depended heavily on the Gnome libraries and Phonon on the KDE libraries, the rest of the community had to either adapt or try to keep up with the ever-changing sound backends. Esound was deprecated and replaced with PulseAudio, which triggered yet another shift of APIs.
Over time, more and more subsystems started getting DBus based frontends. hald was added to detect hardware properties; it turned into an official dependency for X.Org and was subsequently replaced with DeviceKit. PackageKit was added as a generic API to instruct the system to find and install packages through a DBus interface. ConsoleKit replaced the regular session and pseudoterminal management environment. PolicyKit imposed additional restrictions on privileged operations and allowed unprivileged users to perform them without explicitly becoming the superuser. sssd is now in the process of replacing PAM as an authentication framework which is also contacted through DBus, without the need to use the system authentication and session management APIs (PAM and NSS, mostly).
NetworkManager added a new abstract way to configure network devices, such as network cards, wireless LAN or 3G. Like everything else, it provides a DBus interface for executing various operations, such as discovering wireless LANs, connecting to a network and finding out whether the computer is currently connected to a network. Various GUI programs such as Firefox, Pidgin, Gajim and similar tools use NetworkManager to clean up their caches and reconnect after the network connection was terminated. They also go into a sort of offline mode if NetworkManager tells them to, in which they don't attempt to connect to the network and try to do whatever you want locally (e.g. queuing messages to be sent, displaying web pages from cache, etc.).
Another addition was systemd, which now replaces System V init and all other types of init derivatives on various distributions. It is an init daemon which reads services from a database, somewhat like Solaris' SMF (administered with svcadm). However, for starting and stopping services and telling the system to shut down or do something else, systemd has a DBus interface. The old /dev/initctl interface is no longer supported.
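As a sketch of that interface: systemd registers the well-known name org.freedesktop.systemd1 on the system bus, so its manager methods can be invoked with plain dbus-send, no systemctl required:

```shell
# Ask systemd's Manager object for its list of units over the system bus.
# Requires a systemd-based system with a system bus running.
dbus-send --system --dest=org.freedesktop.systemd1 \
  --type=method_call --print-reply \
  /org/freedesktop/systemd1 \
  org.freedesktop.systemd1.Manager.ListUnits
```

This is exactly the kind of call systemctl performs under the hood, which is the point of the complaint: the DBus interface is the only interface.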
And most recently, there was a new addition to the pool: Journal is a service which replaces syslog and which exposes a DBus API for logging messages into a binary log (instead of a plain text log like syslog did). The adoption rate among desktop and other programs is great because now finally everybody can use their favorite API to log, grant log permissions, search logs etc. The world is becoming more awesome every single day.
Or is it? Red Hat Enterprise Linux (RHEL) is a Linux distribution tailored for long-term support environments, which include corporate desktops, and for servers. Ubuntu LTS follows the same goal: to provide a modern distribution with long-term support for use on corporate workstations and servers.
The latest versions of both Ubuntu LTS and RHEL ship with NetworkManager for managing their network connectivity. If you don't use NetworkManager, a number of programs will refuse to connect to the Internet or behave in various weird ways. More so, a lot of system services now depend on NetworkManager and won't start unless it is running. And if you run NetworkManager, it starts periodically messing up any local system configuration. So you're basically bound to use NetworkManager.
So you install a server in headless mode (Wait, the installer won't typically let you do that anymore. But let's assume you do it nonetheless because your server doesn't have a graphics card anyway, it's attached to a Cyclades SSH serial port adapter like any other one of your UNIX servers.) Then you try to figure out how to configure NetworkManager from the command line. There's no tool in the entire distribution which lets you do that.
So from some time in the past you remember that you used to use a program called cnetworkmanager to operate NetworkManager from the command line. You install it and — the DBus API changed since the program was written, so the DBus call fails with a not-very-helpful error message.
So the only way to actually use NetworkManager is to use nm-applet, an X11 system tray application. You install your i3, you install your stalonetray and you start nm-applet — hey, it works! Now you can finally connect to the network. And if you wonder how you were supposed to install these packages without network access: by periodically calling ifconfig and ip route add until you finally managed to fetch all the data before NetworkManager would mess it up again.
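That manual bring-up is roughly the following; the addresses are examples (a documentation prefix), and NetworkManager may undo each step at any moment, which is exactly the race described above:

```shell
# Temporary manual network configuration until NetworkManager reverts it.
# 192.0.2.0/24 is a documentation prefix; substitute your real addresses.
ifconfig eth0 192.0.2.10 netmask 255.255.255.0 up
ip route add default via 192.0.2.1
echo 'nameserver 192.0.2.1' > /etc/resolv.conf
```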
So you have a bit of a more complex network configuration and need to add routes or, even better, use OSPF to find routes to some targets which don't go straight via some default gateway. (Why? Perhaps because you wanted to run Linux on your default gateway.) Fire up Quagga and you will see how Quagga struggles to add routes while NetworkManager struggles to remove them again. Only some of your packets make it to their destination. Also keep in mind that you're now running X11 and a network management GUI on your router!
Now to your road warrior laptop. For simplicity's sake, and because you already exchanged SSH keys anyway, you decide to connect to your company via SSH-based VPNs. If you do that from the command line, NetworkManager gets very angry with you and does stupid things to ensure you can't put your default route over the new VPN device, or even use it at all.
So you have to use NetworkManager, which only supports vpnc and OpenVPN. However, Open Source vpnc servers are pretty much nonexistent and OpenVPN requires you to either set up a complete PKI or live with ridiculous preshared key algorithms. tinc supports simple public/private key algorithms, but it is not supported by NetworkManager. So the only way to make VPN work is to migrate to OpenVPN and to maintain your own PKI.
There are many more such effects with the new interfaces but these examples should suffice for now.
The effects of all those changes are numerous. For one, it is no longer possible to run the system without a graphical user interface unless you plan to invest a huge amount of work and to throw out most of your system support. If you want to get vendor support, this is not the way you will want to go.
You also can't implement complex network or authentication setups anymore. The number of possible combinations in the configuration has been significantly reduced by removing options which are not typically used on desktop systems. Also, since the APIs have a tendency to change very frequently, typically only genuine, supported Gnome or Ubuntu/Fedora software tends to keep working in the long run. If you try to use an alternative which has a user interface you prefer or a feature you want, you will find very frequently that it is trying to call some DBus interface which is no longer implemented or has a different set of parameters.
Even worse is if you try to use any window manager that is not KDE or Gnome. Both KDE and Gnome launch a very large number of daemons which are required by a number of applications: pulseaudio, a user dbus session (in addition to the system dbus instance), gnome-settings-daemon, etc. Many programs also require support from applications which exist as tray icons, so you need to find an application to emulate the Gnome tray, and not all of them do it correctly.
Also, many of the advanced features like suspending the laptop when closing the lid or other ACPI events are no longer implemented as shell scripts in /etc, but have moved to DBus APIs implemented by Gnome and KDE. The reason is that it becomes much easier to display things on the screen, but it also means that the /etc scripting API is rotting away and will not work in the long run. So if you want your laptop to suspend when you close the lid, your window manager must implement it.
Even worse, some of the applications don't react very well under window managers which are not KDE and Gnome because they don't implement the original X11 protocol directly and rely on so-called window manager hints.
Debugging DBus based systems
Which brings us to debugging. Even if your API uses DBus, it is not necessarily bug free. So under DBus based systems, you will sometimes see very weird interactions which seem to come out of nowhere, and it is absolutely not clear to you what happened.
You can use dbus-monitor to get an idea of what is going on on the DBus, but if you have some weird interaction you typically have no clue what the name of the DBus call you're looking for may be, so you start dbus-monitor without any filters only to discover that there is a huge amount of traffic, some of which is log messages.
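dbus-monitor does accept match rules, so once you have at least a guess at the interface or sender, the flood can be narrowed considerably; the NetworkManager names here are the standard well-known ones:

```shell
# Watch only traffic on the NetworkManager interface on the system bus...
dbus-monitor --system "interface='org.freedesktop.NetworkManager'"
# ...or only signals emitted by a particular well-known name.
dbus-monitor --system "type='signal',sender='org.freedesktop.NetworkManager'"
```

The trouble, as described above, is that writing such a rule presupposes you already know which name to filter on.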
So you try to read it from the logs but they are binary in some format Journal is writing…
This makes the system appear very opaque to everybody who's trying to take a serious look at it and fix problems. The result is that even people like me start going for solutions like «restart the application» or «delete the configs», because debugging a problem becomes extremely time consuming and the interactions between the different applications are no longer well-defined and obvious, even though exactly that was one of the very basic design principles of UNIX.
Effects on other operating systems
A very common reaction when people hear that the Linux distributors are doing something crazy is to say, «Who cares, I'll just use my NetBSD/FreeBSD, so this won't affect me». This, however, is only partially true.
The problem is that even users of FreeBSD and NetBSD who want to use some of the software implemented for one of the desktop environments will have to find ways to make the DBus services work and react in the correct way. Jared McNeill attempted this with the NetBSD port of DeviceKit, but most operating systems aren't designed to support the kind of APIs involved. As a result, it becomes extremely difficult to support such software, and every operating system that wants to run it is pushed to become more like Linux.
This is the exact opposite of the design principles of standards like POSIX and the Single UNIX specification. These specifications set a common high-level ground for all operating system interfaces, but leave the implementation details up to the systems. In order to honor the thought put into these design principles, the system shouldn't depend on anything other than a C API either so the implementation details are entirely up to the implementor.
More than that, this again affects choice and diversity. One of the biggest arguments the Linux community has brought up for migration to Linux was diversity, but right now, Linux implementors are completely ignoring this plea of their own. Instead, they come up with, well, «proprietary» Open Source software which locks people into using Linux and Gnome/KDE.
And this change in design principles is something which should be reverted very soon. The current tendency towards DBus interfaces is actively harming the more proficient users and the customized setups they have built for themselves. Linux and UNIX have always been about the ease of customization. Gnome and KDE are both based on the idea that this only confuses first-time users and should not be offered, which is fine. However, these desktop environments are now forcing themselves onto their users, limiting the choice of operating system to just one. This is harmful and obliterates most of the advantages UNIX and Linux systems have given us.
So if you believe in the principles behind UNIX and Open Source, please don't write software which requires any of the Gnome/KDE or DBus APIs. Writing X11 programs with xcb and proper RPC APIs like SUN RPC or Thrift should be more than good enough. Please support choice and freedom by implementing programs the right way instead of the Linux/Gnome/DBus way.
Frustration with the Thecus N5200
After a recommendation from a friend, I recently bought myself a Thecus N5200 NAS for home use, to replace the sluggishly slow Netgear ReadyNAS. Along with it, I bought five 2 TB hard disks, which should be enough to give me 6 TB or more of storage for my home directories.
So I installed the hard disks and booted it up. I created a big RAID 6 volume over all disks and then realized that it wasn't helping a lot because, while there was a menu option to enable NFS support in the first place, there was none whatsoever to export my new file system via NFS. Also, showmount confirmed that it wasn't exported.
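For comparison, on a plain Linux NFS server this is a one-line affair; the path and the network below are examples:

```shell
# Hypothetical export of the RAID volume to the local network.
echo '/raid/data 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra            # re-read /etc/exports and apply it
showmount -e localhost  # verify the export is now visible
```

That the Thecus firmware enables the NFS server but offers no way to produce such an entry is the whole problem.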
Follow the manual
As I couldn't find anything in the online help or the user manual about exporting file systems to NFS, I found Thecus N5200 Debian on the Chaoswiki and tried to follow the procedure outlined there. However, it turned out that my NAS was running a much more recent version of the Thecus supplied Linux distribution and couldn't install any of the mentioned packages. Also, Thecus itself doesn't seem to offer any SSH server.
Do It Yourself, maybe?
So since the whole thing is just an i386 which runs Linux I decided to try and go in to fix things up myself. I installed Debian onto an SD card and tried in kvm whether it boots up fine and configures the system. Then I got myself an adapter from PC Engines to mount the SD card into the Thecus NAS and tried to boot it up.
Well, so much for the theory. The system did something, but there was never any output on either of the two serial consoles. Not even the firmware of the box wrote anything anywhere. The system is really hard to interact with. And while I get a serial console in qemu, it didn't work at all on the Thecus.
And while the network card was configured and the firmware installed, nothing moved on that front either. According to Running Debian on Thecus n5200 on wpkg, the only way to tell what the NAS is doing seems to be to solder a VGA adapter onto the mainboard and attach a monitor.
Picking up the pieces
So to summarize: so far I have wasted more than CHF 1'000.- and 10 TB of space. All I got in return is a brick which sits on the ground and can only share files with Windows boxes. Yes, I know, most systems can mount SMB shares, but that's really not an option.
So I really wonder where this is going. What I'd love is a tiny box with space for 5 hard disks which can at least do 1 Gbit/s and can be integrated easily with my LDAP and Kerberos setup. In my world, this shouldn't be too much to ask.
However, instead of this, vendors seem to throw very expensive closed systems at us which attempt to prevent us from customizing them or from really interacting with them in any way the vendor wasn't planning for. I don't see the reason though.
What's the loss for Thecus if I can easily install my own operating system, like I can with my ALIX? They aren't losing any money from this or anything. What's the cost of making everything output to the existing serial port? It's not like this is expensive to implement or anything. And the operating system used in the box supports it just as well.
So far I'm getting the feeling that I just found a new brick I can use as a door stopper. But I guess I'll try to do some more stuff with it before I loot the hard disks. Perhaps I should buy a regular Mini-ITX PC and use that.
The Debian Installation of Doom
For tonight I set myself a rather trivial task: install Debian on a remote server which I can only netboot grml on, and where I have no console access. I figured it wouldn't be too difficult. However, Debian figured that it would be best to throw any possible obstacle my way.
I booted into grml, set up the partitions and file systems (/dev/md0 as /boot, /dev/md1 with lvm and the root file system). Then I mounted them in place and ran debootstrap. However, debootstrap said the configuration phase of the packages failed. So I chrooted into the system and ran dpkg --configure -a.
Then, I figured that Debian prefers to leave the most important programs uninstalled, so I ran apt-get install less bzip2 pax openssh-server sysklogd grub-pc linux-image-2.6.32-5-amd64. However, grub-pc decided it didn't want to install itself successfully. A manual run of grub-install fixed this glitch as well. Then I set up a root password, enabled root logins in the ssh configuration for now, and configured /etc/fstab and /etc/network/interfaces. I added a netconsole to the grub configuration, just in case.
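Condensed, the procedure up to this point looked roughly like this; the mount point, the LVM device name, the release name and the mirror URL are assumptions, while the partitions and the package list are from the text:

```shell
# Sketch of the install from grml. /mnt/target and vg0-root are
# placeholder names; /dev/md0 as /boot is from the text.
mount /dev/mapper/vg0-root /mnt/target
mkdir -p /mnt/target/boot
mount /dev/md0 /mnt/target/boot
debootstrap squeeze /mnt/target http://ftp.debian.org/debian
chroot /mnt/target dpkg --configure -a
chroot /mnt/target apt-get install less bzip2 pax openssh-server \
    sysklogd grub-pc linux-image-2.6.32-5-amd64
```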
Then I figured it was time to test the system, so I rebooted. However, I never saw the system come up. Also, the netconsole didn't log a thing. So I booted back into grml, installed kvm and tried to boot the system, only to find grub saying:
error while parsing number
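This grub error usually points at stale device paths in grub's configuration; a minimal recovery sketch, assuming the chroot is mounted at /mnt/target and the boot disk is /dev/sda (both names are examples):

```shell
# From grml: make the chroot usable, then reinstall grub and
# regenerate its configuration with the corrected device paths.
mount --bind /dev /mnt/target/dev
mount --bind /proc /mnt/target/proc
chroot /mnt/target grub-install /dev/sda
chroot /mnt/target update-grub2
```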
So I fixed the device paths and re-ran update-grub2. Then the system booted but still didn't respond to ping, and had nothing on the netconsole. So I booted grml and saw that there was finally at least a dmesg.0 file. This file contained a number of hints:
netconsole: eth0 doesn't exist, aborting.
e100: eth0: e100_request_firmware: Failed to load firmware "e100/dm101m_ucode.bin"
So I figured that the Debianists apparently no longer ship firmware with the kernel. I found a package called linux-firmware in the non-free repository and installed it. Then I rebooted and received ping replies from the system, but ssh never came up; the connection remained refused. So I booted into grml and found all logs in the chroot to be empty:
grml# ls -l
-rw-r----- 1 root adm 0 Aug 31 23:20 /mnt/vms-planck--root/var/log/messages
-rw-r----- 1 root adm 0 Aug 31 23:20 /mnt/vms-planck--root/var/log/syslog
-rw-r----- 1 root adm 0 Aug 31 23:20 /mnt/vms-planck--root/var/log/daemon.log
So I installed Dropbear and configured it to listen on port 2222, then rebooted. The system pinged, but ports 22 and 2222 remained refused. When running the system in kvm again, I discovered strange messages though, and found the root cause to be a popular debootstrap bug:
grml# cat /sbin/start-stop-daemon
echo "Warning: Fake start-stop-daemon called, doing nothing"
So I moved /sbin/start-stop-daemon.REAL back to /sbin/start-stop-daemon, but instead of typing reboot I accidentally typed poweroff, and now I have to wait for the hoster to flip the power switch of the server again before I can continue, so things will remain interesting.
I guess being bitten by debootstrap, defaults, grub, netconsole, firmware and start-stop-daemon on the same day was a bit too much. Time to watch V for Vendetta and go to bed.
Update: Note to those who didn't realize: no, I didn't watch the film, I just found it fitting.