
Windows 2008 R2 with the WNA3100

I could not find anything on the Web after installing this wireless USB device, so in case someone else is searching: the device does work on Windows 2008 R2, but you must first enable the Wireless LAN feature.

Server Manager –> Features –> Add Features –> Wireless LAN.
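The same feature can also be enabled from the command line.  A hedged PowerShell equivalent (I believe the feature name on 2008 R2 is Wireless-Networking, but confirm with Get-WindowsFeature):

PS> Import-Module ServerManager
PS> Add-WindowsFeature Wireless-Networking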

My First JSON Document

I am writing my first JSON document (welcome to the party), and I looked for tools to help me write it.  I want a tool that validates the document’s syntax and tells me when I am doing the wrong thing.

I didn’t find a whole lot.  I settled on the Perl library JSON::XS, which comes with the command line tool json_xs for pretty printing and validation (kind of like xml_pp from XML::Twig).  I used js2-mode for Emacs to write the document.
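For example (assuming JSON::XS is installed from CPAN), json_xs reads JSON on stdin and writes a pretty-printed copy to stdout, dying with a parse error when the document is invalid:

$ json_xs < document.json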

My biggest disappointment is that JSON forbids trailing commas.  Parsers may tolerate them as an extension, but you cannot rely on it.  This is wrong.  It requires more code to not emit the comma than it does to just emit the comma all the time, and allowing a trailing comma exists precisely to assist with the machine generation of code.  For shame JSON, for shame!

This syntax is not universally supported.

{
    "abc": 123,
    "def": 456,
    "ghi": 789,
}

Strip the trailing comma for better parser support.  In other words, this really is the standard format.

{
    "abc": 123,
    "def": 456,
    "ghi": 789
}

My First C++: CMake, GoogleTest, and GoogleMock

When starting a new project I always consider how to build the code, and how to test it.  I can puzzle through Make after a fashion, but I am not a fan.  If the project is only ever going to exist on Windows, I stop at Visual Studio and MSBuild.  If the project is ever going to run on both Windows and Linux, then I start with a build system that provides portability.

CMake can generate different project files based on the OS and the user’s preference.  For example, on Windows you can generate Visual Studio project files (2008, 2010, etc.), or NMake makefiles.  Google started a project called GYP as an alternative to CMake specifically for Chromium.  (See GYP’s wiki for a GYP vs. CMake opinion.)

With testing, I am very familiar with the frameworks available on .NET and Java, but not with those for C++.  When I recently started a new C++ project to solve those ubiquitous block letter games on smartphones, I wanted to know how I was going to test it.  After consulting the oracle I decided upon GoogleTest and GoogleMock.

It was a wee pain to figure out how to get everything to work together because I was (and still am) a CMake neophyte. 

Step 1: Create a program that needs to be tested.  I created a simple example (minus most of the implementation) that extracts links from an HTML document.

Step 2: Files

http_fetch.h defines a class that issues an HTTP GET of the specified URI and returns the contents of said URI as a string.

#ifndef __HTTP_FETCH_H__
#define __HTTP_FETCH_H__
#include <string>
class HttpFetch {
public:
    virtual ~HttpFetch() {}
    virtual std::string GetUriAsString(const std::string& uri) const {
        // TODO(cboumenot): implement
        return std::string();
    }
};
#endif // __HTTP_FETCH_H__

html_parser.h defines a class that parses the specified URI, and extracts all of the links found in the fetched HTML document.  It uses HttpFetch to get the data.

#ifndef __HTML_PARSER_H__
#define __HTML_PARSER_H__
#include <string>
#include <vector>
#include "http_fetch.h"
class HtmlParser {
public:
    HtmlParser(const HttpFetch &http) :
        _http(http) {}
    std::vector<std::string> GetAllLinks(const std::string& uri) const {
        // TODO(cboumenot): implement
        return std::vector<std::string>();
    }
private:
    // Hold a reference: storing an HttpFetch by value would slice
    // off derived classes, such as the mock used in the tests.
    const HttpFetch& _http;
};
#endif // __HTML_PARSER_H__

I wrote a test for the HtmlParser using GoogleMock and GoogleTest.  I use mocking to inject data into HtmlParser and assert on its responses.  Just like the .NET/Java mocking goodness.

#include <string>
#include <vector>
#include <gmock/gmock.h>
#include <gtest/gtest.h>
#include "http_fetch.h"
#include "html_parser.h"
using ::testing::Return;
class HttpFetchMock : public HttpFetch {
public:
    MOCK_CONST_METHOD1(GetUriAsString, std::string(const std::string& uri));
};
TEST(HtmlParser, OneLink) {
    const char *html = "<html>"
    "<head></head>"
    "<body>"
    "<a href='/index.html'>index.html</a>"
    "</body>"
    "</html>";
    HttpFetchMock mock;
    HtmlParser parser(mock);
    EXPECT_CALL(mock, GetUriAsString("http://example.net"))
        .WillOnce(Return(std::string(html)));
    std::vector<std::string> links = parser.GetAllLinks("http://example.net");
    EXPECT_EQ(1u, links.size());
    EXPECT_STREQ("http://example.net/index.html", links[0].c_str());
}
TEST(HtmlParser, NoData) {
    const char *html = "";
    HttpFetchMock mock;
    HtmlParser parser(mock);
    EXPECT_CALL(mock, GetUriAsString("http://example.net"))
        .WillOnce(Return(std::string(html)));
    std::vector<std::string> links = parser.GetAllLinks("http://example.net");
    EXPECT_EQ(0u, links.size());
}

A mock is defined by deriving from HttpFetch.  All mocked methods are made public (yes, C++ allows this).  Methods are defined using GoogleMock macros.  The GoogleMock documentation is great, and has loads of information.  Everything I remember doing with NMock/EasyMock I have accomplished with GoogleMock.  I would even venture that one can do more with GoogleMock.

A test is defined with the TEST macro, and tests are written as one would expect.  Expectations are set on the mock before any methods on the mock are called.  Assertions are made using the EXPECT_* macros.

Building this code starts with creating a CMake build file called CMakeLists.txt.

## CMakeLists.txt
cmake_minimum_required(VERSION 2.6.2)
project(foo)
option(foo_build_tests "Build all of foo's unit tests." OFF)
include_directories("/usr/local/include")
link_directories("/usr/local/lib")
set(foo_SOURCES
    main.cc
    html_parser.h
    http_fetch.h)
add_executable(foo ${foo_SOURCES})
if (foo_build_tests)
    enable_testing()
    add_executable(html_parser_test
        html_parser_test.cc
        html_parser.h
        http_fetch.h)
    target_link_libraries(html_parser_test
        pthread)
    target_link_libraries(html_parser_test
        gmock
        gmock_main)
    target_link_libraries(html_parser_test
        gtest
        gtest_main)
    add_test(html-tests html_parser_test)
endif()

This recipe defines a build option called foo_build_tests.  By default CMake will not build the tests, because this option defaults to OFF.  That is easily changed by flipping the switch when CMake is invoked.

$ cmake -Dfoo_build_tests=ON .

By default CMake generates makefiles, so the only thing necessary to build the code is to run make.  Among the many nice things CMake does, it also adds a dependency on the CMakeLists.txt file itself.  If this file is out of date with respect to the Makefile, the Makefile will be regenerated automatically.  There is no need to keep invoking cmake after every change to CMakeLists.txt.

$ make

The recipe sets the project name using the project() function.  It also specifies a minimum CMake version; if one is not specified CMake will complain, and 2.6.2 is what GoogleTest itself currently uses.

The recipe hard-codes the include and link directories (/usr/local/{include,lib}) because this is where I installed GoogleMock and GoogleTest.  (A better approach would have been to use the find_package() function.)
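A sketch of that approach (CMake’s bundled FindGTest module only appeared around CMake 2.8, and there is no stock module for GoogleMock, so treat this as illustrative):

find_package(GTest REQUIRED)
include_directories(${GTEST_INCLUDE_DIRS})
target_link_libraries(html_parser_test ${GTEST_BOTH_LIBRARIES})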

The main project executable is defined by the add_executable() function and the list of sources that make it up (.h and .cc).

The test, html_parser_test, is defined in a more complicated manner.  The test is only one .cc file (one that does not define a main()), plus the appropriate header files.  But this test requires the GoogleMock and GoogleTest libraries, as well as their dependencies, of which there is only pthread.  The dependencies of an executable can be built up incrementally, and do not have to be defined all at once.  Use the target_link_libraries() function to make the individual dependencies more explicit (or not, your call).  A better approach to defining the test would be to write a CMake function or macro that adds the dependencies automatically; see the GoogleTest source code for such an example, or the sketch below.
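A rough sketch of such a helper, modeled on (but not copied from) GoogleTest’s cxx_test macro:

# cxx_test(name extra_sources...) builds name.cc into a test
# executable, links the frameworks, and registers it with CTest.
function(cxx_test name)
    add_executable(${name} ${name}.cc ${ARGN})
    target_link_libraries(${name} gmock gmock_main gtest gtest_main pthread)
    add_test(${name} ${name})
endfunction()

cxx_test(html_parser_test html_parser.h http_fetch.h)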

Finally, the functions enable_testing() and add_test() tell CMake to generate a target called test that can be invoked easily from the command line.

$ make test

Voila!  A sample prototype for new C++ projects.

wmic vs. dmidecode

I use a machine’s SMBIOS UUID to track it, and I get this information by using either wmic or dmidecode, depending on the OS currently running.  The problem is that these tools never agree on what the UUID is.  One always has to transform (swizzle) the output of one of the programs to make them agree.

For example, my VM reports the following UUID depending upon the tool that read it.

cmd> wmic PATH Win32_ComputerSystemProduct GET UUID
C6F26AA8-D3AA-544E-83B6-E312AD25E745
$ dmidecode --string system-uuid
A86AF2C6-AAD3-4E54-8EB6-E312AD25E745

Notice how the first eight bytes (the first three fields) are transformed between the two.  Which one is the incorrect UUID, or rather, which output do you always transform before using?  (Note: the definition of incorrect may not be correct.)  I never bothered to figure out why the difference existed.

I was reading the dmidecode source code, and came across the following comment which solved the mystery.

/*
 * As of version 2.6 of the SMBIOS specification, the first 3
 * fields of the UUID are supposed to be encoded on little-endian.
 * The specification says that this is the defacto standard,
 * however I've seen systems following RFC 4122 instead and use
 * network byte order, so I am reluctant to apply the byte-swapping
 * for older versions.
 */
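So the fix is to byte-swap the first three fields.  A minimal sketch in C, assuming a raw 16-byte UUID buffer (the swap is its own inverse, so the same function converts in either direction):

#include <stdint.h>
#include <string.h>

static void smbios_uuid_swizzle(uint8_t uuid[16])
{
    uint8_t tmp[8];
    memcpy(tmp, uuid, 8);
    /* time_low: 4 bytes, reversed */
    uuid[0] = tmp[3]; uuid[1] = tmp[2]; uuid[2] = tmp[1]; uuid[3] = tmp[0];
    /* time_mid: 2 bytes, reversed */
    uuid[4] = tmp[5]; uuid[5] = tmp[4];
    /* time_hi_and_version: 2 bytes, reversed */
    uuid[6] = tmp[7]; uuid[7] = tmp[6];
}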

I can now sleep better at night.

How to Create a WinPE ISO

Download the Windows Automated Installation Kit, or WAIK (pronounced “wake”), from Microsoft.  WAIKs are versioned by the OS they are based on.  For example, there is a WAIK for Vista/Server 2008, and another for Windows 7/Server 2008 R2.  (The WAIK for Windows 7/Server 2008 R2 SP1 was recently released.)

After installing the WAIK, execute the following instructions to build a WinPE x86 (32-bit) ISO image.  (You need to run as an administrator to execute some of these commands.)  WinPE is a stripped down Windows 7 OS and only supports the architecture it is built against.  For example, the amd64 (64-bit) version does not support running 32-bit binaries even though the full OS does.  If the application you want to run is strictly 64-bit use amd64; likewise, if it is a 32-bit application use x86.  (ia64 support is available too.)

cd "\Program Files\Windows AIK\Tools\PETools"
pesetenv.cmd
copype.cmd x86 c:\windowspe-x86
cd "\Program Files\Windows AIK\Tools\Servicing"
dism /Mount-Wim /WimFile:c:\windowspe-x86\winpe.wim /Index:1 /MountDir:c:\windowspe-x86\mount
copy ..\x86\imagex.exe c:\windowspe-x86\mount\Windows\System32\
dism /Add-Package /PackagePath:..\PETools\x86\WinPE_FPs\winpe-scripting.cab /image:c:\windowspe-x86\mount
dism /Add-Package /PackagePath:..\PETools\x86\WinPE_FPs\winpe-wmi.cab /image:c:\windowspe-x86\mount
dism /Unmount-Wim /Commit /MountDir:c:\windowspe-x86\mount
copy c:\windowspe-x86\winpe.wim c:\windowspe-x86\ISO\sources\boot.wim
..\x86\oscdimg.exe -n -bc:\windowspe-x86\etfsboot.com c:\windowspe-x86\ISO c:\windowspe-x86\windowspe-x86.iso

This is a basic WinPE with two packages added via the dism command.  The winpe-scripting.cab package installs support for running WSH scripts – think VBScript and JScript.  The winpe-wmi.cab package supports the execution of WMI commands.  WMI is useful for interrogating the system’s resources from the command line.  Documentation for other WinPE packages is available on MSDN.
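For example, with winpe-wmi.cab installed, a quick interrogation from the WinPE prompt looks like this:

cmd> wmic computersystem get manufacturer,model
cmd> wmic PATH Win32_ComputerSystemProduct GET UUID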

The sixth command copies the imagex.exe executable into the WinPE image because it is not installed by default.  ImageX is used to capture disk/OS images from an existing machine, and to apply them.  (I am not sure why the default image does not include ImageX, as it is the OS deployment tool.)

Device drivers can be installed into a WinPE image using the dism command.  For example, if the drivers have been unzipped into c:\drivers, the following command installs every driver it finds into the WinPE mount point.  (The WinPE image must still be mounted – the fifth command, dism /Mount-Wim – for this to work.)  The WAIK for Windows 7 includes the drivers for Hyper-V, such as the synthetic NIC.

dism /Add-Driver /Driver:c:\drivers /Recurse /ForceUnsigned /image:c:\windowspe-x86\mount

Installing other programs into WinPE is as simple as copying the binaries into the mounted WinPE image.  The program must be compiled for the target architecture (32 vs. 64).  The deployment process can be scripted using VBScript or JScript, but I think Ruby is more expressive.  Download the ruby-1.9.1p378 binary for Windows, and extract the contents into the mount point.  Rebuild the CD using oscdimg, and voila!


If you want to start a program when WinPE boots, simply start it from the startnet.cmd file located in \Windows\System32.
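A hedged example of such a startnet.cmd (wpeinit is the stock first line, which initializes networking; the Ruby paths are hypothetical and depend on where you extracted the archive):

REM X: is the WinPE RAM disk
wpeinit
X:\ruby-1.9.1p378\bin\ruby.exe X:\deploy\deploy.rb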

I would also recommend adding the UnxUtils because they are small and you get useful command line utilities (find and grep) in a limited WinPE environment.

It is interesting that WAIK includes an ISO creation tool – oscdimg.exe.  Unfortunately, the license is rather strict.

C:\Program Files\Windows AIK\Tools\Servicing>..\x86\oscdimg.exe ISO -help

OSCDIMG 2.55 CD-ROM and DVD-ROM Premastering Utility

Copyright (C) Microsoft, 1993-2007. All rights reserved.

Licensed only for producing Microsoft authorized content.

Syslinux

This is a post of me meandering through the Syslinux source code; there isn’t a real point.

Syslinux is most closely associated with Linux, but it actually supports booting other OSes.  It can boot over the network (PXE), from CD-ROM, and from various file systems.

I am working on a project that utilizes Syslinux; the project must read and write Syslinux boot configuration files.  I need to understand the available options, and what they mean.  The best place to start is with the Syslinux documentation, and the relevant code.  (The real impetus for this project is to learn Scala.)

I was reading the parsing source code in com32/menu/readconfig.c, but I could not find the code that configures the serial console.  The documentation has this to say about serial port configuration.

For the SERIAL directive to be guaranteed to work properly, it
should be the first directive in the configuration file.

Despite this statement, I could not find any code to parse the SERIAL directive, so I started to ack.  I found the function __syslinux_get_serial_console_info(void), which seemed like a good start.  The prototype had an attribute that did not make sense to me – __constructor.

void __constructor __syslinux_get_serial_console_info(void)
{
    static com32sys_t reg;
    memset(&reg, 0, sizeof reg);
    reg.eax.w[0] = 0x000b;
    __intcall(0x22, &reg, &reg);
    __syslinux_serial_console_info.iobase = reg.edx.w[0];
    __syslinux_serial_console_info.divisor = reg.ecx.w[0];
    __syslinux_serial_console_info.flowctl = reg.ebx.w[0];
}

I asked the oracle, and found out that this is a GCC-ism that guarantees the function is invoked before main().  But wait, there is even more coolness going on.  What about com32sys_t and __intcall!
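As an aside, the constructor attribute is easy to demonstrate in isolation.  A standalone toy for GCC (not Syslinux code; Syslinux presumably wraps the attribute in its __constructor macro):

#include <stdio.h>

__attribute__((constructor))
static void before_main(void)
{
    /* GCC arranges for this to run before main() is entered */
    printf("constructor\n");
}

int main(void)
{
    printf("main\n");
    return 0;
}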

I went back to the oracle, and asked about interrupt 0x22.  I found out that it is typically reserved for DOS methods, so I assumed that Syslinux was hooking the interrupt to provide system calls.

After some more acking I found that the __intcall function’s parameters are the following.

  1. the interrupt to invoke
  2. the register values to pass to the invoked system call; the call executed is determined by the value in EAX
  3. the register results of the invoked system call

The actual system calls are set up in comboot.inc.  The assembly code calls into C code to do the heavy lifting.  COMBOOT is an API for extending Syslinux, and there are several projects that take advantage of it.

Back to the code…

Scala, SBT, and IntelliJ

I started a personal project to learn Scala, and I had zero idea where to start – how do I build Hello World and then continue to scale it?  I know how to do that with make/gcc, ant/Eclipse, and Visual Studio.  From reading various Scala projects on github and code.google.com it was clear that people were using some combination of sbt, maven, and IntelliJ.

I know nothing about Maven, but at first blush it is intimidating – I am sure that is my lack of knowledge.  I punted on Maven.  sbt appears (stress on appears) simpler, and it is written in Scala; developers configure it by writing Scala.

The IntelliJ pick is more of an experiment to see how it compares to Eclipse.  I know not everyone thinks you should change editors.

There may be better ways to combine sbt and IntelliJ, but this one is fairly easy.  I assume sbt is already installed.

Step 1 – SBT

Create a directory to hold your project, and then create an empty project by invoking sbt.

$ mkdir test
$ cd test
$ sbt
Project does not exist, create new project? (y/N/s) y
Name: example
Organization: org.example
Version [1.0]:
Scala version [2.8.1]:
sbt version [0.7.5]:
...
> exit

Note: SBT uses Scala version 2.7.7, but builds your project with Scala 2.8.1.

Step 2 – Build

Create a .scala file to control the building of your project, and to specify your project’s dependencies.

$ mkdir project\build
$ notepad project\build\project.scala

This file includes example dependencies for Squeryl.

import sbt._
class Project(info: ProjectInfo) extends DefaultProject(info)
      with IdeaProject {
  val mysql = "mysql" % "mysql-connector-java" % "5.1.15"
  val squeryl = "org.squeryl" % "squeryl_2.8.1" % "0.9.4-RC6"
  override def libraryDependencies = Set(
    mysql,
    squeryl
  ) ++ super.libraryDependencies
}

Step 3 – Plugin

Add the IntelliJ plugin to SBT.

$ mkdir project\plugins
$ notepad project\plugins\plugins.scala

Create the plugins.scala file, and add the appropriate reference.

import sbt._
class Plugins(info: ProjectInfo) extends PluginDefinition(info) {
    val sbtIdeaRepo  = "sbt-idea-repo" at "http://mpeltonen.github.com/maven/"
    val sbtIdea = "com.github.mpeltonen" % "sbt-idea-plugin" % "0.4.0"
}

Step 4 – Hello World

Create a Hello World application to build, and put it in src/main/scala/.

object Program {
    def main(args: Array[String]): Unit = {
        println("Hello World")
    }
}

Launch sbt to fetch the dependencies, execute the idea command to create an IntelliJ project, and then run the program.

$ sbt update
$ sbt idea
$ sbt run

Once the files have been created, simply open the project using IntelliJ.  sbt can also be executed from within IntelliJ using the idea-sbt-plugin.

TFS vs. SVN

We switched from SVN to TFS at work a little over a year ago, and for the most part it was for the better.  There are some things that annoy me, but that would also be true comparing SVN to hg or git.

PROS

  • TFS is very fast compared to SVN.  There is no need to wait for svn status, because TFS is explicitly told which files are checked out for edit.
  • TFS allows the checked out edits to be temporarily put aside, and then brought back.  This is similar to git stash – TFS calls them shelvesets.
  • TFS allows an individual developer to easily create private builds.  Point TFS at a shelveset, and wait for the build output to show up.
  • The API is very easy to use.  Creating custom tooling around TFS is dead-simple (see the sketch after this list).
  • All-in-one solution (good and bad) for source control, bugs, and build.
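As a taste of the API, here is a hedged sketch against the TFS 2010 client assemblies (the server URL is made up):

// References: Microsoft.TeamFoundation.Client and
// Microsoft.TeamFoundation.VersionControl.Client
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

class LatestChangeset
{
    static void Main()
    {
        // Connect to a project collection and ask version control
        // for the newest changeset number.
        var tpc = new TfsTeamProjectCollection(
            new Uri("http://tfs:8080/tfs/DefaultCollection"));
        var vcs = tpc.GetService<VersionControlServer>();
        Console.WriteLine(vcs.GetLatestChangesetId());
    }
}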

CONS

  • The world is moving towards offline distributed development, and TFS is decidedly not.  Checking a file out for edit requires connectivity with the TFS server.
  • TFS requires a lot of horsepower to run.  Our configuration involves three machines: SQL Server, SharePoint, and the App Server.
  • The omnibox has won!  I’ll tell you what I want, and you figure out how to parse and retrieve the data.  Do not force me to create a query to find a bug; it’s ridiculous.
  • [minor] There are some annoying parts of the UI, such as not being able to sort a table by a specific column. 

I am not really sure how to phrase when TFS is better than SVN.  It sounds trite to say “for the Enterprise”, because I do not believe that to be true.  How about this: if you are on the MS platform and you want an all-in-one solution, TFS is a good answer.  VS integration, source control, and the build server all work in concert with one another very well.  The bug management support is complete, albeit baroque.

Multiple WLANs with DD-WRT

I made a failed attempt to replace DD-WRT with OpenWRT, but unfortunately the wireless NICs are not supported.  When I returned to DD-WRT I decided to update to the latest recommended build, v24 preSP2 Build 14896 (04/23/10), and to add support for multiple wireless LANs.  One LAN is dedicated to the devices I do not control or trust (TiVo, Wii, etc.), and the other is dedicated to my machines.

I followed the instructions on the DD-WRT wiki, and it worked great!  Per the instructions I used the Command Method to configure dnsmasq.

WCF REST Client for SmugMug – 3 of n

The last post covered my (in my view) failed attempts to use an XML schema to generate the XML parser.  Instead of continuing down the path of an auto-generated parser, I chose a slightly different approach.  I pushed all communication activities into WCF, so that the input was .NET methods and the output was .NET types.  The following graphic illustrates how WCF was used.

[Figure: wcf-post-01-a – diagram of how WCF is used]

The magic of the green WCF strip is done with the IClientMessageFormatter interface.  Obra implements this interface, and specifically overrides the DeserializeReply method to pre-process SmugMug’s XML responses.  DeserializeReply is used to strip the outer <rsp/> element and the <method/> element from the XML response.

This XML data…

<rsp stat="ok">
  <method>smugmug.images.get</method>
  <Images>
    <Image FileName="image-1.jpg" Format="JPG" id="1001">
      <Album Key="FxWoe" id="1000"/>
    </Image>
  </Images>
</rsp>

…is converted to this XML data.

<Images>
  <Image FileName="image-1.jpg" Format="JPG" id="1001">
    <Album Key="FxWoe" id="1000"/>
  </Image>
</Images>

The pre-processed XML data contains only the relevant type metadata.  All of the extraneous information has been removed.  Writing a .NET class to represent these data is simple.  A class representing the <Images/> element is shown below.  This class is automatically picked up by the WCF XML deserializer due to the XmlRoot attribute, and because it is located in the same .NET assembly as the WCF client.

[XmlRoot("Images")]
public class ImagesContract
{
    private readonly List<ImageContract> image =
        new List<ImageContract>();
    [XmlElement("Image")]
    public List<ImageContract> Images
    {
        get { return image; }
    }
}

Transforming the XML data to an instance of a class is done by the default implementation of IClientMessageFormatter.  Obra implements the IClientMessageFormatter interface, but it does so only to pre-process the response.  Once the response is pre-processed, Obra passes the response to the default implementation of DeserializeReply.

Obra’s implementation of DeserializeReply also checks for SmugMug error responses.  An example response is shown below.  If an error is encountered, an exception is thrown with the information contained in the response.

<rsp stat="fail">
  <err code="17" msg="invalid method"/>
</rsp>
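Putting the two behaviors together, a minimal sketch of such a wrapping formatter might look like the following.  (This is illustrative, not Obra’s actual code; the class name and the System.Xml.Linq-based body transform are my own.)

using System;
using System.Linq;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;
using System.Xml.Linq;

public class RspStrippingFormatter : IClientMessageFormatter
{
    private readonly IClientMessageFormatter inner;

    public RspStrippingFormatter(IClientMessageFormatter inner)
    {
        this.inner = inner;
    }

    public Message SerializeRequest(MessageVersion messageVersion,
                                    object[] parameters)
    {
        // Requests pass through untouched.
        return inner.SerializeRequest(messageVersion, parameters);
    }

    public object DeserializeReply(Message message, object[] parameters)
    {
        XElement rsp = XElement.Load(message.GetReaderAtBodyContents());

        // Fail fast on <rsp stat="fail"><err code=".." msg=".."/></rsp>.
        if ((string)rsp.Attribute("stat") == "fail")
        {
            XElement err = rsp.Element("err");
            throw new InvalidOperationException(
                err == null ? "unknown error" : (string)err.Attribute("msg"));
        }

        // Keep only the payload element (e.g. <Images/>), dropping
        // <rsp/> and <method/>, then defer to the default formatter.
        XElement payload = rsp.Elements()
            .First(e => e.Name.LocalName != "method");
        Message stripped = Message.CreateMessage(
            message.Version, null, payload.CreateReader());
        return inner.DeserializeReply(stripped, parameters);
    }
}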

I have shown a better (or at least alternative) solution for consuming a REST (POX) web service using WCF.  If you are parsing XML in your client, you can do much better.  WCF exposes hooks that obviate the need for it, and WCF makes it trivial to communicate with a web service in a more type-safe manner.  Error conditions can be caught and dealt with earlier, which should make client code more straightforward.  WCF makes it easy to push communication code away from your client.

In the next post I will cover how Obra uses WCF to send data to SmugMug.
