Interrupting Sphinx-4 speech recognition in continuous recognition mode

by timvasil 5/14/2011 8:20:00 PM

Sphinx-4 is a speech recognizer developed at Carnegie Mellon University.  Out of the box, it offers two modes of operation: batch ("frontend") and continuous ("epFrontEnd").  In continuous mode, it decodes live audio from, say, a microphone.

Unfortunately for me, epFrontEnd turns the Recognizer.recognize() method into a blocking call, and Sphinx-4's API provides no way to interrupt it. I find this problematic in various scenarios, such as automated tests.  In such a test, I want to determine whether the recognizer recognizes a command correctly, recognizes it incorrectly, or misses it entirely.  The "miss" case is the tricky one, because recognize() just hangs indefinitely, waiting for more audio input.

I found a way to work around this problem.  It involves inserting a custom data processor into Sphinx-4's data processing stack.

Here's how to do it in three steps:

Step 1:  Implement a custom data processor 

// A pass-through data processor that can inject an "interrupt" into the
// frontend pipeline on demand, unblocking Recognizer.recognize().
public class InsertableDataBlocker extends BaseDataProcessor
{
    List<Data> insertionDatas = new LinkedList<Data>();

    @Override
    public Data getData() throws DataProcessingException
    {
        // If an interrupt has been queued, consume it and abort this pull
        // instead of delegating to the rest of the pipeline.
        if (!insertionDatas.isEmpty())
        {
            insertionDatas.remove(0);
            throw new InterruptException();
        }
        return getPredecessor().getData();
    }

    public void injectInterrupt()
    {
        // Queue a sentinel; the next getData() call will pick it up.
        insertionDatas.add(new DataEndSignal(0));
    }
}

Step 2:  Add this data processor to the processing stack

In the Sphinx-4 XML configuration file, register the new class as a component (e.g. <component name="insertableDataBlocker" type="your.package.InsertableDataBlocker"/>, substituting your own package name) and place it right after the microphone processor in the pipeline:

    <component name="epFrontEnd" type="edu.cmu.sphinx.frontend.FrontEnd">
        <propertylist name="pipeline">
            <item>microphone </item>
            <item>insertableDataBlocker </item> 
            <item>dataBlocker </item>
            <item>speechClassifier </item>
            <item>speechMarker </item>
            <item>nonSpeechDataFilter </item>
            <item>preemphasizer </item>
            <item>windower </item>
            <item>fft </item>
            <item>melFilterBank </item>
            <item>dct </item>
            <item>liveCMN </item>
            <item>featureExtraction </item>
        </propertylist>
    </component> 
 

Step 3:  Interrupt the recognize() method when desired

ConfigurationManager cm = new ConfigurationManager(getClass().getResource("config.xml"));
InsertableDataBlocker inserter = (InsertableDataBlocker)cm.lookup("insertableDataBlocker");
inserter.injectInterrupt(); 
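
To see where that call fits, here's a rough sketch of how an automated test might bound recognize() with a timeout, using java.util.Timer plus Sphinx-4's edu.cmu.sphinx.recognizer.Recognizer and edu.cmu.sphinx.result.Result classes.  It continues from the snippet above (reusing cm and inserter); the component name "recognizer" and the 10-second timeout are assumptions on my part, and since the way the injected InterruptException propagates may vary, the sketch treats both an exception and a null result as a "miss":

// Assumes config.xml defines the recognizer under the name "recognizer".
Recognizer recognizer = (Recognizer) cm.lookup("recognizer");
recognizer.allocate();

// Final copy so the anonymous TimerTask below can reference it.
final InsertableDataBlocker blocker = inserter;

// Inject the interrupt from a background thread if decoding takes too long.
Timer timer = new Timer(true);
timer.schedule(new TimerTask() {
    @Override
    public void run()
    {
        blocker.injectInterrupt();
    }
}, 10000);

Result result = null;
try
{
    // Normally blocks indefinitely when a command is missed; the injected
    // interrupt forces it to give up.
    result = recognizer.recognize();
}
catch (Exception e)
{
    // The interrupt may surface here as an exception from the frontend.
}
timer.cancel();

// A null result (or an exception) means the command was missed.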

Tags:

Java | Speech

Working with the Milonic menu API bolt-on

by timvasil 4/22/2008 2:34:00 AM

If you happen to be using the Milonic menu system, you may have noticed the availability of a menu API "bolt-on" module.  It's woefully underdocumented, and it reveals just how ugly the architecture of this menu system really is.  For example, global variables abound, and there's trickiness around when to pass in a menu's ref (i.e. its index into the _m global array of menus) versus a menu's internal name.  This makes me very appreciative of the cleaner menu APIs I've seen in successor JavaScript frameworks, such as Ext JS.  I'm using the Milonic system with some older code; I wouldn't recommend it for new projects.  I'm not sure I'd recommend Ext JS either, though; the Ext folks just did a bait-and-switch, moving from an LGPL licensing model to a dual GPL/commercial one.  That's damn nasty, but I digress...

First off, understand that if you happen to create any menus without menu items, the menu API will blow up in random ways, e.g. with "_m[..] is undefined"-type error messages.  I had modified the API code to work around the problem until I figured out what was going on.  So be aware of that gotcha.

So, for those functions where you need a menu "ref," there's no API function to get it given a menu's internal name.  Nice.  Here you go...

// Returns the ref (index into the global _m menu array) for the menu with
// the given internal name, or -1 if no such menu exists.
function mm_getMenuRef(sMenuName)
{
    sMenuName = sMenuName.toLowerCase();
    for (var i = 0; i < _m.length; i++)
    {
        // _m[i][1] is the menu's internal (lowercase) name
        if (_m[i] && _m[i][1] == sMenuName)
        {
            return i;
        }
    }
    return -1;
}

Notice that -1 is returned if no menu with that name exists.

Now here's a function to delete a menu and all its descendants (not just children):

// Deletes the named menu along with all of its descendant menus.
function mm_deleteMenuRecursively(sMenuName)
{
    var iRef = mm_getMenuRef(sMenuName);
    if (iRef >= 0)
    {
        // Delete descendants first, walking the child list from the end
        var asNames = mm_getChildMenus(iRef).menus;
        for (var i = asNames.length - 1; i > 0; i--)
        {
            mm_deleteMenuRecursively(asNames[i]);
        }
        // Finally delete the menu itself (its ref is looked up again, since
        // the recursive deletions may have changed it)
        mm_deleteMenu(mm_getMenuRef(sMenuName));
    }
}

Happy menuing!

Tags:

Java

Exporting FusionChart images

by timvasil 2/5/2008 11:35:00 AM

FusionCharts is a Flash-based animated charting package.  One of its recent features is the ability to export chart images so end users can save them to disk.  Unfortunately, the sample code provided gives only PHP and C# examples of how to do this, which isn't so handy if you're using Java.  So, in the public interest, here is a Java version of that code:

@Override
public void doPost(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException
{
    // Extract parameters
    int width = Integer.parseInt(req.getParameter("width")) - 1;
    int height = Integer.parseInt(req.getParameter("height"));
    String bgColorStr = req.getParameter("bgcolor");
    int bgColor = (bgColorStr == null || bgColorStr.length() == 0) ? 0xffffff : Integer.parseInt(bgColorStr, 16);
    String data = req.getParameter("data");
   
    // Build the bitmap image
    BufferedImage image = buildBitmap(width, height, bgColor, data);
   
    // Compress the image as a JPEG
    ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
    ImageWriteParam writerParam = writer.getDefaultWriteParam();
    writerParam.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
    writerParam.setCompressionQuality(0.95f);

    // Stream the image to the user agent
    resp.addHeader("Content-Disposition", "attachment; filename=\"FusionCharts.jpg\"");
    resp.setContentType("image/jpeg");
    ImageOutputStream imageOut = ImageIO.createImageOutputStream(resp.getOutputStream());
    writer.setOutput(imageOut);
    writer.write(null, new IIOImage(image, null, null), writerParam);
    imageOut.flush();
    imageOut.close();
}
   
/**
 * Rebuilds the chart bitmap from the run-length-encoded data FusionCharts
 * posts: rows are separated by ';', runs within a row by ',', and each run is
 * "<hex color>_<repeat count>" (an empty color means the background color).
 */
private BufferedImage buildBitmap(int width, int height, int bgColor, String data)
{
    BufferedImage chart = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);

    String[] rows = data.split(";");
    int colIdx = 0;
    for (int rowIdx = 0; rowIdx < rows.length; rowIdx++)
    {
        // Split individual pixels
        String[] pixels = rows[rowIdx].split(",");
        colIdx = 0;
        for (int pixelIdx = 0; pixelIdx < pixels.length; pixelIdx++)
        {               
            // Split the color and repeat factor
            String[] clrs = pixels[pixelIdx].split("_");  
            int color = ("".equals(clrs[0])) ? bgColor : Integer.parseInt(clrs[0], 16);
            int repeatFactor = Integer.parseInt(clrs[1]);
           
            // Set the color the specified number of times
            for (int repeatCount = 0; repeatCount < repeatFactor; repeatCount++, colIdx++)
            {                      
                chart.setRGB(colIdx, rowIdx, color);
            }
        }
    }
   
    return chart;
}

Note:  I think there's a bug in the Flash image exporter.  It appears to report the width of the image as 1 pixel greater than what it actually is, hence the expression Integer.parseInt(req.getParameter("width")) - 1 above.
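
To make the encoding concrete, here's a hypothetical data string and how buildBitmap() would consume it.  The string is made up to match the parsing logic above, not captured from a real FusionCharts export:

// A made-up 4x2 image: row 1 is three red pixels followed by one background
// pixel (an empty color before the underscore means "use bgColor"); row 2 is
// four background pixels.
String data = "FF0000_3,_1;_4";
BufferedImage image = buildBitmap(4, 2, 0xffffff, data);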

Tags:

FusionCharts | Java

ResultSets vs. Hibernate

by timvasil 2/1/2008 8:14:00 PM

When writing Java code requiring object/relational mapping to an underlying relational database, I've always relied on Hibernate.  Recently, though, I tackled a small project dealing with only a few objects and primarily read-only access to the data.  I thought it'd be the perfect opportunity to use JDBC directly--to keep the project simple.  The code ended up looking pretty clean.  Here's a sample data access method to log in a user and provide some information related to the user, namely the Customer object associated with the user and all the "Sites" associated with that customer:

try
{
    final NamedParameterStatement stmt = new NamedParameterStatement(getConnection(),
            "SELECT u.*, c.*, cs.*, us.CustomerSiteId AS UserSiteId FROM tbl_users u" +
            " INNER JOIN tbl_customers c ON u.CustomerId = c.CustomerId" +
            " INNER JOIN tbl_customer_sites cs ON cs.CustomerId = c.CustomerId" +
            " LEFT OUTER JOIN tbl_user_sites us ON us.CustomerSiteId = cs.CustomerSiteId" +
            " WHERE LOWER(Username) = :username AND (:password IS NULL OR u.Password = :password)");
    stmt.setString("username", StringUtils.nonNullify(username).toLowerCase());
    stmt.setString("password", password);
    final ResultSet results = stmt.executeQuery();
    if (!results.first())
    {
        // No user found
        return null;
    }

    // Hydrate customer
    final Customer customer = HydrationUtils.hydrateCustomer(results);
   
    // Hydrate user
    final User user = HydrationUtils.hydrateUser(results);
    user.setCustomer(customer);
   
    // Hydrate customer sites and determine user accessible sites
    results.beforeFirst();
    while (results.next())
    {
        final Site site = HydrationUtils.hydrateSite(results);
        customer.addSite(site);
        if (results.getObject("UserSiteId") != null)
        {
            user.getAccessibleSites().add(site);
        }
    }
   
    if (updateLastLogonTime)
    {
        final NamedParameterStatement updateStmt = new NamedParameterStatement(getConnection(),
                "UPDATE tbl_users SET LastLogon = NOW() WHERE UserId = :userId;");
        updateStmt.setInt("userId", user.getId());
        updateStmt.executeUpdate();
    }
   
    return user;
}
catch (Exception e)
{
    s_log.error(e);
    throw new RuntimeException(e);
}
finally
{
    close();
}

There are a number of things to notice about this code:

  • JDBC's PreparedStatement doesn't support named parameters.  I had to write my own NamedParameterStatement class so I could perform parameter substitution using named parameters (a rough sketch of what that class involves appears after this list).
  • Though you don't see its implementation, my getConnection() method pulls a connection from a c3p0 connection pool.  Without Hibernate, I have to manage connection strings and the connection pool.
  • The "hydration" process (aka relational to object mapping) is handled by the HydrationUtils class.  This is a bit clunky and an annoyance to write, with a bunch of setter calls and data conversions (e.g. from java.sql.Date to java.util.Date).
  • The work to ensure there aren't multiple objects referring to the same row in a table is handled explicitly, i.e. the sites in the user's access list are pulled from the sites in the customer's list.  Hibernate would have handled this for me.
  • The underlying DDL script has to be written by me, and the embedded SQL limits portability.
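
For the curious, here's a rough idea of what such a wrapper involves.  This is a minimal sketch, not the actual class I used: it rewrites ":name" placeholders into JDBC's positional "?" markers, remembers their order, and creates a scrollable statement (the code above calls first() and beforeFirst()):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NamedParameterStatement
{
    private static final Pattern PARAM = Pattern.compile(":(\\w+)");

    private final PreparedStatement stmt;
    private final List<String> paramOrder = new ArrayList<String>();

    public NamedParameterStatement(Connection conn, String sql) throws SQLException
    {
        // Replace each ":name" with "?" and record the name's position
        final Matcher m = PARAM.matcher(sql);
        final StringBuffer parsed = new StringBuffer();
        while (m.find())
        {
            paramOrder.add(m.group(1));
            m.appendReplacement(parsed, "?");
        }
        m.appendTail(parsed);
        stmt = conn.prepareStatement(parsed.toString(),
                ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
    }

    public void setString(String name, String value) throws SQLException
    {
        // A named parameter may appear more than once (e.g. :password above)
        for (int i = 0; i < paramOrder.size(); i++)
        {
            if (paramOrder.get(i).equals(name))
            {
                stmt.setString(i + 1, value);  // JDBC parameters are 1-based
            }
        }
    }

    public void setInt(String name, int value) throws SQLException
    {
        for (int i = 0; i < paramOrder.size(); i++)
        {
            if (paramOrder.get(i).equals(name))
            {
                stmt.setInt(i + 1, value);
            }
        }
    }

    public ResultSet executeQuery() throws SQLException
    {
        return stmt.executeQuery();
    }

    public int executeUpdate() throws SQLException
    {
        return stmt.executeUpdate();
    }
}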

Yes, there's a bit of work going on here.  Some of it is brittle.  For example, additions to the object model will not cause compile-time errors, yet the HydrationUtils methods would need updating.  Such issues would likely not be caught until runtime.

As the application grew more complex and I found myself struggling to manage unique indexes and foreign keys in an ever-growing DDL script, I decided to switch to Hibernate.  Now, take a look at the same user logon code written Hibernate style:

return new TransactionRunner<User>() {
    @Override
    protected User doWork(Session session) throws Exception
    {
        // Prepare the user-fetching query--eagerly fetching associated customer and the customer's associated sites
        final Query userQuery = session.createQuery("from User as u " +
          " inner join fetch u.customer c" +
          " inner join fetch c.sites" +
          " left join fetch u.accessibleSites" +
          " where lower(u.username) = :username and (:password is null or u.password = :password)");
        userQuery.setParameter("username", username);
        userQuery.setParameter("password", password);
       
        // Fetch the user
        final User user = (User)userQuery.uniqueResult();
       
        // Update the user's last logon time (if so desired)
        if (updateLastLogonTime)
        {
            user.setLastLogon(new Date());
            session.flush();
        }
       
        // Replace persistent sets with regular sets for serialization
        user.setAccessibleSites(new HashSet(user.getAccessibleSites()));
        user.getCustomer().setSites(new HashSet(user.getCustomer().getSites()));
        session.setFlushMode(FlushMode.MANUAL);
        return user;
    }
}.run();

See the difference?

  • I wrote HQL instead of SQL to ensure portability.  No more NamedParameterStatement; Hibernate understands named parameters.  It also understands objects, so the query has an object-oriented feel to it.  I'm joining objects, not IDs.
  • There's no hydration code.  Hibernate does this for me.
  • Updating an object is as simple as changing its properties; I didn't have to write any SQL.
  • I didn't have to manage any database connection objects explicitly.
  • I did write a TransactionRunner class to wrap the code in a transaction, so the begin/commit/rollback could happen at the appropriate time (a minimal sketch follows this list).  Writing data access methods within anonymous classes is a bit clunky.  The alternative of using annotations with a pointcut-enabling component such as Spring crossed my mind, but I don't really have any need for most of its features at this point, so I'll live with the callback.
  • I have to jump through a few hoops near the end of the method so I can get rid of Hibernate's persistent collections in favor of Java's built-in collections.  Even though Hibernate's collections implement the standard Java List, Map, and Set interfaces, they don't play nice with GWT, and I needed to serialize the object graph to a web client.
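
For reference, here's roughly what that TransactionRunner callback looks like.  It's a minimal sketch rather than the real thing, and it assumes a HibernateUtil helper that exposes the application's SessionFactory:

import org.hibernate.Session;
import org.hibernate.Transaction;

public abstract class TransactionRunner<T>
{
    // Subclasses (usually anonymous) put their data access logic here
    protected abstract T doWork(Session session) throws Exception;

    public T run()
    {
        // HibernateUtil is a hypothetical helper holding the SessionFactory
        final Session session = HibernateUtil.getSessionFactory().openSession();
        final Transaction tx = session.beginTransaction();
        try
        {
            final T result = doWork(session);
            tx.commit();
            return result;
        }
        catch (Exception e)
        {
            tx.rollback();
            throw new RuntimeException(e);
        }
        finally
        {
            session.close();
        }
    }
}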

Overall I feel more comfortable with the Hibernate code.  I think it's cleaner and easier to maintain.  And next time I'm working on a simple little project, I won't be so quick to discount an O/R mapping tool; I believe it does save time, even for the small projects.

Tags:

Java | Hibernate | GWT

Java Annotations Gotcha

by timvasil 1/27/2008 3:47:00 PM

I'm well accustomed to .NET attributes, so when I was recently writing some code using the Java equivalent, annotations, I was surprised to find the reflection methods I called were not providing me with the annotation metadata I had so painstakingly defined.

As it turns out, I was missing a @Retention annotation on the custom annotation interface I had defined.  In order to access the annotation at runtime via reflection, I needed to say:

@Retention(RetentionPolicy.RUNTIME)

This is intuitive enough, just not something a .NET developer would expect to have to do, since in .NET attributes are always available at runtime via reflection.

So, the complete definition of the annotation interface I needed to slap on an enum value is:

@Target(ElementType.FIELD)
@Retention(RetentionPolicy.RUNTIME)
public @interface ColumnName
{
    String columnName();
}
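
For illustration, here's a hypothetical enum using that annotation; the column names are made up, and it plays the role of the EnumType placeholder in the snippet below:

public enum UserColumn
{
    @ColumnName(columnName = "user_id")
    ID,

    @ColumnName(columnName = "user_name")
    NAME,

    // No annotation: the reading code falls back to the constant's own name
    CREATED
}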

And reading the annotations on enum values, if present, is straightforward:

final Field[] fields = EnumType.class.getFields();
final String[] columnNames = new String[fields.length];
int idx = 0;
for (Field field : fields)
{
    // Use the ColumnName annotation's value if present; otherwise fall back
    // to the enum constant's own name.  (Field.get declares a checked
    // IllegalAccessException, so the enclosing method must handle or declare it.)
    final ColumnName columnNameAnn = field.getAnnotation(ColumnName.class);
    columnNames[idx++] = (columnNameAnn == null) ?
        ((Enum)field.get(null)).name() : columnNameAnn.columnName();
}

Tags:

Java
