This cleanup of the I/O layer for parsing is overdue anyway. There are far too many layers here, and Scala's Reader[T] and PagedSeq[T] are never going to be fast enough. Everything that scans for characters must work more in the manner of a Java InputStream, with its position, mark, and reset capabilities for backing up to a previously marked location. The 64-bit I/O layer must implement this InputStream-like capability and hide the management of finite-size buffers.
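
Here is a minimal sketch, with hypothetical names, of what such an InputStream-like layer could look like: positions are 64-bit Longs, and mark/reset lets the parser back up to a previously marked location while the implementation hides the finite buffers underneath.

```scala
// Hypothetical sketch, not an existing Daffodil class: an InputStream-like
// input layer with 64-bit positions whose implementation hides the finite
// buffers underneath.
trait DataInputStream64 {
  /** Current position, in bits, from the start of the data. */
  def bitPosition: Long

  /** Remember the current position; the returned token is used to come back to it. */
  def mark(): DataInputStream64.Mark

  /** Back up to a previously marked position. */
  def reset(m: DataInputStream64.Mark): Unit

  /** Discard a mark once the parser knows it will never back up past it,
    * so the underlying finite buffers can be released or reused. */
  def discard(m: DataInputStream64.Mark): Unit
}

object DataInputStream64 {
  /** Opaque token identifying a marked position. */
  trait Mark
}
```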

About Regular Expression Matching

DFDL's dfdl:lengthKind 'pattern', and dfdl:assert/dfdl:discriminator with testKind 'pattern', imply regular expression scanning of the input data stream.

Note that the regular expression Pattern and Matcher classes do not operate on an unbounded InputStream or Reader, but only on a finite CharSequence - realistically, this means using CharBuffer. The Matcher hitEnd() and other API features make it possible to identify when a match needs more data to determine the result. However, it is generally true that the Pattern and Matcher objects are incompatible with streaming of data.
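
A small illustration of the hitEnd() behavior mentioned above (the pattern and data here are made up for the example; the CharBuffer stands in for one finite buffer of decoded characters):

```scala
import java.nio.CharBuffer
import java.util.regex.Pattern

// Matcher.hitEnd() reports whether the matcher ran into the end of the
// CharSequence while deciding the match, i.e. whether more data could still
// change the answer.
object HitEndExample {
  def main(args: Array[String]): Unit = {
    val p = Pattern.compile("""\d+;""") // digits terminated by a semicolon
    val cb = CharBuffer.wrap("12345")   // terminator not yet in the buffer
    val m = p.matcher(cb)
    println(m.lookingAt()) // false: no ';' found yet
    println(m.hitEnd())    // true: the match failed at the end of input, so
                           // more characters might still produce a match
  }
}
```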

See the section below about The Bucket Algorithm.

Handle Objects for Large Strings or HexBinary

Large atomic objects of type xs:string and xs:hexBinary cannot be turned into ordinary Java String and Array[Byte]. Rather, they must be some sort of small handle or proxy object. A tunable threshold should be available to tell Daffodil when to create a handle versus an ordinary String or Array[Byte].

Data objects too large for a single JVM object to store (e.g., video or images) may have to be represented in the Infoset by a proxy object. Standard streaming-style events normally deliver simple values as regular objects representing the value. If a simple value is too large for a single JVM object to store, then a streaming API to access the value is needed.

The DFDL Infoset doesn't really specify what the [value] member is for a hexBinary object - that is, it does not specify what the API is for accessing this value. Currently it is Array[Byte], but we can provide other abstractions. Also, the [value] member for type xs:string is assumed to be a java.lang.String, but we can provide other abstractions.

These handle objects would support the ability to open and access the contents of these large objects as java.nio.Channel or java.io.InputStream (for hexBinary), and java.io.Reader (for String).

When projecting the DFDL infoset into XML, these handle objects would have to show up as the XML serialization of the handle object, with usable members so that other software can access the data the handle is referring to. One example would be that the handle contains a fileName or URI and an offset (type Long) into it, and a length (type Long), and possibly the first N bytes/characters of the data.
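
A sketch of what such a handle could look like, assuming the URI/offset/length representation described above (all names here are hypothetical, not an existing Daffodil API):

```scala
import java.io.InputStream
import java.net.URI
import java.nio.file.{Files, Paths}

// Hypothetical large-value handle: it records where the bytes live rather than
// holding them, and can open a stream over just that region on demand.
final case class BlobHandle(dataLocation: URI, offsetInBytes: Long, lengthInBytes: Long) {

  /** Open an InputStream positioned at the start of the blob's region. */
  def openStream(): InputStream = {
    val in = Files.newInputStream(Paths.get(dataLocation))
    var toSkip = offsetInBytes
    while (toSkip > 0) {
      val skipped = in.skip(toSkip) // skip() may skip fewer bytes than asked
      if (skipped <= 0) throw new java.io.EOFException("offset past end of data")
      toSkip -= skipped
    }
    new BoundedInputStream(in, lengthInBytes)
  }
}

/** Wraps a stream so that at most `limit` bytes can be read from it. */
final class BoundedInputStream(underlying: InputStream, limit: Long) extends InputStream {
  private var remaining = limit
  def read(): Int =
    if (remaining <= 0) -1
    else { val b = underlying.read(); if (b >= 0) remaining -= 1; b }
}
```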

This mechanism needs to work both for parsing and unparsing; hence, an API way of constructing these large-data handle objects is needed.

Infoset Events

Infoset elements must be produced incrementally by the parser. These can only be produced once surrounding points of uncertainty are resolved fully. An architecture for this is needed. There may be some limitations.

Cursor-style Pull API

The Daffodil API ProcessorFactory class has an onPath("...") method. (Currently only "/" is allowed as a path.) This is intended to enable a cursor-like behavior if given a path that identifies an array. Successive calls to the DataProcessor's parse method should advance through the data one element of the array at a time, each time returning an Infoset whose root is the next InfosetElement of the array.

A cursor-style API caters to schema-specific applications more than to generic ones. The notion here is that each parse action is some chunk of data that is meaningful to the schema. Dealing with any points of uncertainty that the onPath(...) path crosses becomes the application's problem in this sort of API.
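
The calling pattern might look roughly like the following sketch. The ArrayCursor and InfosetElement traits are stand-ins, not the real Daffodil types; the real entry points are ProcessorFactory.onPath(...) and the DataProcessor's parse method.

```scala
// Stand-in for the real infoset element type.
trait InfosetElement

// Hypothetical cursor abstraction over repeated parse calls.
trait ArrayCursor {
  /** Parse the next element of the array that the onPath(...) path identified.
    * Returns None when the data is exhausted. */
  def parseNext(): Option[InfosetElement]
}

object CursorUsage {
  /** Drive the cursor to completion, handing each array element to the application. */
  def processAll(cursor: ArrayCursor, handleRecord: InfosetElement => Unit): Unit = {
    var next = cursor.parseNext()
    while (next.isDefined) {
      handleRecord(next.get) // each call yields an infoset rooted at one array element
      next = cursor.parseNext()
    }
  }
}
```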

Event-callback-style API

Because data formats can have points of uncertainty in them, an entirely XML-oriented pull-parser API can be problematic. See the section below on Pathological Data Formats.

One design point: the lowest-level API pushes the uncertainty back to the consumer - the establishing of known-to-exist or known-not-to-exist resolutions of uncertainty is an event like any other event. So when a discriminator evaluates to true, that produces a discriminator-true event. Basically, the backtracking of the parser is visible to the application as events, not hidden and invisible. Unwinding the stack from a failed element parse is a different kind of end-element event. This allows applications to parse and process flawed data. An application could implement recovery points, for example, such that it skips over broken data and tries again from some place in the schema.

Incremental parse events are a lot like the Daffodil debugger's trace output in this case. Callback handlers are the common way to do this: the application provides an object that implements a particular interface. Then, if the user application wants to use co-routines so that the result looks like a flat stream of events rather than a nested call, it can do so.
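
A hypothetical callback interface along these lines might look like the following; none of these method names exist in Daffodil today, they just make concrete the design point that uncertainty resolution and backtracking are events too.

```scala
// Hypothetical lowest-level event interface: resolution of points of
// uncertainty and parser backtracking are events alongside the usual
// start/end element events.
trait ParseEventHandler {
  def startElement(name: String): Unit
  def endElement(name: String): Unit
  def simpleValue(name: String, value: AnyRef): Unit

  /** A point of uncertainty was entered (e.g., a choice or optional element). */
  def startPointOfUncertainty(id: Long): Unit

  /** A discriminator evaluated to true: the current alternative is now known to exist. */
  def discriminatorResolved(id: Long): Unit

  /** A failed alternative is being unwound; the elements it produced are retracted. */
  def endElementFailed(name: String, cause: String): Unit
}
```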

The next layer up can implement a full StAX-style API where no event is released until all points of uncertainty about it have been resolved. But I suspect many applications are going to want events out for inner elements even though the outermost point of uncertainty is not resolved. They want to be incrementally consuming and processing data even though it may happen later that a parse error indicates the overall structure is not correct. (Classic example: the last record contains the number of records, and it's not correct. Another example: the file is damaged, but way down near the end, and most of the data is good.)

Pathological Data Formats

It's always possible to create a schema where there is a point of uncertainty right near the very top of the data; this is possible for both parsing and unparsing. For unparsing, the classic example is data where the first record must contain the length in bytes of the entire data contents. For parsing, the usual example is data with deep discriminators, i.e., two schemas are v0.1 and v0.2 of some data format, and the only way to tell the difference is that way down inside there's a date with slashes in it vs. ISO notation. So the infoset corresponding to the parse of the whole file is pending until you detect that deep detail.

The Bucket Algorithm

(Note: this whole section on the Bucket Algorithm was written before examining the java.util.Scanner class, which operates regex Patterns directly on a java.io.InputStream (or ReadableByteChannel) and takes a charset argument. This may be a superior API for regex matching than Matcher, but matches/scans are most likely limited in size to the maximum size of a single Java object, and Scanner would need to be evaluated to see whether it can in fact carry out very large matches. DFDL image file formats may involve hexBinary BLOBs with marker delimiters that are much larger than any single Java object, possibly larger than what the JVM heap can accommodate, so the input layer needs to be able to identify the ending position of objects of unlimited size.)

Unfortunately, many operations such as regex matching operate only on finite CharSequence arguments. They don't take a java.io.Reader or java.io.InputStream as argument.

To get streaming I/O behavior, when the low-level operations require these finite objects, one must deal with finite buffers. This includes filling finite byte buffers from the raw input source, and filling finite CharBuffers from the byte buffers.
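
Concretely, filling a finite CharBuffer from a finite ByteBuffer can be done with a java.nio CharsetDecoder, whose CoderResult distinguishes exactly the overflow/underflow cases discussed below. A sketch (the object and method names are made up):

```scala
import java.nio.{ByteBuffer, CharBuffer}
import java.nio.charset.{CharsetDecoder, CoderResult}

// Sketch of "filling the pitcher from the bucket": decode from a finite
// ByteBuffer into a finite CharBuffer. The CoderResult says whether we stopped
// because the pitcher is full (overflow) or because the bucket ran dry
// (underflow) and must be refilled from the well. The decoder is passed in
// because it carries state (e.g. a multi-byte character split across bucket
// refills) that must survive between calls.
object PitcherFill {
  def fillPitcher(decoder: CharsetDecoder, bucket: ByteBuffer, pitcher: CharBuffer): CoderResult = {
    val result = decoder.decode(bucket, pitcher, /* endOfInput = */ false)
    // result.isOverflow  => pitcher full: hand it to the matcher as a CharSequence
    // result.isUnderflow => bucket exhausted (or ends mid-character): refill it from the well
    result
  }
}
```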

The input system needs to work in a manner analogous to this idiom involving glasses of water, pitchers, buckets, and a well, which is the ultimate source of all the water.

Parsing is analogous to getting a glass of water or koolAid.

A beverage (water or koolAid) is "poured" into glasses (DISimple infoset element values) from a pitcher (koolAid = text) or a bucket (water = binary data).
A pour is any atomic I/O operation, like trying to grab data that matches a regex, or grab N bytes, N characters, or N bits (operations on the InputStream object).

If the glass is filled (called "overflow" - this doesn't mean anything spilled, just that the container is filled to the brim), we're done.

Any time a glass is only partly filled (called "underflow"), one must (in the text case) take the pitcher and refill it from the bucket. This can overflow, so the pitcher is now full but some water is left in the bucket; or it can underflow, so the bucket has to be refilled from the well, after which we go back to filling the pitcher from the bucket, and the glass from the pitcher. The water (binary data) case is similar, except with one fewer hop because there is no pitcher.

It's possible there is no more water in the well, in which case the bucket may come back partly full or even empty, and similarly the pitcher can come back partly full or even empty. The glass can then be further filled (or not, if the pitcher/bucket was empty).

The customer (parser) that is trying to get a glass of koolAid or water can then decide whether this is ok (satisfied), another underflow, or an error (an error because it didn't get enough data, but there is no more to be had).

* The well = InputStream or ReadableByteChannel
* The bucket = ByteBuffer
* The pitcher = CharBuffer (extends CharSequence)
* The glass = Array[Byte] or StringBuffer (StringBuilder? the unsynchronized one)
* koolAid = text data
* water = binary data
* pour = carry out a primitive input operation

What makes this work is that the input operations (pouring) are all restartable. That is, an underflow can be detected when they do not have sufficient data to satisfy their operation (fill the glass). Hence, the pitcher or bucket can be a finite thing, not an unbounded stream/channel.
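
One way (purely a sketch, with hypothetical names) to surface that restartability is in the result type of a pour, so every primitive input operation can say "not enough data yet" without committing:

```scala
// Hypothetical result type for a restartable "pour": the operation either
// completes, or reports an underflow so the caller can refill the
// pitcher/bucket and retry from the marked position, or reports that the
// well itself is dry.
sealed trait PourResult[+A]
final case class Filled[A](value: A) extends PourResult[A] // glass full: operation satisfied
case object Underflow extends PourResult[Nothing]          // need more data before deciding
case object NoMoreData extends PourResult[Nothing]         // well is dry: caller decides if this is an error
```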

Optimizations:

All Text: If the data is all text (the whole format uses nothing but text), and dfdl:encodingErrorPolicy='replace', then we can set up the Charset decoders to create Unicode replacement chars on decode errors. Then we throw the koolAid powder down the well, and now we fill the pitcher directly from the well, bypassing the bucket. (We create a java.io.Reader on top of the java.io.InputStream/java.nio.ReadableByteChannel, which does the decoding directly.)
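
The standard library already supports this shortcut; a sketch (the charset choice and names are illustrative):

```scala
import java.io.{InputStream, InputStreamReader, Reader}
import java.nio.charset.{CodingErrorAction, StandardCharsets}

// "All text" shortcut: with dfdl:encodingErrorPolicy='replace', a Reader
// layered directly on the InputStream does the decoding, substituting U+FFFD
// for bad input instead of erroring. This bypasses the explicit byte bucket.
object TextOnlyInput {
  def textOnlyReader(in: InputStream): Reader = {
    val decoder = StandardCharsets.UTF_8.newDecoder()
      .onMalformedInput(CodingErrorAction.REPLACE)
      .onUnmappableCharacter(CodingErrorAction.REPLACE)
    new InputStreamReader(in, decoder)
  }
}
```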

Small Data (a less important optimization): If the data is small, we can memory-map it (the bucket IS the well). If it is small and all text and dfdl:encodingErrorPolicy='replace', then we can create a giant pitcher filled once from the bucket, after which we can drop the bucket. (Code for this exists today.)

Cheapest possible: If the encoding is iso-8859-1, there is a further optimization. Each byte in iso-8859-1 maps one-to-one to the Unicode code point of the same numeric value. Hence we can create our own variant of CharBuffer which avoids creating an actual char buffer, and instead returns the bytes of the ByteBuffer, directly translating each byte into the corresponding Unicode code point when accessed.
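
A sketch of that idea as a CharSequence laid directly over the ByteBuffer (the class name is made up; bounds checking is minimal):

```scala
import java.nio.ByteBuffer

// iso-8859-1 shortcut: each byte's unsigned value is exactly the Unicode code
// point, so a CharSequence can sit directly on the ByteBuffer with no decode
// step and no CharBuffer allocation.
final class Latin1CharSequence(bytes: ByteBuffer) extends CharSequence {
  def length(): Int = bytes.remaining()

  def charAt(index: Int): Char =
    (bytes.get(bytes.position() + index) & 0xFF).toChar // unsigned byte == code point

  def subSequence(start: Int, end: Int): CharSequence = {
    val dup = bytes.duplicate()
    dup.position(bytes.position() + start)
    dup.limit(bytes.position() + end)
    new Latin1CharSequence(dup)
  }

  override def toString(): String = {
    val sb = new java.lang.StringBuilder(length())
    var i = 0
    while (i < length()) { sb.append(charAt(i)); i += 1 }
    sb.toString
  }
}
```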

Additional issues/notes:

An overflow when filling the CharBuffer from the ByteBuffer (filling the pitcher from the bucket) just indicates that the pitcher is full. An underflow means the bucket ran out of bytes (possibly mid-character) while the pitcher could still hold more, so the bucket needs refilling.

dfdl:encodingErrorPolicy='error' adds complexity to filling the pitcher from the bucket. Any given byte might experience a decode error. Without source-code mods to every decoder for every encoding, we can't make this throw. Possible ideas: maybe it's ok to be slow in this case, and use a Scala Stream[Char], decoding characters one by one?

Inefficiencies to watch out for: In the general case of mixed binary and text fields, the bucket can be big (4 MB?), but the pitcher shouldn't be big unless the data frequently contains large character strings. Consider a format with a mixture of binary and text elements averaging 8 bytes/chars per element. Now imagine the pitcher is 4K characters in size. Every time we go to fill the pitcher from the bucket, we will decode perhaps 4K characters from the binary data. But then we use on average 8 of them, and if the next element is binary, that pitcher is discarded (because we've bypassed it and gone directly to the bucket for binary data), so we might not have any way to re-use the pitcher of koolAid. Now we fill the pitcher again and decode 4K characters just to get, on average, 8 more.

Ways to fix:
* (Best idea) Implement our own character-by-character decode loop for the general case. Don't bother trying to preserve a CharBuffer, or to amortize the cost of filling it. Just decode character by character and make that as fast as possible (see the sketch after this list). This works fine when our DFA is consuming characters one by one, and it also makes it easy to implement dfdl:encodingErrorPolicy="error". It is inefficient for specified-length strings, where getting N characters could be as cheap (on average) as getting a length-N string from a CharBuffer. This effectively punts on the notion of a pitcher in the above bucket algorithm.
* (Another idea) Just use small pitchers (64 characters? We could do a micro-benchmark to pick a reasonable size, or come up with some adaptive scheme).
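
A sketch of the character-by-character decode loop from the first bullet (the class and method names are made up; supplementary, non-BMP characters would need a two-char output buffer, which is omitted here):

```scala
import java.nio.{ByteBuffer, CharBuffer}
import java.nio.charset.CharsetDecoder

// Decode exactly one character at a time out of the byte bucket. A decoder
// left in its default REPORT mode surfaces a malformed byte sequence as an
// error CoderResult right at the offending byte, which makes
// dfdl:encodingErrorPolicy="error" straightforward to honor.
final class CharByCharDecoder(decoder: CharsetDecoder) {
  private val oneChar = CharBuffer.allocate(1)

  /** Decode the next character from `bucket`. Returns None if the bucket has
    * too few bytes left to complete a character (i.e. it needs a refill).
    * Throws CharacterCodingException on a malformed or unmappable sequence. */
  def decodeOne(bucket: ByteBuffer): Option[Char] = {
    oneChar.clear()
    val result = decoder.decode(bucket, oneChar, /* endOfInput = */ false)
    if (result.isError) result.throwException() // encodingErrorPolicy="error"
    if (oneChar.position() == 1) Some(oneChar.get(0)) else None
  }
}
```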