MutByteArray
data MutByteArray Source #
A lifted mutable byte array type wrapping MutableByteArray# RealWorld.
This is a low-level array used to back high-level unboxed arrays and serialized data.
type MutableByteArray = MutByteArray Source #
Deprecated: Please use MutByteArray instead
Pinning
data PinnedState Source #
Instances
Show PinnedState Source #
  Defined in Streamly.Internal.Data.MutByteArray.Type
Eq PinnedState Source #
  Defined in Streamly.Internal.Data.MutByteArray.Type
  (==) :: PinnedState -> PinnedState -> Bool Source #
  (/=) :: PinnedState -> PinnedState -> Bool Source #
pin :: MutByteArray -> IO MutByteArray Source #
Return a copy of the array in pinned memory if unpinned, else return the original array.
unpin :: MutByteArray -> IO MutByteArray Source #
Return a copy of the array in unpinned memory if pinned, else return the original array.
Allocation
empty :: MutByteArray Source #
newBytesAs :: PinnedState -> Int -> IO MutByteArray Source #
pinnedNewAlignedBytes :: Int -> Int -> IO MutByteArray Source #
Access
sizeOfMutableByteArray :: MutByteArray -> IO Int Source #
Return the size of the array in bytes.
putSliceUnsafe :: MonadIO m => MutByteArray -> Int -> MutByteArray -> Int -> Int -> m () Source #
Put a subrange of a source array into a subrange of a destination array. This is unsafe as it does not check the bounds of either the source array or the destination array.
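For illustration, a minimal sketch of copying a few bytes between two arrays. This is not from the library docs; it assumes the Unpinned constructor of PinnedState, the internal module path, and the argument order source, source-offset, destination, destination-offset, length:

import Data.Word (Word8)
import Streamly.Internal.Data.MutByteArray -- module path assumed

copyFour :: IO ()
copyFour = do
    src <- newBytesAs Unpinned 8   -- 8-byte source array
    dst <- newBytesAs Unpinned 8   -- 8-byte destination array
    mapM_ (\i -> pokeAt i src (fromIntegral i :: Word8)) [0 .. 7]
    -- Copy bytes [0..3] of src into dst starting at byte offset 2.
    -- No bounds are checked, so both ranges must be valid.
    putSliceUnsafe src 0 dst 2 4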
cloneSliceUnsafeAs :: MonadIO m => PinnedState -> Int -> Int -> MutByteArray -> m MutByteArray Source #
Unsafe as it does not check whether the start offset and length supplied are valid inside the array.
cloneSliceUnsafe :: MonadIO m => Int -> Int -> MutByteArray -> m MutByteArray Source #
cloneSliceUnsafe offset len arr clones a slice of the supplied array starting at the given offset and of the given length.
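A hedged usage sketch (the offset and length are illustrative; no bounds are checked, so the caller must ensure offset + len does not exceed the source size):

cloneMiddle :: MutByteArray -> IO MutByteArray
cloneMiddle src = cloneSliceUnsafe 4 8 src   -- clone bytes [4..11] into a new array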
pinnedCloneSliceUnsafe :: MonadIO m => Int -> Int -> MutByteArray -> m MutByteArray Source #
pinnedCloneSliceUnsafe offset len arr
unsafePinnedAsPtr :: MonadIO m => MutByteArray -> (Ptr a -> m b) -> m b Source #
Use a MutByteArray as a Ptr a. This is useful when we want to pass an array as a pointer to some operating system call or to a "safe" FFI call.
If the array is not pinned it is copied to pinned memory before passing it to the monadic action.
Performance Notes: Forces a copy if the array is not pinned. It is advised that the programmer keeps this in mind and creates a pinned array opportunistically before this operation occurs, to avoid the cost of a copy if possible.
Unsafe because of direct pointer operations. The user must ensure that they are writing within the legal bounds of the array.
Pre-release
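A hedged sketch of using the pinned pointer with a safe C call; the FFI import of memset is for illustration only. Note that if the array is unpinned the action runs on a pinned copy, so writes may not be visible in the original array; pin it beforehand if that matters:

{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.Ptr (Ptr)
import Foreign.C.Types (CInt(..), CSize(..))

foreign import ccall safe "string.h memset"
    c_memset :: Ptr a -> CInt -> CSize -> IO (Ptr a)

-- Zero the first n bytes of the array through a pinned pointer.
zeroBytes :: MutByteArray -> Int -> IO ()
zeroBytes arr n =
    unsafePinnedAsPtr arr $ \p -> do
        _ <- c_memset p 0 (fromIntegral n)
        return ()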
unsafeAsPtr :: MonadIO m => MutByteArray -> (Ptr a -> m b) -> m b Source #
For use with unsafe FFI functions. This does not force the array memory to be pinned.
Deprecated
asPtrUnsafe :: MonadIO m => MutByteArray -> (Ptr a -> m b) -> m b Source #
Deprecated: Please use unsafePinnedAsPtr instead.
nil :: MutByteArray Source #
Deprecated: Please use empty instead
Unbox
Unbox type class
The Unbox type class provides operations for serialization (unboxing) and deserialization (boxing) of fixed-length, non-recursive Haskell data types to and from their byte stream representation.
Unbox uses fixed-size encoding; therefore, the size is independent of the value and must be determined solely by the type. This restriction makes types with Unbox instances suitable for storing in arrays. Note that sum types may have multiple constructors of different sizes; the size of a sum type is computed as the maximum required by any constructor.
The peekAt operation reads as many bytes from the mutable byte array as the size of the data type and builds a Haskell value from these bytes. The pokeAt operation converts a Haskell value to its binary representation, which consists of size bytes, and then stores these bytes in the mutable byte array. These operations do not check the bounds of the array; the user of the type class is expected to check the bounds before peeking or poking.
IMPORTANT: The serialized data's byte ordering remains the same as the host machine's byte order. Therefore, it cannot be deserialized on host machines with a different byte ordering.
Instances can be derived via Generics or Template Haskell, or written manually. Note that the data type must be non-recursive. WARNING! Generic and Template Haskell deriving both hang for recursive data types. Deriving via Generics is more convenient, but Template Haskell should be preferred over Generics for the following reasons:
- Instances derived via Template Haskell provide better and more reliable performance.
- Generic deriving allows only 256 fields or constructor tags, whereas Template Haskell has no limit.
Here is an example of deriving an instance of this type class using Generics:
>>> import GHC.Generics (Generic)
>>> :{
data Object = Object
    { _int0 :: Int
    , _int1 :: Int
    } deriving Generic
:}

>>> import Streamly.Data.MutByteArray (Unbox(..))
>>> instance Unbox Object
To derive the instance via Template Haskell:
import Streamly.Data.MutByteArray (deriveUnbox)
$(deriveUnbox [d|instance Unbox Object|])
See deriveUnbox for more information on deriving using Template Haskell.
If you want to write the instance manually:
>>> :{
instance Unbox Object where
    sizeOf _ = 16
    peekAt i arr = do
        -- Check the array bounds
        x0 <- peekAt i arr
        x1 <- peekAt (i + 8) arr
        return $ Object x0 x1
    pokeAt i arr (Object x0 x1) = do
        -- Check the array bounds
        pokeAt i arr x0
        pokeAt (i + 8) arr x1
:}
Minimal complete definition: Nothing
sizeOf :: Proxy a -> Int Source #
Get the size of the serialized representation in bytes. Size cannot be zero; it should be at least 1 byte.
peekAt :: Int -> MutByteArray -> IO a Source #
peekAt byte-offset array reads an element of type a from the given byte offset in the array.
IMPORTANT: The implementation of this interface may not check the bounds of the array; the caller must not assume that bounds are checked.
peekByteIndex :: Int -> MutByteArray -> IO a Source #
Deprecated: Use peekAt.
pokeAt :: Int -> MutByteArray -> a -> IO () Source #
pokeAt byte-offset array writes an element of type a to the given byte offset in the array.
IMPORTANT: The implementation of this interface may not check the bounds of the array; the caller must not assume that bounds are checked.
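A small hedged round-trip sketch for these operations (the allocation via newBytesAs and the Unpinned constructor come from the Allocation section above and are assumptions here):

import Data.Proxy (Proxy(..))

pokePeekInt :: IO Int
pokePeekInt = do
    arr <- newBytesAs Unpinned (sizeOf (Proxy :: Proxy Int))  -- room for one Int
    pokeAt 0 arr (42 :: Int)                                  -- write at byte offset 0
    peekAt 0 arr                                              -- read it back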
pokeByteIndex :: Int -> MutByteArray -> a -> IO () Source #
Deprecated: Use pokeAt.
Instances
Peek and poke utilities
data BoundedPtr Source #
A location inside a mutable byte array with the bound of the array. Is it cheaper to just get the bound using the size of the array whenever needed?
Chains peek functions that pass the current position to the next function
Peeker (Builder BoundedPtr IO a)
readUnsafe :: Unbox a => Peeker a Source #
pokeBoundedPtrUnsafe :: forall a. Unbox a => a -> BoundedPtr -> IO BoundedPtr Source #
pokeBoundedPtr :: forall a. Unbox a => a -> BoundedPtr -> IO BoundedPtr Source #
Generic Deriving
class PeekRep (f :: Type -> Type) where Source #
Instances
PeekRep (U1 :: Type -> Type) Source #
PeekRep (V1 :: Type -> Type) Source #
(PeekRep f, PeekRep g) => PeekRep (f :*: g) Source #
(MaxArity256 (SumArity (f :+: g)), KnownNat (SumArity (f :+: g)), PeekRepSum 0 (f :+: g)) => PeekRep (f :+: g) Source #
Unbox a => PeekRep (K1 i a :: Type -> Type) Source #
PeekRep f => PeekRep (M1 i c f) Source #
class PokeRep (f :: Type -> Type) where Source #
pokeRep :: f a -> BoundedPtr -> IO BoundedPtr Source #
Instances
PokeRep (U1 :: Type -> Type) Source #
  Defined in Streamly.Internal.Data.Unbox
  pokeRep :: U1 a -> BoundedPtr -> IO BoundedPtr Source #
PokeRep (V1 :: Type -> Type) Source #
  Defined in Streamly.Internal.Data.Unbox
  pokeRep :: V1 a -> BoundedPtr -> IO BoundedPtr Source #
(PokeRep f, PokeRep g) => PokeRep (f :*: g) Source #
  Defined in Streamly.Internal.Data.Unbox
  pokeRep :: (f :*: g) a -> BoundedPtr -> IO BoundedPtr Source #
(MaxArity256 (SumArity (f :+: g)), PokeRepSum 0 (f :+: g)) => PokeRep (f :+: g) Source #
  Defined in Streamly.Internal.Data.Unbox
  pokeRep :: (f :+: g) a -> BoundedPtr -> IO BoundedPtr Source #
Unbox a => PokeRep (K1 i a :: Type -> Type) Source #
  Defined in Streamly.Internal.Data.Unbox
  pokeRep :: K1 i a a0 -> BoundedPtr -> IO BoundedPtr Source #
PokeRep f => PokeRep (M1 i c f) Source #
  Defined in Streamly.Internal.Data.Unbox
  pokeRep :: M1 i c f a -> BoundedPtr -> IO BoundedPtr Source #
class SizeOfRep (f :: Type -> Type) where Source #
Implementation of sizeOf that works on the generic representation of an ADT.
Instances
SizeOfRep (U1 :: Type -> Type) Source #
SizeOfRep (V1 :: Type -> Type) Source #
(SizeOfRep f, SizeOfRep g) => SizeOfRep (f :*: g) Source #
(MaxArity256 (SumArity (f :+: g)), SizeOfRepSum f, SizeOfRepSum g) => SizeOfRep (f :+: g) Source #
Unbox a => SizeOfRep (K1 i a :: Type -> Type) Source #
SizeOfRep f => SizeOfRep (M1 i c f) Source #
genericPeekByteIndex :: (Generic a, PeekRep (Rep a)) => MutByteArray -> Int -> IO a Source #
genericPokeByteIndex :: (Generic a, PokeRep (Rep a)) => MutByteArray -> Int -> a -> IO () Source #
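For illustration, a hedged sketch of an Unbox instance that delegates explicitly to these generic helpers, which is likely how the Generic-based defaults of the class are built. The Point type is hypothetical; sizeOf is left to its Generic-based default since the minimal complete definition is empty. Note that the helpers take the array before the index:

{-# LANGUAGE DeriveGeneric #-}
import GHC.Generics (Generic)

data Point = Point Int Int deriving Generic

instance Unbox Point where
    peekAt i arr = genericPeekByteIndex arr i
    pokeAt i arr x = genericPokeByteIndex arr i x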
deriveUnbox :: Q [Dec] -> Q [Dec] Source #
Given an Unbox instance declaration splice without the methods (e.g. [d|instance Unbox a => Unbox (Maybe a)|]), generate an instance declaration including all the type class method implementations.
Usage:
$(deriveUnbox [d|instance Unbox a => Unbox (Maybe a)|])
data DataCon Source #
Simplified info about a Con. Omits deriving, strictness, and kind info. This is much nicer than consuming Con directly, because it unifies all the constructors into one.
Instances
Show DataCon Source #
Eq DataCon Source #
Ord DataCon Source #
  Defined in Streamly.Internal.Data.Unbox.TH
data DataType Source #
Simplified info about a DataD. Omits deriving, strictness, kind info, and whether it's data or newtype.
Instances
Show DataType Source #
Eq DataType Source #
Ord DataType Source #
  Defined in Streamly.Internal.Data.Unbox.TH
reifyDataType :: Name -> Q DataType Source #
Reifies the given data or newtype declaration and yields its DataType representation.
Serialize
class Serialize a where Source #
The Serialize type class provides operations for serialization and deserialization of general Haskell data types to and from their byte stream representation.
Unlike Unbox, Serialize uses variable-length encoding; therefore, it can serialize recursive and variable-length data types like lists, or variable-length sum types where the length of the value may vary depending on the particular constructor. For variable-length data types the length is encoded along with the data.
The deserializeAt operation reads bytes from the mutable byte array and builds a Haskell data type from these bytes; the number of bytes it reads depends on the type and the encoded value it is reading. The serializeAt operation converts a Haskell data type to its binary representation, which must consist of as many bytes as added by the addSizeTo operation for that value, and then stores these bytes into the mutable byte array. The programmer is expected to use the addSizeTo operation and allocate an array of sufficient length before calling serializeAt.
IMPORTANT: The serialized data's byte ordering remains the same as the host machine's byte order. Therefore, it cannot be deserialized on host machines with a different byte ordering.
Instances can be derived via Template Haskell, or written manually.
Here is an example of deriving an instance of this type class using Template Haskell:
>>> :{
data Object = Object
    { _obj1 :: [Int]
    , _obj2 :: Int
    }
:}

import Streamly.Data.MutByteArray (deriveSerialize)
$(deriveSerialize [d|instance Serialize Object|])
See deriveSerialize and deriveSerializeWith for more information on deriving using Template Haskell.
Here is an example of a manual instance.
>>> import Streamly.Data.MutByteArray (Serialize(..))
>>> :{
instance Serialize Object where
    addSizeTo acc obj =
        addSizeTo (addSizeTo acc (_obj1 obj)) (_obj2 obj)
    deserializeAt i arr len = do
        -- Check the array bounds before reading
        (i1, x0) <- deserializeAt i arr len
        (i2, x1) <- deserializeAt i1 arr len
        pure (i2, Object x0 x1)
    serializeAt i arr (Object x0 x1) = do
        i1 <- serializeAt i arr x0
        i2 <- serializeAt i1 arr x1
        pure i2
:}
addSizeTo :: Int -> a -> Int Source #
addSizeTo accum value returns accum incremented by the size of the serialized representation of value in bytes. Size cannot be zero; it should be at least 1 byte.
deserializeAt :: Int -> MutByteArray -> Int -> IO (Int, a) Source #
deserializeAt byte-offset array arrayLen deserializes a value from the given byte offset in the array. Returns a tuple consisting of the next byte offset and the deserialized value.
The arrayLen passed is the entire length of the input buffer; it is used to check that deserialization does not read past the end of the buffer.
Throws an exception if the operation would exceed the supplied arrayLen.
serializeAt :: Int -> MutByteArray -> a -> IO Int Source #
serializeAt byte-offset array value writes the serialized representation of the value in the array at the given byte offset. Returns the next byte offset.
This is an unsafe operation; the programmer must ensure that the array has enough space available to serialize the value as determined by the addSizeTo operation.
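A hedged end-to-end sketch tying these three operations together (array allocation via newBytesAs and the Unpinned constructor are assumptions borrowed from the Allocation section above):

roundTrip :: Serialize a => a -> IO a
roundTrip x = do
    let len = addSizeTo 0 x          -- bytes needed for x
    arr <- newBytesAs Unpinned len   -- allocate exactly that much
    _ <- serializeAt 0 arr x         -- write at offset 0
    (_, y) <- deserializeAt 0 arr len
    return y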
Instances
Serialize TH
deriveSerialize :: Q [Dec] -> Q [Dec] Source #
Given a Serialize instance declaration splice without the methods (e.g. [d|instance Serialize a => Serialize (Maybe a)|]), generate an instance declaration including all the type class method implementations.
>>> deriveSerialize = deriveSerializeWith id
Usage:
$(deriveSerialize [d|instance Serialize a => Serialize (Maybe a)|])
deriveSerializeWith :: (SerializeConfig -> SerializeConfig) -> Q [Dec] -> Q [Dec] Source #
deriveSerializeWith config-modifier instance-dec generates a Template Haskell splice consisting of a declaration of a Serialize instance.
instance-dec is a Template Haskell declaration splice consisting of a standard Haskell instance declaration without the type class methods (e.g. [d|instance Serialize a => Serialize (Maybe a)|]).
The type class methods for the given instance are generated according to the supplied config-modifier parameter. See SerializeConfig for the default configuration settings.
Usage:
$(deriveSerializeWith
      ( inlineSerializeAt (Just NoInline)
      . inlineDeserializeAt (Just NoInline)
      )
      [d|instance Serialize a => Serialize (Maybe a)|])
Config
data SerializeConfig Source #
Configuration to control how the Serialize instance is generated. The configuration is opaque and is modified by composing config modifier functions, for example:

>>> (inlineSerializeAt (Just NoInline)) . (inlineSerializeAt (Just Inlinable))
The default configuration settings are:
- inlineAddSizeTo Nothing
- inlineSerializeAt (Just Inline)
- inlineDeserializeAt (Just Inline)
The following experimental options are also available:
- encodeConstrNames False
- encodeRecordFields False
inlineSerializeAt :: Maybe Inline -> SerializeConfig -> SerializeConfig Source #
How should we inline the serialize function? The default is Just Inline. However, aggressive inlining can bloat the code and increase compilation times when there are big functions and too many nesting levels, so you can change it accordingly. A Nothing value leaves the decision to the compiler.
inlineDeserializeAt :: Maybe Inline -> SerializeConfig -> SerializeConfig Source #
How should we inline the deserialize function? See the guidelines in inlineSerializeAt.
encodeConstrNames :: Bool -> SerializeConfig -> SerializeConfig Source #
Experimental
In sum types, use Latin-1 encoded original constructor names rather than binary values to identify constructors. This option is not applicable to product types.
This option enables the following behavior:
- Reordering: Order of the constructors can be changed without affecting serialization.
- Addition: If a constructor is added in the new version, the old version of the data type can still be deserialized by the new version. The new value would never occur in the old one.
- Deletion: If a constructor is deleted in the new version, deserialization of the old version will result in an error. TBD: We can possibly designate a catch-all case to handle this scenario.
Note that if you change a type, change the semantics of a type, or delete a constructor and add a new constructor with the same name, deserialization of old data may result in silent unexpected behavior.
This option has to be the same on both the encoding and decoding sides.
The default is False.
encodeRecordFields :: Bool -> SerializeConfig -> SerializeConfig Source #
Experimental
In explicit record types, use Latin-1 encoded record field names rather than binary values to identify the record fields. Note that this option is not applicable to sum types. Also, it does not work on a product type which is not a record, because there are no field names to begin with.
This option enables the following behavior:
- Reordering: Order of the fields can be changed without affecting serialization.
- Addition: If a Maybe type field is added in the new version, the old version of the data type can still be deserialized by the new version; the field value in the older version is assumed to be Nothing. If any other type of field is added, deserialization of the older version results in an error, but only when that field is actually accessed in the deserialized record.
- Deletion: If a field is deleted in the new version and it is encountered in a previously serialized version, then the field is discarded.
This option has to be the same on both the encoding and decoding sides.
There is a constant performance overhead proportional to the total length of the record field names and the number of record fields.
The default is False.
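A hypothetical usage sketch (the Person type is made up for illustration):

{-# LANGUAGE TemplateHaskell #-}
import Streamly.Data.MutByteArray (Serialize, deriveSerializeWith, encodeRecordFields)

data Person = Person
    { name :: String
    , nick :: Maybe String
    }

$(deriveSerializeWith
      (encodeRecordFields True)
      [d|instance Serialize Person|])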
Other Utilities
data TypeOfType Source #
Instances
Eq TypeOfType Source #
  Defined in Streamly.Internal.Data.Serialize.TH.Bottom
  (==) :: TypeOfType -> TypeOfType -> Bool Source #
  (/=) :: TypeOfType -> TypeOfType -> Bool Source #
typeOfType :: Type -> [DataCon] -> TypeOfType Source #
data SimpleDataCon Source #
Instances
Eq SimpleDataCon Source #
  Defined in Streamly.Internal.Data.Serialize.TH.Bottom
  (==) :: SimpleDataCon -> SimpleDataCon -> Bool Source #
  (/=) :: SimpleDataCon -> SimpleDataCon -> Bool Source #
mkFieldName :: Int -> Name Source #
isUnitType :: [DataCon] -> Bool Source #
isRecordSyntax :: SimpleDataCon -> Bool Source #
wListToString :: [Word8] -> String Source #
_endOffset :: Name Source #
errorUnsupported :: String -> a Source #
errorUnimplemented :: a Source #
Common
mkDeserializeExprOne :: Name -> SimpleDataCon -> Q Exp Source #
RecHeader
mkRecSerializeExpr :: Name -> SimpleDataCon -> Q Exp Source #
mkRecDeserializeExpr :: Name -> Name -> Name -> SimpleDataCon -> Q Exp Source #
mkRecSizeOfExpr :: SimpleDataCon -> Q Exp Source #
mkDeserializeKeysDec :: Name -> Name -> SimpleDataCon -> Q [Dec] Source #