4cpp Lexing Library

Table of Contents

§1 Introduction
§2 Lexer Library
    §2.1 Lexer Intro
    §2.2 Lexer Function List
    §2.3 Lexer Types List
    §2.4 Lexer Function Descriptions
    §2.5 Lexer Type Descriptions

§1 Introduction

This is the documentation for the 4cpp lexer version 1.0. The documentation is the newest piece of this lexer project, so it may still have problems. What is here should be correct and mostly complete.

If you have questions or discover errors, please contact editor@4coder.net. To get help from community members, you can post on the 4coder forums hosted on handmade.network at 4coder.handmade.network

§2 Lexer Library

§2.1 Lexer Intro

The 4cpp lexer system provides a polished, fast, flexible system that takes in C/C++ source text and outputs a tokenization of that text. There are two API levels. One level is set up to let you easily get a tokenization of a file; it manages memory for you with malloc so that you can start getting tokens with as little setup as possible. The second level enables deep integration by allowing control over allocation, data chunking, and output rate.

To use the quick setup API you simply include 4cpp_lexer.h and read the documentation at cpp_lex_file.

To use the fancier API, include 4cpp_lexer.h and read the documentation at cpp_lex_step. If you want to be absolutely sure you are not including malloc in your program, you can define FCPP_FORBID_MALLOC before the include and the "step" API will continue to work.

There are a few more features in 4cpp that are not documented yet. You are free to try to use these, but I am not totally sure they are ready yet, and when they are they will be documented.

§2.2 Lexer Function List

§2.3 Lexer Types List

§2.4 Lexer Function Descriptions

§2.4.1: cpp_get_token

Cpp_Get_Token_Result cpp_get_token(
Cpp_Token_Array *token_array_in,
int32_t pos
)
Parameters
token_array_in
The array of tokens from which to get a token.
pos
The position, measured in bytes, to get the token for.
Return
A Cpp_Get_Token_Result struct is returned containing the index of a token and a flag indicating whether the pos is contained in the token or in whitespace after the token.
Description
This call performs a binary search over all of the tokens looking for the token that contains the specified position. If the position is in whitespace between the tokens, the returned token index is the index of the token immediately before the provided position. The returned index can be -1 if the position is before the first token.
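The lookup described above can be sketched as a standard binary search over token start offsets. This is an illustrative mock: Token and Get_Token_Result are simplified stand-ins, not the library's real structs.

```cpp
#include <cassert>
#include <vector>

// Simplified stand-ins for Cpp_Token and Cpp_Get_Token_Result.
struct Token { int start; int size; };
struct Get_Token_Result { int token_index; bool in_whitespace; };

// Binary search for the token containing pos; if pos falls in whitespace
// between tokens, return the token immediately before it.  token_index is
// -1 when pos is before the first token.
Get_Token_Result get_token(const std::vector<Token> &tokens, int pos){
    int lo = 0, hi = (int)tokens.size();
    while (lo < hi){
        int mid = lo + (hi - lo)/2;
        if (tokens[mid].start <= pos) lo = mid + 1;
        else hi = mid;
    }
    Get_Token_Result r;
    r.token_index = lo - 1;   // last token starting at or before pos
    r.in_whitespace = (r.token_index < 0) ||
        (pos >= tokens[r.token_index].start + tokens[r.token_index].size);
    return r;
}
```

For example, with tokens covering bytes 0..2 and 5..6, querying pos 4 yields index 0 with the in_whitespace flag set.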

See Also
Cpp_Get_Token_Result

§2.4.2: cpp_lex_step

Cpp_Lex_Result cpp_lex_step(
Cpp_Lex_Data *S_ptr,
char *chunk,
int32_t size,
int32_t full_size,
Cpp_Token_Array *token_array_out,
int32_t max_tokens_out
)
Parameters
S_ptr
The lexer state. Go to the Cpp_Lex_Data section to see how to initialize the state.
chunk
The first or next chunk of the file being lexed.
size
The number of bytes in the chunk, including the null terminator if the chunk ends in one. If the chunk ends in a null terminator, the system will interpret it as the end of the file.
full_size
If the final chunk is not null terminated this parameter should specify the length of the file in bytes. To rely on an eventual null terminator use HAS_NULL_TERM for this parameter.
token_array_out
The token array structure that will receive the tokens output by the lexer.
max_tokens_out
The maximum number of tokens to be output to the token array. To rely on the max built into the token array pass NO_OUT_LIMIT here.
Description
This call is the primary interface of the lexing system. It is quite general so it can be used in a lot of different ways. I will explain the general rules first, and then give some examples of common ways it might be used.

First a lexing state, Cpp_Lex_Data, must be initialized. The file to lex must be read into N contiguous chunks of memory. An output Cpp_Token_Array must be allocated and initialized with the appropriate count and max_count values. Then each chunk of the file must be passed to cpp_lex_step in order, using the same lexing state for each call. Every time a call to cpp_lex_step returns LexResult_NeedChunk, the next call to cpp_lex_step should use the next chunk. If the return is some other value, the lexer has not finished with the current chunk and stopped for some other reason, so the same chunk should be used again in the next call.

If the file chunks contain a null terminator the lexer will return LexResult_Finished when it finds this character. At this point calling the lexer again with the same state will result in an error. If you do not have a null terminated chunk to end the file, you may instead pass the exact size in bytes of the entire file to the full_size parameter, and the system will automatically terminate the lexing state when it has read that many bytes. If a full_size is specified and the system terminates for having seen that many bytes, it will return LexResult_Finished. If a full_size is specified and a null character is read before that many bytes have been read, the system will still terminate as usual and return LexResult_Finished.

If the system has filled the entire output array it will return LexResult_NeedTokenMemory. When this happens, if you want to continue lexing the file, you can grow the token array or switch to a new output array, and then call cpp_lex_step again with the chunk that was being lexed and the new output. You can also specify a max_tokens_out, which limits how many new tokens will be added to the token array. Even if token_array_out still has space to hold tokens, if the max_tokens_out limit is hit the lexer will stop and return LexResult_HitTokenLimit. When this happens there is still space left in the token array, so you can resume simply by calling cpp_lex_step again with the same chunk and the same output array. Also note that, unlike the chunks, which must only be replaced when the system says it needs a chunk, you may switch to or modify the output array between calls as much as you like.
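The whole protocol can be demonstrated with a toy driver loop. The lex_step below is an invented stand-in that emits one "token" per input byte; only the control flow (advancing chunks on LexResult_NeedChunk, growing the output on LexResult_NeedTokenMemory, and terminating via full_size) mirrors the real cpp_lex_step contract.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Stand-in result codes and state mirroring the shape of the 4cpp API.
enum Lex_Result { LexResult_Finished, LexResult_NeedChunk, LexResult_NeedTokenMemory };
struct Lex_State { int total; int chunk_pos; };
struct Token_Array { std::vector<char> tokens; int max_count; };

// Toy step function: one "token" per byte.  It is resumable within a chunk
// so that a NeedTokenMemory return can be retried on the same chunk.
Lex_Result lex_step(Lex_State *s, const char *chunk, int size, int full_size,
                    Token_Array *out){
    while (s->chunk_pos < size){
        if (s->total == full_size) return LexResult_Finished;
        if ((int)out->tokens.size() >= out->max_count) return LexResult_NeedTokenMemory;
        out->tokens.push_back(chunk[s->chunk_pos]);
        s->chunk_pos += 1;
        s->total += 1;
    }
    s->chunk_pos = 0;  // ready for the next chunk
    return (s->total == full_size) ? LexResult_Finished : LexResult_NeedChunk;
}

// Driver loop with the same shape as a real cpp_lex_step client: advance
// to the next chunk only on NeedChunk, grow the output on NeedTokenMemory,
// and otherwise call again with the same chunk.
Token_Array lex_chunks(const std::vector<std::string> &chunks, int full_size){
    Lex_State state = {0, 0};
    Token_Array out;
    out.max_count = 2;  // deliberately small to force a grow
    size_t c = 0;
    for (;;){
        Lex_Result r = lex_step(&state, chunks[c].data(), (int)chunks[c].size(),
                                full_size, &out);
        if (r == LexResult_Finished) break;
        if (r == LexResult_NeedChunk) { c += 1; continue; }
        if (r == LexResult_NeedTokenMemory) out.max_count *= 2;  // then retry same chunk
    }
    return out;
}
```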

The most basic use of this system is to get it all done in one big chunk and try to allocate a nearly "infinite" output array so that it will not run out of memory. This way you can get the entire job done in one call and then just assert to make sure it returns LexResult_Finished to you:



Cpp_Token_Array lex_file(char *file_name){
    File_Data file = read_whole_file(file_name);
    
    char *temp = (char*)malloc(4096); // hopefully big enough
    Cpp_Lex_Data lex_state = cpp_lex_data_init(temp);
    
    Cpp_Token_Array array = {0};
    array.tokens = (Cpp_Token*)malloc(1 << 20); // hopefully big enough
    array.max_count = (1 << 20)/sizeof(Cpp_Token);
    
    Cpp_Lex_Result result =
        cpp_lex_step(&lex_state, file.data, file.size, file.size,
                     &array, NO_OUT_LIMIT);
    Assert(result == LexResult_Finished);
    
    free(temp);
    
    return(array);
}
See Also
Cpp_Lex_Data

§2.4.3: cpp_lex_data_init

Cpp_Lex_Data cpp_lex_data_init(
char *mem_buffer
)
Parameters
mem_buffer
The memory to use for initializing the lex state's temp memory buffer.
Return
A brand new lex state ready to begin lexing a file from the beginning.
Description
Creates a new lex state in the form of a Cpp_Lex_Data struct and returns the struct. The system needs a temporary buffer that is at least as long as the longest token. 4096 bytes is usually enough, but the buffer size is not checked, so to be 100% bulletproof it has to be the same length as the file being lexed.


§2.4.4: cpp_lex_data_temp_size

int32_t cpp_lex_data_temp_size(
Cpp_Lex_Data *lex_data
)
Parameters
lex_data
The lex state from which to get the temporary buffer size.
Description
This call gets the current size of the temporary buffer in the lexer state so that you can move to a new temporary buffer by copying the data over.

See Also
cpp_lex_data_temp_read
cpp_lex_data_new_temp

§2.4.5: cpp_lex_data_temp_read

void cpp_lex_data_temp_read(
Cpp_Lex_Data *lex_data,
char *out_buffer
)
Parameters
lex_data
The lex state from which to read the temporary buffer.
out_buffer
The buffer into which the contents of the temporary buffer will be written. The size of the buffer must be at least the size returned by cpp_lex_data_temp_size.
Description
This call reads the current contents of the temporary buffer.

See Also
cpp_lex_data_temp_size
cpp_lex_data_new_temp

§2.4.6: cpp_lex_data_new_temp

void cpp_lex_data_new_temp(
Cpp_Lex_Data *lex_data,
char *new_buffer
)
Parameters
lex_data
The lex state that will receive the new temporary buffer.
new_buffer
The new temporary buffer that has the same contents as the old temporary buffer.
Description
This call can be used to set a new temporary buffer for the lex state. It is meant for cases where you want to discontinue lexing, store the state, and resume later; in such a situation it may be necessary for you to free the temp buffer that was originally used to create the lex state. This call allows you to supply a new temp buffer when you are ready to resume lexing.

However, the new buffer needs to have the same contents as the old buffer. To ensure this, use cpp_lex_data_temp_size and cpp_lex_data_temp_read to retrieve the relevant contents of the temp buffer before you free it.
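A suspend-and-resume migration might look like the sketch below. The Cpp_Lex_Data internals shown here are invented stand-ins so the sketch compiles on its own; only the three calls and their ordering reflect the documented API.

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// Invented stand-in internals; the real struct and functions come from
// 4cpp_lexer.h and should be treated as opaque.
struct Cpp_Lex_Data { char *tb; int tb_pos; };
int  cpp_lex_data_temp_size(Cpp_Lex_Data *s){ return s->tb_pos; }
void cpp_lex_data_temp_read(Cpp_Lex_Data *s, char *out){ memcpy(out, s->tb, (size_t)s->tb_pos); }
void cpp_lex_data_new_temp(Cpp_Lex_Data *s, char *nb){ s->tb = nb; }

// Move a suspended lex state off old_temp so that old_temp can be freed.
char *migrate_temp(Cpp_Lex_Data *state, char *old_temp, int new_cap){
    int used = cpp_lex_data_temp_size(state);   // 1. how many bytes are live
    assert(used <= new_cap);                    // caller must supply enough room
    char *new_temp = (char*)malloc((size_t)new_cap);
    cpp_lex_data_temp_read(state, new_temp);    // 2. copy the live contents out
    cpp_lex_data_new_temp(state, new_temp);     // 3. point the state at the copy
    free(old_temp);                             // the old buffer is now unreferenced
    return new_temp;
}
```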

See Also
cpp_lex_data_temp_size
cpp_lex_data_temp_read

§2.4.7: cpp_make_token_array

Cpp_Token_Array cpp_make_token_array(
int32_t starting_max
)
Parameters
starting_max
The number of tokens to initialize the array with.
Return
An empty Cpp_Token_Array with memory malloc'd for storing tokens.
Description
This call allocates a Cpp_Token_Array with malloc for use in other convenience functions. Token arrays that are not allocated this way should not be used in the convenience functions.


§2.4.8: cpp_free_token_array

void cpp_free_token_array(
Cpp_Token_Array token_array
)
Parameters
token_array
An array previously allocated by cpp_make_token_array.
Description
This call frees a Cpp_Token_Array.

See Also
cpp_make_token_array

§2.4.9: cpp_resize_token_array

void cpp_resize_token_array(
Cpp_Token_Array *token_array,
int32_t new_max
)
Parameters
token_array
An array previously allocated by cpp_make_token_array.
new_max
The new maximum size the array should support. If this is not greater than the current size of the array the operation is ignored.
Description
This call allocates a new memory chunk and moves the existing tokens in the array over to the new chunk.
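In spirit the operation behaves like this hypothetical re-implementation over a reduced token array (Cpp_Token and Cpp_Token_Array here are simplified stand-ins, not the library's actual code):

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// Reduced stand-ins for illustration only.
struct Cpp_Token { int type; int start; int size; };
struct Cpp_Token_Array { Cpp_Token *tokens; int count; int max_count; };

void resize_token_array(Cpp_Token_Array *a, int new_max){
    if (new_max <= a->max_count) return;  // not a grow: the operation is ignored
    Cpp_Token *fresh = (Cpp_Token*)malloc((size_t)new_max*sizeof(Cpp_Token));
    memcpy(fresh, a->tokens, (size_t)a->count*sizeof(Cpp_Token));  // move existing tokens
    free(a->tokens);
    a->tokens = fresh;
    a->max_count = new_max;
}
```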

See Also
cpp_make_token_array

§2.4.10: cpp_lex_file

void cpp_lex_file(
char *data,
int32_t size,
Cpp_Token_Array *token_array_out
)
Parameters
data
The file data to be lexed in a single contiguous block.
size
The number of bytes in data.
token_array_out
The token array where the output tokens will be pushed. This token array must have been previously allocated with cpp_make_token_array.
Description
Lexes an entire file and manages the interaction with the lexer system so that it is quick and convenient to lex files.



Cpp_Token_Array lex_file(char *file_name){
    File_Data file = read_whole_file(file_name);
    
    // This array will be automatically grown if it runs
    // out of memory.
    Cpp_Token_Array array = cpp_make_token_array(100);
    
    cpp_lex_file(file.data, file.size, &array);
    
    return(array);
}
See Also
cpp_make_token_array

§2.5 Lexer Type Descriptions

§2.5.1: Cpp_Token_Type

enum Cpp_Token_Type;
Description
A Cpp_Token_Type classifies a token to make parsing easier. Some types are not actually output by the lexer, but exist because parsers will also make use of token types in their own output.

Values
CPP_TOKEN_JUNK
CPP_TOKEN_COMMENT
CPP_PP_INCLUDE
CPP_PP_DEFINE
CPP_PP_UNDEF
CPP_PP_IF
CPP_PP_IFDEF
CPP_PP_IFNDEF
CPP_PP_ELSE
CPP_PP_ELIF
CPP_PP_ENDIF
CPP_PP_ERROR
CPP_PP_IMPORT
CPP_PP_USING
CPP_PP_LINE
CPP_PP_PRAGMA
CPP_PP_STRINGIFY
CPP_PP_CONCAT
CPP_PP_UNKNOWN
CPP_PP_DEFINED
CPP_PP_INCLUDE_FILE
CPP_PP_ERROR_MESSAGE
CPP_TOKEN_KEY_TYPE
CPP_TOKEN_KEY_MODIFIER
CPP_TOKEN_KEY_QUALIFIER
CPP_TOKEN_KEY_OPERATOR
This type is not stored in token output from the lexer.

CPP_TOKEN_KEY_CONTROL_FLOW
CPP_TOKEN_KEY_CAST
CPP_TOKEN_KEY_TYPE_DECLARATION
CPP_TOKEN_KEY_ACCESS
CPP_TOKEN_KEY_LINKAGE
CPP_TOKEN_KEY_OTHER
CPP_TOKEN_IDENTIFIER
CPP_TOKEN_INTEGER_CONSTANT
CPP_TOKEN_CHARACTER_CONSTANT
CPP_TOKEN_FLOATING_CONSTANT
CPP_TOKEN_STRING_CONSTANT
CPP_TOKEN_BOOLEAN_CONSTANT
CPP_TOKEN_STATIC_ASSERT
CPP_TOKEN_BRACKET_OPEN
CPP_TOKEN_BRACKET_CLOSE
CPP_TOKEN_PARENTHESE_OPEN
CPP_TOKEN_PARENTHESE_CLOSE
CPP_TOKEN_BRACE_OPEN
CPP_TOKEN_BRACE_CLOSE
CPP_TOKEN_SEMICOLON
CPP_TOKEN_ELLIPSIS
CPP_TOKEN_STAR
This is an 'ambiguous' token type because it requires parsing to determine the full nature of the token.

CPP_TOKEN_AMPERSAND
This is an 'ambiguous' token type because it requires parsing to determine the full nature of the token.

CPP_TOKEN_TILDE
This is an 'ambiguous' token type because it requires parsing to determine the full nature of the token.

CPP_TOKEN_PLUS
This is an 'ambiguous' token type because it requires parsing to determine the full nature of the token.

CPP_TOKEN_MINUS
This is an 'ambiguous' token type because it requires parsing to determine the full nature of the token.

CPP_TOKEN_INCREMENT
This is an 'ambiguous' token type because it requires parsing to determine the full nature of the token.

CPP_TOKEN_DECREMENT
This is an 'ambiguous' token type because it requires parsing to determine the full nature of the token.

CPP_TOKEN_SCOPE
CPP_TOKEN_POSTINC
This type is for parser use, it is not output by the lexer.

CPP_TOKEN_POSTDEC
This type is for parser use, it is not output by the lexer.

CPP_TOKEN_FUNC_STYLE_CAST
This type is for parser use, it is not output by the lexer.

CPP_TOKEN_CPP_STYLE_CAST
CPP_TOKEN_CALL
This type is for parser use, it is not output by the lexer.

CPP_TOKEN_INDEX
This type is for parser use, it is not output by the lexer.

CPP_TOKEN_DOT
CPP_TOKEN_ARROW
CPP_TOKEN_PREINC
This token is for parser use, it is not output by the lexer.

CPP_TOKEN_PREDEC
This token is for parser use, it is not output by the lexer.

CPP_TOKEN_POSITIVE
This token is for parser use, it is not output by the lexer.

CPP_TOKEN_NEGAITVE
This token is for parser use, it is not output by the lexer.

CPP_TOKEN_NOT
CPP_TOKEN_BIT_NOT
This type is for parser use, it is not output by the lexer.

CPP_TOKEN_CAST
This type is for parser use, it is not output by the lexer.

CPP_TOKEN_DEREF
This type is for parser use, it is not output by the lexer.

CPP_TOKEN_TYPE_PTR
This type is for parser use, it is not output by the lexer.

CPP_TOKEN_ADDRESS
This type is for parser use, it is not output by the lexer.

CPP_TOKEN_TYPE_REF
This type is for parser use, it is not output by the lexer.

CPP_TOKEN_SIZEOF
CPP_TOKEN_ALIGNOF
CPP_TOKEN_DECLTYPE
CPP_TOKEN_TYPEID
CPP_TOKEN_NEW
CPP_TOKEN_DELETE
CPP_TOKEN_NEW_ARRAY
This type is for parser use, it is not output by the lexer.

CPP_TOKEN_DELETE_ARRAY
This type is for parser use, it is not output by the lexer.

CPP_TOKEN_PTRDOT
CPP_TOKEN_PTRARROW
CPP_TOKEN_MUL
This type is for parser use, it is not output by the lexer.

CPP_TOKEN_DIV
CPP_TOKEN_MOD
CPP_TOKEN_ADD
This type is for parser use, it is not output by the lexer.

CPP_TOKEN_SUB
This type is for parser use, it is not output by the lexer.

CPP_TOKEN_LSHIFT
CPP_TOKEN_RSHIFT
CPP_TOKEN_LESS
CPP_TOKEN_GRTR
CPP_TOKEN_GRTREQ
CPP_TOKEN_LESSEQ
CPP_TOKEN_EQEQ
CPP_TOKEN_NOTEQ
CPP_TOKEN_BIT_AND
This type is for parser use, it is not output by the lexer.

CPP_TOKEN_BIT_XOR
CPP_TOKEN_BIT_OR
CPP_TOKEN_AND
CPP_TOKEN_OR
CPP_TOKEN_TERNARY_QMARK
CPP_TOKEN_COLON
CPP_TOKEN_THROW
CPP_TOKEN_EQ
CPP_TOKEN_ADDEQ
CPP_TOKEN_SUBEQ
CPP_TOKEN_MULEQ
CPP_TOKEN_DIVEQ
CPP_TOKEN_MODEQ
CPP_TOKEN_LSHIFTEQ
CPP_TOKEN_RSHIFTEQ
CPP_TOKEN_ANDEQ
CPP_TOKEN_OREQ
CPP_TOKEN_XOREQ
CPP_TOKEN_COMMA
CPP_TOKEN_EOF
This type is for parser use, it is not output by the lexer.

CPP_TOKEN_TYPE_COUNT

§2.5.2: Cpp_Token

struct Cpp_Token {
Cpp_Token_Type type;
int32_t start;
int32_t size;
uint16_t state_flags;
uint16_t flags;
};
Description
Cpp_Token represents a single lexed token. It is the primary output of the lexing system.

Fields
type
The type field indicates the type of the token. All tokens have a type no matter the circumstances.

start
The start field indicates the index of the first character of this token's lexeme.

size
The size field indicates the number of bytes in this token's lexeme.

state_flags
The state_flags should not be used outside of the lexer's implementation.

flags
The flags field contains extra useful information about the token.

See Also
Cpp_Token_Flag

§2.5.3: Cpp_Token_Flag

enum Cpp_Token_Flag;
Description
The Cpp_Token_Flags are used to mark up tokens with additional information.

Values
CPP_TFLAG_PP_DIRECTIVE = 0x1
Indicates that the token is a preprocessor directive.

CPP_TFLAG_PP_BODY = 0x2
Indicates that the token is on the line of a preprocessor directive.

CPP_TFLAG_MULTILINE = 0x4
Indicates that the token spans across multiple lines. This can show up on line comments and string literals with backslash line continuation.

CPP_TFLAG_IS_OPERATOR = 0x8
Indicates that the token is some kind of operator or punctuation like braces.

CPP_TFLAG_IS_KEYWORD = 0x10
Indicates that the token is a keyword.


§2.5.4: Cpp_Token_Array

struct Cpp_Token_Array {
Cpp_Token * tokens;
int32_t count;
int32_t max_count;
};
Description
Cpp_Token_Array is used to bundle together the common elements of a growing array of Cpp_Tokens. To initialize it the tokens field should point to a block of memory with a size equal to max_count*sizeof(Cpp_Token) and the count should be initialized to zero.

Fields
tokens
The tokens field points to the memory used to store the array of tokens.

count
The count field counts how many tokens in the array are currently used.

max_count
The max_count field specifies the maximum size the count field may grow to before the tokens array is out of space.


§2.5.5: Cpp_Get_Token_Result

struct Cpp_Get_Token_Result {
int32_t token_index;
int32_t in_whitespace;
};
Description
Cpp_Get_Token_Result is the return result of the cpp_get_token call.

Fields
token_index
The token_index field indicates which token answers the query. To get the token from the source array:

array.tokens[result.token_index]
in_whitespace
The in_whitespace field is true when the query position was actually in whitespace after the result token.

See Also
cpp_get_token

§2.5.6: Cpp_Lex_Data

struct Cpp_Lex_Data { /* non-public internals */ };
Description
Cpp_Lex_Data represents the state of the lexer so that the system may be resumable and the user can manage the lexer state and decide when to resume lexing with it. To create a new lexer state that has not begun doing any lexing work call cpp_lex_data_init.

The internals of the lex state should not be treated as a part of the public API.

See Also
cpp_lex_data_init

§2.5.7: Cpp_Lex_Result

enum Cpp_Lex_Result;
Description
Cpp_Lex_Result is returned from the lexing engine to indicate why it stopped lexing.

Values
LexResult_Finished
This indicates that the system got to the end of the file and will not accept more input.

LexResult_NeedChunk
This indicates that the system got to the end of an input chunk and is ready to receive the next input chunk.

LexResult_NeedTokenMemory
This indicates that the output array ran out of space to store tokens and needs to be replaced or expanded before continuing.

LexResult_HitTokenLimit
This indicates that the maximum number of output tokens as specified by the user was hit.