fix the fucking stupid merge conflicts that are totally stupid and avoidable

master
Allen Webster 2016-09-23 16:29:43 -04:00
commit ab207cf8f0
9 changed files with 137 additions and 81 deletions

View File

@ -1,7 +1,7 @@
<html lang="en-US"><head><title>4coder API Docs</title><style>body { background: #FAFAFA; color: #0D0D0D; }h1,h2,h3,h4 { color: #309030; margin: 0; }h2 { margin-top: 6mm; }h3 { margin-top: 5mm; margin-bottom: 5mm; }h4 { font-size: 1.1em; }a { color: #309030; text-decoration: none; }a:visited { color: #A0C050; }a:hover { background: #E0FFD0; }ul { list-style: none; padding: 0; margin: 0; }</style></head> <html lang="en-US"><head><title>4coder API Docs</title><style>body { background: #FAFAFA; color: #0D0D0D; }h1,h2,h3,h4 { color: #309030; margin: 0; }h2 { margin-top: 6mm; }h3 { margin-top: 5mm; margin-bottom: 5mm; }h4 { font-size: 1.1em; }a { color: #309030; text-decoration: none; }a:visited { color: #A0C050; }a:hover { background: #E0FFD0; }ul { list-style: none; padding: 0; margin: 0; }</style></head>
<body><div style='font-family:Arial; margin: 0 auto; width: 800px; text-align: justify; line-height: 1.25;'><h1 style='margin-top: 5mm; margin-bottom: 5mm;'>4cpp Lexing Library</h1><h3 style='margin:0;'>Table of Contents</h3><ul><li><a href='#section_introduction'>&sect;1 Introduction</a></li><li><a href='#section_lexer_library'>&sect;2 Lexer Library</a></li></ul> <body><div style='font-family:Arial; margin: 0 auto; width: 800px; text-align: justify; line-height: 1.25;'><h1 style='margin-top: 5mm; margin-bottom: 5mm;'>4cpp Lexing Library</h1><h3 style='margin:0;'>Table of Contents</h3><ul><li><a href='#section_introduction'>&sect;1 Introduction</a></li><li><a href='#section_lexer_library'>&sect;2 Lexer Library</a></li></ul>
<h2 id='section_introduction'>&sect;1 Introduction</h2><div><p>This is the documentation for the 4cpp lexer version 1.1. The documentation is the newest piece of this lexer project so it may still have problems. What is here should be correct and mostly complete.</p><p>If you have questions or discover errors please contact <span style='font-family: "Courier New", Courier, monospace; text-align: left;'>editor@4coder.net</span> or to get help from community members you can post on the 4coder forums hosted on handmade.network at <span style='font-family: "Courier New", Courier, monospace; text-align: left;'>4coder.handmade.network</span></p></div> <h2 id='section_introduction'>&sect;1 Introduction</h2><div><p>This is the documentation for the 4cpp lexer version 1.1. The documentation is the newest piece of this lexer project so it may still have problems. What is here should be correct and mostly complete.</p><p>If you have questions or discover errors please contact <span style='font-family: "Courier New", Courier, monospace; text-align: left;'>editor@4coder.net</span> or to get help from community members you can post on the 4coder forums hosted on handmade.network at <span style='font-family: "Courier New", Courier, monospace; text-align: left;'>4coder.handmade.network</span></p></div>
<h2 id='section_lexer_library'>&sect;2 Lexer Library</h2><h3>&sect;2.1 Lexer Intro</h3><div>The 4cpp lexer system provides a polished, fast, flexible system that takes in C/C++ and outputs a tokenization of the text data. There are two API levels. One level is setup to let you easily get a tokenization of the file. This level manages memory for you with malloc to make it as fast as possible to start getting your tokens. The second level enables deep integration by allowing control over allocation, data chunking, and output rate control.<br><br>To use the quick setup API you simply include 4cpp_lexer.h and read the documentation at <a href='#cpp_lex_file_doc'>cpp_lex_file</a>.<br><br>To use the the fancier API include 4cpp_lexer.h and read the documentation at <a href='#cpp_lex_step_doc'>cpp_lex_step</a>. If you want to be absolutely sure you are not including malloc into your program you can define FCPP_FORBID_MALLOC before the include and the "step" API will continue to work.<br><br>There are a few more features in 4cpp that are not documented yet. You are free to try to use these, but I am not totally sure they are ready yet, and when they are they will be documented.</div><h3>&sect;2.2 Lexer Function List</h3><ul><li><a href='#cpp_get_token_doc'>cpp_get_token</a></li><li><a href='#cpp_lex_step_doc'>cpp_lex_step</a></li><li><a href='#cpp_lex_data_init_doc'>cpp_lex_data_init</a></li><li><a href='#cpp_lex_data_temp_size_doc'>cpp_lex_data_temp_size</a></li><li><a href='#cpp_lex_data_temp_read_doc'>cpp_lex_data_temp_read</a></li><li><a href='#cpp_lex_data_new_temp_DEP_doc'>cpp_lex_data_new_temp_DEP</a></li><li><a href='#cpp_get_relex_range_doc'>cpp_get_relex_range</a></li><li><a href='#cpp_relex_init_doc'>cpp_relex_init</a></li><li><a href='#cpp_relex_start_position_doc'>cpp_relex_start_position</a></li><li><a href='#cpp_relex_declare_first_chunk_position_doc'>cpp_relex_declare_first_chunk_position</a></li><li><a href='#cpp_relex_is_start_chunk_doc'>cpp_relex_is_start_chunk</a></li><li><a href='#cpp_relex_step_doc'>cpp_relex_step</a></li><li><a href='#cpp_relex_get_new_count_doc'>cpp_relex_get_new_count</a></li><li><a href='#cpp_relex_complete_doc'>cpp_relex_complete</a></li><li><a href='#cpp_relex_abort_doc'>cpp_relex_abort</a></li><li><a href='#cpp_make_token_array_doc'>cpp_make_token_array</a></li><li><a href='#cpp_free_token_array_doc'>cpp_free_token_array</a></li><li><a href='#cpp_resize_token_array_doc'>cpp_resize_token_array</a></li><li><a href='#cpp_lex_file_doc'>cpp_lex_file</a></li></ul><h3>&sect;2.3 Lexer Types List</h3><ul><li><a href='#Cpp_Token_Type_doc'>Cpp_Token_Type</a></li><li><a href='#Cpp_Token_doc'>Cpp_Token</a></li><li><a href='#Cpp_Token_Flag_doc'>Cpp_Token_Flag</a></li><li><a href='#Cpp_Token_Array_doc'>Cpp_Token_Array</a></li><li><a href='#Cpp_Get_Token_Result_doc'>Cpp_Get_Token_Result</a></li><li><a href='#Cpp_Relex_Range_doc'>Cpp_Relex_Range</a></li><li><a href='#Cpp_Lex_Data_doc'>Cpp_Lex_Data</a></li><li><a href='#Cpp_Lex_Result_doc'>Cpp_Lex_Result</a></li><li><a href='#Cpp_Relex_Data_doc'>Cpp_Relex_Data</a></li></ul><h3>&sect;2.4 Lexer Function Descriptions</h3><div id='cpp_get_token_doc' style='margin-bottom: 1cm;'><h4>&sect;2.4.1: cpp_get_token</h4><div style='font-family: "Courier New", Courier, monospace; text-align: left; margin-top: 3mm; margin-bottom: 3mm; font-size: .95em; background: #DFDFDF; padding: 0.25em;'>Cpp_Get_Token_Result cpp_get_token(<div style='margin-left: 4mm;'>Cpp_Token_Array *token_array_in,<br>int32_t pos<br></div>)</div><div 
style='margin-top: 3mm; margin-bottom: 3mm; color: #309030;'><b><i>Parameters</i></b></div><div><div style='font-weight: 600;'>token_array</div><div style='margin-bottom: 6mm;'><div style='margin-left: 5mm; margin-right: 5mm;'>The array of tokens from which to get a token.</div></div></div><div><div style='font-weight: 600;'>pos</div><div style='margin-bottom: 6mm;'><div style='margin-left: 5mm; margin-right: 5mm;'>The position, measured in bytes, to get the token for.</div></div></div><div style='margin-top: 3mm; margin-bottom: 3mm; color: #309030;'><b><i>Return</i></b></div><div style='margin-left: 5mm; margin-right: 5mm;'>A Cpp_Get_Token_Result struct is returned containing the index <h2 id='section_lexer_library'>&sect;2 Lexer Library</h2><h3>&sect;2.1 Lexer Intro</h3><div>The 4cpp lexer system provides a polished, fast, flexible system that takes in C/C++ and outputs a tokenization of the text data. There are two API levels. One level is setup to let you easily get a tokenization of the file. This level manages memory for you with malloc to make it as fast as possible to start getting your tokens. The second level enables deep integration by allowing control over allocation, data chunking, and output rate control.<br><br>To use the quick setup API you simply include 4cpp_lexer.h and read the documentation at <a href='#cpp_lex_file_doc'>cpp_lex_file</a>.<br><br>To use the the fancier API include 4cpp_lexer.h and read the documentation at <a href='#cpp_lex_step_doc'>cpp_lex_step</a>. If you want to be absolutely sure you are not including malloc into your program you can define FCPP_FORBID_MALLOC before the include and the "step" API will continue to work.<br><br>There are a few more features in 4cpp that are not documented yet. You are free to try to use these, but I am not totally sure they are ready yet, and when they are they will be documented.</div><h3>&sect;2.2 Lexer Function List</h3><ul><li><a href='#cpp_get_token_doc'>cpp_get_token</a></li><li><a href='#cpp_lex_step_doc'>cpp_lex_step</a></li><li><a href='#cpp_lex_data_init_doc'>cpp_lex_data_init</a></li><li><a href='#cpp_lex_data_temp_size_doc'>cpp_lex_data_temp_size</a></li><li><a href='#cpp_lex_data_temp_read_doc'>cpp_lex_data_temp_read</a></li><li><a href='#cpp_lex_data_new_temp_DEP_doc'>cpp_lex_data_new_temp_DEP</a></li><li><a href='#cpp_get_relex_range_doc'>cpp_get_relex_range</a></li><li><a href='#cpp_relex_init_doc'>cpp_relex_init</a></li><li><a href='#cpp_relex_start_position_doc'>cpp_relex_start_position</a></li><li><a href='#cpp_relex_declare_first_chunk_position_doc'>cpp_relex_declare_first_chunk_position</a></li><li><a href='#cpp_relex_is_start_chunk_doc'>cpp_relex_is_start_chunk</a></li><li><a href='#cpp_relex_step_doc'>cpp_relex_step</a></li><li><a href='#cpp_relex_get_new_count_doc'>cpp_relex_get_new_count</a></li><li><a href='#cpp_relex_complete_doc'>cpp_relex_complete</a></li><li><a href='#cpp_relex_abort_doc'>cpp_relex_abort</a></li><li><a href='#cpp_make_token_array_doc'>cpp_make_token_array</a></li><li><a href='#cpp_free_token_array_doc'>cpp_free_token_array</a></li><li><a href='#cpp_resize_token_array_doc'>cpp_resize_token_array</a></li><li><a href='#cpp_lex_file_doc'>cpp_lex_file</a></li></ul><h3>&sect;2.3 Lexer Types List</h3><ul><li><a href='#Cpp_Token_Type_doc'>Cpp_Token_Type</a></li><li><a href='#Cpp_Token_doc'>Cpp_Token</a></li><li><a href='#Cpp_Token_Flag_doc'>Cpp_Token_Flag</a></li><li><a href='#Cpp_Token_Array_doc'>Cpp_Token_Array</a></li><li><a 
href='#Cpp_Get_Token_Result_doc'>Cpp_Get_Token_Result</a></li><li><a href='#Cpp_Relex_Range_doc'>Cpp_Relex_Range</a></li><li><a href='#Cpp_Lex_Data_doc'>Cpp_Lex_Data</a></li><li><a href='#Cpp_Lex_Result_doc'>Cpp_Lex_Result</a></li><li><a href='#Cpp_Relex_Data_doc'>Cpp_Relex_Data</a></li></ul><h3>&sect;2.4 Lexer Function Descriptions</h3><div id='cpp_get_token_doc' style='margin-bottom: 1cm;'><h4>&sect;2.4.1: cpp_get_token</h4><div style='font-family: "Courier New", Courier, monospace; text-align: left; margin-top: 3mm; margin-bottom: 3mm; font-size: .95em; background: #DFDFDF; padding: 0.25em;'>Cpp_Get_Token_Result cpp_get_token(<div style='margin-left: 4mm;'>Cpp_Token_Array array,<br>int32_t pos<br></div>)</div><div style='margin-top: 3mm; margin-bottom: 3mm; color: #309030;'><b><i>Parameters</i></b></div><div><div style='font-weight: 600;'>array</div><div style='margin-bottom: 6mm;'><div style='margin-left: 5mm; margin-right: 5mm;'>The array of tokens from which to get a token.</div></div></div><div><div style='font-weight: 600;'>pos</div><div style='margin-bottom: 6mm;'><div style='margin-left: 5mm; margin-right: 5mm;'>The position, measured in bytes, to get the token for.</div></div></div><div style='margin-top: 3mm; margin-bottom: 3mm; color: #309030;'><b><i>Return</i></b></div><div style='margin-left: 5mm; margin-right: 5mm;'>A Cpp_Get_Token_Result struct is returned containing the index
of a token and a flag indicating whether the pos is contained in the token of a token and a flag indicating whether the pos is contained in the token
or in whitespace after the token.</div><div style='margin-top: 3mm; margin-bottom: 3mm; color: #309030;'><b><i>Description</i></b></div><div style='margin-left: 5mm; margin-right: 5mm;'>This call performs a binary search over all of the tokens looking or in whitespace after the token.</div><div style='margin-top: 3mm; margin-bottom: 3mm; color: #309030;'><b><i>Description</i></b></div><div style='margin-left: 5mm; margin-right: 5mm;'>This call performs a binary search over all of the tokens looking
for the token that contains the specified position. If the position for the token that contains the specified position. If the position
@ -122,9 +122,11 @@ It is the primary output of the lexing system.<br><br></div><div style='margin-t
of a growing array of Cpp_Tokens. To initialize it the tokens field should
point to a block of memory with a size equal to max_count*sizeof(Cpp_Token)
and the count should be initialized to zero.<br><br></div><div style='margin-top: 3mm; margin-bottom: 3mm; color: #309030;'><b><i>Fields</i></b></div><div><div style='font-family: "Courier New", Courier, monospace; text-align: left;'><span style='font-weight: 600;'>tokens</span></div><div style='margin-bottom: 6mm;'><div style='margin-left: 5mm; margin-right: 5mm;'>The tokens field points to the memory used to store the array of tokens.<br><br></div></div></div><div><div style='font-family: "Courier New", Courier, monospace; text-align: left;'><span style='font-weight: 600;'>count</span></div><div style='margin-bottom: 6mm;'><div style='margin-left: 5mm; margin-right: 5mm;'>The count field counts how many tokens in the array are currently used.<br><br></div></div></div><div><div style='font-family: "Courier New", Courier, monospace; text-align: left;'><span style='font-weight: 600;'>max_count</span></div><div style='margin-bottom: 6mm;'><div style='margin-left: 5mm; margin-right: 5mm;'>The max_count field specifies the maximum size the count field may grow to before
the tokens array is out of space.<br><br></div></div></div></div><hr><div id='Cpp_Get_Token_Result_doc' style='margin-bottom: 1cm;'><h4>&sect;2.5.5: Cpp_Get_Token_Result</h4><div style='font-family: "Courier New", Courier, monospace; text-align: left; margin-top: 3mm; margin-bottom: 3mm; font-size: .95em; background: #DFDFDF; padding: 0.25em;'>struct Cpp_Get_Token_Result {<br><div style='margin-left: 8mm;'>int32_t token_index;<br>int32_t in_whitespace;<br>int32_t token_start;<br>int32_t token_end;<br></div>};<br></div><div style='margin-top: 3mm; margin-bottom: 3mm; color: #309030;'><b><i>Description</i></b></div><div style='margin-left: 5mm; margin-right: 5mm;'>Cpp_Get_Token_Result is the return result of the cpp_get_token call.<br><br></div><div style='margin-top: 3mm; margin-bottom: 3mm; color: #309030;'><b><i>Fields</i></b></div><div><div style='font-family: "Courier New", Courier, monospace; text-align: left;'><span style='font-weight: 600;'>token_index</span></div><div style='margin-bottom: 6mm;'><div style='margin-left: 5mm; margin-right: 5mm;'>The token_index field indicates which token answers the query. To get the token from
the source array <br><br><div style='font-family: "Courier New", Courier, monospace; text-align: left;margin-top: 3mm; margin-bottom: 3mm; font-size: .95em; background: #EFEFDF; padding: 0.25em;'>array.tokens[result.token_index]<br></div></div></div></div><div><div style='font-family: "Courier New", Courier, monospace; text-align: left;'><span style='font-weight: 600;'>in_whitespace</span></div><div style='margin-bottom: 6mm;'><div style='margin-left: 5mm; margin-right: 5mm;'>The in_whitespace field is true when the query position was actually in whitespace
after the result token.<br><br></div></div></div><div><div style='font-family: "Courier New", Courier, monospace; text-align: left;'><span style='font-weight: 600;'>token_start</span></div><div style='margin-bottom: 6mm;'><div style='margin-left: 5mm; margin-right: 5mm;'>If the token_index refers to an actual token, this is the start value of the token.
Otherwise this is zero.<br><br></div></div></div><div><div style='font-family: "Courier New", Courier, monospace; text-align: left;'><span style='font-weight: 600;'>token_end</span></div><div style='margin-bottom: 6mm;'><div style='margin-left: 5mm; margin-right: 5mm;'>If the token_index refers to an actual token, this is the start+size value of the token.
Otherwise this is zero.<br><br></div></div></div><div style='margin-top: 3mm; margin-bottom: 3mm; color: #309030;'><b><i>See Also</i></b></div><div style='margin-left: 5mm; margin-right: 5mm;'><a href='#cpp_get_token_doc'>cpp_get_token</a></div></div><hr><div id='Cpp_Relex_Range_doc' style='margin-bottom: 1cm;'><h4>&sect;2.5.6: Cpp_Relex_Range</h4><div style='font-family: "Courier New", Courier, monospace; text-align: left; margin-top: 3mm; margin-bottom: 3mm; font-size: .95em; background: #DFDFDF; padding: 0.25em;'>struct Cpp_Relex_Range {<br><div style='margin-left: 8mm;'>int32_t start_token_index;<br>int32_t end_token_index;<br></div>};<br></div><div style='margin-top: 3mm; margin-bottom: 3mm; color: #309030;'><b><i>Description</i></b></div><div style='margin-left: 5mm; margin-right: 5mm;'>Cpp_Relex_Range is the return result of the cpp_get_relex_range call.<br><br></div><div style='margin-top: 3mm; margin-bottom: 3mm; color: #309030;'><b><i>Fields</i></b></div><div><div style='font-family: "Courier New", Courier, monospace; text-align: left;'><span style='font-weight: 600;'>start_token_index</span></div><div style='margin-bottom: 6mm;'><div style='margin-left: 5mm; margin-right: 5mm;'>The index of the first token in the unedited array that needs to be relexed.<br><br></div></div></div><div><div style='font-family: "Courier New", Courier, monospace; text-align: left;'><span style='font-weight: 600;'>end_token_index</span></div><div style='margin-bottom: 6mm;'><div style='margin-left: 5mm; margin-right: 5mm;'>The index of the first token in the unedited array after the edited range
that may not need to be relexed. Sometimes a relex operation has to lex past this
position to find a token that is not effected by the edit.<br><br></div></div></div><div style='margin-top: 3mm; margin-bottom: 3mm; color: #309030;'><b><i>See Also</i></b></div><div style='margin-left: 5mm; margin-right: 5mm;'><a href='#cpp_get_relex_range_doc'>cpp_get_relex_range</a></div></div><hr><div id='Cpp_Lex_Data_doc' style='margin-bottom: 1cm;'><h4>&sect;2.5.7: Cpp_Lex_Data</h4><div style='font-family: "Courier New", Courier, monospace; text-align: left; margin-top: 3mm; margin-bottom: 3mm; font-size: .95em; background: #DFDFDF; padding: 0.25em;'>struct Cpp_Lex_Data { /* non-public internals */ } ;</div><div style='margin-top: 3mm; margin-bottom: 3mm; color: #309030;'><b><i>Description</i></b></div><div style='margin-left: 5mm; margin-right: 5mm;'>Cpp_Lex_Data represents the state of the lexer so that the system may be resumable
and the user can manage the lexer state and decide when to resume lexing with it. To create
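The hunk above records the two substantive changes to these docs: cpp_get_token now takes its Cpp_Token_Array by value instead of by pointer, and Cpp_Get_Token_Result gains the token_start and token_end fields. A minimal sketch of the documented usage follows; the helper names are hypothetical, the integer field types are assumed, and the lexing call that actually fills the array (for example cpp_lex_file from the quick-setup API) is elided.

#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>
#include "4cpp_lexer.h"

// Initialization described in the Cpp_Token_Array docs above: tokens points to
// max_count*sizeof(Cpp_Token) bytes of memory and count starts at zero.
static Cpp_Token_Array
make_token_array_hypothetical(int32_t max_count){
    Cpp_Token_Array array = {};
    array.tokens = (Cpp_Token*)malloc(max_count*sizeof(Cpp_Token));
    array.max_count = max_count;
    array.count = 0;
    return(array);
}

// Look up the token containing pos and print its text using the new
// token_start/token_end fields; both are zero when token_index does not
// refer to an actual token.
static void
print_token_at_pos_hypothetical(Cpp_Token_Array array, const char *text, int32_t pos){
    Cpp_Get_Token_Result result = cpp_get_token(array, pos);
    if (!result.in_whitespace && result.token_end > result.token_start){
        printf("%.*s\n", result.token_end - result.token_start, text + result.token_start);
    }
}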

View File

@ -390,6 +390,7 @@ default_keys(Bind_Helper *context){
bind(context, 'q', MDFR_CTRL, query_replace);
bind(context, 'r', MDFR_CTRL, reverse_search);
bind(context, 's', MDFR_CTRL, cmdid_save);
bind(context, 'T', MDFR_CTRL, list_all_locations_of_identifier);
bind(context, 'u', MDFR_CTRL, to_uppercase);
bind(context, 'U', MDFR_CTRL, rewrite_as_single_caps);
bind(context, 'v', MDFR_CTRL, paste_and_indent);

View File

@ -677,7 +677,7 @@ static Cpp_Token*
get_first_token_at_line(Application_Links *app, Buffer_Summary *buffer, Cpp_Token_Array tokens, int32_t line,
int32_t *line_start_out = 0){
int32_t line_start = buffer_get_line_start(app, buffer, line);
- Cpp_Get_Token_Result get_token = cpp_get_token(&tokens, line_start);
+ Cpp_Get_Token_Result get_token = cpp_get_token(tokens, line_start);
if (get_token.in_whitespace){
get_token.token_index += 1;
@ -1769,7 +1769,7 @@ buffer_seek_alphanumeric_or_camel_left(Application_Links *app, Buffer_Summary *b
static int32_t
seek_token_left(Cpp_Token_Array *tokens, int32_t pos){
- Cpp_Get_Token_Result get = cpp_get_token(tokens, pos);
+ Cpp_Get_Token_Result get = cpp_get_token(*tokens, pos);
if (get.token_index == -1){
get.token_index = 0;
}
@ -1784,7 +1784,7 @@ seek_token_left(Cpp_Token_Array *tokens, int32_t pos){
static int32_t
seek_token_right(Cpp_Token_Array *tokens, int32_t pos){
- Cpp_Get_Token_Result get = cpp_get_token(tokens, pos);
+ Cpp_Get_Token_Result get = cpp_get_token(*tokens, pos);
if (get.in_whitespace){
++get.token_index;
}
@ -2707,22 +2707,24 @@ CUSTOM_COMMAND_SIG(exit_4coder){
#include "4coder_jump_parsing.cpp" #include "4coder_jump_parsing.cpp"
static void static void
generic_search_all_buffers(Application_Links *app, General_Memory *general, Partition *part, get_search_all_string(Application_Links *app, Query_Bar *bar){
uint32_t match_flags){
Query_Bar string;
char string_space[1024]; char string_space[1024];
string.prompt = make_lit_string("List Locations For: "); bar->prompt = make_lit_string("List Locations For: ");
string.string = make_fixed_width_string(string_space); bar->string = make_fixed_width_string(string_space);
if (!query_user_string(app, &string)) return; if (!query_user_string(app, bar)){
if (string.string.size == 0) return; bar->string.size = 0;
}
}
static void
generic_search_all_buffers(Application_Links *app, General_Memory *general, Partition *part,
String string, uint32_t match_flags){
Search_Set set = {0};
Search_Iter iter = {0};
- search_iter_init(general, &iter, string.string.size);
+ search_iter_init(general, &iter, string.size);
- copy_ss(&iter.word, string.string);
+ copy_ss(&iter.word, string);
int32_t buffer_count = get_buffer_count(app);
search_set_init(general, &set, buffer_count);
@ -2843,19 +2845,59 @@ generic_search_all_buffers(Application_Links *app, General_Memory *general, Part
}
CUSTOM_COMMAND_SIG(list_all_locations){
- generic_search_all_buffers(app, &global_general, &global_part, SearchFlag_MatchWholeWord);
+ Query_Bar bar;
get_search_all_string(app, &bar);
if (bar.string.size == 0) return;
generic_search_all_buffers(app, &global_general, &global_part,
bar.string, SearchFlag_MatchWholeWord);
}
CUSTOM_COMMAND_SIG(list_all_substring_locations){
- generic_search_all_buffers(app, &global_general, &global_part, SearchFlag_MatchSubstring);
+ Query_Bar bar;
get_search_all_string(app, &bar);
if (bar.string.size == 0) return;
generic_search_all_buffers(app, &global_general, &global_part,
bar.string, SearchFlag_MatchSubstring);
}
CUSTOM_COMMAND_SIG(list_all_locations_case_insensitive){
- generic_search_all_buffers(app, &global_general, &global_part, SearchFlag_CaseInsensitive | SearchFlag_MatchWholeWord);
+ Query_Bar bar;
get_search_all_string(app, &bar);
if (bar.string.size == 0) return;
generic_search_all_buffers(app, &global_general, &global_part,
bar.string, SearchFlag_CaseInsensitive | SearchFlag_MatchWholeWord);
}
CUSTOM_COMMAND_SIG(list_all_substring_locations_case_insensitive){
- generic_search_all_buffers(app, &global_general, &global_part, SearchFlag_CaseInsensitive | SearchFlag_MatchSubstring);
+ Query_Bar bar;
get_search_all_string(app, &bar);
if (bar.string.size == 0) return;
generic_search_all_buffers(app, &global_general, &global_part,
bar.string, SearchFlag_CaseInsensitive | SearchFlag_MatchSubstring);
}
CUSTOM_COMMAND_SIG(list_all_locations_of_identifier){
View_Summary view = get_active_view(app, AccessProtected);
Buffer_Summary buffer = get_buffer(app, view.buffer_id, AccessProtected);
Cpp_Get_Token_Result get_result = {0};
bool32 success = buffer_get_token_index(app, &buffer, view.cursor.pos, &get_result);
if (success && !get_result.in_whitespace){
char space[128];
int32_t size = get_result.token_end - get_result.token_start;
if (size > 0 && size <= sizeof(space)){
success = buffer_read_range(app, &buffer, get_result.token_start,
get_result.token_end, space);
if (success){
String str = make_string(space, size);
exec_command(app, change_active_panel);
generic_search_all_buffers(app, &global_general, &global_part,
str, SearchFlag_MatchWholeWord);
}
}
}
}
struct Word_Complete_State{
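The hunk above splits the old interactive command into two pieces: get_search_all_string gathers the query into a Query_Bar, while generic_search_all_buffers now takes the String and match flags directly, which is what lets the new list_all_locations_of_identifier command feed it a string read from the buffer instead of a prompt. A hedged sketch of another command built on the same split; the command name and the searched literal are illustrative only, everything else uses signatures visible in this hunk.

CUSTOM_COMMAND_SIG(list_all_todo_locations){
    // Hypothetical command: search every buffer for a fixed substring instead
    // of prompting, by calling the refactored search entry point directly.
    String str = make_lit_string("TODO");
    generic_search_all_buffers(app, &global_general, &global_part,
                               str, SearchFlag_MatchSubstring);
}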

View File

@ -151,8 +151,8 @@ static String_And_Flag keywords[] = {
FCPP_LINK Cpp_Get_Token_Result
- cpp_get_token(Cpp_Token_Array *token_array_in, int32_t pos)/*
+ cpp_get_token(Cpp_Token_Array array, int32_t pos)/*
- DOC_PARAM(token_array, The array of tokens from which to get a token.)
+ DOC_PARAM(array, The array of tokens from which to get a token.)
DOC_PARAM(pos, The position, measured in bytes, to get the token for.)
DOC_RETURN(A Cpp_Get_Token_Result struct is returned containing the index
of a token and a flag indicating whether the pos is contained in the token
@ -167,10 +167,10 @@ index can be -1 if the position is before the first token.)
DOC_SEE(Cpp_Get_Token_Result)
*/{
Cpp_Get_Token_Result result = {};
- Cpp_Token *token_array = token_array_in->tokens;
+ Cpp_Token *token_array = array.tokens;
Cpp_Token *token = 0;
int32_t first = 0;
- int32_t count = token_array_in->count;
+ int32_t count = array.count;
int32_t last = count;
int32_t this_start = 0, next_start = 0;
@ -217,6 +217,12 @@ DOC_SEE(Cpp_Get_Token_Result)
result.in_whitespace = 1;
}
if (result.token_index >= 0 && result.token_index < count){
token = array.tokens + result.token_index;
result.token_start = token->start;
result.token_end = token->start + token->size;
}
return(result);
}
@ -1130,13 +1136,13 @@ The start and end points are based on the edited region of the file before the e
Cpp_Relex_Range range = {0};
Cpp_Get_Token_Result get_result = {0};
- get_result = cpp_get_token(array, start_pos);
+ get_result = cpp_get_token(*array, start_pos);
range.start_token_index = get_result.token_index-1;
if (range.start_token_index < 0){
range.start_token_index = 0;
}
- get_result = cpp_get_token(array, end_pos);
+ get_result = cpp_get_token(*array, end_pos);
range.end_token_index = get_result.token_index;
if (end_pos > array->tokens[range.end_token_index].start){
++range.end_token_index;

View File

@ -303,6 +303,14 @@ struct Cpp_Get_Token_Result{
/* DOC(The in_whitespace field is true when the query position was actually in whitespace
after the result token.) */
int32_t in_whitespace;
/* DOC(If the token_index refers to an actual token, this is the start value of the token.
Otherwise this is zero.) */
int32_t token_start;
/* DOC(If the token_index refers to an actual token, this is the start+size value of the token.
Otherwise this is zero.) */
int32_t token_end;
};
/* DOC(Cpp_Relex_Range is the return result of the cpp_get_relex_range call.)

View File

@ -899,7 +899,7 @@ DOC_SEE(cpp_get_token)
if (file && token_array.tokens && file->state.tokens_complete){
result = 1;
Cpp_Get_Token_Result get = {0};
- get = cpp_get_token(&token_array, pos);
+ get = cpp_get_token(token_array, pos);
*get_result = get;
}

View File

@ -297,14 +297,12 @@ view_file_height(View *view){
inline i32
view_get_cursor_pos(View *view){
i32 result = 0;
if (view->file_data.show_temp_highlight){
result = view->file_data.temp_highlight.pos;
}
else if (view->edit_pos){
result = view->edit_pos->cursor.pos;
}
return(result);
}
@ -499,13 +497,6 @@ view_compute_cursor_from_xy(View *view, f32 seek_x, f32 seek_y){
return(result);
}
inline i32
view_wrapped_line_span(f32 line_width, f32 max_width){
i32 line_count = CEIL32(line_width / max_width);
if (line_count == 0) line_count = 1;
return(line_count);
}
inline i32
view_compute_max_target_y(i32 lowest_line, i32 line_height, f32 view_height){
f32 max_target_y = ((lowest_line+.5f)*line_height) - view_height*.5f;
@ -881,14 +872,11 @@ file_grow_starts_as_needed(General_Memory *general, Buffer_Type *buffer, i32 add
i32 *new_lines = (i32*)general_memory_reallocate(
general, buffer->line_starts,
- sizeof(i32)*count, sizeof(i32)*max);
+ sizeof(i32)*count, sizeof(f32)*max);
if (new_lines){
buffer->line_starts = new_lines;
buffer->line_max = max;
}
if (new_lines){
result = GROW_SUCCESS;
buffer->line_starts = new_lines;
}
else{
result = GROW_FAILED;
@ -1314,7 +1302,7 @@ file_relex_parallel(System_Functions *system,
if (!inline_lex){
Cpp_Token_Array *array = &file->state.token_array;
- Cpp_Get_Token_Result get_token_result = cpp_get_token(array, end_i);
+ Cpp_Get_Token_Result get_token_result = cpp_get_token(*array, end_i);
i32 end_token_i = get_token_result.token_index;
if (end_token_i < 0){
@ -1929,7 +1917,7 @@ file_edit_cursor_fix(System_Functions *system, Models *models,
i32 cursor_count = 0;
View *view = 0;
- Panel *panel, *used_panels = &layout->used_sentinel;
+ Panel *panel = 0, *used_panels = &layout->used_sentinel;
for (dll_items(panel, used_panels)){
view = panel->view;
if (view->file_data.file == file){
@ -2047,7 +2035,6 @@ file_do_single_edit(System_Functions *system,
i32 new_line_count = buffer_count_newlines(&file->state.buffer, start, start+str_len);
i32 line_shift = new_line_count - replaced_line_count;
Render_Font *font = get_font_info(models->font_set, file->settings.font_id)->font; Render_Font *font = get_font_info(models->font_set, file->settings.font_id)->font;
file_grow_starts_as_needed(general, buffer, line_shift);
buffer_remeasure_starts(buffer, line_start, line_end, line_shift, shift_amount);
@ -2140,7 +2127,6 @@ file_do_batch_edit(System_Functions *system, Models *models, Editing_File *file,
Buffer_Edit *first_edit = batch;
Buffer_Edit *last_edit = batch + batch_size - 1;
file_relex_parallel(system, mem, file, first_edit->start, last_edit->end, shift_total);
}
}break;
@ -3251,6 +3237,7 @@ app_single_line_input_core(System_Functions *system, Working_Set *working_set,
case SINGLE_LINE_FILE:
{
if (!key.modifiers[MDFR_CONTROL_INDEX]){
char end_character = mode.string->str[mode.string->size];
if (char_is_slash(end_character)){
mode.string->size = reverse_seek_slash(*mode.string) + 1;
@ -3260,6 +3247,10 @@ app_single_line_input_core(System_Functions *system, Working_Set *working_set,
else{
mode.string->str[mode.string->size] = 0;
}
}
else{
mode.string->str[mode.string->size] = 0;
}
}break;
}
}
@ -3276,8 +3267,7 @@ app_single_line_input_core(System_Functions *system, Working_Set *working_set,
else if (key.character){
result.hit_a_character = 1;
- if (!key.modifiers[MDFR_CONTROL_INDEX] &&
- !key.modifiers[MDFR_ALT_INDEX]){
+ if (!key.modifiers[MDFR_CONTROL_INDEX] && !key.modifiers[MDFR_ALT_INDEX]){
if (mode.string->size+1 < mode.string->memory_size){
u8 new_character = (u8)key.character;
mode.string->str[mode.string->size] = new_character;
@ -3309,12 +3299,11 @@ inline Single_Line_Input_Step
app_single_file_input_step(System_Functions *system,
Working_Set *working_set, Key_Event_Data key,
String *string, Hot_Directory *hot_directory,
- b32 fast_folder_select, b32 try_to_match, b32 case_sensitive){
+ b32 try_to_match, b32 case_sensitive){
Single_Line_Mode mode = {};
mode.type = SINGLE_LINE_FILE;
mode.string = string;
mode.hot_directory = hot_directory;
mode.fast_folder_select = fast_folder_select;
mode.try_to_match = try_to_match;
mode.case_sensitive = case_sensitive;
return app_single_line_input_core(system, working_set, key, mode);
@ -3798,11 +3787,11 @@ step_file_view(System_Functions *system, View *view, View *active_view, Input_Su
switch (view->interaction){
case IInt_Sys_File_List:
{
- b32 use_item_in_list = 1;
+ b32 autocomplete_with_enter = 1;
b32 activate_directly = 0;
if (view->action == IAct_Save_As || view->action == IAct_New){
- use_item_in_list = 0;
+ autocomplete_with_enter = 0;
}
String message = {0};
@ -3828,12 +3817,13 @@ step_file_view(System_Functions *system, View *view, View *active_view, Input_Su
for (i = 0; i < keys.count; ++i){
key = get_single_key(&keys, i);
step = app_single_file_input_step(system, &models->working_set, key,
- &hdir->string, hdir, 1, 1, 0);
+ &hdir->string, hdir, 1, 0);
if (step.made_a_change){
view->list_i = 0;
result.consume_keys = 1;
}
if (!use_item_in_list && (key.keycode == '\n' || key.keycode == '\t')){
if (!autocomplete_with_enter && key.keycode == '\n'){
activate_directly = 1;
result.consume_keys = 1;
}
@ -3877,7 +3867,8 @@ step_file_view(System_Functions *system, View *view, View *active_view, Input_Su
set_last_folder_sc(&hdir->string, file_info.info->filename, '/');
do_new_directory = 1;
}
else if (use_item_in_list){
else if (autocomplete_with_enter){
complete = 1;
copy_ss(&comp_dest, loop.full_path);
}
@ -4388,6 +4379,7 @@ step_file_view(System_Functions *system, View *view, View *active_view, Input_Su
SHOW_GUI_BOOL(2, h_align, "show temp highlight", view_ptr->file_data.show_temp_highlight); SHOW_GUI_BOOL(2, h_align, "show temp highlight", view_ptr->file_data.show_temp_highlight);
SHOW_GUI_INT (2, h_align, "start temp highlight", view_ptr->file_data.temp_highlight.pos); SHOW_GUI_INT (2, h_align, "start temp highlight", view_ptr->file_data.temp_highlight.pos);
SHOW_GUI_INT (2, h_align, "end temp highlight", view_ptr->file_data.temp_highlight_end_pos); SHOW_GUI_INT (2, h_align, "end temp highlight", view_ptr->file_data.temp_highlight_end_pos);
SHOW_GUI_BOOL(2, h_align, "show whitespace", view_ptr->file_data.show_whitespace); SHOW_GUI_BOOL(2, h_align, "show whitespace", view_ptr->file_data.show_whitespace);
SHOW_GUI_BOOL(2, h_align, "locked", view_ptr->file_data.file_locked); SHOW_GUI_BOOL(2, h_align, "locked", view_ptr->file_data.file_locked);
@ -4871,7 +4863,6 @@ draw_file_loaded(View *view, i32_Rect rect, b32 is_active, Render_Target *target
case RenderStatus_NeedLineShift: break;
}
}while(stop.status != RenderStatus_Finished);
}
i32 cursor_begin = 0, cursor_end = 0;
@ -4893,7 +4884,7 @@ draw_file_loaded(View *view, i32_Rect rect, b32 is_active, Render_Target *target
u32 main_color = style->main.default_color;
u32 special_color = style->main.special_character_color;
if (tokens_use){
- Cpp_Get_Token_Result result = cpp_get_token(&token_array, items->index);
+ Cpp_Get_Token_Result result = cpp_get_token(token_array, items->index);
main_color = *style_get_color(style, token_array.tokens[result.token_index]);
token_i = result.token_index + 1;
}

View File

@ -144,6 +144,7 @@
;
; [X] eliminate the need for the lexer state's spare array.
; [X] fix buffer render item capacity issue
; [X] tab to complete folder names in the new file dialogue
; Arbitrary wrap positions
; [X] allow for arbitrary wrap positions independent of view width
@ -151,6 +152,8 @@
; [X] get horizontal scrolling to work in line wrap mode
; [] command for setting wrap positions in views directly
; [] ability to see the wrap position as a number/line and adjust graphically
; [] word level wrapping
; [] code level wrapping
; buffer behavior cleanup
; [X] show all characters as \# if they can't be rendered
@ -166,7 +169,6 @@
; [] user file bar string
; [] API docs as text file
; [] read only files
; [] tab to complete folder names in the new file dialogue
; [] option to hide hidden files
; [] control over how mouse effects panel focus
; [] option to not open *messages* every startup
@ -175,18 +177,25 @@
; [] option to break buffer name ties by adding parent directories instead of <#>
; [] undo groups
; [] cursor/scroll groups
; [] word level wrapping ~ temporary measure really want to have totally formatted code presentation
; [] double binding warnings
;
;
; [] the "main_4coder" experiment
; [] real multi-line editing
; [] multi-cursor editing
; [] matching brace/paren/#if#endif highlighting
; [] word repetition highlighting
; [] simple text based project file
; [] system commands bound to <ctrl #> in project file
; [] find matches for current identifier
; [] ability to save and reopen the window state
; [] API docs have duplicate ids?
; [] introduce custom command line arguments
; [] control the file opening/start hook relationship better
; [] get keyboard state on launch
; [] never launch two 4coders unless special -R flag is used
; meta programming system
; [X] condense system into single meta compiler
@ -224,8 +233,8 @@
; control schemes
; [] emacs style sub-maps
; [] vim style modes
; [] sublime style editing
; [] "tap typing"
; [] "thin cursor"
; [] command meta data
; [] macros
;
@ -233,7 +242,7 @@
; code engine
; [X] lexer with multiple chunk input
; [X] more correct auto-indentation
- ; [] switch over to gap buffer
+ ; [X] switch over to gap buffer
; [] preprocessor
; [] AST generator
;
@ -257,7 +266,6 @@
;
; [] tutorials
; [] 4edT thing
; [] unicode/UTF support
; [] console emulator
;
@ -280,13 +288,11 @@
; HARD BUGS
; [X] reduce cpu consumption
; [X] repainting too slow for resize looks really dumb
- ; [] fyoucon's segfaults with malloc on win10
+ ; [X] fill screen right away
; [X] minimize and reopen problem (fixed by microsoft patch aparently)
; [] handling cursor in non-client part of window so it doesn't spaz
; [] fill screen right away
; [] history breaks when heavily used? (disk swaping?)
;
; [] a triangle rendered for a few frames? color of the dirty markers (not reproduced by me yet)
; [] minimize and reopen problem (not reproduced by me yet)
;
;