jdoe95 / mrtos
Small footprint Real Time Operating System
License: MIT License
Inter-process, process-interrupt communication
If the data being sent is larger than the queue buffer, the send function will always fail, no matter how long the caller is willing to wait. It might be desirable to change the queue implementation so that the data can be sent in smaller blocks as soon as the queue has free space.
When a blocking function is called inside a critical section, the context switch will not be serviced until the lock is released. If a user application mistakenly calls a blocking function inside a critical section, the blocking function returns false when trying to block, because it was never able to yield the context, and thus the requested resource never gets serviced. However, because the thread is still in the resource's waiting list, the scheduler will later try to access the scheduling info block, which has already been released because the blocking function returned. This results in memory corruption. See the example below.
// assume the semaphore is 0
os_enter_critical();
// schinfo block allocated on the stack by os_semaphore_wait
b_result = os_semaphore_wait( h_sem, 1000);
// thread joined wait queue, requested yield, but not serviced
// schinfo released, the function returns false
os_exit_critical();
// context switch happens here, during this time the scheduler attempts to fetch
// schinfo to allocate resource, but it was already released because os_semaphore_wait
// returned.
Currently, sleeping with a lock held is not allowed in the operating system, because doing so would break the lock: the purpose of the lock is that no other context can gain access to the CPU, and thus to the shared resource. A possible solution is to allow sleeping with the lock held by enabling interrupts immediately before generating the context switch request and disabling them as soon as the context resumes. This requires each thread to keep a copy of the critical nesting counter when yielding, while the global critical nesting counter is still needed for interrupts and must also be made visible to the scheduler. The user must ensure that modifications to shared data are not interrupted by the sleep function.
Sometimes it is desirable to peek at the queue to wait for it to reach a certain fill level. No data transfer is needed in this case. Currently the queue doesn't support peeking with a NULL destination buffer.
The following features can be added:
os_queue_peek(h_q, NULL, 512, 50);
os_queue_peek_nb(h_q, NULL, 512);
When calling one of the following functions
os_queue_send();
os_queue_send_nb();
os_queue_send_ahead();
os_queue_send_ahead_nb();
os_queue_receive();
os_queue_receive_nb();
if other threads are blocked on the queue, they do not get unblocked when the data or free space they are waiting for becomes available. This is because the following function call is missing from these functions:
queue_unlock_threads(p_q, &g_sch);
Memory not released after dynamic thread exits.
Replication:
1. os_memory_get_pool_info to obtain pool size: 27468
2. os_thread_create
3. os_memory_allocate and os_memory_free
4. os_memory_get_pool_info again to obtain pool size: 26352

static void user_thread(void)
{
ushell_printf(TTY_WARN "Dynamic thread started.\n\r" );
for(unsigned i = 0; i < 10; i++ )
{
ushell_printf(TTY_INFO "Hello %u\n\r", i);
os_thread_delay(100);
}
ushell_printf(TTY_WARN "Dynamic thread exiting...\n\r" );
}
void cmd_create_thread( char *argv[], int argc )
{
(void)argv;
(void)argc;
ushell_printf(TTY_INFO "running os_thread_create...\n\r" );
os_thread_create(2, 1024, user_thread);
}
ushell version 0.2
>
help
info
lscmd
exit
args
demo
poolinfo
> poolinfo
num blocks in pool 1
pool size 27468
> demo
[INFO] running os_thread_create...
> [WARN] Dynamic thread started.
[INFO] Hello 0
[INFO] Hello 1
[INFO] Hello 2
[INFO] Hello 3
[INFO] Hello 4
[INFO] Hello 5
[INFO] Hello 6
[INFO] Hello 7
[INFO] Hello 8
[INFO] Hello 9
[WARN] Dynamic thread exiting...
> poolinfo
num blocks in pool 1
pool size 26352
>
Line 362 in 0ded6f1
Lock nesting counter must be set every time os_sch_lock_int is called.
struct lstitem_s
{
struct lstitem_s *volatile p_prev; /* previous item */
struct lstitem_s *volatile p_next; /* next item */
};
struct mblk_s
{
struct mblk_s *volatile p_prev; /* previous block */
struct mblk_s *volatile p_next; /* next block */
volatile uint_t size; /* block size */
struct mlst_s *volatile p_mlst; /* parent list */
};
The RTOS reuses its linked list code by defining insertion/removal methods for struct lstitem_s and then casting other linked list element types, such as mblk_s, to reuse some of those methods. Both lstitem_s and mblk_s have two pointers at the beginning of the struct. This, however, may not always work, because no assumptions can be made about the internal layout of the structs, even though they are defined very similarly.
There is something in the C standards called a 'common initial sequence': when multiple structs of different types share a common initial sequence and are members of the same union, they share the same memory layout in their common part. In this case, however, even though the common members are all pointers, the pointers are to different types, so they are not generally considered a 'common initial sequence'. It is unlikely that the compiler will generate different memory layouts for the initial parts of these structs, but there is no guarantee that it won't, because this type of conversion is not backed by the language standard.
Planned removal of dynamic memory features because they are intrinsically incompatible with memory protection units (a safety feature on many MCUs).
It has also been determined that, in practice, most operating system objects are created statically at compile time. Although the RTOS already supports creating static objects, this should be made the primary way of doing things.
Dynamic memory may then be provided as a feature completely separate from the RTOS. Allocating memory from a specific memory region should be supported, because most memory protection units only support a limited number of regions. Moving dynamic memory out of the RTOS means it will no longer perform garbage collection: if a thread quits or objects get destroyed, the operating system will not try to reclaim the memory, because it is completely unaware of it.
It was originally proposed that, when a thread is blocked on a resource, sch_citem_t.p_schinfo would point to a struct containing a queue item, either FIFO or priority, that allows the task control block to be registered in the waiting queue of the resource. However, this would not allow os_thread_delete or similar functions to discover the item and safely remove it from the waiting queue should the blocked thread be deleted. Thus, for the sake of os_thread_delete, the resource queue item should be visible in the thread control block. On the other hand, it was discovered that sch_citem_t.sch_item can potentially be reused as a resource queue item, since it is essentially unused by the scheduler while the thread is blocked (it is inserted into sch_cblk_t.q_block and left alone until readied). The following two solutions exploit this instead of introducing a separate resource item.
The type of the item (FIFO or priority) depends on the resource. This solution offers two types of queue items in the space of one.
/*
* Scheduler control item
*/
struct sch_citem_s
{
union /* scheduling item */
{
struct sch_fifoq_item_s sch_item_fifo; /* FIFO scheduling item */
struct sch_prioq_item_s sch_item_prio; /* priority scheduling item */
} sch_item;
struct sch_prioq_item_s delay_item; /* delay item */
volatile os_counter_t prio; /* item priority */
volatile os_sched_state_t state; /* item state */
void *volatile p_container; /* container */
void *volatile p_schinfo; /* scheduling info block */
};
Depending on its needs, the resource can then choose to push either scheduling item onto its queue. However, doing this increases the chance of using the wrong type of queue function, and it is hard to make the initialization of sch_item less misleading. Moreover, if the members of sch_item_fifo and sch_item_prio are not arranged the same way, key data can be overwritten.
Instead of introducing yet another queue item, the focus is on making sch_citem_t.sch_item support both priority and FIFO queue operations. To do this, FIFO queue items and priority queue items are no longer differentiated; instead, they are merged into a single item whose memory usage is the same as in Solution 1.
/*
* Scheduler queue item
* Order of members makes a difference.
*/
struct sch_qitem_s
{
struct sch_qitem_s *volatile p_prev; /* previous item */
struct sch_qitem_s *volatile p_next; /* next item */
struct sch_citem_s *volatile p_container; /* container */
void *volatile p_q; /* mother queue */
volatile os_counter_t tag; /* tag value for ordering */
};
The two queue headers are then defined separately so that the compiler can check if the right queue function is used on the queue type.
/*
* Scheduler FIFO queue header
*/
struct sch_fifoq_s
{
struct sch_qitem_s *volatile p_first; /* first item */
};
/*
* Scheduler priority queue header
*/
struct sch_prioq_s
{
struct sch_qitem_s *volatile p_first; /* first item */
};
void sch_fifoq_??( struct sch_fifoq_s *p_q, ...);
void sch_prioq_??( struct sch_prioq_s *p_q, ... );
The second solution works the same as the first but is easier to implement, debug, and understand.