mrtos's Issues

Provisional features

Thread
Multi-threading

  1. Dynamic creation and deletion
  2. Static creation and deletion using existing buffer
  3. Suspend/resume
  4. Get/Update priority
  5. Delay
  6. Yield
  7. Critical section

Memory
Efficient memory usage

  1. Dynamic allocation and deallocation

Queue
Inter-process communication

  1. Dynamic creation and deletion
  2. Static creation and deletion using existing buffer
  3. Reset queue (clear all data)
  4. Peek push, non-blocking push and push with timeout
  5. Peek pop, non-blocking pop and pop with timeout

Semaphore
Inter-process synchronization

  1. Dynamic creation and deletion
  2. Static creation and deletion using existing buffer
  3. Post
  4. Peek wait, non-blocking wait, and wait with timeout

Mutex
Inter-process synchronization

  1. Dynamic creation and deletion
  2. Static creation and deletion using existing buffer
  3. Peek lock, non-blocking lock, and lock with timeout
  4. Peek recursive lock, non-blocking recursive lock and recursive lock with timeout

Signal
Inter-process, process-interrupt communication

  1. Dynamic creation and deletion
  2. Static creation and deletion using existing buffer
  3. Wait with info struct and timeout
  4. Send

Queue implementation: cannot send data larger than the queue buffer

If the data being sent is larger than the queue buffer, the send function will always fail. It might be desirable to change the queue implementation so that the data can be sent in smaller blocks as soon as the queue has free space.
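The chunked-send idea could look roughly like the sketch below. All names here are illustrative stand-ins, not the actual mrtos API; in the real RTOS the "queue full" branch would be a blocking wait with a timeout rather than the simulated drain used here to keep the sketch self-contained.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch: send 'size' bytes in chunks no larger than the
 * queue buffer. Names are illustrative, not the real mrtos API. */
typedef struct
{
	char buf[16];   /* queue storage, deliberately small for the demo */
	size_t fill;    /* bytes currently queued */
} demo_queue_t;

/* push up to the available free space; returns bytes accepted */
static size_t demo_queue_push( demo_queue_t *q, const void *p, size_t n )
{
	size_t space = sizeof(q->buf) - q->fill;
	size_t take = (n < space) ? n : space;
	memcpy( q->buf + q->fill, p, take );
	q->fill += take;
	return take;
}

/* chunked send: succeeds even when size > sizeof(q->buf), as long as a
 * consumer drains the queue between chunks */
static int demo_queue_send_chunked( demo_queue_t *q, const void *p, size_t size )
{
	const char *src = p;
	while( size > 0 )
	{
		size_t sent = demo_queue_push( q, src, size );
		if( sent == 0 )
		{
			/* queue full: drain it here as a stand-in for blocking
			 * until the consumer makes space */
			q->fill = 0;
			continue;
		}
		src += sent;
		size -= sent;
	}
	return 1;
}
```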

Thread running when state == BLOCKED

When a blocking function is called inside a critical section, the context switch will not be serviced until the lock is released. If a user application mistakenly calls a blocking function inside a critical section, the blocking function returns false when it tries to block, because it was never able to yield the context, and the requested resource is never serviced. However, because the thread is still in the resource's waiting list, the scheduler will still try to access the scheduling info, which was already released when the blocking function returned. This results in memory corruption. See the example below.

// assume the semaphore is 0 
os_enter_critical();
// schinfo block allocated on the stack by os_semaphore_wait
b_result = os_semaphore_wait( h_sem, 1000); 
// thread joined wait queue, requested yield, but not serviced
// schinfo released, the function returns false
os_exit_critical();
// context switch happens here, during this time the scheduler attempts to fetch
// schinfo to allocate resource, but it was already released because os_semaphore_wait
// returned.

Currently, sleeping with a lock held is not allowed in the operating system, because it would break the lock: the purpose of the lock is that no other context can gain access to the CPU and thus the shared resource. A possible solution is to allow sleeping with the lock held by enabling interrupts immediately before generating the context switch request and disabling them as soon as the context resumes. This requires each thread to keep a copy of the critical nesting counter when yielding; the global critical nesting counter is still needed for interrupts and must also be made visible to the scheduler. The user must ensure that modifications to shared data are not interrupted by the sleep function.
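The save/restore of the nesting counter described above could be sketched as follows. All names are illustrative assumptions, not the existing mrtos implementation; the actual interrupt enable/disable would be port-specific.

```c
/* Sketch of "sleep with lock held" support via a per-thread saved copy
 * of the critical nesting counter. Names are illustrative. */
typedef struct
{
	unsigned saved_crit_nesting; /* per-thread copy, saved at yield */
} demo_thread_t;

static unsigned g_crit_nesting; /* global counter, visible to the scheduler */

static void demo_enter_critical( void )
{
	/* port would disable interrupts here */
	g_crit_nesting++;
}

static void demo_exit_critical( void )
{
	if( --g_crit_nesting == 0 )
	{
		/* port would re-enable interrupts here */
	}
}

/* called immediately before generating the context switch request */
static void demo_yield_save( demo_thread_t *t )
{
	t->saved_crit_nesting = g_crit_nesting;
	g_crit_nesting = 0; /* enable interrupts so the switch is serviced */
}

/* called as soon as the thread's context resumes */
static void demo_yield_restore( demo_thread_t *t )
{
	g_crit_nesting = t->saved_crit_nesting;
	/* if nonzero, the port would re-disable interrupts here */
}
```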

Queue peek with no data transfer

Sometimes it is desirable to peek at the queue to wait for it to reach a certain fill level. No data transfer is needed in this case. Currently the queue doesn't support peeking with a NULL destination buffer.

The following features can be added:

os_queue_peek(h_q, NULL, 512, 50);
os_queue_peek_nb(h_q, NULL, 512);
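Internally, the NULL-tolerant peek could amount to guarding the data copy, as in this sketch. The queue type and function here are simplified stand-ins for illustration, not the real mrtos internals.

```c
#include <stddef.h>
#include <string.h>

/* Sketch: a peek that tolerates a NULL destination. It reports whether
 * the queue holds at least 'size' bytes and copies only when a buffer
 * is supplied. Illustrative, not the real implementation. */
typedef struct
{
	char buf[64];
	size_t fill;
} demo_queue_t;

static int demo_queue_peek_nb( demo_queue_t *q, void *p_dst, size_t size )
{
	if( q->fill < size )
		return 0; /* requested fill level not reached */

	if( p_dst != NULL )
		memcpy( p_dst, q->buf, size ); /* copy only when requested */

	return 1;
}
```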

Queue does not unblock threads

When calling one of the following functions

os_queue_send();
os_queue_send_nb();
os_queue_send_ahead();
os_queue_send_ahead_nb();
os_queue_receive();
os_queue_receive_nb();

if there are threads blocked on the queue, they do not get unblocked when data is ready. This is because the following function call is missing from these functions

queue_unlock_threads(p_q, &g_sch);
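The shape of the fix can be illustrated with simplified stand-ins: after a successful send, the unlock pass must move any waiters back to the ready state. The types and bodies below are hypothetical, reduced to a single flag in place of the real waiting list.

```c
#include <stdbool.h>

/* Illustrative stand-ins for the missing-call fix; not the mrtos internals. */
typedef struct
{
	int data;
	bool has_data;
	bool waiter_blocked; /* stand-in for the queue's waiting list */
} demo_queue_t;

/* ready any thread blocked waiting for data */
static void demo_unlock_threads( demo_queue_t *q )
{
	if( q->waiter_blocked && q->has_data )
		q->waiter_blocked = false;
}

static void demo_queue_send( demo_queue_t *q, int v )
{
	q->data = v;
	q->has_data = true;
	demo_unlock_threads( q ); /* the call missing from the six functions */
}
```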

Dynamic thread memory leakage

Memory is not released after a dynamic thread exits.

Replication:

  • Call os_memory_get_pool_info to obtain pool size: 27468
  • Create a thread using os_thread_create
  • Thread may or may not call os_memory_allocate and os_memory_free
  • Thread exits
  • Call os_memory_get_pool_info again to obtain pool size: 26352

static void user_thread(void)
{
	ushell_printf(TTY_WARN "Dynamic thread started.\n\r" );
	for(unsigned i = 0; i < 10; i++ )
	{
		ushell_printf(TTY_INFO "Hello %u\n\r", i);
		os_thread_delay(100);
	}

	ushell_printf(TTY_WARN "Dynamic thread exiting...\n\r" );
}

void cmd_create_thread( char *argv[], int argc )
{
	(void)argv;
	(void)argc;
	ushell_printf(TTY_INFO "running os_thread_create...\n\r" );
	os_thread_create(2, 1024, user_thread);
}

ushell version 0.2
> 
help
info
lscmd
exit
args
demo
poolinfo
> poolinfo
num blocks in pool 1
pool size 27468

> demo
[INFO]  running os_thread_create...

> [WARN]  Dynamic thread started.
[INFO]  Hello 0
[INFO]  Hello 1
[INFO]  Hello 2
[INFO]  Hello 3
[INFO]  Hello 4
[INFO]  Hello 5
[INFO]  Hello 6
[INFO]  Hello 7
[INFO]  Hello 8
[INFO]  Hello 9
[WARN]  Dynamic thread exiting...

> poolinfo
num blocks in pool 1
pool size 26352

> 

Casting ``struct lstitem_s`` from other list element types is unsafe

struct lstitem_s
{
	struct lstitem_s *volatile p_prev; /* previous item */
	struct lstitem_s *volatile p_next; /* next item     */
};

struct mblk_s
{
	struct mblk_s *volatile p_prev; /* previous block */
	struct mblk_s *volatile p_next; /* next block     */
	volatile uint_t size;           /* block size     */
	struct mlst_s *volatile p_mlst; /* parent list    */
};

The RTOS reuses its linked list code by defining insertion/removal methods for struct lstitem_s and then casting other linked list element types, such as mblk_s, to reuse some of the methods. Both lstitem_s and mblk_s have two pointers at the beginning of the struct. This, however, may not always work, because no assumptions can be made about the internal layout of the structs, even though they are defined very similarly.

The C standard defines a 'common initial sequence': when structs of different types sharing a common initial sequence are members of the same union, the common part is guaranteed to have the same memory layout. In this case, however, even though the leading members are both pairs of pointers, the pointers are to different types, so they are not generally considered a 'common initial sequence'.

It is unlikely that the compiler will generate different memory layouts for the initial parts of these structs, but there is no guarantee that it won't, because this kind of conversion is not backed by the language standard.
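A standards-backed alternative is to embed the list item as a member of each element type and recover the container with offsetof, rather than casting between unrelated struct types. The sketch below drops the volatile qualifiers for brevity; the macro name is an illustrative suggestion, not existing mrtos code.

```c
#include <stddef.h>

/* The generic list item, as in the RTOS (volatiles dropped for brevity) */
struct lstitem_s
{
	struct lstitem_s *p_prev; /* previous item */
	struct lstitem_s *p_next; /* next item     */
};

/* Element type embeds the link instead of mimicking its layout */
struct mblk_s
{
	struct lstitem_s item; /* embedded link */
	unsigned size;         /* block size    */
};

/* container-of style recovery: defined behavior, because 'item' really
 * is a member of mblk_s */
#define MBLK_FROM_ITEM( p ) \
	( (struct mblk_s*)( (char*)(p) - offsetof( struct mblk_s, item ) ) )
```

The generic insertion/removal methods then operate on `&blk->item`, and element-specific code converts back with the macro; no assumption about struct layout is needed.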

Removing dynamic memory features

Planned removal of dynamic memory features because they are intrinsically incompatible with memory protection units (a safety feature on many MCUs).

It has also been determined that, in practice, most operating system objects are created statically at compile time. The RTOS already supports creating static objects; this should become the primary way of doing things.

Dynamic memory may be provided as a feature completely separate from the RTOS. Allocation from a specific memory region should be supported, because most memory protection units support only a limited number of regions. Moving dynamic memory out of the RTOS means it will no longer perform garbage collection: if a thread quits or objects are destroyed, the operating system will not try to reclaim the memory, because it is completely unaware of it.

Changing scheduler queue implementation

https://github.com/jdoe95/rtos/blob/284e2765ec94535a798d5837dc8fa1bdba99def5/rtos/include/scheduler.h#L84

Problem description

It was originally proposed that, when a thread is blocked on a resource, sch_citem_t.p_schinfo would point to a struct containing a queue item, either FIFO or priority, that allows the task control block to be registered in the waiting queue of the resource. However, this would not allow os_thread_delete or similar functions to discover the item and safely remove it from the waiting queue should the blocked thread be deleted. Thus, for the sake of os_thread_delete, the resource queue item should be visible in the thread control block. On the other hand, it was discovered that sch_citem_t.sch_item can potentially be reused as a resource queue item, since it is essentially unused by the scheduler while the thread is blocked (it is inserted into sch_cblk_t.q_block and left alone until the thread is readied). The following two solutions exploit this instead of introducing a separate resource item.

Solution 1

The type of queue item needed (FIFO or priority) depends on the resource. This solution offers two types of queue items in the space of one.

/*
 * Scheduler control item
 */
struct sch_citem_s
{
	union /* scheduling item */
	{
		struct sch_fifoq_item_s sch_item_fifo; /* FIFO scheduling item */
		struct sch_prioq_item_s sch_item_prio; /* priority scheduling item */
	} sch_item;

	struct sch_prioq_item_s delay_item; /* delay item */
	volatile os_counter_t prio; /* item priority */
	volatile os_sched_state_t state; /* item state */
	void *volatile p_container; /* container */
	void *volatile p_schinfo; /* scheduling info block */
};

Depending on its needs, the resource can then push either scheduling item onto its queue. However, this increases the chance of using the wrong type of queue function by mistake, and it is hard to make the initialization of sch_item less misleading. Moreover, if the members of sch_item_fifo and sch_item_prio are not arranged the same way, key data can be overwritten.

Solution 2

Instead of introducing yet another queue item, the focus is to make sch_citem_t.sch_item support both priority and FIFO queue operations. To do this, FIFO queue items and priority queue items are no longer differentiated; instead, they are merged into a single item whose memory usage is the same as in Solution 1.

/*
 * Scheduler queue item
 * Order of members makes a difference.
 */
struct sch_qitem_s
{
	struct sch_qitem_s *volatile p_prev; /* previous item */
	struct sch_qitem_s *volatile p_next; /* next item */
	struct sch_citem_s *volatile p_container; /* container */
	void *volatile p_q; /* mother queue */
	volatile os_counter_t tag; /* tag value for ordering */
};

The two queue headers are then defined as separate types so that the compiler can check that the right queue function is used on each queue type.

/*
 * Scheduler FIFO queue header
 */
struct sch_fifoq_s
{
	struct sch_qitem_s *volatile p_first; /* first item */
};

/*
 * Scheduler priority queue header
 */
struct sch_prioq_s
{
	struct sch_qitem_s *volatile p_first; /* first item */
};

void sch_fifoq_??( struct sch_fifoq_s *p_q, ...);
void sch_prioq_??( struct sch_prioq_s *p_q, ... );

The second solution works the same as the first but is easier to implement, debug, and understand.
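How the unified item serves both queue disciplines can be sketched as below: the same sch_qitem_s is appended for FIFO use or inserted in tag order for priority use. This is a simplified illustration (singly linked, only the priority insert shown), not the actual implementation, which is doubly linked and carries the container and mother-queue pointers.

```c
#include <stddef.h>

/* Simplified unified queue item from Solution 2 (singly linked for
 * brevity; the real item is doubly linked with container/queue links) */
struct sch_qitem_s
{
	struct sch_qitem_s *p_next;
	unsigned tag; /* ordering key, used only by the priority queue */
};

struct sch_prioq_s
{
	struct sch_qitem_s *p_first; /* first item */
};

/* priority insert: keep the list sorted ascending by tag; a FIFO insert
 * on the same item type would simply append at the tail */
static void sch_prioq_insert( struct sch_prioq_s *p_q, struct sch_qitem_s *p_item )
{
	struct sch_qitem_s **pp = &p_q->p_first;

	while( *pp != NULL && (*pp)->tag <= p_item->tag )
		pp = &(*pp)->p_next;

	p_item->p_next = *pp;
	*pp = p_item;
}
```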
