cordetfw's Introduction

CORDET Framework

The CORDET Framework is a software framework for service-oriented applications. It defines an application in terms of the services it provides to other applications and of the services it uses from other applications. A service is implemented by a set of commands through which an application is asked to perform certain activities and by a set of reports through which an application gives visibility over its internal state. The CORDET Framework defines the components to receive, send, distribute, and process commands and reports. The CORDET service concept supports the definition of distributed systems in which individual applications residing on different nodes interact with each other by exchanging commands and reports.

The specification of the CORDET Framework is language-independent. This project provides a C-language implementation of the CORDET Framework. Its chief characteristics are:

  • Well-Defined Semantics: unambiguously defined behaviour.
  • Minimal Memory Requirements: core module footprint of around 20 kBytes.
  • Small CPU Demands: efficient implementation in C.
  • Excellent Scalability: memory footprint and CPU demands are independent of number of supported services.
  • High Reliability: test suite with 100% code, branch, and condition coverage.
  • Formal Specification: user requirements formally specify the implementation.
  • Requirement Traceability: requirements individually traced to implementation and verification evidence.
  • Documented Code: doxygen documentation for all the source code.

These characteristics make the C2 Implementation especially well-suited for use in embedded and mission-critical applications.

The C2 Implementation of the CORDET Framework has been used by the Dept. of Astrophysics at the University of Vienna for the development of the payload software of the CHEOPS satellite; it is being used for the development of a payload application on the SMILE satellite; and it is baselined for use in the development of a payload application on the ARIEL satellite.

Web Site

An introduction to the CORDET Framework and its documentation can be found here.

PUS Extension

The Packet Utilization Standard (PUS) has been introduced by the European Space Agency (ESA) to define the protocol through which on-board applications make their services available to each other and to the ground. The CORDET Framework uses the service concept of the PUS. Its PUS Extension provides implementations for the most commonly used PUS services. The PUS Extension of the CORDET Framework is currently under development and will be published in a dedicated repository in the near future. Access to the development version of the PUS Extension of the CORDET Framework is available on request.

Ownership

The owner of the project is P&P Software GmbH.

License

Free use of this software is granted under the terms of the Mozilla Public License v2, see LICENSE.

cordetfw's People

Contributors: cechticky, oppm, pasetti

cordetfw's Issues

Coding of Procedures Modelled in FW Profile Tool

Consider the procedure implementing the Start Action for TC(17,3). This procedure is modelled in the FW Profile Tool.

The model of this procedure in the FW Profile Tool does not contain any implementation-level information (i.e. there are no names for the functions which implement the procedure actions and guards). Does this mean that the code implementing this procedure was entirely written manually?

Acknowledgement of Request Execution

In the PUS, each request (command) carries four flags which determine whether successful acceptance, start, progress and completion of that request should be reported to the request originator. According to clause 5.4.11.2.2, these acknowledge flags only concern the reporting of verifications performed at the level of the request. The PUS appears to be silent about the conditions under which the outcome of instruction-level verifications should be reported. I assume that this means that it is up to implementations to decide when and if instruction-level verification outcomes should be reported.

In our framework, for instance, we would like to take the approach that, for instructions, only execution failures are reported and that they are reported unconditionally. Would this approach contravene the PUS?

Constraint on Sequential Processing of Commands

Should we add a constraint that a command in service S is only accepted or started if no other commands for the same service are pending or being executed?

This could be either a constraint on the user or else it could be included in the command acceptance or start process.

Doxygen Documentation of InLoader is Misleading

The doxygen documentation of InLoader implies that the component will load one or more components from the InStream. This is incorrect: at every execution, the InLoader only loads one single InReport or InCommand from an InStream.

This point is clear in the CORDET FW Definition Document.

Extension of Release Process

I would like to make a proposal for extending the release process for the CORDET Framework (and also for the FW Profile). This release process is implemented by script Release.sh. This actually works very well but it has two weaknesses:

  • It relies on the user having checked out the latest version of all dependent repositories before the script is executed, and
  • It relies on the user to tag the dependent repositories

With an extended process, the Release script might work as follows:

  • Create a new empty directory
  • Clone all dependent repositories in the empty directory
  • Generate the delivery file (same actions as done by the current version of Release.sh)
  • Tag all dependent repositories on GitHub

Since the same repository may be used both in the CORDET and in the FW Profile projects, I would propose that we use a tagging approach which allows us to distinguish between tags for one or the other project. This might be something as follows:

  • For the CORDET Framework, the tags have the form: crX.Y.Z, where X.Y.Z is the version number
  • For the FW Profile, the tags have the form: fwX.Y.Z

Outcome of Start Action of TC(17,3)

I have looked at file CrPsCmd17s3StartFunc.c, which holds the implementation of the actions and guards of the Start Action of TC(17,3). Nodes N7 and N8 must report the outcome of the Start Action. At present, this is done using a global variable outcomeStart which, I believe, is defined in CrPsTestOnBoardConnection.c.

I believe that we should avoid global variables of this kind because their use is normally precluded by coding standards for critical software.

One alternative would be to use the data field of the procedure descriptor. In fact, since this is a common situation (we have many procedures which must return an outcome), you might consider the same approach as used in the CORDET Framework, where all components (whether state machines or procedures) carry an instance of the same data structure of type CrFwCmpData, and this data structure includes a generic 'outcome' field.
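
A minimal sketch of this alternative is given below. It assumes that the procedure descriptor has been loaded with an instance of CrFwCmpData_t; the node action name, the header names and the use of FwPrGetData for this purpose are illustrative, not the current implementation:

    #include "FwPrCore.h"        /* FwPrGetData (FW Profile; exact header assumed) */
    #include "CrFwConstants.h"   /* CrFwCmpData_t */

    /* Hypothetical node action for N7: report the Start Action outcome through
       the data attached to the procedure descriptor instead of a global variable */
    void CrPsCmd17s3StartActionN7(FwPrDesc_t prDesc) {
        CrFwCmpData_t* prData = (CrFwCmpData_t*)FwPrGetData(prDesc);
        prData->outcome = 1;   /* 'success'; the caller reads it back through the descriptor */
    }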

Data Structure for OutRegistry

The data structure CrFwServDesc_t is defined in the OutRegistry, where it is used to hold information about the range of service types/sub-types supported by an application.

One source of inefficiency in this data structure is that it stores the enable status in an array which is sized by the highest value of the discriminant associated with a [type, sub-type] pair. I could improve efficiency by asking users to declare both the lowest and the highest value of the discriminant associated with a given [type, sub-type] pair.
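
A possible shape for such an extended service descriptor is sketched below; the field and type names are illustrative and do not match the actual CrFwServDesc_t:

    /* Hypothetical descriptor for one [type, sub-type] pair: the enable-status
       array is sized by (maxDiscriminant - minDiscriminant + 1) rather than by
       the highest discriminant value alone */
    typedef struct {
        unsigned short servType;          /* service type */
        unsigned short servSubType;       /* service sub-type */
        unsigned short minDiscriminant;   /* lowest discriminant declared by the user */
        unsigned short maxDiscriminant;   /* highest discriminant declared by the user */
        unsigned char* isDiscEnabled;     /* enable flags; index = discriminant - minDiscriminant */
    } CrFwServDescSketch_t;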

Aux directory

Aux is a reserved file name on Windows and, as such, a directory with that name cannot be created. Impact: a user cannot install the CORDET FW on a Windows-based system.

Packet Collect Procedure

This procedure is implemented in DoActionB in CrFwInStream.c. The implementation is slightly different from the specification in the case of commands with a destination different from the host application (in that case there is no group check). The implementation is actually better than the specification and I should probably update the specification.

Reporting Progress Failure through Completion Failure Report

Clause 6.1.5.3.2 states that: "For each failed completion of execution notification that is accompanied of failed progress of executions notifications to be reported as part of the completion of execution verification report, the execution reporting subservice shall include those failed progress of execution notifications in the failed completion of execution notification". The practical implications of this requirement are not understood. Suppose that we have a situation where a command has generated five Failed-Progress-Of-Execution notifications and that it has been agreed that these must be included in the Failed-Completion-Of-Execution notification. I assume that this means that the TM(1,8) for this command will have to somehow include the five failure codes (and any associated auxiliary data) for the five Failed-Progress-Of-Execution notifications. But how can this be done given that the layout of the TM(1,8) in clause 8.1.2.8 only includes one single Failure Notice?

NULL Pointer in EnqueuePckt and SendOrEnqueue

These functions are executed in response to a request to send a packet through an OutStream. The functions make a copy of the packet to be sent. The copy is made in a packet created through a call to CrFwPcktMake. The situation where this call returns a NULL pointer (because we have run out of memory in the packet factory) is not handled.

Note that the situation where CrFwPcktMake fails to return a packet is already handled as an application error (error code: crPcktAllocationFail) but there is at present no protection against the use of the NULL pointer returned by CrFwPcktMake.
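
One possible guard is sketched below. The helper name is illustrative (the real fix would go directly into EnqueuePckt and SendOrEnqueue) and it assumes that CrFwPcktGetLength returns the length of the packet to be copied:

    #include <string.h>
    #include "CrFwPckt.h"   /* CrFwPckt_t, CrFwPcktMake, CrFwPcktGetLength */

    /* Hypothetical helper: copy a packet only if the packet factory can still
       provide one; otherwise return NULL so that the caller abandons the send */
    static CrFwPckt_t CopyPcktOrNull(CrFwPckt_t pckt) {
        CrFwPckt_t pcktCopy = CrFwPcktMake(CrFwPcktGetLength(pckt));
        if (pcktCopy == NULL)
            return NULL;   /* crPcktAllocationFail has already been reported by CrFwPcktMake */
        memcpy(pcktCopy, pckt, CrFwPcktGetLength(pckt));
        return pcktCopy;
    }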

Order of Sending of Out-Going Reports and Commands

There is no requirement that out-going reports and commands be sent out in the same order in which they were generated. In fact, at present, there is an explicit warning that the order in which requests to send out OutComponents are handled is undefined. Experience from the CHEOPS project (see Mantis Ticket 1135) indicates that this is not adequate and that we should enforce a FIFO order for handling requests for out-going reports and commands.

Dependency on fwprofile

The current implementation assumes that fwprofile is located in a CrFramework folder right next to the cordetfw folder. This is not ideal and does not work with Travis anyway. Unless we want to copy-paste the fwprofile files, we should link the two projects. This can be done with Git submodules. I will create the corresponding files and update the makefiles and header files accordingly.

Constraint on Loading Out-Going Commands through OutLoader

We currently have a user constraint stating that out-going commands or reports should only be loaded through the OutLoader. However, the Load operation in the OutLoader does not do anything beyond selecting the OutManager. We should therefore downgrade this constraint to a "should" constraint to allow users to also load the out-going component directly into an OutManager.

Validity Check for Incoming Reports

The Validity Check is listed as one of the implementation-level adaptation points for the framework but we do not have a clear explanation of how this check relates to the specification-level check of InReports.

An InReport has one single check, namely the acceptance check. This is executed by the InLoader, which checks for the following failure conditions (see section 15 of the CORDET FW User Manual):

  • The incoming packet holding the InReport has an invalid type;
  • The InFactory fails to return a component to hold the InReport encapsulated in the incoming packet;
  • The InReport fails to enter state CONFIGURED;
  • The InReport fails to be loaded into the InManager.

At implementation level, the Validity Check implements the third check in the list above. This point needs to be made clear in the framework document.

Incorrect Handling of Invalid Destination in InLoaderExecAction

There is an error in the framework code which handles the case of the invalid destination. The correct code should be like this:

/* Check whether packet should be re-routed to another destination */
reroutingDest = getReroutingDest(pcktDest);
if (reroutingDest == 0) { /* destination is invalid */
    CrFwRepErrInstanceIdAndDest(crInLoaderInvDest, inLoaderData.typeId, inLoaderData.instanceId,
                                CrFwPcktGetCmdRepId(pckt), pcktDest);     // error is here
    CrFwPcktRelease(pckt);
    return;
}

Wrong Type in Call to malloc

This point was first brought up by ESA when they did their static code analysis of the CORDET Framework code (see e-mail from Roland dated 11 October 2016).

In function CrFwPcktQueueInit of module CrFwPcktQueue, the following statement appears:

pcktQueue->pckt = malloc(size*sizeof(CrFwPckt_t*));

This statement might be incorrect and the correct statement is probably:

pcktQueue->pckt = malloc(size*sizeof(CrFwPckt_t));

I assign the ticket to Marcel so that he may check this matter and update the code if needed.
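
Whichever declaration turns out to be correct, a safer idiom is to derive the element size from the pointed-to object itself, for example (a sketch, not the current code):

    pcktQueue->pckt = malloc(size * sizeof(*pcktQueue->pckt));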

Discarding Service 13 Commands After Uplink Abort

Requirement 6.13.4.3.2i in the PUS states:

For each large packet uplink that is aborted, the receiving entity of the large packet uplink subservice shall: 1. generate a single large packet uplink abortion notification that includes the reason of that abortion; 2. discard that large packet and the related uplink part requests.

What does "discard" mean in this context? I interpret it as meaning that the command fails its Start-Of-Execution Check and that it is rejected with a TM(1,4). Is this interpretation correct (or at least consistent with the PUS)?

Typos and Editorials

(1) "...Depending on the characteristics of the middleware, only one InStream
component may..." --> "...Depending on the characteristics of the middleware, only one OutStream component may..."

(2) confguration in CrFwOutCmp.h

(3) The doxygen documentation of OutCmp has an incomplete list of adaptation points (the Update Action and the Repeat Check are missing)

(4) Typo in Table 8.2 of CORDET User Manual
Acceptance Check: The part of the acceptance check which verifies validity of the REPORT type and availability of resources is implemented in the Load Command/Report Procedure of the InLoader (see section 15). The REPORT-specific part of the acceptance check is implemented in the Validity Check Operation specified through a function pointer in the CR_FW_INREP_INIT_KIND_DESC initializer.

(5) Typo in CORDET Requirement IRP-3
IRP-3/S: The InReport component shall provide visibility over the value of all the attributes of the REPORT it encapsulates.

(6) Typos in "Default Value" column of entries IDL-2 to IDL-4 in table 6.11 of CORDET FW Definition Document.

Setting the Discriminant of OutComponents

The current philosophy is that, for each kind of out-going report, we must have a row in CR_FW_OUTCMP_INIT_KIND_DESC. Here, a report kind is determined by the triplet type/subtype/discriminant. In the PUS world, this approach is not efficient for (3,25) reports, where we can have a very large number of discriminant values and where all (3,25) reports have the same set of customization functions.

In this case, it would be better to proceed as follows:

  • Only one entry is made for CR_FW_OUTCMP_INIT_KIND_DESC with the discriminant set to zero
  • The application retrieves the report from the OutFactory by specifying a discriminant of value zero and then it sets the discriminant as part of the report configuration

This approach is at present not possible because the discriminant cannot be set (it is set only by the CrFwOutFactoryMakeOutCmp function). This should be changed as follows:

  • In the doxygen documentation of CrFwOutFactoryMakeOutCmp, we explain that the function sets the discriminant but this can be overridden by users
  • In module CrFwOutCmp, we add a function to set the discriminant
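
A sketch of what such a setter might look like is given below. The function name, the OutComponent-specific data layout and CrFwPcktSetDiscriminant are assumptions based on the existing framework conventions, not existing code:

    #include "CrFwConstants.h"   /* CrFwCmpData_t, CrFwOutCmpData_t (assumed visible here) */
    #include "CrFwPckt.h"        /* CrFwPcktSetDiscriminant (assumed) */
    #include "FwSmCore.h"        /* FwSmGetData */

    /* Hypothetical setter in module CrFwOutCmp: write the discriminant into the
       packet held by the OutComponent after it has been retrieved from the OutFactory */
    void CrFwOutCmpSetDiscriminant(FwSmDesc_t smDesc, unsigned short discriminant) {
        CrFwCmpData_t* cmpData = (CrFwCmpData_t*)FwSmGetData(smDesc);
        CrFwOutCmpData_t* cmpSpecificData = (CrFwOutCmpData_t*)cmpData->cmpSpecificData;
        CrFwPcktSetDiscriminant(cmpSpecificData->pckt, discriminant);
    }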

Doxygen Comment for Base Component Module

The doxygen comment for the "Base Component" Module only includes the Base State Machine. It should be extended to also include the Initialization and Configuration Procedures.

Question on Definition Data Pool Items in PUS Extension Project

I am looking at the data pool definition in the CORDET Editor (PUS Extension Project) and I have the following questions:

(a) Why did you define user-specific types for unsigned char, unsigned short and unsigned integer? There are pre-defined types within the tool itself for these common types.

(b) Why is the default value of data items like nOfAccFailed or nOfPrgrFailed set to 1? Shouldn't the default value of these data items be zero?

Coding Style

In Mantis Issue 705, Michael had proposed the following tool for enforcing/checking the coding style of the CORDET Framework:

[1] http://sourceforge.net/projects/astyle/

The following astyle command seems to match the style that is being used in the code.

astyle --style=java --indent=tab --indent-switches --unpad-paren --pad-header --keep-one-line-statements --keep-one-line-blocks --align-pointer=type --lineend=linux --suffix=none --quiet

Adaptation Point for CRC Computation

Feedback from CHEOPS Project (Mantis Issue 756): I should add an adaptation point to compute the CRC of an out-going packet (taking care not to do it in the case of packets which are being re-routed) and I should add the CRC as a pre-defined field in packets. Note that this cannot be done with the current design because the CRC must be computed after all packet fields have been set (including time-stamp, source, etc.) and there is no adaptation point so late in the processing of an out-going packet.

Implementation of Ready Check of (17,3)

The current implementation of the ready check of TC(17,3) is as follows:

CrFwBool_t CrPsTestOnBoardConnectionReadyCheck(FwSmDesc_t __attribute__((unused)) smDesc) 
{
  CrFwCmpData_t*   inData;
  CrFwInCmdData_t* inSpecificData;
  CrFwPckt_t       inPckt;

  /* Return 'command is ready' */

  printf("CrPsTestOnBoardConnectionReadyCheck()\n");

  /* Get in packet */
  inData          = (CrFwCmpData_t*)FwSmGetData(smDesc);
  inSpecificData  = (CrFwInCmdData_t*)inData->cmpSpecificData;
  inPckt          = inSpecificData->pckt;

  /* Send Request Verification Acceptance Successful out-going report */
  SendReqVerifAccSuccRep(inPckt);

  return 1; /* always True */
}

This implementation seems wrong to me. The Ready Check returns TRUE when a command is ready to be executed and it returns FALSE otherwise. The Ready Check of TC(17,3) is specified to always return TRUE (see table 13.3 in the PUS Extension Specification Document). Hence, the function shown above should simply return 1. As a matter of fact, the Ready Check of all PUS Extension commands will just be a dummy function which returns 1 and you can simply use the same dummy function for all commands.
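
A minimal sketch of such a dummy Ready Check is shown below (the function name is illustrative; the unused-parameter cast also avoids the non-standard __attribute__ directive discussed in a separate ticket):

    #include "CrFwConstants.h"   /* CrFwBool_t; FwSmDesc_t is assumed to be visible through this header */

    /* Hypothetical dummy Ready Check, reusable for all commands whose Ready
       Check is specified to always return TRUE */
    CrFwBool_t CrPsReadyCheckAlwaysTrue(FwSmDesc_t smDesc) {
        (void)smDesc;   /* parameter intentionally unused */
        return 1;       /* command is always ready */
    }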

The implementation above sends "Acceptance Success Report" from within the Ready Check. This, too, seems wrong to me: the management of the Acceptance Success/Failure Reports is done by the InLoader component (see figure 15.2 of the CORDET User Manual) and should therefore be handled by the CORDET infrastructure.

Type of CrFwPckt_t

On 18 Feb., Roland writes to say that, in the CORDET Framework, CrFwPckt_t is a char* (as defined in CrFramework/src/CrFwConstants.h). He thinks it should be a pointer to 'unsigned char' to avoid sign extensions.

Incorrect Comment on Instance Identifier for OutComponents

The following is stated in the comment to the CrFwOutCmpMake function in the OutFactory:

The value of the instance identifier is built as follows. Let n be the number of OutComponents made by the factory since it was last reset; let APP_ID be the application identifier (see CR_FW_HOST_APP_ID); and let m be the number of bits reserved for the application identifier (see CR_FW_NBITS_APP_ID). The instance identifier is then given by: APP_ID*(2**m)+n.

This is incorrect. The correct comment is:

The value of the instance identifier is built as follows. Let n be the number of OutComponents made by the factory since it was last reset; let APP_ID be the application identifier (see CR_FW_HOST_APP_ID); let m be the number of bits reserved for the application identifier (see CR_FW_NBITS_APP_ID); and let s be the number of bits of the instance identifier. The instance identifier is then given by: APP_ID*(2**(s-m))+n.
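
As a worked illustration of the corrected formula (assuming, purely for the example, an instance identifier of s = 16 bits and CR_FW_NBITS_APP_ID equal to 4):

    /* APP_ID occupies the top m bits, the counter n the remaining s-m bits:
       instanceId = APP_ID*(2**(16-4)) + n = APP_ID*4096 + n */
    instanceId = CR_FW_HOST_APP_ID * (1u << (16 - CR_FW_NBITS_APP_ID)) + n;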

Incorrect Definition of CrFwRepErrCode_t in Doxygen Documentation

The type CrFwRepErrCode_t defines an enumerated type with the list of error conditions handled by the framework. In the doxygen documentation, this enumerated type is shown as a table which lists the values of the enumerated type, but each value appears four times! This is probably because this type is defined in file CrFwUserConstant.h and there are four instances of this file in the set of files covered by Doxygen (one for the test suite and three for the demo applications).

The same problem occurs for other types which are defined multiple times.

If possible, we should try to avoid this duplication of information which is obviously confusing for the user.

Consequences of Failures of Progress Step

Question 1

Point 1 of clause 5.4.11.2.3a states that if the Start-Of-Execution check for a request fails, then no further processing of that request is done. I interpret this to mean that if the Start-Of-Execution check fails, then a TM(1,4) must be generated and no further verification reports for that request are generated (i.e. neither a TM(1,7) nor a TM(1,8) should be generated). Is my interpretation correct?

Question 2

Point 3 of the same clause instead implies that the failure of a Progress-Of-Execution check of a request does not necessarily entail termination of processing of that request. I interpret this to mean that the same telecommand may result in the generation of one or more TM(1,6) and then might still result in the generation of a TM(1,7) indicating successful completion of execution (assuming of course that the sender has requested that successful Completion-Of-Execution be acknowledged). Is my interpretation correct?

Question 3

Would a situation where failure of a Progress-Of-Execution check of a request results in the processing of that request being terminated be compatible with the PUS?

Proposal for Getter and Setter Functions for TM/TC Parameters

We have discussed how to set up the getter and setter functions for the TM/TC parameters. One objective is to make these functions very efficient. Based on various experiments done by Marcel, we think that one good approach is the one described below. Please let us know your feedback.

(a) For each TM/TC packet, a struct is generated which mimics the structure of the packet. Thus, for instance, if we had a packet with four items a, b, c and d of types: char, unsigned integer, array of chars, and unsigned integer, then the following structure would be generated:

    typedef struct __attribute__((packed, aligned(4))) _my_struct_t {
        uint8_t a;
        uint32_t b;
        uint8_t c[3];
        uint32_t d;
    } my_struct_t;

(b) For each parameter in a telecommand packet, a getter function would be generated to access the value of that parameter. In the case of the example, the functions generated for parameters a and b would look like this:

    uint8_t getA(my_struct_t* t)
    {
        return t->a;
    }

    uint32_t getB(my_struct_t* t)
    {
        return t->b;
    }

(c) For each parameter in a telemetry packet, a setter function would be generated to write the value of that parameter. Its structure would be similar to that of the getter functions shown above.
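
For the example of point (a), the generated setters might therefore look like this (a sketch):

    void setA(my_struct_t* t, uint8_t a)
    {
        t->a = a;
    }

    void setB(my_struct_t* t, uint32_t b)
    {
        t->b = b;
    }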

(d) For parameters with an array-like structure in a telecommand, two getter functions would be generated: one which returns the array itself and another one which returns the value of the i-th element of the array. By default, no range checking will be implemented (it is therefore up to the user to make sure that the value of i is legal).

(e) For parameters with an array-like structure in a telemetry report, two functions would be generated: one setter function to set the value of the i-th element of the array (without range checking) and one getter function to return the array itself.
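
For the array parameter c of the example struct, the generated accessors covering points (d) and (e) might look like this (a sketch, with no range checking on i):

    uint8_t* getC(my_struct_t* t)
    {
        return t->c;
    }

    uint8_t getCItem(my_struct_t* t, unsigned int i)
    {
        return t->c[i];
    }

    void setCItem(my_struct_t* t, unsigned int i, uint8_t val)
    {
        t->c[i] = val;
    }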

(f) For efficiency, all the getter and setter functions would be declared as inline functions (same approach as for the getter and setter functions of the data pool).

(g) For each service, a single header file would be generated which declares all the getter and setter functions for the commands and reports in that service.

The advantage of the approach outlined above is efficiency of implementation. Marcel has done some experiments using the code in the attached zip files and he finds that, in an optimal case, a getter or setter function is translated to one single assembler instruction (this is the case where the alignment of the parameter is just "right") whereas in a worst-case where some re-shuffling of data is needed to comply with alignment constraints, a getter/setter function is translated to four assembler instructions. For comparison, the approach which we used for CHEOPS required 11 assembler instructions for each getter/setter function.

The drawback of this approach is that it relies on the (non-standard) __attribute__ compiler directive. This is generally undesirable but could be acceptable if the use of the directive is restricted to the generated code, because users can change the code generator to match their compiler.

test_alignment.zip

Serialization of OutComponents

This issue was first raised as Mantis 369.

I can probably reduce or even eliminate the Serialize operation in OutComponents: attributes can be written directly to the packet without being first stored in intermediate variables in the outCmp data structure.

Packet Parameters in CrFwRepInCmdOutcome Function

In the case of the CHEOPS IASW (see Mantis 1157), the pre-defined parameters of function CrFwRepInCmdOutcome are not sufficient to build the service 1 reports. I should probably pass as a parameter the pointer to the command itself (with the understanding that the command component has to be treated as a read-only object whose pointers will become invalid after the function returns).
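
A hedged sketch of the proposed signature change is given below; everything other than the added inCmd parameter stands for the existing, unchanged parameters of the function:

    /* The command component is passed as an additional argument; it must be
       treated as read-only and its pointers become invalid after the call returns */
    void CrFwRepInCmdOutcome(/* ...existing parameters unchanged... */
                             FwSmDesc_t inCmd);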

Use of Non-Standard Compiler Directives in Framework Code

I notice that the code of the PUS Framework Extension makes extensive use of the __attribute__ compiler directive. This compiler directive is supported by gcc but is not part of ANSI C. We should not use it in the manually-generated code because it limits portability!

As you have seen from a separate ticket, we are considering using it for the automatically generated code. Even this kind of use needs to be carefully discussed and evaluated, but it might be acceptable because users have the option of changing the generator. However, the manually generated code cannot be modified by users because it is supposed to be pre-qualified. For this reason, only ANSI C should be used for that part of the code.

NULL Pointer in DoActionB

The following error scenario was encountered during testing for CHEOPS:

  • The InStream has filled its packet queue

  • Packets are still arriving at the middleware

  • Since packets are arriving at the middleware, the Packet Collect Procedure of the InStream is executed. This procedure is implemented by function DoActionB in module CrFwInStream, which contains the following code:

    while (cmpSpecificData->isPcktAvail(src)) {
        pckt = cmpSpecificData->collectPckt(src);

        if (CrFwPcktGetDest(pckt) == CR_FW_HOST_APP_ID) {
            . . .

In this scenario, isPcktAvail returns TRUE but collectPckt returns a NULL pointer because no more packets are available in the packet factory of CrFwPckt.h. This leads to a crash.
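
One possible fix is sketched below: guard against the NULL return before the packet is used (a sketch of the amended loop, not the actual framework code):

    while (cmpSpecificData->isPcktAvail(src)) {
        pckt = cmpSpecificData->collectPckt(src);

        /* packet factory exhausted: stop collecting instead of dereferencing NULL */
        if (pckt == NULL)
            break;

        if (CrFwPcktGetDest(pckt) == CR_FW_HOST_APP_ID) {
            . . .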

Setting of Destination Attribute of OutGoing Packet

This issue was first raised as Mantis 235. It may be obsolete by now.

Who sets the Destination attribute of an out-going packet? This could perhaps be done by the OutStream (recall that there is one OutStream for each destination).

Destination of Large Transfer Abortion Report (13,16)

Clause 6.13.4.3.3 (Large packet uplink abortion report) specifies the content of the (13,16) report but it remains silent about its destination. Should this be the ground or should it be the same as the source of the large transfer which is being aborted? I have currently assumed the latter to be the case.

Implementation of Termination Action of (17,3)

The termination action of TC(17,3) is defined as follows in the PUS Extension Specification Document: "Set action outcome to 'success' if the (17,4) report was issued and to 'failure' otherwise". Its current implementation is as follows:

void CrPsTestOnBoardConnectionTerminationAction(FwSmDesc_t __attribute__((unused)) smDesc) 
{
  /* Set action outcome to 'success' */

  CrFwCmpData_t* inData;
 
  printf("CrPsTestOnBoardConnectionTerminationAction()\n");

  FwPrStop(prDescServTestOnBoardConnPrgr); /* TODO: to be investigated, why this is needed */

  inData = (CrFwCmpData_t*)FwSmGetData(smDesc);
  inData->outcome = 1;
 
  return;
}

This implementation looks non-compliant to me because it always sets the action outcome to TRUE. Also, there should be no need to stop the ServTestOnBoardConnPrgr procedure: this procedure terminates every time it is run, so stopping it is unnecessary.

Specification of C Language Version

The CORDET User Manual should make it clear that the CORDET FW and the FW Profile are compatible with the ANSI C version of the language.

Embedded State Machines and State Machine Extension

The rules for embedded state machines and state machine extension are not very clear in the FW Profile Definition Document and need to be clarified. I propose to do it as described below.

Let S be a state of state machine SM_A and let SM_B be a state machine derived from SM_A. The rules for adding embedded state machines to SM_B are as follows:

  • If state S is "empty" in SM_A (i.e. it does not have any embedded state machine), then it is allowed to add any embedded state machine in state S of SM_B
  • If state machine SM_E is embedded in state S in SM_A, then, in state S of SM_B, it is allowed to replace SM_E with a state machine derived from SM_E

The basic idea is that, during the state machine extension process, you are allowed either to add new embedded state machines or to replace existing embedded state machines with their children (i.e. when you extend a state machine, you can also extend its embedded state machines; see the figure below).

(Figure: state machine embedding and derivation)

Shadowing of Static Variable

This point was first brought up by ESA when they did their static code analysis of the CORDET Framework code (see e-mail from Roland dated 11 October 2016).

Consider module CrFwOutManager. This module defines a module-wide static variable called outManagerData. Several functions within this module (e.g. function OutManagerExecAction) define a local variable which also has the name outManagerData. The local variable then shadows the module-wide static variable. We should rename the local variable to a name like: outManagerDataLocal.

A similar problem exists for module CrFwInManager where the module-wide static variable inManagerData is shadowed by local variables of the same name. The same correction as in module CrFwOutManager should be done in this module, too.
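
An illustrative before/after of the proposed rename (types, header names and the function body are indicative only):

    #include "CrFwConstants.h"   /* CrFwCmpData_t; FwSmDesc_t assumed visible here */
    #include "FwSmCore.h"        /* FwSmGetData */

    static CrFwCmpData_t outManagerData;   /* module-wide static variable */

    static void OutManagerExecAction(FwSmDesc_t smDesc) {
        /* renamed local: previously 'outManagerData', which shadowed the static above */
        CrFwCmpData_t* outManagerDataLocal = (CrFwCmpData_t*)FwSmGetData(smDesc);
        /* ... rest of the action unchanged ... */
    }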

New Parameter for Function CrFwRepErrInstanceIdAndDest

Function CrFwRepErrInstanceIdAndDest is called by the InLoader component when it encounters a situation where an incoming command or report has been detected which has an invalid destination. With the new PUS, this situation must be handled through the generation of a TM(1,10) report. In order to fill in all parameters for this telemetry report, the packet holding the incoming command or report must be passed to the CrFwRepErrInstanceIdAndDest function as an additional parameter.

Comment about Dead Code in CrFwAux

Feedback from ESA after execution of static code analysis.

Dead code has been found in module CrFwAux. The finding is correct, but this module implements a generic consistency check for the framework data structures. Since the check is generic, it may contain dead code for some specific instantiation of the framework. Also, as explained in its doxygen documentation, this module is not intended to be included in the final application executable.

To facilitate future runs of the static code analyzer, we will add a comment in the source code of the following kind: // The following can be dead code, depending on the specific instantiation of the FW Profile.

Unnecessary Write

This point was first brought up by ESA when they did their static code analysis of the CORDET Framework code (see e-mail from Roland dated 11 October 2016).

In function CrFwFindCmdRepKindIndex in module CrFwUtilities, an unnecessary write operation has been found: the statement pos_half = length/2; has no effect because pos_half is updated in the while loop and, if the while loop is not executed, the value of pos_half is never used. This statement should therefore be removed.

Sequence Counter

The old PUS standard stated the following (page 47) concerning the sequence counter for telemetry packets: "A separate source sequence count (SSC) shall be maintained by each APID and shall be incremented by 1 whenever it releases a packet. If the application process can send distinct packets to distinct destinations using the optional Destination ID field shown below, then a separate source sequence count is maintained for each destination". The new standard instead states (page 438): "The packet sequence count (PSC) is used for telemetry packets. It is incremented by 1 whenever the source application process releases a packet". I assume that the PSC in the new standard is intended to fulfill the same role as the SSC in the old standard. However, the two sentences reported above imply that the logic for incrementing these counters has changed (at least in the case where a telemetry packet can go to different destinations). Is this truly the case?

Associating an InStream to a Physical Connection

This issue was initially reported in the CHEOPS project as Mantis 745.

I probably made a mistake when defining the InStream in the framework: I should have associated the InStreams to the physical connections rather than to the sources (the reason for my choice was the fact that the sequence counter is incremented by source rather than by connection).

Perhaps the solution to the problem with the minimal impact on the framework might be to introduce a second kind of InStream which is associated to a physical connection (rather than to a packet source) and which internally manages multiple packet queues and sequence counters (one for each source handled by the connection).

Unclear Clauses 5.4.8b and 7.4.3.1g

Clause 5.4.8b states: "For each subservice and for each capability type defined by the corresponding subservice type, the inclusion of the related capability in that subservice shall comply with the applicability constraints of that capability type". I find this clause simply impenetrable.

Clause 7.4.3.1g states: "For each report that it generates, each application process that provides the capability to count the type of generated messages per destination and report the corresponding message type counter shall set the message type counter of the related telemetry packet to the value of the related counter".

These clauses are unclear and their wording should be improved.

Value of Part Sequence Number Parameter in TC(13,9)

According to clause 8.13.2.4 (TC[13,9] uplink the first part), the command (13,9) carries the "Part Sequence Number" as one of its parameters. I assume that the value of this parameter must be 1 but this does not seem to be stated in the PUS. Or should I perhaps assume that a value of this parameter different from 1 counts as a "discontinuity" in the sense of clause 6.13.4.3.2f?

For reference, the full text of clause 6.13.4.3.2f is: "The receiving entity of the large packet uplink subservice shall abort the uplink operation when a discontinuity is detected in the uplink reception sequence".
