
spezillm's People

Contributors

adritrao, dependabot[bot], nriedman, philippzagar, pschmiedmayer, vishnuravi


spezillm's Issues

Complete Concurrency Checking

Problem

With Swift 6 approaching in a few months and nightly builds already available, we should ensure that all our packages work well with all Swift concurrency checks.

Solution

Enable strict concurrency checking in the Swift Package in a PR and ensure that we don't have any warnings remaining in the packages as we develop new features or fix bugs from now.

The UI Testing App target should also enable strict concurrency checking.

The corresponding PR should fix all related warnings when enabling strict concurrency checking.

Additional context

We should consider adding SWIFT_TREAT_WARNINGS_AS_ERRORS = YES to our general workflows to ensure all warnings are flagged as errors during our CI setup.
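As a sketch, strict concurrency checking could be enabled per target via `swiftSettings` in `Package.swift`; the target name and platform list below are placeholders, not the actual manifest:

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "SpeziLLM",
    platforms: [.iOS(.v16)],
    targets: [
        .target(
            name: "SpeziLLM",
            swiftSettings: [
                // Surfaces all Sendable and actor-isolation warnings
                // that Swift 6 will treat as errors.
                .enableExperimentalFeature("StrictConcurrency")
            ]
        )
    ]
)
```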

Code of Conduct

  • I agree to follow this project's Code of Conduct and Contributing Guidelines

Edit ChatView/Message UI

Problem

For multiline user input messages, the second and subsequent lines are aligned to the right instead of the left. See the screenshot below for reference.
(Screenshot: IMG_6583)

Solution

Edit the chat UI to make multiline chats align to the left.
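A minimal SwiftUI sketch of the idea (the `MessageBubble` view and its parameters are illustrative, not the actual SpeziLLM implementation): `.multilineTextAlignment(.leading)` controls how wrapped lines align within the text itself, independently of the bubble being pushed to the trailing edge for user messages.

```swift
import SwiftUI

struct MessageBubble: View {
    let text: String
    let isUserMessage: Bool

    var body: some View {
        HStack {
            if isUserMessage { Spacer() }
            Text(text)
                // Keeps the second and subsequent wrapped lines left-aligned.
                .multilineTextAlignment(.leading)
                .padding(10)
                .background(isUserMessage ? Color.blue : Color(.systemGray5))
                .cornerRadius(12)
            if !isUserMessage { Spacer() }
        }
    }
}
```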

Additional context

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct and Contributing Guidelines

Hugging Face Inference API Support

Problem

Similar to our support for OpenAI models, it would be great if SpeziML could support a wider variety of models, such as the ones hosted on Hugging Face.

Solution

Integrating a lightweight wrapper around the Hugging Face hosted Inference API would be a great way to interact with the different LLMs it offers.

More details about the Hugging Face API can be found here: https://huggingface.co/docs/api-inference/index

In order to abstract different API types in an LLM-agnostic manner, we should add a general protocol structure that both the Hugging Face and the OpenAI module conform to. This would allow developers to easily transition between different providers.
The Swift protocol structure should ideally fit different model types, starting with typical LLM interactions, as this is the main use case of our OpenAI component at this point.
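A hypothetical sketch of such a provider-agnostic protocol; all names here are illustrative assumptions, not the actual SpeziML API:

```swift
import Foundation

// Illustrative abstraction: every provider streams generated text.
protocol LLMProvider {
    func generate(prompt: String) -> AsyncThrowingStream<String, Error>
}

struct HuggingFaceProvider: LLMProvider {
    /// Model identifier on the Hugging Face Hub, e.g. "gpt2".
    let modelID: String

    func generate(prompt: String) -> AsyncThrowingStream<String, Error> {
        AsyncThrowingStream { continuation in
            // POST the prompt to the hosted Inference API for `modelID`
            // and yield generated tokens as they arrive.
            continuation.finish()
        }
    }
}

// An OpenAIProvider would conform to the same protocol, letting developers
// swap providers without changing their calling code.
```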

Additional context

Initial focus should probably be on the Text Generation task: https://huggingface.co/docs/api-inference/detailed_parameters#text-generation-task. We should also investigate whether there are already any suitable Swift packages out there that abstract the Hugging Face API.

Code of Conduct

  • I agree to follow this project's Code of Conduct and Contributing Guidelines

Native visionOS Support

Problem

With a few projects using visionOS and testing on the Apple Vision Pro, we want to move all Spezi modules in the Stanford Spezi organization to adopt the native UI components of visionOS.

Solution

Update all UI components to use native SwiftUI components on visionOS. We also need to add visionOS as a supported platform in the Swift package file, and we may update the Swift tools version to 5.9.

You can learn more about visionOS and SwiftUI for visionOS at https://developer.apple.com/visionos/learn/.

All UI changes should be as cross-platform as possible. Increasing the iOS minimum platform target is acceptable.

Feel free to use comments under this issue to discuss the best way to approach the adoption of visionOS for the Spezi module.

Additional context

It is required to install the latest Xcode 15 beta to address this issue.

Unfortunately, UI tests are currently not supported for visionOS.
The StanfordBDHG/SwiftPackageTemplate demonstrates the CI setup for visionOS that should also be adopted for this Spezi module.

This change will be considered a breaking change.

Code of Conduct

  • I agree to follow this project's Code of Conduct and Contributing Guidelines

Add `Binding<Bool>` to the `ChatView` to Display a "Three Dot Typing" Indicator

Problem

It would be great to show progress to the user after they have sent a message to the LLM and the LLM has started processing but has not yet produced a first word. This is commonly indicated using a three-dot typing indicator.

Solution

The ChatView should get an optional Bool Binding that can be used to control the display of a typing indicator.
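Sketched usage, assuming the binding is called `isTyping` (the parameter name is a placeholder for whatever the final API chooses):

```swift
import SwiftUI

struct ChatScreen: View {
    @State private var llmIsTyping = false

    var body: some View {
        // Hypothetical signature: the optional binding toggles the indicator.
        ChatView(isTyping: $llmIsTyping)
            .task {
                llmIsTyping = true   // show the three-dot indicator
                // ... await the first token from the LLM ...
                llmIsTyping = false  // hide it once text starts streaming
            }
    }
}
```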

Code of Conduct

  • I agree to follow this project's Code of Conduct and Contributing Guidelines

Update Documentation in Conformance to the Documentation Guide

Problem

The current documentation in the module provides a good overview of the API and includes documentation for most public APIs. In line with the newly published Stanford Spezi Documentation Guide, we should update the documentation in accordance with the guidelines.

Solution

The documentation should be updated to provide more insightful inline documentation and to improve the README file and the DocC landing page in conformance with the Stanford Spezi Documentation Guide.

  • Update inline documentation, including links and other elements noted in the Code Documentation section.
  • Improve the DocC landing page as detailed in the Landing Page section.
  • Add a visual representation of the module's user interface (UI), a UML diagram of the architecture, or an example of interacting with the module API.
  • Improve the README as noted in the README section.

Additional context

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct and Contributing Guidelines

Add support for GPT-4o

Problem

We would like to add support for the new GPT-4o model from OpenAI to SpeziLLM.

Solution

Since SpeziLLM depends on the OpenAI package to interact with OpenAI services, we will need to first update this package to support GPT-4o. A pull request has been created for this: StanfordBDHG/OpenAI#3

Additional context

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct and Contributing Guidelines

Utilize AsyncSemaphore from SpeziFoundation

Problem

As of now, SpeziLLM depends on AsyncSemaphore from an external SPM package. However, we plan to implement an AsyncSemaphore within SpeziFoundation.

Solution

Use the AsyncSemaphore from SpeziFoundation to have a unified semaphore implementation across the Spezi ecosystem.
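A sketch of the intended usage, assuming SpeziFoundation's AsyncSemaphore exposes wait/signal semantics similar to the external package currently in use (the exact method names may differ):

```swift
import SpeziFoundation

actor LLMRunner {
    // Single-occupancy semaphore: only one inference runs at a time.
    private let semaphore = AsyncSemaphore(value: 1)

    func execute(_ prompt: String) async throws {
        try await semaphore.waitCheckingCancellation()
        defer { semaphore.signal() }
        // Run the LLM inference here; concurrent callers suspend above.
    }
}
```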

Additional context

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct and Contributing Guidelines

watchOS Support & AppleWatch User Interface Adjustments

Problem

The Spezi module should support watchOS to enable developers to build a watchOS-based application.

The current UI elements in the Spezi module only support iOS. As we want to support the Apple Watch, we must ensure that the UI controls are adapted to the smaller screen size and can be navigated on the Apple Watch.

Solution

Update all UI components to use native SwiftUI components on watchOS. We also need to add watchOS 10 as a supported platform in the Swift package file and may update the Swift tools version to 5.9.

The watchOS UI components should support the watchOS 10 UI design paradigms.
You can learn more about watchOS 10 in the following WWDC 2023 videos:

All UI changes should be as cross-platform as possible. Increasing the iOS minimum platform target is acceptable.

Additional context

Feel free to use comments under this issue to discuss the best way to approach the adoption of watchOS for the Spezi module.

It is required to install the latest Xcode 15 beta to address this issue.

Code of Conduct

  • I agree to follow this project's Code of Conduct and Contributing Guidelines

iPadOS Support & Large Screen Improvements

Problem

The current UI elements in the Spezi module mainly apply to iPhone screen sizes. As we want to support larger screen sizes like the iPad, we must ensure that the UI controls look good on a large screen and include this in the automated testing flow.

Solution

Add support for larger screen sizes, e.g., following the Apple Designing for iPadOS guidelines and WWDC 2022 - What's new in iPad app design.

Additional context

Please use this issue as a discussion point about how to improve the UI for iPadOS.

Code of Conduct

  • I agree to follow this project's Code of Conduct and Contributing Guidelines

Create an `OpenAIChatView`

Problem

A common use case of the SpeziML module is to create a chat interface with an LLM that is initialized with a first conversation (an instance of [Chat], though Chat is not the best name, as noted in #16).

From there on, the user can respond to the messages, and the whole interaction is handled by the LLM-specific component. For the OpenAI use case, this would be the OpenAIComponent.

Solution

It would be good to create an OpenAIChatView that abstracts the functionality into a dedicated View while still giving the developer access to the [Chat] instance and the other bindings (e.g., the ones introduced in #15) if they want it.

The InspectResourceChat view in the LLMonFHIR project already provides a nearly complete implementation of the feature.

It would be good for the view to support creating the initial prompt ([Chat] instance) using an environment object, and maybe even to allow an async initialization that provides a default message if the [Chat] instance is completely empty.

Additional context

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct and Contributing Guidelines

Add Support for the Function Calling API for OpenAI Models

Problem

Some conversations require additional context that can be provided to an LLM.

Solution

We can use the function calling API in the OpenAI interface to enable a structured interaction with the LLM to request additional data and provide it back to the conversation flow.
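Conceptually, the flow looks like this (the types and names below are illustrative, not a specific package's API):

```swift
// 1. Declare a function the model may call, described via a JSON schema.
struct FunctionDeclaration {
    let name: String
    let description: String
    let parametersJSONSchema: String
}

let recordLookup = FunctionDeclaration(
    name: "get_health_record",
    description: "Fetches additional patient context for the conversation",
    parametersJSONSchema: #"{"type": "object", "properties": {"recordID": {"type": "string"}}}"#
)

// 2. Send the chat plus the declaration; instead of text, the model may
//    respond with a structured call such as get_health_record(recordID: "1234").
// 3. Execute the function locally, append its result as a new message,
//    and continue the conversation with the enriched context.
```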

Additional context

Feel free to use this issue to discuss the API of the functionality and how to easily integrate it into SpeziML.

Code of Conduct

  • I agree to follow this project's Code of Conduct and Contributing Guidelines

CoreML Model Download and Execution Functionality

Problem

Many models, such as LLMs, are large even when transformed into potentially on-device-executable versions using CoreML. It is impractical to ship these models with a mobile application. Even when we download and abstract these models, some UI and progress indicators are required to communicate the implications to the user.

Solution

All the building blocks for a good integration into SpeziML are in place:

  1. Apple CoreML already provides the functionality for downloading and compiling a model on the user's device: https://developer.apple.com/documentation/coreml/downloading_and_compiling_a_model_on_the_user_s_device
  2. Hugging Face hosts CoreML models in its data repository that we could download, e.g., Llama 2: https://huggingface.co/pcuenq/Llama-2-7b-chat-coreml
  3. We can use SwiftUI to create a nice download progress UI that tracks the progress of downloading the model and making it ready for execution.

Similar to #18, we should add some sort of abstraction layer to the API to enable a reuse across different models, maybe initially focusing on the Hugging Face and LLM use case.
Testing this functionality is probably best done on a macOS machine. This might require some smaller changes to the framework.
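Steps 1–3 above can be sketched with the documented CoreML APIs; the model URL is a placeholder:

```swift
import CoreML
import Foundation

/// Downloads a raw .mlmodel file, compiles it on device, and loads it.
func downloadAndLoadModel(from url: URL) async throws -> MLModel {
    // 1. Download the model file (URLSession can report progress through
    //    a delegate to drive a SwiftUI progress view).
    let (localURL, _) = try await URLSession.shared.download(from: url)

    // 2. Compile the .mlmodel into an optimized .mlmodelc bundle.
    let compiledURL = try await MLModel.compileModel(at: localURL)

    // 3. Load the compiled model, ready for inference.
    return try MLModel(contentsOf: compiledURL)
}
```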

Additional context

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct and Contributing Guidelines

Use OpenAPI generator from the OpenAI API spec

Use Case

An up-to-date, maintainable, and properly versioned OpenAI API library for Swift that is utilized by SpeziLLM.

Problem

As of now, we're using the OpenAI package from MacPaw for SpeziLLM: https://github.com/MacPaw/OpenAI
This package is usable; however, it doesn't include the most up-to-date functionality of the OpenAI API, it is poorly maintained with little to no documentation, and there are many inefficiencies throughout the codebase.

Solution

We use the official OpenAPI spec from OpenAI together with the https://github.com/apple/swift-openapi-generator to generate the Swift client code for the OpenAI API.
Proper wrappers need to be built on top of the generated code, but it is definitely a good start.
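A rough sketch of consuming the generated client with swift-openapi-runtime and the URLSession transport; the wrapper operation name is a placeholder, since the actual names come from the OpenAI OpenAPI document:

```swift
import OpenAPIRuntime
import OpenAPIURLSession

// The generator emits a `Client` type from the OpenAI OpenAPI document.
let client = Client(
    serverURL: URL(string: "https://api.openai.com/v1")!,
    transport: URLSessionTransport()
)

// A hand-written wrapper can then expose a Swift-friendly surface on top
// of the generated operations, e.g. a createChatCompletion(...) helper.
```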

Alternatives considered

Using other OpenAI API Swift packages like: https://github.com/OpenDive/OpenAIKit or https://github.com/adamrushy/OpenAISwift or https://github.com/SwiftBeta/SwiftOpenAI
Sadly, most of the packages suffer from similar problems as the MacPaw OpenAI library, as described above.

Additional context

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct and Contributing Guidelines

Use String Catalogues

Problem

The current Localizable.strings files are challenging to keep up to date and do not scale well if we add support for multiple languages.

Solution

Apple introduced String Catalogs in Xcode 15 with Swift 5.9. The following WWDC session and documentation provide a good overview:

They seem to be supported for Swift packages: https://developer.apple.com/forums/thread/731941
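With a String Catalog (`Localizable.xcstrings`) added to the package target and `defaultLocalization` set in `Package.swift`, localized strings are referenced as usual and extracted into the catalog at build time; the key below is a placeholder:

```swift
import Foundation

// Xcode 15 extracts this key into Localizable.xcstrings automatically;
// `.module` points at the Swift package's own resource bundle.
let sendButtonTitle = String(localized: "CHAT_SEND_BUTTON", bundle: .module)
// SwiftUI's Text(_:) initializers participate in extraction the same way.
```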

Additional context

This issue is part of the Use String Catalogues focus area.

Code of Conduct

  • I agree to follow this project's Code of Conduct and Contributing Guidelines

Abstract the `ChatView` into a Separate Swift Package Target

Problem

The ChatView and related views should not be as closely bundled to the OpenAI module. They should be hosted in a separate target.

Solution

The current types like Chat and other elements from the OpenAI dependency should be replaced with protocols that abstract the concrete implementations away from the consuming module.
We should consider better names than Chat for an individual message.

The new module should then be used in the OpenAI module to enable a chat interface.

Code of Conduct

  • I agree to follow this project's Code of Conduct and Contributing Guidelines

More sophisticated LLM execution methodology

Problem

Currently, our system throws an error when a model is not ready to process tasks. This approach may not be optimal, as it lacks flexibility in managing different model states and task executions. For instance:

  • Remote models capable of handling multiple tasks concurrently are restricted by the current implementation: they are always set to a 'ready' state, with no indication of active or multiple ongoing requests.
  • Local LLMs require a different approach, such as enqueuing tasks (with cancellation options) and executing them sequentially. However, Swift's actor mechanism does not guarantee sequential execution, especially when a model leaves the actor context for asynchronous operations, as outlined in WWDC21 - Meet async/await in Swift.

Solution

We should explore and implement a more flexible and efficient mechanism for handling tasks when models are not ready. The solution should address:

  • Remote Models: Implementing a mechanism to allow remote models to process multiple tasks simultaneously while providing a clear indication of active requests. This may involve modifying the state management to differentiate between 'processing' and 'ready' states.
  • Local Models: Developing a task queue system for local models where tasks can be enqueued, canceled if necessary, and executed in order. This system should ensure sequential processing even when asynchronous calls are made outside the actor context.
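For the local-model case, sequential execution can be guaranteed by draining an AsyncStream from a single consumer task rather than relying on actor isolation alone, since actors are reentrant across suspension points. All names below are illustrative:

```swift
actor LocalLLMQueue {
    private let continuation: AsyncStream<@Sendable () async -> Void>.Continuation

    init() {
        let (stream, continuation) = AsyncStream.makeStream(
            of: (@Sendable () async -> Void).self
        )
        self.continuation = continuation
        // A single consumer drains jobs strictly in enqueue order,
        // even when individual jobs suspend internally.
        Task.detached {
            for await job in stream {
                await job()
            }
        }
    }

    func enqueue(_ job: @escaping @Sendable () async -> Void) {
        continuation.yield(job)
    }
}
```

Cancellation support could be layered on top by having each enqueued job check `Task.isCancelled` or by storing handles that the queue can drop before execution.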

Additional context

Finding a solution to this issue is crucial for improving the robustness and scalability of our system. It will allow different types of models to operate more effectively within their capabilities and constraints. This improvement is expected to enhance the overall performance and user experience of our system.

Code of Conduct

  • I agree to follow this project's Code of Conduct and Contributing Guidelines
