Now the API itself is responsible for querying settings. This makes
sense, as it's an internal part of the component.
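
For illustration, here is a minimal sketch of the idea, assuming the settings live in GSettings; the schema id, key names, and the `Api` type are made up for the example and are not the project's actual code:

```rust
use gtk::gio;
use gtk::gio::prelude::SettingsExt;

// Hypothetical API wrapper: it queries the settings it needs itself, so the
// calling component no longer has to look them up and pass them in.
pub struct Api {
    settings: gio::Settings,
}

impl Api {
    pub fn new() -> Self {
        Self {
            // Placeholder schema id.
            settings: gio::Settings::new("com.example.App"),
        }
    }

    /// Read the currently configured model and prompt at request time.
    pub fn request_params(&self) -> (String, String) {
        (
            self.settings.string("llm-model").to_string(),
            self.settings.string("summary-prompt").to_string(),
        )
    }
}
```

The point being that callers construct an `Api` and never need to know which settings keys it depends on.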
There's no preferences dialog, so you can't really adjust the prompt
or the model it uses. The default settings work well for me. You may
want to tweak them depending on your model preferences and compute
budget. (Not many can afford to run Llama3-8B at high
quantization. Conversely, you might have a better GPU than I do and wish
to run a 27B model or bigger.)
This is a very shitty translation, but it can be improved later. I
added it mostly as a test that translations work correctly, since I
know Russian and might as well translate the app into that language.
This command should construct a summarization request and return a
future that yields chunks from the LLM as they arrive.
Perhaps this component will be asyncified in the future.
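
A hedged sketch of the shape this could take follows; the endpoint, payload, model name, and crate choices (reqwest with its `stream` feature, futures, serde_json) are assumptions, not the actual implementation. In Rust the "future returning chunks" is most naturally an `async fn` whose future resolves to a `Stream` of chunks:

```rust
use futures::{Stream, TryStreamExt};
use serde_json::json;

/// Build a summarization request and return a stream of raw response chunks
/// from the LLM as they arrive (no parsing is done here).
pub async fn summarize(
    client: &reqwest::Client,
    text: &str,
) -> Result<impl Stream<Item = Result<String, reqwest::Error>>, reqwest::Error> {
    // Hypothetical local endpoint and request body (Ollama-style API).
    let response = client
        .post("http://localhost:11434/api/generate")
        .json(&json!({
            "model": "llama3",
            "prompt": format!("Summarize the following text:\n\n{text}"),
            "stream": true,
        }))
        .send()
        .await?
        .error_for_status()?;

    // Forward each chunk to the caller as soon as it is received.
    Ok(response
        .bytes_stream()
        .map_ok(|bytes| String::from_utf8_lossy(&bytes).into_owned()))
}
```

The caller can then drive the stream and forward each chunk to the UI as it comes in.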
On receiving `smart_summary::Output::Start`, one must reply with
`smart_summary::Input::Text(text)` to start the actual summarization.
|
This is a little bit janky in my opinion, because it takes a reference
to the buffer whose contents it's going to summarize. In a perfect
world, it would ask the parent component for the text.
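
For illustration, here is a rough sketch of the parent's side of this handshake in Relm4; apart from `smart_summary::Output::Start` and `smart_summary::Input::Text`, every name (the `App` struct, the buffer field, the controller, where the text comes from) is an assumption:

```rust
use relm4::{gtk, gtk::prelude::*, ComponentController, Controller};

// Hypothetical parent state: a text buffer plus the summary component.
struct App {
    buffer: gtk::TextBuffer,
    smart_summary: Controller<smart_summary::SmartSummary>,
}

impl App {
    // Called from the parent's update() when the summary component speaks up.
    fn on_summary_output(&mut self, output: smart_summary::Output) {
        match output {
            // The component announced that it is about to start summarizing.
            smart_summary::Output::Start => {
                let text = self.buffer.text(
                    &self.buffer.start_iter(),
                    &self.buffer.end_iter(),
                    false,
                );
                // Reply with the text to kick off the actual summarization.
                self.smart_summary
                    .emit(smart_summary::Input::Text(text.to_string()));
            }
            // Other output variants (chunks, errors, ...) elided.
            _ => {}
        }
    }
}
```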