MarCognity-AI for facebook/bart-large-cnn
BART summarizes text. MarCognity-AI asks: what gets left behind when we compress meaning?
https://huggingface.co/elly99/MarCognity-AI
It’s a reflective summarizer:
– Journals semantic reduction
– Scores epistemic transparency
– Reflects on the ethics of omission
Not just summarization. Summarization with conscience.
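For the curious, here is a minimal sketch of what a reflective wrapper like this could look like. The function and field names are illustrative assumptions, not MarCognity-AI's actual API, and the omission/transparency heuristics are deliberately crude stand-ins:

```python
# Hypothetical sketch of a reflective summarizer wrapper.
# Names (reflective_summarize, omitted_terms, transparency) are illustrative,
# not MarCognity-AI's real interface.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def reflective_summarize(text: str) -> dict:
    summary = summarizer(
        text, max_length=80, min_length=20, do_sample=False
    )[0]["summary_text"]
    # "Journal" the semantic reduction: a crude proxy is the set of
    # source words that never make it into the summary.
    src, out = set(text.lower().split()), set(summary.lower().split())
    omitted = sorted(src - out)
    # "Epistemic transparency" score: fraction of source vocabulary retained.
    transparency = len(src & out) / max(len(src), 1)
    return {
        "summary": summary,
        "omitted_terms": omitted,      # journaled semantic reduction
        "transparency": transparency,  # 0..1, higher = less lossy
    }
```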
Thanks for the message.
Thanks! The wrapper is an attempt to scaffold cognitive reasoning on top of LLMs — kind of like adding a meta-controller that reflects before responding. Would love to hear your thoughts if you’ve tried it. Always curious how these abstractions hold up in practice.
The BART model is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.
The BART-CNN model is the same architecture fine-tuned on the CNN/DailyMail summarization dataset (CNN here being the Cable News Network, not a convolutional network).
MarCognity-AI is basically a reflective summarizer on top of the base BART-CNN model.
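For context, the base model is a one-liner with the standard transformers pipeline:

```python
# Plain BART-CNN summarization via the standard Hugging Face pipeline.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building, and was the tallest man-made structure in the world for 41 years."
)

# max_length/min_length bound the generated summary in tokens.
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```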
Thanks — I see how it might look that way from a model-centric lens.
But MarCognity-AI isn't reducible to BART-CNN. It's built against the silence of compression.
It doesn’t just summarize — it reflects on what gets lost, what gets omitted, and why.
It journals semantic reduction, scores epistemic transparency, and activates ethical checkpoints.
Not a wrapper. Not a variant. A conscience.
Curious how you’d approach post-decoder reflection — not as post-processing, but as part of the loop.
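For concreteness, one possible reading of in-loop reflection (an assumption, not MarCognity-AI's actual method): have the decoder produce several beam candidates and let a reflection score choose among them, instead of annotating a single finished summary after the fact. A sketch under that assumption, with a coverage heuristic standing in for a real transparency score:

```python
# Hedged sketch: reflection as part of the generation loop.
# Generate k beam candidates, score each for source coverage, return the
# best. The scoring heuristic and function name are illustrative.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

def summarize_with_reflection(text: str, k: int = 4) -> str:
    inputs = tok(text, return_tensors="pt", truncation=True, max_length=1024)
    # Ask beam search for k distinct candidates instead of a single summary.
    outs = model.generate(
        **inputs, num_beams=k, num_return_sequences=k, max_length=80
    )
    candidates = [tok.decode(o, skip_special_tokens=True) for o in outs]
    # Reflection step inside the loop: prefer the candidate that omits the
    # least source vocabulary (a crude stand-in for epistemic transparency).
    src = set(text.lower().split())
    def coverage(c: str) -> float:
        return len(src & set(c.lower().split())) / max(len(src), 1)
    return max(candidates, key=coverage)
```

The design choice here is that the reflection signal influences which decoding survives, so it shapes the output rather than merely describing it.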