[Math] Additive Inverse of General Function

linear-algebra, vector-spaces

I am told:

(i) Let $V$ be a real vector space and let $x \in V$. Prove that its additive inverse is unique.

(ii) Now let $V$ be the vector space of real-valued functions defined on the interval $[0, 1]$, and let $f \in V$. What is the additive inverse of $f$?

Here is what I've done for part (i) (I am extremely new to proof writing):

Assume $b, c$ are additive inverses of $x$:

$x+b=0$

$x+c=0$

$b+0 = b+(x+c) = (b+x)+c \implies 0+c = b \implies c = b$

Using the inverse, zero vector and associativity axioms.
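
Written out as a single chain of equalities (one axiom per step), I believe the argument reads:

$$b \;=\; b+0 \;=\; b+(x+c) \;=\; (b+x)+c \;=\; (x+b)+c \;=\; 0+c \;=\; c,$$

using, in order: the zero-vector axiom, the assumption $x+c=0$, associativity, commutativity (to swap $b+x$ for $x+b$), the assumption $x+b=0$, and the zero-vector axiom again.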

For part (ii):

I am thinking that for a real-valued function $f$ defined on $[0, 1]$, its additive inverse would simply be $(-1)f$:

$f + (-1)f = 0 \implies (1)f + (-1)f = 0 \implies (1-1)f \implies 0f = 0 \implies 0=0$

By the inverse, scalar multiplication, and distributivity axioms.

My primary question: have I massively oversimplified things? I also see that my proof writing needs (much) more formality, and I would welcome feedback on that as well. Thank you.

Best Answer

The proof of $(i)$ looks nice. For $(ii)$, not quite: you are asked to find the additive inverse of $f$, or equivalently, to find $-f$. Your answer looks like "$-f=-f$", which is a tautology. They are asking you, given $f$, to find $-f$; i.e., to find the function $g$ such that $f+g=0$. Now, to define a function you need to specify what it does with its argument. Can you take it from here?

Also, you write $$f + (-1)f = 0 \implies (1)f + (-1)f = 0 \implies (1-1)f \implies 0f = 0 \implies 0=0$$ but that sequence of implications just proves $0=0$. What you need to do is claim which function is $-f$, and then compute $(f+\text{that function})(x)$ and check that this equals zero for every $x$.
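
For concreteness, the shape of that check (with $g$ standing for whatever candidate function you propose; I am leaving the choice to you) is

$$(f+g)(x) \;=\; f(x) + g(x) \;=\; 0 \qquad \text{for every } x \in [0,1],$$

and requiring this to hold at every single point $x$ is exactly what pins down the value $g(x)$ pointwise.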