A parse tree represents the structural construction of a sentence with respect to the grammar of the language in question.
For example, we could construct a toy grammar for the English language using the subject-predicate-object structure, as
    Sentence ::= NP VP | NP VP NP
    NP       ::= Noun | DT Noun
    VP       ::= Verb
using the conventional abbreviations NP for noun phrase, VP for verb phrase, and DT for determiners such as the or an.
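Such a grammar can also be written down as plain data. The following Python sketch is illustrative only: the names GRAMMAR and LEXICON, and the particular word list, are assumptions made for this example, not part of the grammar itself. Each non-terminal maps to its list of alternative right-hand sides.

```python
# The toy grammar encoded as plain data (illustrative sketch; the names
# GRAMMAR and LEXICON and the word list are assumptions for this example).
GRAMMAR = {
    "Sentence": [["NP", "VP"], ["NP", "VP", "NP"]],  # rules 1 and 2
    "NP":       [["Noun"], ["DT", "Noun"]],          # rules 3 and 4
    "VP":       [["Verb"]],                          # rule 5
}

# Which words belong to which terminal category (part of speech).
LEXICON = {
    "John": "Noun", "Tom": "Noun", "dog": "Noun",
    "cake": "Noun", "mouse": "Noun",
    "sleeps": "Verb", "eats": "Verb",
    "the": "DT", "an": "DT",
}
```

Numbering the alternatives left to right gives the rule numbers used in the discussion below: rule 1 is Sentence ::= NP VP, rule 5 is VP ::= Verb.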
Using this grammar, we can describe sentences like John sleeps or the dog eats the cake. For John sleeps, we can use the first rule, which states that a Sentence can be an NP followed by a VP. By the third rule, an NP can be just a noun, such as John, and similarly, by the fifth rule, a VP can consist simply of a verb, such as sleeps. Because Sentence is split into NP and VP, which are then further specialized into Noun and Verb respectively, it is natural to draw this derivation as a tree in which each grammatical entity is connected to the entity it is derived from. The parse tree for the example sentence John sleeps would thus be:
         Sentence
          /    \
        NP      VP
        |       |
      Noun     Verb
        |       |
      John    sleeps
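The derivation just described can also be carried out mechanically. The following is a minimal backtracking sketch in Python, assuming the grammar is encoded as a dictionary of alternatives; the names GRAMMAR, LEXICON, parses and parse_sentence are illustrative, not a standard API.

```python
# A minimal backtracking parser for the toy grammar (illustrative sketch).
GRAMMAR = {
    "Sentence": [["NP", "VP"], ["NP", "VP", "NP"]],
    "NP": [["Noun"], ["DT", "Noun"]],
    "VP": [["Verb"]],
}
LEXICON = {
    "John": "Noun", "Tom": "Noun", "dog": "Noun", "cake": "Noun",
    "mouse": "Noun", "sleeps": "Verb", "eats": "Verb",
    "the": "DT", "an": "DT",
}

def parses(symbol, words, pos):
    """Yield every (tree, next_pos) pair deriving `symbol` at words[pos].
    A tree is a (label, children) tuple; leaves are the words themselves."""
    if symbol not in GRAMMAR:  # terminal category: match a single word
        if pos < len(words) and LEXICON.get(words[pos]) == symbol:
            yield (symbol, [words[pos]]), pos + 1
        return
    for alternative in GRAMMAR[symbol]:
        # Expand the alternative left to right, threading the position
        # through all partial derivations found so far.
        partials = [([], pos)]
        for sym in alternative:
            partials = [(kids + [tree], q)
                        for kids, p in partials
                        for tree, q in parses(sym, words, p)]
        for kids, p in partials:
            yield (symbol, kids), p

def parse_sentence(text):
    """Return a parse tree covering the whole sentence, or None."""
    words = text.split()
    for tree, end in parses("Sentence", words, 0):
        if end == len(words):
            return tree
    return None
```

For example, parse_sentence("John sleeps") returns ("Sentence", [("NP", [("Noun", ["John"])]), ("VP", [("Verb", ["sleeps"])])]), a nested-tuple form of the tree drawn above, while an ungrammatical input such as "sleeps John" yields None.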
As a more interesting example, consider the sentence Tom eats the mouse (with Tom being a cat). Here, the second rule applies, decomposing a Sentence into an NP (the subject), a VP (the predicate) and another NP (the object). The first NP and the VP again simply derive a noun (Tom) and a verb (eats) via the third and fifth rules respectively; the second NP, however, uses the fourth rule to split further into a determiner (the) and a noun (mouse). The parse tree thus shows the second NP decomposing into DT and Noun:
          Sentence
        /    |     \
      NP     VP      NP
      |      |      /  \
    Noun    Verb   DT   Noun
      |      |     |     |
     Tom    eats  the  mouse
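Drawing such trees by hand quickly becomes tedious for longer sentences. One hypothetical way to render them is to represent each node as a (label, children) tuple, with leaf words as plain strings, and print the tree with indentation; the function name show and the tuple representation below are assumptions for this sketch.

```python
# A hypothetical rendering helper: parse trees as nested (label, children)
# tuples, leaf words as plain strings, printed with two-space indentation.
def show(tree, depth=0):
    label, children = tree
    lines = ["  " * depth + label]
    for child in children:
        if isinstance(child, str):
            lines.append("  " * (depth + 1) + child)  # a leaf word
        else:
            lines.extend(show(child, depth + 1))      # a subtree
    return lines

# The parse tree for "Tom eats the mouse" from the drawing above.
tree = ("Sentence", [
    ("NP", [("Noun", ["Tom"])]),
    ("VP", [("Verb", ["eats"])]),
    ("NP", [("DT", ["the"]), ("Noun", ["mouse"])]),
])
print("\n".join(show(tree)))
```

This prints the same tree sideways, with Sentence at the top level, each phrase indented one step, and each word under its part of speech.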