| context (string, 5k–202k chars) | question (string, 17–163 chars) | answer (string, 1–619 chars, ⌀ = null) | length (int64, 819–31.7k) | dataset (5 classes) | context_range (5 classes) |
|---|---|---|---|---|---|
The Witch of Atlas
by
Percy Bysshe Shelley
TO MARY
(ON HER OBJECTING TO THE FOLLOWING POEM, UPON THE
SCORE OF ITS CONTAINING NO HUMAN INTEREST).
1.
How, my dear Mary,--are you critic-bitten
(For vipers kill, though dead) by some review,
That you condemn these verses I have written,
Because they tell no story, false or true?
What, though no mice are caught by a young kitten, _5
May it not leap and play as grown cats do,
Till its claws come? Prithee, for this one time,
Content thee with a visionary rhyme.
2.
What hand would crush the silken-winged fly,
The youngest of inconstant April's minions, _10
Because it cannot climb the purest sky,
Where the swan sings, amid the sun's dominions?
Not thine. Thou knowest 'tis its doom to die,
When Day shall hide within her twilight pinions
The lucent eyes, and the eternal smile, _15
Serene as thine, which lent it life awhile.
3.
To thy fair feet a winged Vision came,
Whose date should have been longer than a day,
And o'er thy head did beat its wings for fame,
And in thy sight its fading plumes display; _20
The watery bow burned in the evening flame.
But the shower fell, the swift Sun went his way--
And that is dead.--O, let me not believe
That anything of mine is fit to live!
4.
Wordsworth informs us he was nineteen years _25
Considering and retouching Peter Bell;
Watering his laurels with the killing tears
Of slow, dull care, so that their roots to Hell
Might pierce, and their wide branches blot the spheres
Of Heaven, with dewy leaves and flowers; this well _30
May be, for Heaven and Earth conspire to foil
The over-busy gardener's blundering toil.
5.
My Witch indeed is not so sweet a creature
As Ruth or Lucy, whom his graceful praise
Clothes for our grandsons--but she matches Peter, _35
Though he took nineteen years, and she three days
In dressing. Light the vest of flowing metre
She wears; he, proud as dandy with his stays,
Has hung upon his wiry limbs a dress
Like King Lear's 'looped and windowed raggedness.' _40
6.
If you strip Peter, you will see a fellow
Scorched by Hell's hyperequatorial climate
Into a kind of a sulphureous yellow:
A lean mark, hardly fit to fling a rhyme at;
In shape a Scaramouch, in hue Othello. _45
If you unveil my Witch, no priest nor primate
Can shrive you of that sin,--if sin there be
In love, when it becomes idolatry.
THE WITCH OF ATLAS.
1.
Before those cruel Twins, whom at one birth
Incestuous Change bore to her father Time, _50
Error and Truth, had hunted from the Earth
All those bright natures which adorned its prime,
And left us nothing to believe in, worth
The pains of putting into learned rhyme,
A lady-witch there lived on Atlas' mountain _55
Within a cavern, by a secret fountain.
2.
Her mother was one of the Atlantides:
The all-beholding Sun had ne'er beholden
In his wide voyage o'er continents and seas
So fair a creature, as she lay enfolden _60
In the warm shadow of her loveliness;--
He kissed her with his beams, and made all golden
The chamber of gray rock in which she lay--
She, in that dream of joy, dissolved away.
3.
'Tis said, she first was changed into a vapour, _65
And then into a cloud, such clouds as flit,
Like splendour-winged moths about a taper,
Round the red west when the sun dies in it:
And then into a meteor, such as caper
On hill-tops when the moon is in a fit: _70
Then, into one of those mysterious stars
Which hide themselves between the Earth and Mars.
4.
Ten times the Mother of the Months had bent
Her bow beside the folding-star, and bidden
With that bright sign the billows to indent _75
The sea-deserted sand--like children chidden,
At her command they ever came and went--
Since in that cave a dewy splendour hidden
Took shape and motion: with the living form
Of this embodied Power, the cave grew warm. _80
5.
A lovely lady garmented in light
From her own beauty--deep her eyes, as are
Two openings of unfathomable night
Seen through a Temple's cloven roof--her hair
Dark--the dim brain whirls dizzy with delight. _85
Picturing her form; her soft smiles shone afar,
And her low voice was heard like love, and drew
All living things towards this wonder new.
6.
And first the spotted cameleopard came,
And then the wise and fearless elephant; _90
Then the sly serpent, in the golden flame
Of his own volumes intervolved;--all gaunt
And sanguine beasts her gentle looks made tame.
They drank before her at her sacred fount;
And every beast of beating heart grew bold, _95
Such gentleness and power even to behold.
7.
The brinded lioness led forth her young,
That she might teach them how they should forego
Their inborn thirst of death; the pard unstrung
His sinews at her feet, and sought to know _100
With looks whose motions spoke without a tongue
How he might be as gentle as the doe.
The magic circle of her voice and eyes
All savage natures did imparadise.
8.
And old Silenus, shaking a green stick _105
Of lilies, and the wood-gods in a crew
Came, blithe, as in the olive copses thick
Cicadae are, drunk with the noonday dew:
And Dryope and Faunus followed quick,
Teasing the God to sing them something new; _110
Till in this cave they found the lady lone,
Sitting upon a seat of emerald stone.
9.
And universal Pan, 'tis said, was there,
And though none saw him,--through the adamant
Of the deep mountains, through the trackless air, _115
And through those living spirits, like a want,
He passed out of his everlasting lair
Where the quick heart of the great world doth pant,
And felt that wondrous lady all alone,--
And she felt him, upon her emerald throne. _120
10.
And every nymph of stream and spreading tree,
And every shepherdess of Ocean's flocks,
Who drives her white waves over the green sea,
And Ocean with the brine on his gray locks,
And quaint Priapus with his company, _125
All came, much wondering how the enwombed rocks
Could have brought forth so beautiful a birth;--
Her love subdued their wonder and their mirth.
11.
The herdsmen and the mountain maidens came,
And the rude kings of pastoral Garamant-- _130
Their spirits shook within them, as a flame
Stirred by the air under a cavern gaunt:
Pigmies, and Polyphemes, by many a name,
Centaurs, and Satyrs, and such shapes as haunt
Wet clefts,--and lumps neither alive nor dead, _135
Dog-headed, bosom-eyed, and bird-footed.
12.
For she was beautiful--her beauty made
The bright world dim, and everything beside
Seemed like the fleeting image of a shade:
No thought of living spirit could abide, _140
Which to her looks had ever been betrayed,
On any object in the world so wide,
On any hope within the circling skies,
But on her form, and in her inmost eyes.
13.
Which when the lady knew, she took her spindle _145
And twined three threads of fleecy mist, and three
Long lines of light, such as the dawn may kindle
The clouds and waves and mountains with; and she
As many star-beams, ere their lamps could dwindle
In the belated moon, wound skilfully; _150
And with these threads a subtle veil she wove--
A shadow for the splendour of her love.
14.
The deep recesses of her odorous dwelling
Were stored with magic treasures--sounds of air,
Which had the power all spirits of compelling, _155
Folded in cells of crystal silence there;
Such as we hear in youth, and think the feeling
Will never die--yet ere we are aware,
The feeling and the sound are fled and gone,
And the regret they leave remains alone. _160
15.
And there lay Visions swift, and sweet, and quaint,
Each in its thin sheath, like a chrysalis,
Some eager to burst forth, some weak and faint
With the soft burthen of intensest bliss.
It was its work to bear to many a saint _165
Whose heart adores the shrine which holiest is,
Even Love's:--and others white, green, gray, and black,
And of all shapes--and each was at her beck.
16.
And odours in a kind of aviary
Of ever-blooming Eden-trees she kept, _170
Clipped in a floating net, a love-sick Fairy
Had woven from dew-beams while the moon yet slept;
As bats at the wired window of a dairy,
They beat their vans; and each was an adept,
When loosed and missioned, making wings of winds, _175
To stir sweet thoughts or sad, in destined minds.
17.
And liquors clear and sweet, whose healthful might
Could medicine the sick soul to happy sleep,
And change eternal death into a night
Of glorious dreams--or if eyes needs must weep, _180
Could make their tears all wonder and delight,
She in her crystal vials did closely keep:
If men could drink of those clear vials, 'tis said
The living were not envied of the dead.
18.
Her cave was stored with scrolls of strange device, _185
The works of some Saturnian Archimage,
Which taught the expiations at whose price
Men from the Gods might win that happy age
Too lightly lost, redeeming native vice;
And which might quench the Earth-consuming rage _190
Of gold and blood--till men should live and move
Harmonious as the sacred stars above;
19.
And how all things that seem untameable,
Not to be checked and not to be confined,
Obey the spells of Wisdom's wizard skill; _195
Time, earth, and fire--the ocean and the wind,
And all their shapes--and man's imperial will;
And other scrolls whose writings did unbind
The inmost lore of Love--let the profane
Tremble to ask what secrets they contain. _200
20.
And wondrous works of substances unknown,
To which the enchantment of her father's power
Had changed those ragged blocks of savage stone,
Were heaped in the recesses of her bower;
Carved lamps and chalices, and vials which shone _205
In their own golden beams--each like a flower,
Out of whose depth a fire-fly shakes his light
Under a cypress in a starless night.
21.
At first she lived alone in this wild home,
And her own thoughts were each a minister, _210
Clothing themselves, or with the ocean foam,
Or with the wind, or with the speed of fire,
To work whatever purposes might come
Into her mind; such power her mighty Sire
Had girt them with, whether to fly or run, _215
Through all the regions which he shines upon.
22.
The Ocean-nymphs and Hamadryades,
Oreads and Naiads, with long weedy locks,
Offered to do her bidding through the seas,
Under the earth, and in the hollow rocks, _220
And far beneath the matted roots of trees,
And in the gnarled heart of stubborn oaks,
So they might live for ever in the light
Of her sweet presence--each a satellite.
23.
'This may not be,' the wizard maid replied; _225
'The fountains where the Naiades bedew
Their shining hair, at length are drained and dried;
The solid oaks forget their strength, and strew
Their latest leaf upon the mountains wide;
The boundless ocean like a drop of dew _230
Will be consumed--the stubborn centre must
Be scattered, like a cloud of summer dust.
24.
'And ye with them will perish, one by one;--
If I must sigh to think that this shall be,
If I must weep when the surviving Sun _235
Shall smile on your decay--oh, ask not me
To love you till your little race is run;
I cannot die as ye must--over me
Your leaves shall glance--the streams in which ye dwell
Shall be my paths henceforth, and so--farewell!'-- _240
25.
She spoke and wept:--the dark and azure well
Sparkled beneath the shower of her bright tears,
And every little circlet where they fell
Flung to the cavern-roof inconstant spheres
And intertangled lines of light:--a knell _245
Of sobbing voices came upon her ears
From those departing Forms, o'er the serene
Of the white streams and of the forest green.
26.
All day the wizard lady sate aloof,
Spelling out scrolls of dread antiquity, _250
Under the cavern's fountain-lighted roof;
Or broidering the pictured poesy
Of some high tale upon her growing woof,
Which the sweet splendour of her smiles could dye
In hues outshining heaven--and ever she _255
Added some grace to the wrought poesy.
27.
While on her hearth lay blazing many a piece
Of sandal wood, rare gums, and cinnamon;
Men scarcely know how beautiful fire is--
Each flame of it is as a precious stone _260
Dissolved in ever-moving light, and this
Belongs to each and all who gaze upon.
The Witch beheld it not, for in her hand
She held a woof that dimmed the burning brand.
28.
This lady never slept, but lay in trance _265
All night within the fountain--as in sleep.
Its emerald crags glowed in her beauty's glance;
Through the green splendour of the water deep
She saw the constellations reel and dance
Like fire-flies--and withal did ever keep _270
The tenour of her contemplations calm,
With open eyes, closed feet, and folded palm.
29.
And when the whirlwinds and the clouds descended
From the white pinnacles of that cold hill,
She passed at dewfall to a space extended, _275
Where in a lawn of flowering asphodel
Amid a wood of pines and cedars blended,
There yawned an inextinguishable well
Of crimson fire--full even to the brim,
And overflowing all the margin trim. _280
30.
Within the which she lay when the fierce war
Of wintry winds shook that innocuous liquor
In many a mimic moon and bearded star
O'er woods and lawns;--the serpent heard it flicker
In sleep, and dreaming still, he crept afar-- _285
And when the windless snow descended thicker
Than autumn leaves, she watched it as it came
Melt on the surface of the level flame.
31.
She had a boat, which some say Vulcan wrought
For Venus, as the chariot of her star; _290
But it was found too feeble to be fraught
With all the ardours in that sphere which are,
And so she sold it, and Apollo bought
And gave it to this daughter: from a car
Changed to the fairest and the lightest boat _295
Which ever upon mortal stream did float.
32.
And others say, that, when but three hours old,
The first-born Love out of his cradle lept,
And clove dun Chaos with his wings of gold,
And like a horticultural adept, _300
Stole a strange seed, and wrapped it up in mould,
And sowed it in his mother's star, and kept
Watering it all the summer with sweet dew,
And with his wings fanning it as it grew.
33.
The plant grew strong and green, the snowy flower _305
Fell, and the long and gourd-like fruit began
To turn the light and dew by inward power
To its own substance; woven tracery ran
Of light firm texture, ribbed and branching, o'er
The solid rind, like a leaf's veined fan-- _310
Of which Love scooped this boat--and with soft motion
Piloted it round the circumfluous ocean.
34.
This boat she moored upon her fount, and lit
A living spirit within all its frame,
Breathing the soul of swiftness into it. _315
Couched on the fountain like a panther tame,
One of the twain at Evan's feet that sit--
Or as on Vesta's sceptre a swift flame--
Or on blind Homer's heart a winged thought,--
In joyous expectation lay the boat. _320
35.
Then by strange art she kneaded fire and snow
Together, tempering the repugnant mass
With liquid love--all things together grow
Through which the harmony of love can pass;
And a fair Shape out of her hands did flow-- _325
A living Image, which did far surpass
In beauty that bright shape of vital stone
Which drew the heart out of Pygmalion.
36.
A sexless thing it was, and in its growth
It seemed to have developed no defect _330
Of either sex, yet all the grace of both,--
In gentleness and strength its limbs were decked;
The bosom swelled lightly with its full youth,
The countenance was such as might select
Some artist that his skill should never die, _335
Imaging forth such perfect purity.
37.
From its smooth shoulders hung two rapid wings,
Fit to have borne it to the seventh sphere,
Tipped with the speed of liquid lightenings,
Dyed in the ardours of the atmosphere: _340
She led her creature to the boiling springs
Where the light boat was moored, and said: 'Sit here!'
And pointed to the prow, and took her seat
Beside the rudder, with opposing feet.
38.
And down the streams which clove those mountains vast, _345
Around their inland islets, and amid
The panther-peopled forests whose shade cast
Darkness and odours, and a pleasure hid
In melancholy gloom, the pinnace passed;
By many a star-surrounded pyramid _350
Of icy crag cleaving the purple sky,
And caverns yawning round unfathomably.
39.
The silver noon into that winding dell,
With slanted gleam athwart the forest tops,
Tempered like golden evening, feebly fell; _355
A green and glowing light, like that which drops
From folded lilies in which glow-worms dwell,
When Earth over her face Night's mantle wraps;
Between the severed mountains lay on high,
Over the stream, a narrow rift of sky. _360
40.
And ever as she went, the Image lay
With folded wings and unawakened eyes;
And o'er its gentle countenance did play
The busy dreams, as thick as summer flies,
Chasing the rapid smiles that would not stay, _365
And drinking the warm tears, and the sweet sighs
Inhaling, which, with busy murmur vain,
They had aroused from that full heart and brain.
41.
And ever down the prone vale, like a cloud
Upon a stream of wind, the pinnace went: _370
Now lingering on the pools, in which abode
The calm and darkness of the deep content
In which they paused; now o'er the shallow road
Of white and dancing waters, all besprent
With sand and polished pebbles:--mortal boat _375
In such a shallow rapid could not float.
42.
And down the earthquaking cataracts which shiver
Their snow-like waters into golden air,
Or under chasms unfathomable ever
Sepulchre them, till in their rage they tear _380
A subterranean portal for the river,
It fled--the circling sunbows did upbear
Its fall down the hoar precipice of spray,
Lighting it far upon its lampless way.
43.
And when the wizard lady would ascend _385
The labyrinths of some many-winding vale,
Which to the inmost mountain upward tend--
She called 'Hermaphroditus!'--and the pale
And heavy hue which slumber could extend
Over its lips and eyes, as on the gale _390
A rapid shadow from a slope of grass,
Into the darkness of the stream did pass.
44.
And it unfurled its heaven-coloured pinions,
With stars of fire spotting the stream below;
And from above into the Sun's dominions _395
Flinging a glory, like the golden glow
In which Spring clothes her emerald-winged minions,
All interwoven with fine feathery snow
And moonlight splendour of intensest rime,
With which frost paints the pines in winter time. _400
45.
And then it winnowed the Elysian air
Which ever hung about that lady bright,
With its aethereal vans--and speeding there,
Like a star up the torrent of the night,
Or a swift eagle in the morning glare _405
Breasting the whirlwind with impetuous flight,
The pinnace, oared by those enchanted wings,
Clove the fierce streams towards their upper springs.
46.
The water flashed, like sunlight by the prow
Of a noon-wandering meteor flung to Heaven; _410
The still air seemed as if its waves did flow
In tempest down the mountains; loosely driven
The lady's radiant hair streamed to and fro:
Beneath, the billows having vainly striven
Indignant and impetuous, roared to feel _415
The swift and steady motion of the keel.
47.
Or, when the weary moon was in the wane,
Or in the noon of interlunar night,
The lady-witch in visions could not chain
Her spirit; but sailed forth under the light _420
Of shooting stars, and bade extend amain
Its storm-outspeeding wings, the Hermaphrodite;
She to the Austral waters took her way,
Beyond the fabulous Thamondocana,--
48.
Where, like a meadow which no scythe has shaven, _425
Which rain could never bend, or whirl-blast shake,
With the Antarctic constellations paven,
Canopus and his crew, lay the Austral lake--
There she would build herself a windless haven
Out of the clouds whose moving turrets make _430
The bastions of the storm, when through the sky
The spirits of the tempest thundered by:
49.
A haven beneath whose translucent floor
The tremulous stars sparkled unfathomably,
And around which the solid vapours hoar, _435
Based on the level waters, to the sky
Lifted their dreadful crags, and like a shore
Of wintry mountains, inaccessibly
Hemmed in with rifts and precipices gray,
And hanging crags, many a cove and bay. _440
50.
And whilst the outer lake beneath the lash
Of the wind's scourge, foamed like a wounded thing,
And the incessant hail with stony clash
Ploughed up the waters, and the flagging wing
Of the roused cormorant in the lightning flash _445
Looked like the wreck of some wind-wandering
Fragment of inky thunder-smoke--this haven
Was as a gem to copy Heaven engraven,--
51.
On which that lady played her many pranks,
Circling the image of a shooting star, _450
Even as a tiger on Hydaspes' banks
Outspeeds the antelopes which speediest are,
In her light boat; and many quips and cranks
She played upon the water, till the car
Of the late moon, like a sick matron wan, _455
To journey from the misty east began.
52.
And then she called out of the hollow turrets
Of those high clouds, white, golden and vermilion,
The armies of her ministering spirits--
In mighty legions, million after million, _460
They came, each troop emblazoning its merits
On meteor flags; and many a proud pavilion
Of the intertexture of the atmosphere
They pitched upon the plain of the calm mere.
53.
They framed the imperial tent of their great Queen _465
Of woven exhalations, underlaid
With lambent lightning-fire, as may be seen
A dome of thin and open ivory inlaid
With crimson silk--cressets from the serene
Hung there, and on the water for her tread _470
A tapestry of fleece-like mist was strewn,
Dyed in the beams of the ascending moon.
54.
And on a throne o'erlaid with starlight, caught
Upon those wandering isles of aery dew,
Which highest shoals of mountain shipwreck not, _475
She sate, and heard all that had happened new
Between the earth and moon, since they had brought
The last intelligence--and now she grew
Pale as that moon, lost in the watery night--
And now she wept, and now she laughed outright. _480
55.
These were tame pleasures; she would often climb
The steepest ladder of the crudded rack
Up to some beaked cape of cloud sublime,
And like Arion on the dolphin's back
Ride singing through the shoreless air;--oft-time _485
Following the serpent lightning's winding track,
She ran upon the platforms of the wind,
And laughed to hear the fire-balls roar behind.
56.
And sometimes to those streams of upper air
Which whirl the earth in its diurnal round, _490
She would ascend, and win the spirits there
To let her join their chorus. Mortals found
That on those days the sky was calm and fair,
And mystic snatches of harmonious sound
Wandered upon the earth where'er she passed, _495
And happy thoughts of hope, too sweet to last.
57.
But her choice sport was, in the hours of sleep,
To glide adown old Nilus, where he threads
Egypt and Aethiopia, from the steep
Of utmost Axume, until he spreads, _500
Like a calm flock of silver-fleeced sheep,
His waters on the plain: and crested heads
Of cities and proud temples gleam amid,
And many a vapour-belted pyramid.
58.
By Moeris and the Mareotid lakes, _505
Strewn with faint blooms like bridal chamber floors,
Where naked boys bridling tame water-snakes,
Or charioteering ghastly alligators,
Had left on the sweet waters mighty wakes
Of those huge forms--within the brazen doors _510
Of the great Labyrinth slept both boy and beast,
Tired with the pomp of their Osirian feast.
59.
And where within the surface of the river
The shadows of the massy temples lie,
And never are erased--but tremble ever _515
Like things which every cloud can doom to die,
Through lotus-paven canals, and wheresoever
The works of man pierced that serenest sky
With tombs, and towers, and fanes, 'twas her delight
To wander in the shadow of the night. _520
60.
With motion like the spirit of that wind
Whose soft step deepens slumber, her light feet
Passed through the peopled haunts of humankind.
Scattering sweet visions from her presence sweet,
Through fane, and palace-court, and labyrinth mined _525
With many a dark and subterranean street
Under the Nile, through chambers high and deep
She passed, observing mortals in their sleep.
61.
A pleasure sweet doubtless it was to see
Mortals subdued in all the shapes of sleep. _530
Here lay two sister twins in infancy;
There, a lone youth who in his dreams did weep;
Within, two lovers linked innocently
In their loose locks which over both did creep
Like ivy from one stem;--and there lay calm _535
Old age with snow-bright hair and folded palm.
62.
But other troubled forms of sleep she saw,
Not to be mirrored in a holy song--
Distortions foul of supernatural awe,
And pale imaginings of visioned wrong; _540
And all the code of Custom's lawless law
Written upon the brows of old and young:
'This,' said the wizard maiden, 'is the strife
Which stirs the liquid surface of man's life.'
63.
And little did the sight disturb her soul.-- _545
We, the weak mariners of that wide lake
Where'er its shores extend or billows roll,
Our course unpiloted and starless make
O'er its wild surface to an unknown goal:--
But she in the calm depths her way could take, _550
Where in bright bowers immortal forms abide
Beneath the weltering of the restless tide.
64.
And she saw princes couched under the glow
Of sunlike gems; and round each temple-court
In dormitories ranged, row after row, _555
She saw the priests asleep--all of one sort--
For all were educated to be so.--
The peasants in their huts, and in the port
The sailors she saw cradled on the waves,
And the dead lulled within their dreamless graves. _560
65.
And all the forms in which those spirits lay
Were to her sight like the diaphanous
Veils, in which those sweet ladies oft array
Their delicate limbs, who would conceal from us
Only their scorn of all concealment: they _565
Move in the light of their own beauty thus.
But these and all now lay with sleep upon them,
And little thought a Witch was looking on them.
66.
She, all those human figures breathing there,
Beheld as living spirits--to her eyes _570
The naked beauty of the soul lay bare,
And often through a rude and worn disguise
She saw the inner form most bright and fair--
And then she had a charm of strange device,
Which, murmured on mute lips with tender tone, _575
Could make that spirit mingle with her own.
67.
Alas! Aurora, what wouldst thou have given
For such a charm when Tithon became gray?
Or how much, Venus, of thy silver heaven
Wouldst thou have yielded, ere Proserpina _580
Had half (oh! why not all?) the debt forgiven
Which dear Adonis had been doomed to pay,
To any witch who would have taught you it?
The Heliad doth not know its value yet.
68.
'Tis said in after times her spirit free _585
Knew what love was, and felt itself alone--
But holy Dian could not chaster be
Before she stooped to kiss Endymion,
Than now this lady--like a sexless bee
Tasting all blossoms, and confined to none, _590
Among those mortal forms, the wizard-maiden
Passed with an eye serene and heart unladen.
69.
To those she saw most beautiful, she gave
Strange panacea in a crystal bowl:--
They drank in their deep sleep of that sweet wave, _595
And lived thenceforward as if some control,
Mightier than life, were in them; and the grave
Of such, when death oppressed the weary soul,
Was as a green and overarching bower
Lit by the gems of many a starry flower. _600
70.
For on the night when they were buried, she
Restored the embalmers' ruining, and shook
The light out of the funeral lamps, to be
A mimic day within that deathy nook;
And she unwound the woven imagery _605
Of second childhood's swaddling bands, and took
The coffin, its last cradle, from its niche,
And threw it with contempt into a ditch.
71.
And there the body lay, age after age.
Mute, breathing, beating, warm, and undecaying, _610
Like one asleep in a green hermitage,
With gentle smiles about its eyelids playing,
And living in its dreams beyond the rage
Of death or life; while they were still arraying
In liveries ever new, the rapid, blind _615
And fleeting generations of mankind.
72.
And she would write strange dreams upon the brain
Of those who were less beautiful, and make
All harsh and crooked purposes more vain
Than in the desert is the serpent's wake _620
Which the sand covers--all his evil gain
The miser in such dreams would rise and shake
Into a beggar's lap;--the lying scribe
Would his own lies betray without a bribe.
73.
The priests would write an explanation full, _625
Translating hieroglyphics into Greek,
How the God Apis really was a bull,
And nothing more; and bid the herald stick
The same against the temple doors, and pull
The old cant down; they licensed all to speak _630
Whate'er they thought of hawks, and cats, and geese,
By pastoral letters to each diocese.
74.
The king would dress an ape up in his crown
And robes, and seat him on his glorious seat,
And on the right hand of the sunlike throne _635
Would place a gaudy mock-bird to repeat
The chatterings of the monkey.--Every one
Of the prone courtiers crawled to kiss the feet
Of their great Emperor, when the morning came,
And kissed--alas, how many kiss the same! _640
75.
The soldiers dreamed that they were blacksmiths, and
Walked out of quarters in somnambulism;
Round the red anvils you might see them stand
Like Cyclopses in Vulcan's sooty abysm,
Beating their swords to ploughshares;--in a band _645
The gaolers sent those of the liberal schism
Free through the streets of Memphis, much, I wis,
To the annoyance of king Amasis.
76.
And timid lovers who had been so coy,
They hardly knew whether they loved or not, _650
Would rise out of their rest, and take sweet joy,
To the fulfilment of their inmost thought;
And when next day the maiden and the boy
Met one another, both, like sinners caught,
Blushed at the thing which each believed was done _655
Only in fancy--till the tenth moon shone;
77.
And then the Witch would let them take no ill:
Of many thousand schemes which lovers find,
The Witch found one,--and so they took their fill
Of happiness in marriage warm and kind. _660
Friends who, by practice of some envious skill,
Were torn apart--a wide wound, mind from mind!--
She did unite again with visions clear
Of deep affection and of truth sincere.
78.
These were the pranks she played among the cities _665
Of mortal men, and what she did to Sprites
And Gods, entangling them in her sweet ditties
To do her will, and show their subtle sleights,
I will declare another time; for it is
A tale more fit for the weird winter nights _670
Than for these garish summer days, when we
Scarcely believe much more than we can see.
End of Project Gutenberg's The Witch of Atlas, by Percy Bysshe Shelley
| *(context: the full text of "The Witch of Atlas" above)* | Where does the witch live? | The Atlas Mountains | 5,397 | narrativeqa | 8k |
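For readers who want to consume these rows programmatically, here is a minimal sketch of the record layout implied by the column header at the top of this file. The `QARecord` class name and the truncated `context` string are illustrative assumptions; the remaining field values are copied verbatim from the first row above.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class QARecord:
    """One row of this dump, following the column header at the top of the file."""
    context: str                  # full source text (here, "The Witch of Atlas")
    question: str
    answer: Optional[str]         # the header marks this column nullable (shown as a null marker)
    length: int                   # reported length of the context
    dataset: str                  # e.g. "narrativeqa" (one of 5 dataset classes)
    context_range: str            # e.g. "8k" (one of 5 length buckets)


# Values below are taken from the first row; the context string is truncated
# here purely for illustration.
example = QARecord(
    context="The Witch of Atlas\nby\nPercy Bysshe Shelley\n...",
    question="Where does the witch live?",
    answer="The Atlas Mountains",
    length=5_397,
    dataset="narrativeqa",
    context_range="8k",
)

print(f"{example.question} -> {example.answer} ({example.dataset}, {example.context_range})")
```

An actual loader would keep the full context string and map the null marker in the answer column to `None`; the class and field names here simply mirror the header and are not an official schema.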
CRITO
by Plato
Translated by Benjamin Jowett
INTRODUCTION.
The Crito seems intended to exhibit the character of Socrates in one light
only, not as the philosopher, fulfilling a divine mission and trusting in
the will of heaven, but simply as the good citizen, who having been
unjustly condemned is willing to give up his life in obedience to the laws
of the state...
The days of Socrates are drawing to a close; the fatal ship has been seen
off Sunium, as he is informed by his aged friend and contemporary Crito,
who visits him before the dawn has broken; he himself has been warned in a
dream that on the third day he must depart. Time is precious, and Crito
has come early in order to gain his consent to a plan of escape. This can
be easily accomplished by his friends, who will incur no danger in making
the attempt to save him, but will be disgraced for ever if they allow him
to perish. He should think of his duty to his children, and not play into
the hands of his enemies. Money is already provided by Crito as well as by
Simmias and others, and he will have no difficulty in finding friends in
Thessaly and other places.
Socrates is afraid that Crito is but pressing upon him the opinions of the
many: whereas, all his life long he has followed the dictates of reason
only and the opinion of the one wise or skilled man. There was a time when
Crito himself had allowed the propriety of this. And although some one
will say 'the many can kill us,' that makes no difference; but a good life,
in other words, a just and honourable life, is alone to be valued. All
considerations of loss of reputation or injury to his children should be
dismissed: the only question is whether he would be right in attempting to
escape. Crito, who is a disinterested person not having the fear of death
before his eyes, shall answer this for him. Before he was condemned they
had often held discussions, in which they agreed that no man should either
do evil, or return evil for evil, or betray the right. Are these
principles to be altered because the circumstances of Socrates are altered?
Crito admits that they remain the same. Then is his escape consistent with
the maintenance of them? To this Crito is unable or unwilling to reply.
Socrates proceeds:--Suppose the Laws of Athens to come and remonstrate with
him: they will ask 'Why does he seek to overturn them?' and if he replies,
'they have injured him,' will not the Laws answer, 'Yes, but was that the
agreement? Has he any objection to make to them which would justify him in
overturning them? Was he not brought into the world and educated by their
help, and are they not his parents? He might have left Athens and gone
where he pleased, but he has lived there for seventy years more constantly
than any other citizen.' Thus he has clearly shown that he acknowledged
the agreement, which he cannot now break without dishonour to himself and
danger to his friends. Even in the course of the trial he might have
proposed exile as the penalty, but then he declared that he preferred death
to exile. And whither will he direct his footsteps? In any well-ordered
state the Laws will consider him as an enemy. Possibly in a land of
misrule like Thessaly he may be welcomed at first, and the unseemly
narrative of his escape will be regarded by the inhabitants as an amusing
tale. But if he offends them he will have to learn another sort of lesson.
Will he continue to give lectures in virtue? That would hardly be decent.
And how will his children be the gainers if he takes them into Thessaly,
and deprives them of Athenian citizenship? Or if he leaves them behind,
does he expect that they will be better taken care of by his friends
because he is in Thessaly? Will not true friends care for them equally
whether he is alive or dead?
Finally, they exhort him to think of justice first, and of life and
children afterwards. He may now depart in peace and innocence, a sufferer
and not a doer of evil. But if he breaks agreements, and returns evil for
evil, they will be angry with him while he lives; and their brethren the
Laws of the world below will receive him as an enemy. Such is the mystic
voice which is always murmuring in his ears.
That Socrates was not a good citizen was a charge made against him during
his lifetime, which has been often repeated in later ages. The crimes of
Alcibiades, Critias, and Charmides, who had been his pupils, were still
recent in the memory of the now restored democracy. The fact that he had
been neutral in the death-struggle of Athens was not likely to conciliate
popular good-will. Plato, writing probably in the next generation,
undertakes the defence of his friend and master in this particular, not to
the Athenians of his day, but to posterity and the world at large.
Whether such an incident ever really occurred as the visit of Crito and the
proposal of escape is uncertain: Plato could easily have invented far more
than that (Phaedr.); and in the selection of Crito, the aged friend, as the
fittest person to make the proposal to Socrates, we seem to recognize the
hand of the artist. Whether any one who has been subjected by the laws of
his country to an unjust judgment is right in attempting to escape, is a
thesis about which casuists might disagree. Shelley (Prose Works) is of
opinion that Socrates 'did well to die,' but not for the 'sophistical'
reasons which Plato has put into his mouth. And there would be no
difficulty in arguing that Socrates should have lived and preferred to a
glorious death the good which he might still be able to perform. 'A
rhetorician would have had much to say upon that point.' It may be
observed however that Plato never intended to answer the question of
casuistry, but only to exhibit the ideal of patient virtue which refuses to
do the least evil in order to avoid the greatest, and to show his master
maintaining in death the opinions which he had professed in his life. Not
'the world,' but the 'one wise man,' is still the paradox of Socrates in
his last hours. He must be guided by reason, although her conclusions may
be fatal to him. The remarkable sentiment that the wicked can do neither
good nor evil is true, if taken in the sense, which he means, of moral
evil; in his own words, 'they cannot make a man wise or foolish.'
This little dialogue is a perfect piece of dialectic, in which granting the
'common principle,' there is no escaping from the conclusion. It is
anticipated at the beginning by the dream of Socrates and the parody of
Homer. The personification of the Laws, and of their brethren the Laws in
the world below, is one of the noblest and boldest figures of speech which
occur in Plato.
CRITO
by
Plato
Translated by Benjamin Jowett
PERSONS OF THE DIALOGUE: Socrates, Crito.
SCENE: The Prison of Socrates.
SOCRATES: Why have you come at this hour, Crito? it must be quite early.
CRITO: Yes, certainly.
SOCRATES: What is the exact time?
CRITO: The dawn is breaking.
SOCRATES: I wonder that the keeper of the prison would let you in.
CRITO: He knows me because I often come, Socrates; moreover, I have done
him a kindness.
SOCRATES: And are you only just arrived?
CRITO: No, I came some time ago.
SOCRATES: Then why did you sit and say nothing, instead of at once
awakening me?
CRITO: I should not have liked myself, Socrates, to be in such great
trouble and unrest as you are--indeed I should not: I have been watching
with amazement your peaceful slumbers; and for that reason I did not awake
you, because I wished to minimize the pain. I have always thought you to
be of a happy disposition; but never did I see anything like the easy,
tranquil manner in which you bear this calamity.
SOCRATES: Why, Crito, when a man has reached my age he ought not to be
repining at the approach of death.
CRITO: And yet other old men find themselves in similar misfortunes, and
age does not prevent them from repining.
SOCRATES: That is true. But you have not told me why you come at this
early hour.
CRITO: I come to bring you a message which is sad and painful; not, as I
believe, to yourself, but to all of us who are your friends, and saddest of
all to me.
SOCRATES: What? Has the ship come from Delos, on the arrival of which I
am to die?
CRITO: No, the ship has not actually arrived, but she will probably be
here to-day, as persons who have come from Sunium tell me that they have
left her there; and therefore to-morrow, Socrates, will be the last day of
your life.
SOCRATES: Very well, Crito; if such is the will of God, I am willing; but
my belief is that there will be a delay of a day.
CRITO: Why do you think so?
SOCRATES: I will tell you. I am to die on the day after the arrival of
the ship?
CRITO: Yes; that is what the authorities say.
SOCRATES: But I do not think that the ship will be here until to-morrow;
this I infer from a vision which I had last night, or rather only just now,
when you fortunately allowed me to sleep.
CRITO: And what was the nature of the vision?
SOCRATES: There appeared to me the likeness of a woman, fair and comely,
clothed in bright raiment, who called to me and said: O Socrates,
'The third day hence to fertile Phthia shalt thou go.' (Homer, Il.)
CRITO: What a singular dream, Socrates!
SOCRATES: There can be no doubt about the meaning, Crito, I think.
CRITO: Yes; the meaning is only too clear. But, oh! my beloved Socrates,
let me entreat you once more to take my advice and escape. For if you die
I shall not only lose a friend who can never be replaced, but there is
another evil: people who do not know you and me will believe that I might
have saved you if I had been willing to give money, but that I did not
care. Now, can there be a worse disgrace than this--that I should be
thought to value money more than the life of a friend? For the many will
not be persuaded that I wanted you to escape, and that you refused.
SOCRATES: But why, my dear Crito, should we care about the opinion of the
many? Good men, and they are the only persons who are worth considering,
will think of these things truly as they occurred.
CRITO: But you see, Socrates, that the opinion of the many must be
regarded, for what is now happening shows that they can do the greatest
evil to any one who has lost their good opinion.
SOCRATES: I only wish it were so, Crito; and that the many could do the
greatest evil; for then they would also be able to do the greatest good--
and what a fine thing this would be! But in reality they can do neither;
for they cannot make a man either wise or foolish; and whatever they do is
the result of chance.
CRITO: Well, I will not dispute with you; but please to tell me, Socrates,
whether you are not acting out of regard to me and your other friends: are
you not afraid that if you escape from prison we may get into trouble with
the informers for having stolen you away, and lose either the whole or a
great part of our property; or that even a worse evil may happen to us?
Now, if you fear on our account, be at ease; for in order to save you, we
ought surely to run this, or even a greater risk; be persuaded, then, and
do as I say.
SOCRATES: Yes, Crito, that is one fear which you mention, but by no means
the only one.
CRITO: Fear not--there are persons who are willing to get you out of
prison at no great cost; and as for the informers they are far from being
exorbitant in their demands--a little money will satisfy them. My means,
which are certainly ample, are at your service, and if you have a scruple
about spending all mine, here are strangers who will give you the use of
theirs; and one of them, Simmias the Theban, has brought a large sum of
money for this very purpose; and Cebes and many others are prepared to
spend their money in helping you to escape. I say, therefore, do not
hesitate on our account, and do not say, as you did in the court (compare
Apol.), that you will have a difficulty in knowing what to do with yourself
anywhere else. For men will love you in other places to which you may go,
and not in Athens only; there are friends of mine in Thessaly, if you like
to go to them, who will value and protect you, and no Thessalian will give
you any trouble. Nor can I think that you are at all justified, Socrates,
in betraying your own life when you might be saved; in acting thus you are
playing into the hands of your enemies, who are hurrying on your
destruction. And further I should say that you are deserting your own
children; for you might bring them up and educate them; instead of which
you go away and leave them, and they will have to take their chance; and if
they do not meet with the usual fate of orphans, there will be small thanks
to you. No man should bring children into the world who is unwilling to
persevere to the end in their nurture and education. But you appear to be
choosing the easier part, not the better and manlier, which would have been
more becoming in one who professes to care for virtue in all his actions,
like yourself. And indeed, I am ashamed not only of you, but of us who are
your friends, when I reflect that the whole business will be attributed
entirely to our want of courage. The trial need never have come on, or
might have been managed differently; and this last act, or crowning folly,
will seem to have occurred through our negligence and cowardice, who might
have saved you, if we had been good for anything; and you might have saved
yourself, for there was no difficulty at all. See now, Socrates, how sad
and discreditable are the consequences, both to us and you. Make up your
mind then, or rather have your mind already made up, for the time of
deliberation is over, and there is only one thing to be done, which must be
done this very night, and if we delay at all will be no longer practicable
or possible; I beseech you therefore, Socrates, be persuaded by me, and do
as I say.
SOCRATES: Dear Crito, your zeal is invaluable, if a right one; but if
wrong, the greater the zeal the greater the danger; and therefore we ought
to consider whether I shall or shall not do as you say. For I am and
always have been one of those natures who must be guided by reason,
whatever the reason may be which upon reflection appears to me to be the
best; and now that this chance has befallen me, I cannot repudiate my own
words: the principles which I have hitherto honoured and revered I still
honour, and unless we can at once find other and better principles, I am
certain not to agree with you; no, not even if the power of the multitude
could inflict many more imprisonments, confiscations, deaths, frightening
us like children with hobgoblin terrors (compare Apol.). What will be the
fairest way of considering the question? Shall I return to your old
argument about the opinions of men?--we were saying that some of them are
to be regarded, and others not. Now were we right in maintaining this
before I was condemned? And has the argument which was once good now
proved to be talk for the sake of talking--mere childish nonsense? That is
what I want to consider with your help, Crito:--whether, under my present
circumstances, the argument appears to be in any way different or not; and
is to be allowed by me or disallowed. That argument, which, as I believe,
is maintained by many persons of authority, was to the effect, as I was
saying, that the opinions of some men are to be regarded, and of other men
not to be regarded. Now you, Crito, are not going to die to-morrow--at
least, there is no human probability of this, and therefore you are
disinterested and not liable to be deceived by the circumstances in which
you are placed. Tell me then, whether I am right in saying that some
opinions, and the opinions of some men only, are to be valued, and that
other opinions, and the opinions of other men, are not to be valued. I ask
you whether I was right in maintaining this?
CRITO: Certainly.
SOCRATES: The good are to be regarded, and not the bad?
CRITO: Yes.
SOCRATES: And the opinions of the wise are good, and the opinions of the
unwise are evil?
CRITO: Certainly.
SOCRATES: And what was said about another matter? Is the pupil who
devotes himself to the practice of gymnastics supposed to attend to the
praise and blame and opinion of every man, or of one man only--his
physician or trainer, whoever he may be?
CRITO: Of one man only.
SOCRATES: And he ought to fear the censure and welcome the praise of that
one only, and not of the many?
CRITO: Clearly so.
SOCRATES: And he ought to act and train, and eat and drink in the way
which seems good to his single master who has understanding, rather than
according to the opinion of all other men put together?
CRITO: True.
SOCRATES: And if he disobeys and disregards the opinion and approval of
the one, and regards the opinion of the many who have no understanding,
will he not suffer evil?
CRITO: Certainly he will.
SOCRATES: And what will the evil be, whither tending and what affecting,
in the disobedient person?
CRITO: Clearly, affecting the body; that is what is destroyed by the evil.
SOCRATES: Very good; and is not this true, Crito, of other things which we
need not separately enumerate? In questions of just and unjust, fair and
foul, good and evil, which are the subjects of our present consultation,
ought we to follow the opinion of the many and to fear them; or the opinion
of the one man who has understanding? ought we not to fear and reverence
him more than all the rest of the world: and if we desert him shall we not
destroy and injure that principle in us which may be assumed to be improved
by justice and deteriorated by injustice;--there is such a principle?
CRITO: Certainly there is, Socrates.
SOCRATES: Take a parallel instance:--if, acting under the advice of those
who have no understanding, we destroy that which is improved by health and
is deteriorated by disease, would life be worth having? And that which has
been destroyed is--the body?
CRITO: Yes.
SOCRATES: Could we live, having an evil and corrupted body?
CRITO: Certainly not.
SOCRATES: And will life be worth having, if that higher part of man be
destroyed, which is improved by justice and depraved by injustice? Do we
suppose that principle, whatever it may be in man, which has to do with
justice and injustice, to be inferior to the body?
CRITO: Certainly not.
SOCRATES: More honourable than the body?
CRITO: Far more.
SOCRATES: Then, my friend, we must not regard what the many say of us:
but what he, the one man who has understanding of just and unjust, will
say, and what the truth will say. And therefore you begin in error when
you advise that we should regard the opinion of the many about just and
unjust, good and evil, honorable and dishonorable.--'Well,' some one will
say, 'but the many can kill us.'
CRITO: Yes, Socrates; that will clearly be the answer.
SOCRATES: And it is true; but still I find with surprise that the old
argument is unshaken as ever. And I should like to know whether I may say
the same of another proposition--that not life, but a good life, is to be
chiefly valued?
CRITO: Yes, that also remains unshaken.
SOCRATES: And a good life is equivalent to a just and honorable one--that
holds also?
CRITO: Yes, it does.
SOCRATES: From these premisses I proceed to argue the question whether I
ought or ought not to try and escape without the consent of the Athenians:
and if I am clearly right in escaping, then I will make the attempt; but if
not, I will abstain. The other considerations which you mention, of money
and loss of character and the duty of educating one's children, are, I
fear, only the doctrines of the multitude, who would be as ready to restore
people to life, if they were able, as they are to put them to death--and
with as little reason. But now, since the argument has thus far prevailed,
the only question which remains to be considered is, whether we shall do
rightly either in escaping or in suffering others to aid in our escape and
paying them in money and thanks, or whether in reality we shall not do
rightly; and if the latter, then death or any other calamity which may
ensue on my remaining here must not be allowed to enter into the
calculation.
CRITO: I think that you are right, Socrates; how then shall we proceed?
SOCRATES: Let us consider the matter together, and do you either refute me
if you can, and I will be convinced; or else cease, my dear friend, from
repeating to me that I ought to escape against the wishes of the Athenians:
for I highly value your attempts to persuade me to do so, but I may not be
persuaded against my own better judgment. And now please to consider my
first position, and try how you can best answer me.
CRITO: I will.
SOCRATES: Are we to say that we are never intentionally to do wrong, or
that in one way we ought and in another way we ought not to do wrong, or is
doing wrong always evil and dishonorable, as I was just now saying, and as
has been already acknowledged by us? Are all our former admissions which
were made within a few days to be thrown away? And have we, at our age,
been earnestly discoursing with one another all our life long only to
discover that we are no better than children? Or, in spite of the opinion
of the many, and in spite of consequences whether better or worse, shall we
insist on the truth of what was then said, that injustice is always an evil
and dishonour to him who acts unjustly? Shall we say so or not?
CRITO: Yes.
SOCRATES: Then we must do no wrong?
CRITO: Certainly not.
SOCRATES: Nor when injured injure in return, as the many imagine; for we
must injure no one at all? (E.g. compare Rep.)
CRITO: Clearly not.
SOCRATES: Again, Crito, may we do evil?
CRITO: Surely not, Socrates.
SOCRATES: And what of doing evil in return for evil, which is the morality
of the many--is that just or not?
CRITO: Not just.
SOCRATES: For doing evil to another is the same as injuring him?
CRITO: Very true.
SOCRATES: Then we ought not to retaliate or render evil for evil to any
one, whatever evil we may have suffered from him. But I would have you
consider, Crito, whether you really mean what you are saying. For this
opinion has never been held, and never will be held, by any considerable
number of persons; and those who are agreed and those who are not agreed
upon this point have no common ground, and can only despise one another
when they see how widely they differ. Tell me, then, whether you agree
with and assent to my first principle, that neither injury nor retaliation
nor warding off evil by evil is ever right. And shall that be the premiss
of our argument? Or do you decline and dissent from this? For so I have
ever thought, and continue to think; but, if you are of another opinion,
let me hear what you have to say. If, however, you remain of the same mind
as formerly, I will proceed to the next step.
CRITO: You may proceed, for I have not changed my mind.
SOCRATES: Then I will go on to the next point, which may be put in the
form of a question:--Ought a man to do what he admits to be right, or ought
he to betray the right?
CRITO: He ought to do what he thinks right.
SOCRATES: But if this is true, what is the application? In leaving the
prison against the will of the Athenians, do I wrong any? or rather do I
not wrong those whom I ought least to wrong? Do I not desert the
principles which were acknowledged by us to be just--what do you say?
CRITO: I cannot tell, Socrates, for I do not know.
SOCRATES: Then consider the matter in this way:--Imagine that I am about
to play truant (you may call the proceeding by any name which you like),
and the laws and the government come and interrogate me: 'Tell us,
Socrates,' they say; 'what are you about? are you not going by an act of
yours to overturn us--the laws, and the whole state, as far as in you lies?
Do you imagine that a state can subsist and not be overthrown, in which the
decisions of law have no power, but are set aside and trampled upon by
individuals?' What will be our answer, Crito, to these and the like words?
Any one, and especially a rhetorician, will have a good deal to say on
behalf of the law which requires a sentence to be carried out. He will
argue that this law should not be set aside; and shall we reply, 'Yes; but
the state has injured us and given an unjust sentence.' Suppose I say
that?
CRITO: Very good, Socrates.
SOCRATES: 'And was that our agreement with you?' the law would answer; 'or
were you to abide by the sentence of the state?' And if I were to express
my astonishment at their words, the law would probably add: 'Answer,
Socrates, instead of opening your eyes--you are in the habit of asking and
answering questions. Tell us,--What complaint have you to make against us
which justifies you in attempting to destroy us and the state? In the
first place did we not bring you into existence? Your father married your
mother by our aid and begat you. Say whether you have any objection to
urge against those of us who regulate marriage?' None, I should reply.
'Or against those of us who after birth regulate the nurture and education
of children, in which you also were trained? Were not the laws, which have
the charge of education, right in commanding your father to train you in
music and gymnastic?' Right, I should reply. 'Well then, since you were
brought into the world and nurtured and educated by us, can you deny in the
first place that you are our child and slave, as your fathers were before
you? And if this is true you are not on equal terms with us; nor can you
think that you have a right to do to us what we are doing to you. Would
you have any right to strike or revile or do any other evil to your father
or your master, if you had one, because you have been struck or reviled by
him, or received some other evil at his hands?--you would not say this?
And because we think right to destroy you, do you think that you have any
right to destroy us in return, and your country as far as in you lies?
Will you, O professor of true virtue, pretend that you are justified in
this? Has a philosopher like you failed to discover that our country is
more to be valued and higher and holier far than mother or father or any
ancestor, and more to be regarded in the eyes of the gods and of men of
understanding? also to be soothed, and gently and reverently entreated when
angry, even more than a father, and either to be persuaded, or if not
persuaded, to be obeyed? And when we are punished by her, whether with
imprisonment or stripes, the punishment is to be endured in silence; and if
she lead us to wounds or death in battle, thither we follow as is right;
neither may any one yield or retreat or leave his rank, but whether in
battle or in a court of law, or in any other place, he must do what his
city and his country order him; or he must change their view of what is
just: and if he may do no violence to his father or mother, much less may
he do violence to his country.' What answer shall we make to this, Crito?
Do the laws speak truly, or do they not?
CRITO: I think that they do.
SOCRATES: Then the laws will say: 'Consider, Socrates, if we are speaking
truly that in your present attempt you are going to do us an injury. For,
having brought you into the world, and nurtured and educated you, and given
you and every other citizen a share in every good which we had to give, we
further proclaim to any Athenian by the liberty which we allow him, that if
he does not like us when he has become of age and has seen the ways of the
city, and made our acquaintance, he may go where he pleases and take his
goods with him. None of us laws will forbid him or interfere with him.
Any one who does not like us and the city, and who wants to emigrate to a
colony or to any other city, may go where he likes, retaining his property.
But he who has experience of the manner in which we order justice and
administer the state, and still remains, has entered into an implied
contract that he will do as we command him. And he who disobeys us is, as
we maintain, thrice wrong: first, because in disobeying us he is
disobeying his parents; secondly, because we are the authors of his
education; thirdly, because he has made an agreement with us that he will
duly obey our commands; and he neither obeys them nor convinces us that our
commands are unjust; and we do not rudely impose them, but give him the
alternative of obeying or convincing us;--that is what we offer, and he
does neither.
'These are the sort of accusations to which, as we were saying, you,
Socrates, will be exposed if you accomplish your intentions; you, above all
other Athenians.' Suppose now I ask, why I rather than anybody else? they
will justly retort upon me that I above all other men have acknowledged the
agreement. 'There is clear proof,' they will say, 'Socrates, that we and
the city were not displeasing to you. Of all Athenians you have been the
most constant resident in the city, which, as you never leave, you may be
supposed to love (compare Phaedr.). For you never went out of the city
either to see the games, except once when you went to the Isthmus, or to
any other place unless when you were on military service; nor did you
travel as other men do. Nor had you any curiosity to know other states or
their laws: your affections did not go beyond us and our state; we were
your especial favourites, and you acquiesced in our government of you; and
here in this city you begat your children, which is a proof of your
satisfaction. Moreover, you might in the course of the trial, if you had
liked, have fixed the penalty at banishment; the state which refuses to let
you go now would have let you go then. But you pretended that you
preferred death to exile (compare Apol.), and that you were not unwilling
to die. And now you have forgotten these fine sentiments, and pay no
respect to us the laws, of whom you are the destroyer; and are doing what
only a miserable slave would do, running away and turning your back upon
the compacts and agreements which you made as a citizen. And first of all
answer this very question: Are we right in saying that you agreed to be
governed according to us in deed, and not in word only? Is that true or
not?' How shall we answer, Crito? Must we not assent?
CRITO: We cannot help it, Socrates.
SOCRATES: Then will they not say: 'You, Socrates, are breaking the
covenants and agreements which you made with us at your leisure, not in any
haste or under any compulsion or deception, but after you have had seventy
years to think of them, during which time you were at liberty to leave the
city, if we were not to your mind, or if our covenants appeared to you to
be unfair. You had your choice, and might have gone either to Lacedaemon
or Crete, both which states are often praised by you for their good
government, or to some other Hellenic or foreign state. Whereas you, above
all other Athenians, seemed to be so fond of the state, or, in other words,
of us her laws (and who would care about a state which has no laws?), that
you never stirred out of her; the halt, the blind, the maimed, were not
more stationary in her than you were. And now you run away and forsake
your agreements. Not so, Socrates, if you will take our advice; do not
make yourself ridiculous by escaping out of the city.
'For just consider, if you transgress and err in this sort of way, what
good will you do either to yourself or to your friends? That your friends
will be driven into exile and deprived of citizenship, or will lose their
property, is tolerably certain; and you yourself, if you fly to one of the
neighbouring cities, as, for example, Thebes or Megara, both of which are
well governed, will come to them as an enemy, Socrates, and their
government will be against you, and all patriotic citizens will cast an
evil eye upon you as a subverter of the laws, and you will confirm in the
minds of the judges the justice of their own condemnation of you. For he
who is a corrupter of the laws is more than likely to be a corrupter of the
young and foolish portion of mankind. Will you then flee from well-ordered
cities and virtuous men? and is existence worth having on these terms? Or
will you go to them without shame, and talk to them, Socrates? And what
will you say to them? What you say here about virtue and justice and
institutions and laws being the best things among men? Would that be
decent of you? Surely not. But if you go away from well-governed states
to Crito's friends in Thessaly, where there is great disorder and licence,
they will be charmed to hear the tale of your escape from prison, set off
with ludicrous particulars of the manner in which you were wrapped in a
goatskin or some other disguise, and metamorphosed as the manner is of
runaways; but will there be no one to remind you that in your old age you
were not ashamed to violate the most sacred laws from a miserable desire of
a little more life? Perhaps not, if you keep them in a good temper; but if
they are out of temper you will hear many degrading things; you will live,
but how?--as the flatterer of all men, and the servant of all men; and
doing what?--eating and drinking in Thessaly, having gone abroad in order
that you may get a dinner. And where will be your fine sentiments about
justice and virtue? Say that you wish to live for the sake of your
children--you want to bring them up and educate them--will you take them
into Thessaly and deprive them of Athenian citizenship? Is this the
benefit which you will confer upon them? Or are you under the impression
that they will be better cared for and educated here if you are still
alive, although absent from them; for your friends will take care of them?
Do you fancy that if you are an inhabitant of Thessaly they will take care
of them, and if you are an inhabitant of the other world that they will not
take care of them? Nay; but if they who call themselves friends are good
for anything, they will--to be sure they will.
'Listen, then, Socrates, to us who have brought you up. Think not of life
and children first, and of justice afterwards, but of justice first, that
you may be justified before the princes of the world below. For neither
will you nor any that belong to you be happier or holier or juster in this
life, or happier in another, if you do as Crito bids. Now you depart in
innocence, a sufferer and not a doer of evil; a victim, not of the laws,
but of men. But if you go forth, returning evil for evil, and injury for
injury, breaking the covenants and agreements which you have made with us,
and wronging those whom you ought least of all to wrong, that is to say,
yourself, your friends, your country, and us, we shall be angry with you
while you live, and our brethren, the laws in the world below, will receive
you as an enemy; for they will know that you have done your best to destroy
us. Listen, then, to us and not to Crito.'
This, dear Crito, is the voice which I seem to hear murmuring in my ears,
like the sound of the flute in the ears of the mystic; that voice, I say,
is humming in my ears, and prevents me from hearing any other. And I know
that anything more which you may say will be vain. Yet speak, if you have
anything to say.
CRITO: I have nothing to say, Socrates.
SOCRATES: Leave me then, Crito, to fulfil the will of God, and to follow
whither he leads.
|
What was the purpose of Crito's visit?
|
To smuggle Socrates out of prison and into a life of exile.
| 6,592
|
narrativeqa
|
8k
|
Produced by Sue Asscher
The Witch of Atlas
by
Percy Bysshe Shelley
TO MARY
(ON HER OBJECTING TO THE FOLLOWING POEM, UPON THE
SCORE OF ITS CONTAINING NO HUMAN INTEREST).
1.
How, my dear Mary,--are you critic-bitten
(For vipers kill, though dead) by some review,
That you condemn these verses I have written,
Because they tell no story, false or true?
What, though no mice are caught by a young kitten, _5
May it not leap and play as grown cats do,
Till its claws come? Prithee, for this one time,
Content thee with a visionary rhyme.
2.
What hand would crush the silken-winged fly,
The youngest of inconstant April's minions, _10
Because it cannot climb the purest sky,
Where the swan sings, amid the sun's dominions?
Not thine. Thou knowest 'tis its doom to die,
When Day shall hide within her twilight pinions
The lucent eyes, and the eternal smile, _15
Serene as thine, which lent it life awhile.
3.
To thy fair feet a winged Vision came,
Whose date should have been longer than a day,
And o'er thy head did beat its wings for fame,
And in thy sight its fading plumes display; _20
The watery bow burned in the evening flame.
But the shower fell, the swift Sun went his way--
And that is dead.--O, let me not believe
That anything of mine is fit to live!
4.
Wordsworth informs us he was nineteen years _25
Considering and retouching Peter Bell;
Watering his laurels with the killing tears
Of slow, dull care, so that their roots to Hell
Might pierce, and their wide branches blot the spheres
Of Heaven, with dewy leaves and flowers; this well _30
May be, for Heaven and Earth conspire to foil
The over-busy gardener's blundering toil.
5.
My Witch indeed is not so sweet a creature
As Ruth or Lucy, whom his graceful praise
Clothes for our grandsons--but she matches Peter, _35
Though he took nineteen years, and she three days
In dressing. Light the vest of flowing metre
She wears; he, proud as dandy with his stays,
Has hung upon his wiry limbs a dress
Like King Lear's 'looped and windowed raggedness.' _40
6.
If you strip Peter, you will see a fellow
Scorched by Hell's hyperequatorial climate
Into a kind of a sulphureous yellow:
A lean mark, hardly fit to fling a rhyme at;
In shape a Scaramouch, in hue Othello. _45
If you unveil my Witch, no priest nor primate
Can shrive you of that sin,--if sin there be
In love, when it becomes idolatry.
THE WITCH OF ATLAS.
1.
Before those cruel Twins, whom at one birth
Incestuous Change bore to her father Time, _50
Error and Truth, had hunted from the Earth
All those bright natures which adorned its prime,
And left us nothing to believe in, worth
The pains of putting into learned rhyme,
A lady-witch there lived on Atlas' mountain _55
Within a cavern, by a secret fountain.
2.
Her mother was one of the Atlantides:
The all-beholding Sun had ne'er beholden
In his wide voyage o'er continents and seas
So fair a creature, as she lay enfolden _60
In the warm shadow of her loveliness;--
He kissed her with his beams, and made all golden
The chamber of gray rock in which she lay--
She, in that dream of joy, dissolved away.
3.
'Tis said, she first was changed into a vapour, _65
And then into a cloud, such clouds as flit,
Like splendour-winged moths about a taper,
Round the red west when the sun dies in it:
And then into a meteor, such as caper
On hill-tops when the moon is in a fit: _70
Then, into one of those mysterious stars
Which hide themselves between the Earth and Mars.
4.
Ten times the Mother of the Months had bent
Her bow beside the folding-star, and bidden
With that bright sign the billows to indent _75
The sea-deserted sand--like children chidden,
At her command they ever came and went--
Since in that cave a dewy splendour hidden
Took shape and motion: with the living form
Of this embodied Power, the cave grew warm. _80
5.
A lovely lady garmented in light
From her own beauty--deep her eyes, as are
Two openings of unfathomable night
Seen through a Temple's cloven roof--her hair
Dark--the dim brain whirls dizzy with delight, _85
Picturing her form; her soft smiles shone afar,
And her low voice was heard like love, and drew
All living things towards this wonder new.
6.
And first the spotted cameleopard came,
And then the wise and fearless elephant; _90
Then the sly serpent, in the golden flame
Of his own volumes intervolved;--all gaunt
And sanguine beasts her gentle looks made tame.
They drank before her at her sacred fount;
And every beast of beating heart grew bold, _95
Such gentleness and power even to behold.
7.
The brinded lioness led forth her young,
That she might teach them how they should forego
Their inborn thirst of death; the pard unstrung
His sinews at her feet, and sought to know _100
With looks whose motions spoke without a tongue
How he might be as gentle as the doe.
The magic circle of her voice and eyes
All savage natures did imparadise.
8.
And old Silenus, shaking a green stick _105
Of lilies, and the wood-gods in a crew
Came, blithe, as in the olive copses thick
Cicadae are, drunk with the noonday dew:
And Dryope and Faunus followed quick,
Teasing the God to sing them something new; _110
Till in this cave they found the lady lone,
Sitting upon a seat of emerald stone.
9.
And universal Pan, 'tis said, was there,
And though none saw him,--through the adamant
Of the deep mountains, through the trackless air, _115
And through those living spirits, like a want,
He passed out of his everlasting lair
Where the quick heart of the great world doth pant,
And felt that wondrous lady all alone,--
And she felt him, upon her emerald throne. _120
10.
And every nymph of stream and spreading tree,
And every shepherdess of Ocean's flocks,
Who drives her white waves over the green sea,
And Ocean with the brine on his gray locks,
And quaint Priapus with his company, _125
All came, much wondering how the enwombed rocks
Could have brought forth so beautiful a birth;--
Her love subdued their wonder and their mirth.
11.
The herdsmen and the mountain maidens came,
And the rude kings of pastoral Garamant-- _130
Their spirits shook within them, as a flame
Stirred by the air under a cavern gaunt:
Pigmies, and Polyphemes, by many a name,
Centaurs, and Satyrs, and such shapes as haunt
Wet clefts,--and lumps neither alive nor dead, _135
Dog-headed, bosom-eyed, and bird-footed.
12.
For she was beautiful--her beauty made
The bright world dim, and everything beside
Seemed like the fleeting image of a shade:
No thought of living spirit could abide, _140
Which to her looks had ever been betrayed,
On any object in the world so wide,
On any hope within the circling skies,
But on her form, and in her inmost eyes.
13.
Which when the lady knew, she took her spindle _145
And twined three threads of fleecy mist, and three
Long lines of light, such as the dawn may kindle
The clouds and waves and mountains with; and she
As many star-beams, ere their lamps could dwindle
In the belated moon, wound skilfully; _150
And with these threads a subtle veil she wove--
A shadow for the splendour of her love.
14.
The deep recesses of her odorous dwelling
Were stored with magic treasures--sounds of air,
Which had the power all spirits of compelling, _155
Folded in cells of crystal silence there;
Such as we hear in youth, and think the feeling
Will never die--yet ere we are aware,
The feeling and the sound are fled and gone,
And the regret they leave remains alone. _160
15.
And there lay Visions swift, and sweet, and quaint,
Each in its thin sheath, like a chrysalis,
Some eager to burst forth, some weak and faint
With the soft burthen of intensest bliss.
It was its work to bear to many a saint _165
Whose heart adores the shrine which holiest is,
Even Love's:--and others white, green, gray, and black,
And of all shapes--and each was at her beck.
16.
And odours in a kind of aviary
Of ever-blooming Eden-trees she kept, _170
Clipped in a floating net, a love-sick Fairy
Had woven from dew-beams while the moon yet slept;
As bats at the wired window of a dairy,
They beat their vans; and each was an adept,
When loosed and missioned, making wings of winds, _175
To stir sweet thoughts or sad, in destined minds.
17.
And liquors clear and sweet, whose healthful might
Could medicine the sick soul to happy sleep,
And change eternal death into a night
Of glorious dreams--or if eyes needs must weep, _180
Could make their tears all wonder and delight,
She in her crystal vials did closely keep:
If men could drink of those clear vials, 'tis said
The living were not envied of the dead.
18.
Her cave was stored with scrolls of strange device, _185
The works of some Saturnian Archimage,
Which taught the expiations at whose price
Men from the Gods might win that happy age
Too lightly lost, redeeming native vice;
And which might quench the Earth-consuming rage _190
Of gold and blood--till men should live and move
Harmonious as the sacred stars above;
19.
And how all things that seem untameable,
Not to be checked and not to be confined,
Obey the spells of Wisdom's wizard skill; _195
Time, earth, and fire--the ocean and the wind,
And all their shapes--and man's imperial will;
And other scrolls whose writings did unbind
The inmost lore of Love--let the profane
Tremble to ask what secrets they contain. _200
20.
And wondrous works of substances unknown,
To which the enchantment of her father's power
Had changed those ragged blocks of savage stone,
Were heaped in the recesses of her bower;
Carved lamps and chalices, and vials which shone _205
In their own golden beams--each like a flower,
Out of whose depth a fire-fly shakes his light
Under a cypress in a starless night.
21.
At first she lived alone in this wild home,
And her own thoughts were each a minister, _210
Clothing themselves, or with the ocean foam,
Or with the wind, or with the speed of fire,
To work whatever purposes might come
Into her mind; such power her mighty Sire
Had girt them with, whether to fly or run, _215
Through all the regions which he shines upon.
22.
The Ocean-nymphs and Hamadryades,
Oreads and Naiads, with long weedy locks,
Offered to do her bidding through the seas,
Under the earth, and in the hollow rocks, _220
And far beneath the matted roots of trees,
And in the gnarled heart of stubborn oaks,
So they might live for ever in the light
Of her sweet presence--each a satellite.
23.
'This may not be,' the wizard maid replied; _225
'The fountains where the Naiades bedew
Their shining hair, at length are drained and dried;
The solid oaks forget their strength, and strew
Their latest leaf upon the mountains wide;
The boundless ocean like a drop of dew _230
Will be consumed--the stubborn centre must
Be scattered, like a cloud of summer dust.
24.
'And ye with them will perish, one by one;--
If I must sigh to think that this shall be,
If I must weep when the surviving Sun _235
Shall smile on your decay--oh, ask not me
To love you till your little race is run;
I cannot die as ye must--over me
Your leaves shall glance--the streams in which ye dwell
Shall be my paths henceforth, and so--farewell!'-- _240
25.
She spoke and wept:--the dark and azure well
Sparkled beneath the shower of her bright tears,
And every little circlet where they fell
Flung to the cavern-roof inconstant spheres
And intertangled lines of light:--a knell _245
Of sobbing voices came upon her ears
From those departing Forms, o'er the serene
Of the white streams and of the forest green.
26.
All day the wizard lady sate aloof,
Spelling out scrolls of dread antiquity, _250
Under the cavern's fountain-lighted roof;
Or broidering the pictured poesy
Of some high tale upon her growing woof,
Which the sweet splendour of her smiles could dye
In hues outshining heaven--and ever she _255
Added some grace to the wrought poesy.
27.
While on her hearth lay blazing many a piece
Of sandal wood, rare gums, and cinnamon;
Men scarcely know how beautiful fire is--
Each flame of it is as a precious stone _260
Dissolved in ever-moving light, and this
Belongs to each and all who gaze upon.
The Witch beheld it not, for in her hand
She held a woof that dimmed the burning brand.
28.
This lady never slept, but lay in trance _265
All night within the fountain--as in sleep.
Its emerald crags glowed in her beauty's glance;
Through the green splendour of the water deep
She saw the constellations reel and dance
Like fire-flies--and withal did ever keep _270
The tenour of her contemplations calm,
With open eyes, closed feet, and folded palm.
29.
And when the whirlwinds and the clouds descended
From the white pinnacles of that cold hill,
She passed at dewfall to a space extended, _275
Where in a lawn of flowering asphodel
Amid a wood of pines and cedars blended,
There yawned an inextinguishable well
Of crimson fire--full even to the brim,
And overflowing all the margin trim. _280
30.
Within the which she lay when the fierce war
Of wintry winds shook that innocuous liquor
In many a mimic moon and bearded star
O'er woods and lawns;--the serpent heard it flicker
In sleep, and dreaming still, he crept afar-- _285
And when the windless snow descended thicker
Than autumn leaves, she watched it as it came
Melt on the surface of the level flame.
31.
She had a boat, which some say Vulcan wrought
For Venus, as the chariot of her star; _290
But it was found too feeble to be fraught
With all the ardours in that sphere which are,
And so she sold it, and Apollo bought
And gave it to this daughter: from a car
Changed to the fairest and the lightest boat _295
Which ever upon mortal stream did float.
32.
And others say, that, when but three hours old,
The first-born Love out of his cradle lept,
And clove dun Chaos with his wings of gold,
And like a horticultural adept, _300
Stole a strange seed, and wrapped it up in mould,
And sowed it in his mother's star, and kept
Watering it all the summer with sweet dew,
And with his wings fanning it as it grew.
33.
The plant grew strong and green, the snowy flower _305
Fell, and the long and gourd-like fruit began
To turn the light and dew by inward power
To its own substance; woven tracery ran
Of light firm texture, ribbed and branching, o'er
The solid rind, like a leaf's veined fan-- _310
Of which Love scooped this boat--and with soft motion
Piloted it round the circumfluous ocean.
34.
This boat she moored upon her fount, and lit
A living spirit within all its frame,
Breathing the soul of swiftness into it. _315
Couched on the fountain like a panther tame,
One of the twain at Evan's feet that sit--
Or as on Vesta's sceptre a swift flame--
Or on blind Homer's heart a winged thought,--
In joyous expectation lay the boat. _320
35.
Then by strange art she kneaded fire and snow
Together, tempering the repugnant mass
With liquid love--all things together grow
Through which the harmony of love can pass;
And a fair Shape out of her hands did flow-- _325
A living Image, which did far surpass
In beauty that bright shape of vital stone
Which drew the heart out of Pygmalion.
36.
A sexless thing it was, and in its growth
It seemed to have developed no defect _330
Of either sex, yet all the grace of both,--
In gentleness and strength its limbs were decked;
The bosom swelled lightly with its full youth,
The countenance was such as might select
Some artist that his skill should never die, _335
Imaging forth such perfect purity.
37.
From its smooth shoulders hung two rapid wings,
Fit to have borne it to the seventh sphere,
Tipped with the speed of liquid lightenings,
Dyed in the ardours of the atmosphere: _340
She led her creature to the boiling springs
Where the light boat was moored, and said: 'Sit here!'
And pointed to the prow, and took her seat
Beside the rudder, with opposing feet.
38.
And down the streams which clove those mountains vast, _345
Around their inland islets, and amid
The panther-peopled forests whose shade cast
Darkness and odours, and a pleasure hid
In melancholy gloom, the pinnace passed;
By many a star-surrounded pyramid _350
Of icy crag cleaving the purple sky,
And caverns yawning round unfathomably.
39.
The silver noon into that winding dell,
With slanted gleam athwart the forest tops,
Tempered like golden evening, feebly fell; _355
A green and glowing light, like that which drops
From folded lilies in which glow-worms dwell,
When Earth over her face Night's mantle wraps;
Between the severed mountains lay on high,
Over the stream, a narrow rift of sky. _360
40.
And ever as she went, the Image lay
With folded wings and unawakened eyes;
And o'er its gentle countenance did play
The busy dreams, as thick as summer flies,
Chasing the rapid smiles that would not stay, _365
And drinking the warm tears, and the sweet sighs
Inhaling, which, with busy murmur vain,
They had aroused from that full heart and brain.
41.
And ever down the prone vale, like a cloud
Upon a stream of wind, the pinnace went: _370
Now lingering on the pools, in which abode
The calm and darkness of the deep content
In which they paused; now o'er the shallow road
Of white and dancing waters, all besprent
With sand and polished pebbles:--mortal boat _375
In such a shallow rapid could not float.
42.
And down the earthquaking cataracts which shiver
Their snow-like waters into golden air,
Or under chasms unfathomable ever
Sepulchre them, till in their rage they tear _380
A subterranean portal for the river,
It fled--the circling sunbows did upbear
Its fall down the hoar precipice of spray,
Lighting it far upon its lampless way.
43.
And when the wizard lady would ascend _385
The labyrinths of some many-winding vale,
Which to the inmost mountain upward tend--
She called 'Hermaphroditus!'--and the pale
And heavy hue which slumber could extend
Over its lips and eyes, as on the gale _390
A rapid shadow from a slope of grass,
Into the darkness of the stream did pass.
44.
And it unfurled its heaven-coloured pinions,
With stars of fire spotting the stream below;
And from above into the Sun's dominions _395
Flinging a glory, like the golden glow
In which Spring clothes her emerald-winged minions,
All interwoven with fine feathery snow
And moonlight splendour of intensest rime,
With which frost paints the pines in winter time. _400
45.
And then it winnowed the Elysian air
Which ever hung about that lady bright,
With its aethereal vans--and speeding there,
Like a star up the torrent of the night,
Or a swift eagle in the morning glare _405
Breasting the whirlwind with impetuous flight,
The pinnace, oared by those enchanted wings,
Clove the fierce streams towards their upper springs.
46.
The water flashed, like sunlight by the prow
Of a noon-wandering meteor flung to Heaven; _410
The still air seemed as if its waves did flow
In tempest down the mountains; loosely driven
The lady's radiant hair streamed to and fro:
Beneath, the billows having vainly striven
Indignant and impetuous, roared to feel _415
The swift and steady motion of the keel.
47.
Or, when the weary moon was in the wane,
Or in the noon of interlunar night,
The lady-witch in visions could not chain
Her spirit; but sailed forth under the light _420
Of shooting stars, and bade extend amain
Its storm-outspeeding wings, the Hermaphrodite;
She to the Austral waters took her way,
Beyond the fabulous Thamondocana,--
48.
Where, like a meadow which no scythe has shaven, _425
Which rain could never bend, or whirl-blast shake,
With the Antarctic constellations paven,
Canopus and his crew, lay the Austral lake--
There she would build herself a windless haven
Out of the clouds whose moving turrets make _430
The bastions of the storm, when through the sky
The spirits of the tempest thundered by:
49.
A haven beneath whose translucent floor
The tremulous stars sparkled unfathomably,
And around which the solid vapours hoar, _435
Based on the level waters, to the sky
Lifted their dreadful crags, and like a shore
Of wintry mountains, inaccessibly
Hemmed in with rifts and precipices gray,
And hanging crags, many a cove and bay. _440
50.
And whilst the outer lake beneath the lash
Of the wind's scourge, foamed like a wounded thing,
And the incessant hail with stony clash
Ploughed up the waters, and the flagging wing
Of the roused cormorant in the lightning flash _445
Looked like the wreck of some wind-wandering
Fragment of inky thunder-smoke--this haven
Was as a gem to copy Heaven engraven,--
51.
On which that lady played her many pranks,
Circling the image of a shooting star, _450
Even as a tiger on Hydaspes' banks
Outspeeds the antelopes which speediest are,
In her light boat; and many quips and cranks
She played upon the water, till the car
Of the late moon, like a sick matron wan, _455
To journey from the misty east began.
52.
And then she called out of the hollow turrets
Of those high clouds, white, golden and vermilion,
The armies of her ministering spirits--
In mighty legions, million after million, _460
They came, each troop emblazoning its merits
On meteor flags; and many a proud pavilion
Of the intertexture of the atmosphere
They pitched upon the plain of the calm mere.
53.
They framed the imperial tent of their great Queen _465
Of woven exhalations, underlaid
With lambent lightning-fire, as may be seen
A dome of thin and open ivory inlaid
With crimson silk--cressets from the serene
Hung there, and on the water for her tread _470
A tapestry of fleece-like mist was strewn,
Dyed in the beams of the ascending moon.
54.
And on a throne o'erlaid with starlight, caught
Upon those wandering isles of aery dew,
Which highest shoals of mountain shipwreck not, _475
She sate, and heard all that had happened new
Between the earth and moon, since they had brought
The last intelligence--and now she grew
Pale as that moon, lost in the watery night--
And now she wept, and now she laughed outright. _480
55.
These were tame pleasures; she would often climb
The steepest ladder of the crudded rack
Up to some beaked cape of cloud sublime,
And like Arion on the dolphin's back
Ride singing through the shoreless air;--oft-time _485
Following the serpent lightning's winding track,
She ran upon the platforms of the wind,
And laughed to hear the fire-balls roar behind.
56.
And sometimes to those streams of upper air
Which whirl the earth in its diurnal round, _490
She would ascend, and win the spirits there
To let her join their chorus. Mortals found
That on those days the sky was calm and fair,
And mystic snatches of harmonious sound
Wandered upon the earth where'er she passed, _495
And happy thoughts of hope, too sweet to last.
57.
But her choice sport was, in the hours of sleep,
To glide adown old Nilus, where he threads
Egypt and Aethiopia, from the steep
Of utmost Axume, until he spreads, _500
Like a calm flock of silver-fleeced sheep,
His waters on the plain: and crested heads
Of cities and proud temples gleam amid,
And many a vapour-belted pyramid.
58.
By Moeris and the Mareotid lakes, _505
Strewn with faint blooms like bridal chamber floors,
Where naked boys bridling tame water-snakes,
Or charioteering ghastly alligators,
Had left on the sweet waters mighty wakes
Of those huge forms--within the brazen doors _510
Of the great Labyrinth slept both boy and beast,
Tired with the pomp of their Osirian feast.
59.
And where within the surface of the river
The shadows of the massy temples lie,
And never are erased--but tremble ever _515
Like things which every cloud can doom to die,
Through lotus-paven canals, and wheresoever
The works of man pierced that serenest sky
With tombs, and towers, and fanes, 'twas her delight
To wander in the shadow of the night. _520
60.
With motion like the spirit of that wind
Whose soft step deepens slumber, her light feet
Passed through the peopled haunts of humankind.
Scattering sweet visions from her presence sweet,
Through fane, and palace-court, and labyrinth mined _525
With many a dark and subterranean street
Under the Nile, through chambers high and deep
She passed, observing mortals in their sleep.
61.
A pleasure sweet doubtless it was to see
Mortals subdued in all the shapes of sleep. _530
Here lay two sister twins in infancy;
There, a lone youth who in his dreams did weep;
Within, two lovers linked innocently
In their loose locks which over both did creep
Like ivy from one stem;--and there lay calm _535
Old age with snow-bright hair and folded palm.
62.
But other troubled forms of sleep she saw,
Not to be mirrored in a holy song--
Distortions foul of supernatural awe,
And pale imaginings of visioned wrong; _540
And all the code of Custom's lawless law
Written upon the brows of old and young:
'This,' said the wizard maiden, 'is the strife
Which stirs the liquid surface of man's life.'
63.
And little did the sight disturb her soul.-- _545
We, the weak mariners of that wide lake
Where'er its shores extend or billows roll,
Our course unpiloted and starless make
O'er its wild surface to an unknown goal:--
But she in the calm depths her way could take, _550
Where in bright bowers immortal forms abide
Beneath the weltering of the restless tide.
64.
And she saw princes couched under the glow
Of sunlike gems; and round each temple-court
In dormitories ranged, row after row, _555
She saw the priests asleep--all of one sort--
For all were educated to be so.--
The peasants in their huts, and in the port
The sailors she saw cradled on the waves,
And the dead lulled within their dreamless graves. _560
65.
And all the forms in which those spirits lay
Were to her sight like the diaphanous
Veils, in which those sweet ladies oft array
Their delicate limbs, who would conceal from us
Only their scorn of all concealment: they _565
Move in the light of their own beauty thus.
But these and all now lay with sleep upon them,
And little thought a Witch was looking on them.
66.
She, all those human figures breathing there,
Beheld as living spirits--to her eyes _570
The naked beauty of the soul lay bare,
And often through a rude and worn disguise
She saw the inner form most bright and fair--
And then she had a charm of strange device,
Which, murmured on mute lips with tender tone, _575
Could make that spirit mingle with her own.
67.
Alas! Aurora, what wouldst thou have given
For such a charm when Tithon became gray?
Or how much, Venus, of thy silver heaven
Wouldst thou have yielded, ere Proserpina _580
Had half (oh! why not all?) the debt forgiven
Which dear Adonis had been doomed to pay,
To any witch who would have taught you it?
The Heliad doth not know its value yet.
68.
'Tis said in after times her spirit free _585
Knew what love was, and felt itself alone--
But holy Dian could not chaster be
Before she stooped to kiss Endymion,
Than now this lady--like a sexless bee
Tasting all blossoms, and confined to none, _590
Among those mortal forms, the wizard-maiden
Passed with an eye serene and heart unladen.
69.
To those she saw most beautiful, she gave
Strange panacea in a crystal bowl:--
They drank in their deep sleep of that sweet wave, _595
And lived thenceforward as if some control,
Mightier than life, were in them; and the grave
Of such, when death oppressed the weary soul,
Was as a green and overarching bower
Lit by the gems of many a starry flower. _600
70.
For on the night when they were buried, she
Restored the embalmers' ruining, and shook
The light out of the funeral lamps, to be
A mimic day within that deathy nook;
And she unwound the woven imagery _605
Of second childhood's swaddling bands, and took
The coffin, its last cradle, from its niche,
And threw it with contempt into a ditch.
71.
And there the body lay, age after age.
Mute, breathing, beating, warm, and undecaying, _610
Like one asleep in a green hermitage,
With gentle smiles about its eyelids playing,
And living in its dreams beyond the rage
Of death or life; while they were still arraying
In liveries ever new, the rapid, blind _615
And fleeting generations of mankind.
72.
And she would write strange dreams upon the brain
Of those who were less beautiful, and make
All harsh and crooked purposes more vain
Than in the desert is the serpent's wake _620
Which the sand covers--all his evil gain
The miser in such dreams would rise and shake
Into a beggar's lap;--the lying scribe
Would his own lies betray without a bribe.
73.
The priests would write an explanation full, _625
Translating hieroglyphics into Greek,
How the God Apis really was a bull,
And nothing more; and bid the herald stick
The same against the temple doors, and pull
The old cant down; they licensed all to speak _630
Whate'er they thought of hawks, and cats, and geese,
By pastoral letters to each diocese.
74.
The king would dress an ape up in his crown
And robes, and seat him on his glorious seat,
And on the right hand of the sunlike throne _635
Would place a gaudy mock-bird to repeat
The chatterings of the monkey.--Every one
Of the prone courtiers crawled to kiss the feet
Of their great Emperor, when the morning came,
And kissed--alas, how many kiss the same! _640
75.
The soldiers dreamed that they were blacksmiths, and
Walked out of quarters in somnambulism;
Round the red anvils you might see them stand
Like Cyclopses in Vulcan's sooty abysm,
Beating their swords to ploughshares;--in a band _645
The gaolers sent those of the liberal schism
Free through the streets of Memphis, much, I wis,
To the annoyance of king Amasis.
76.
And timid lovers who had been so coy,
They hardly knew whether they loved or not, _650
Would rise out of their rest, and take sweet joy,
To the fulfilment of their inmost thought;
And when next day the maiden and the boy
Met one another, both, like sinners caught,
Blushed at the thing which each believed was done _655
Only in fancy--till the tenth moon shone;
77.
And then the Witch would let them take no ill:
Of many thousand schemes which lovers find,
The Witch found one,--and so they took their fill
Of happiness in marriage warm and kind. _660
Friends who, by practice of some envious skill,
Were torn apart--a wide wound, mind from mind!--
She did unite again with visions clear
Of deep affection and of truth sincere.
78.
These were the pranks she played among the cities _665
Of mortal men, and what she did to Sprites
And Gods, entangling them in her sweet ditties
To do her will, and show their subtle sleights,
I will declare another time; for it is
A tale more fit for the weird winter nights _670
Than for these garish summer days, when we
Scarcely believe much more than we can see.
End of Project Gutenberg's The Witch of Atlas, by Percy Bysshe Shelley
|
Who did the Witch want to have reveal their own lies?
|
The scribe.
| 5,403
|
narrativeqa
|
8k
|
Produced by John Bickers, and Dagny
LA GRANDE BRETECHE
(Sequel to "Another Study of Woman.")
By Honore De Balzac
Translated by Ellen Marriage and Clara Bell
LA GRANDE BRETECHE
"Ah! madame," replied the doctor, "I have some appalling stories in my
collection. But each one has its proper hour in a conversation--you know
the pretty jest recorded by Chamfort, and said to the Duc de Fronsac:
'Between your sally and the present moment lie ten bottles of
champagne.'"
"But it is two in the morning, and the story of Rosina has prepared us,"
said the mistress of the house.
"Tell us, Monsieur Bianchon!" was the cry on every side.
The obliging doctor bowed, and silence reigned.
"At about a hundred paces from Vendome, on the banks of the Loir," said
he, "stands an old brown house, crowned with very high roofs, and so
completely isolated that there is nothing near it, not even a fetid
tannery or a squalid tavern, such as are commonly seen outside small
towns. In front of this house is a garden down to the river, where the
box shrubs, formerly clipped close to edge the walks, now straggle
at their own will. A few willows, rooted in the stream, have grown
up quickly like an enclosing fence, and half hide the house. The
wild plants we call weeds have clothed the bank with their beautiful
luxuriance. The fruit-trees, neglected for these ten years past,
no longer bear a crop, and their suckers have formed a thicket. The
espaliers are like a copse. The paths, once graveled, are overgrown with
purslane; but, to be accurate, there is no trace of a path.
"Looking down from the hilltop, to which cling the ruins of the old
castle of the Dukes of Vendome, the only spot whence the eye can
see into this enclosure, we think that at a time, difficult now to
determine, this spot of earth must have been the joy of some country
gentleman devoted to roses and tulips, in a word, to horticulture, but
above all a lover of choice fruit. An arbor is visible, or rather
the wreck of an arbor, and under it a table still stands not entirely
destroyed by time. At the aspect of this garden that is no more, the
negative joys of the peaceful life of the provinces may be divined as we
divine the history of a worthy tradesman when we read the epitaph on his
tomb. To complete the mournful and tender impressions which seize the
soul, on one of the walls there is a sundial graced with this homely
Christian motto, '_Ultimam cogita_.'
"The roof of this house is dreadfully dilapidated; the outside shutters
are always closed; the balconies are hung with swallows' nests; the
doors are for ever shut. Straggling grasses have outlined the flagstones
of the steps with green; the ironwork is rusty. Moon and sun, winter,
summer, and snow have eaten into the wood, warped the boards, peeled
off the paint. The dreary silence is broken only by birds and cats,
polecats, rats, and mice, free to scamper round, and fight, and eat each
other. An invisible hand has written over it all: 'Mystery.'
"If, prompted by curiosity, you go to look at this house from the
street, you will see a large gate, with a round-arched top; the children
have made many holes in it. I learned later that this door had been
blocked for ten years. Through these irregular breaches you will see
that the side towards the courtyard is in perfect harmony with the side
towards the garden. The same ruin prevails. Tufts of weeds outline
the paving-stones; the walls are scored by enormous cracks, and the
blackened coping is laced with a thousand festoons of pellitory. The
stone steps are disjointed; the bell-cord is rotten; the gutter-spouts
broken. What fire from heaven could have fallen there? By what decree
has salt been sown on this dwelling? Has God been mocked here? Or was
France betrayed? These are the questions we ask ourselves. Reptiles
crawl over it, but give no reply. This empty and deserted house is a
vast enigma of which the answer is known to none.
"It was formerly a little domain, held in fief, and is known as La
Grande Breteche. During my stay at Vendome, where Despleins had left me
in charge of a rich patient, the sight of this strange dwelling became
one of my keenest pleasures. Was it not far better than a ruin? Certain
memories of indisputable authenticity attach themselves to a ruin; but
this house, still standing, though being slowly destroyed by an avenging
hand, contained a secret, an unrevealed thought. At the very least,
it testified to a caprice. More than once in the evening I boarded the
hedge, run wild, which surrounded the enclosure. I braved scratches, I
got into this ownerless garden, this plot which was no longer public or
private; I lingered there for hours gazing at the disorder. I would not,
as the price of the story to which this strange scene no doubt was due,
have asked a single question of any gossiping native. On that spot I
wove delightful romances, and abandoned myself to little debauches of
melancholy which enchanted me. If I had known the reason--perhaps quite
commonplace--of this neglect, I should have lost the unwritten poetry
which intoxicated me. To me this refuge represented the most various
phases of human life, shadowed by misfortune; sometimes the peace of the
graveyard without the dead, who speak in the language of epitaphs; one
day I saw in it the home of lepers; another, the house of the Atridae;
but, above all, I found there provincial life, with its contemplative
ideas, its hour-glass existence. I often wept there, I never laughed.
"More than once I felt involuntary terrors as I heard overhead the dull
hum of the wings of some hurrying wood-pigeon. The earth is dank; you
must be on the watch for lizards, vipers, and frogs, wandering about
with the wild freedom of nature; above all, you must have no fear
of cold, for in a few moments you feel an icy cloak settle on your
shoulders, like the Commendatore's hand on Don Giovanni's neck.
"One evening I felt a shudder; the wind had turned an old rusty
weathercock, and the creaking sounded like a cry from the house, at
the very moment when I was finishing a gloomy drama to account for
this monumental embodiment of woe. I returned to my inn, lost in gloomy
thoughts. When I had supped, the hostess came into my room with an air
of mystery, and said, 'Monsieur, here is Monsieur Regnault.'
"'Who is Monsieur Regnault?'
"'What, sir, do you not know Monsieur Regnault?--Well, that's odd,' said
she, leaving the room.
"On a sudden I saw a man appear, tall, slim, dressed in black, hat
in hand, who came in like a ram ready to butt his opponent, showing a
receding forehead, a small pointed head, and a colorless face of the hue
of a glass of dirty water. You would have taken him for an usher. The
stranger wore an old coat, much worn at the seams; but he had a diamond
in his shirt frill, and gold rings in his ears.
"'Monsieur,' said I, 'whom have I the honor of addressing?'--He took a
chair, placed himself in front of my fire, put his hat on my table,
and answered while he rubbed his hands: 'Dear me, it is very
cold.--Monsieur, I am Monsieur Regnault.'
"I was encouraging myself by saying to myself, '_Il bondo cani!_ Seek!'
"'I am,' he went on, 'notary at Vendome.'
"'I am delighted to hear it, monsieur,' I exclaimed. 'But I am not in a
position to make a will for reasons best known to myself.'
"'One moment!' said he, holding up his hand as though to gain silence.
'Allow me, monsieur, allow me! I am informed that you sometimes go to
walk in the garden of la Grande Breteche.'
"'Yes, monsieur.'
"'One moment!' said he, repeating his gesture. 'That constitutes a
misdemeanor. Monsieur, as executor under the will of the late Comtesse
de Merret, I come in her name to beg you to discontinue the practice.
One moment! I am not a Turk, and do not wish to make a crime of it. And
besides, you are free to be ignorant of the circumstances which
compel me to leave the finest mansion in Vendome to fall into ruin.
Nevertheless, monsieur, you must be a man of education, and you should
know that the laws forbid, under heavy penalties, any trespass on
enclosed property. A hedge is the same as a wall. But, the state in
which the place is left may be an excuse for your curiosity. For my
part, I should be quite content to make you free to come and go in the
house; but being bound to respect the will of the testatrix, I have
the honor, monsieur, to beg that you will go into the garden no more.
I myself, monsieur, since the will was read, have never set foot in the
house, which, as I had the honor of informing you, is part of the estate
of the late Madame de Merret. We have done nothing there but verify the
number of doors and windows to assess the taxes I have to pay annually
out of the funds left for that purpose by the late Madame de Merret. Ah!
my dear sir, her will made a great commotion in the town.'
"The good man paused to blow his nose. I respected his volubility,
perfectly understanding that the administration of Madame de Merret's
estate had been the most important event of his life, his reputation,
his glory, his Restoration. As I was forced to bid farewell to my
beautiful reveries and romances, I was to reject learning the truth on
official authority.
"'Monsieur,' said I, 'would it be indiscreet if I were to ask you the
reasons for such eccentricity?'
"At these words an expression, which revealed all the pleasure which
men feel who are accustomed to ride a hobby, overspread the lawyer's
countenance. He pulled up the collar of his shirt with an air, took out
his snuffbox, opened it, and offered me a pinch; on my refusing, he took
a large one. He was happy! A man who has no hobby does not know all
the good to be got out of life. A hobby is the happy medium between a
passion and a monomania. At this moment I understood the whole bearing
of Sterne's charming passion, and had a perfect idea of the delight with
which my uncle Toby, encouraged by Trim, bestrode his hobby-horse.
"'Monsieur,' said Monsieur Regnault, 'I was head-clerk in Monsieur
Roguin's office, in Paris. A first-rate house, which you may have heard
mentioned? No! An unfortunate bankruptcy made it famous.--Not having
money enough to purchase a practice in Paris at the price to which they
were run up in 1816, I came here and bought my predecessor's business.
I had relations in Vendome; among others, a wealthy aunt, who allowed
me to marry her daughter.--Monsieur,' he went on after a little pause,
'three months after being licensed by the Keeper of the Seals, one
evening, as I was going to bed--it was before my marriage--I was sent
for by Madame la Comtesse de Merret, to her Chateau of Merret. Her maid,
a good girl, who is now a servant in this inn, was waiting at my door
with the Countess' own carriage. Ah! one moment! I ought to tell you
that Monsieur le Comte de Merret had gone to Paris to die two months
before I came here. He came to a miserable end, flinging himself into
every kind of dissipation. You understand?
"'On the day when he left, Madame la Comtesse had quitted la Grand
Breteche, having dismantled it. Some people even say that she had
burnt all the furniture, the hangings--in short, all the chattels and
furniture whatever used in furnishing the premises now let by the
said M.--(Dear, what am I saying? I beg your pardon, I thought I was
dictating a lease.)--In short, that she burnt everything in the meadow
at Merret. Have you been to Merret, monsieur?--No,' said he, answering
himself, 'Ah, it is a very fine place.'
"'For about three months previously,' he went on, with a jerk of his
head, 'the Count and Countess had lived in a very eccentric way; they
admitted no visitors; Madame lived on the ground-floor, and Monsieur on
the first floor. When the Countess was left alone, she was never seen
excepting at church. Subsequently, at home, at the chateau, she refused
to see the friends, whether gentlemen or ladies, who went to call on
her. She was already very much altered when she left la Grande Breteche
to go to Merret. That dear lady--I say dear lady, for it was she who
gave me this diamond, but indeed I saw her but once--that kind lady was
very ill; she had, no doubt, given up all hope, for she died without
choosing to send for a doctor; indeed, many of our ladies fancied she
was not quite right in her head. Well, sir, my curiosity was strangely
excited by hearing that Madame de Merret had need of my services. Nor
was I the only person who took an interest in the affair. That very
night, though it was already late, all the town knew that I was going to
Merret.
"'The waiting-woman replied but vaguely to the questions I asked her on
the way; nevertheless, she told me that her mistress had received the
Sacrament in the course of the day at the hands of the Cure of Merret,
and seemed unlikely to live through the night. It was about eleven when
I reached the chateau. I went up the great staircase. After crossing
some large, lofty, dark rooms, diabolically cold and damp, I reached the
state bedroom where the Countess lay. From the rumors that were current
concerning this lady (monsieur, I should never end if I were to repeat
all the tales that were told about her), I had imagined her a coquette.
Imagine, then, that I had great difficulty in seeing her in the great
bed where she was lying. To be sure, to light this enormous room, with
old-fashioned heavy cornices, and so thick with dust that merely to see
it was enough to make you sneeze, she had only an old Argand lamp. Ah!
but you have not been to Merret. Well, the bed is one of those old world
beds, with a high tester hung with flowered chintz. A small table stood
by the bed, on which I saw an "Imitation of Christ," which, by the
way, I bought for my wife, as well as the lamp. There were also a deep
armchair for her confidential maid, and two small chairs. There was no
fire. That was all the furniture, not enough to fill ten lines in an
inventory.
"'My dear sir, if you had seen, as I then saw, that vast room, papered
and hung with brown, you would have felt yourself transported into a
scene of a romance. It was icy, nay more, funereal,' and he lifted his
hand with a theatrical gesture and paused.
"'By dint of seeking, as I approached the bed, at last I saw Madame de
Merret, under the glimmer of the lamp, which fell on the pillows.
Her face was as yellow as wax, and as narrow as two folded hands. The
Countess had a lace cap showing her abundant hair, but as white as linen
thread. She was sitting up in bed, and seemed to keep upright with
great difficulty. Her large black eyes, dimmed by fever, no doubt,
and half-dead already, hardly moved under the bony arch of her
eyebrows.--There,' he added, pointing to his own brow. 'Her forehead was
clammy; her fleshless hands were like bones covered with soft skin;
the veins and muscles were perfectly visible. She must have been very
handsome; but at this moment I was startled into an indescribable
emotion at the sight. Never, said those who wrapped her in her shroud,
had any living creature been so emaciated and lived. In short, it was
awful to behold! Sickness so consumed that woman, that she was no more
than a phantom. Her lips, which were pale violet, seemed to me not to
move when she spoke to me.
"'Though my profession has familiarized me with such spectacles, by
calling me not infrequently to the bedside of the dying to record their
last wishes, I confess that families in tears and the agonies I have
seen were as nothing in comparison with this lonely and silent woman in
her vast chateau. I heard not the least sound, I did not perceive the
movement which the sufferer's breathing ought to have given to the
sheets that covered her, and I stood motionless, absorbed in looking at
her in a sort of stupor. In fancy I am there still. At last her large
eyes moved; she tried to raise her right hand, but it fell back on the
bed, and she uttered these words, which came like a breath, for her
voice was no longer a voice: "I have waited for you with the greatest
impatience." A bright flush rose to her cheeks. It was a great effort to
her to speak.
"'"Madame," I began. She signed to me to be silent. At that moment
the old housekeeper rose and said in my ear, "Do not speak; Madame la
Comtesse is not in a state to bear the slightest noise, and what you say
might agitate her."
"'I sat down. A few instants after, Madame de Merret collected all her
remaining strength to move her right hand, and slipped it, not without
infinite difficulty, under the bolster; she then paused a moment. With
a last effort she withdrew her hand; and when she brought out a sealed
paper, drops of perspiration rolled from her brow. "I place my will in
your hands--Oh! God! Oh!" and that was all. She clutched a crucifix that
lay on the bed, lifted it hastily to her lips, and died.
"'The expression of her eyes still makes me shudder as I think of it.
She must have suffered much! There was joy in her last glance, and it
remained stamped on her dead eyes.
"'I brought away the will, and when it was opened I found that Madame de
Merret had appointed me her executor. She left the whole of her property
to the hospital at Vendome excepting a few legacies. But these were her
instructions as relating to la Grande Breteche: She ordered me to leave
the place, for fifty years counting from the day of her death, in the
state in which it might be at the time of her death, forbidding any one,
whoever he might be, to enter the apartments, prohibiting any repairs
whatever, and even settling a salary to pay watchmen if it were needful
to secure the absolute fulfilment of her intentions. At the expiration
of that term, if the will of the testatrix has been duly carried out,
the house is to become the property of my heirs, for, as you know, a
notary cannot take a bequest. Otherwise la Grande Breteche reverts to
the heirs-at-law, but on condition of fulfilling certain conditions
set forth in a codicil to the will, which is not to be opened till
the expiration of the said term of fifty years. The will has not been
disputed, so----' And without finishing his sentence, the lanky notary
looked at me with an air of triumph; I made him quite happy by offering
him my congratulations.
"'Monsieur,' I said in conclusion, 'you have so vividly impressed
me that I fancy I see the dying woman whiter than her sheets; her
glittering eyes frighten me; I shall dream of her to-night.--But you
must have formed some idea as to the instructions contained in that
extraordinary will.'
"'Monsieur,' said he, with comical reticence, 'I never allow myself
to criticise the conduct of a person who honors me with the gift of a
diamond.'
"However, I soon loosened the tongue of the discreet notary of Vendome,
who communicated to me, not without long digressions, the opinions of
the deep politicians of both sexes whose judgments are law in Vendome.
But these opinions were so contradictory, so diffuse, that I was
near falling asleep in spite of the interest I felt in this authentic
history. The notary's ponderous voice and monotonous accent, accustomed
no doubt to listen to himself and to make himself listened to by his
clients or fellow-townsmen, were too much for my curiosity. Happily, he
soon went away.
"'Ah, ha, monsieur,' said he on the stairs, 'a good many persons would
be glad to live five-and-forty years longer; but--one moment!' and he
laid the first finger of his right hand to his nostril with a cunning
look, as much as to say, 'Mark my words!--To last as long as that--as
long as that,' said he, 'you must not be past sixty now.'
"I closed my door, having been roused from my apathy by this last
speech, which the notary thought very funny; then I sat down in my
armchair, with my feet on the fire-dogs. I had lost myself in a romance
_a la_ Radcliffe, constructed on the juridical base given me by Monsieur
Regnault, when the door, opened by a woman's cautious hand, turned on
the hinges. I saw my landlady come in, a buxom, florid dame, always
good-humored, who had missed her calling in life. She was a Fleming, who
ought to have seen the light in a picture by Teniers.
"'Well, monsieur,' said she, 'Monsieur Regnault has no doubt been giving
you his history of la Grande Breteche?'
"'Yes, Madame Lepas.'
"'And what did he tell you?'
"I repeated in a few words the creepy and sinister story of Madame de
Merret. At each sentence my hostess put her head forward, looking at
me with an innkeeper's keen scrutiny, a happy compromise between the
instinct of a police constable, the astuteness of a spy, and the cunning
of a dealer.
"'My good Madame Lepas,' said I as I ended, 'you seem to know more about
it. Heh? If not, why have you come up to me?'
"'On my word, as an honest woman----'
"'Do not swear; your eyes are big with a secret. You knew Monsieur de
Merret; what sort of man was he?'
"'Monsieur de Merret--well, you see he was a man you never could see
the top of, he was so tall! A very good gentleman, from Picardy, and who
had, as we say, his head close to his cap. He paid for everything down,
so as never to have difficulties with any one. He was hot-tempered, you
see! All our ladies liked him very much.'
"'Because he was hot-tempered?' I asked her.
"'Well, may be,' said she; 'and you may suppose, sir, that a man had to
have something to show for a figurehead before he could marry Madame de
Merret, who, without any reflection on others, was the handsomest and
richest heiress in our parts. She had about twenty thousand francs
a year. All the town was at the wedding; the bride was pretty and
sweet-looking, quite a gem of a woman. Oh, they were a handsome couple
in their day!'
"'And were they happy together?'
"'Hm, hm! so-so--so far as can be guessed, for, as you may suppose, we
of the common sort were not hail-fellow-well-met with them.--Madame de
Merret was a kind woman and very pleasant, who had no doubt sometimes to
put up with her husband's tantrums. But though he was rather haughty, we
were fond of him. After all, it was his place to behave so. When a man
is a born nobleman, you see----'
"'Still, there must have been some catastrophe for Monsieur and Madame
de Merret to part so violently?'
"'I did not say there was any catastrophe, sir. I know nothing about
it.'
"'Indeed. Well, now, I am sure you know everything.'
"'Well, sir, I will tell you the whole story.--When I saw Monsieur
Regnault go up to see you, it struck me that he would speak to you about
Madame de Merret as having to do with la Grande Breteche. That put it
into my head to ask your advice, sir, seeming to me that you are a
man of good judgment and incapable of playing a poor woman like me
false--for I never did any one a wrong, and yet I am tormented by my
conscience. Up to now I have never dared to say a word to the people of
these parts; they are all chatter-mags, with tongues like knives. And
never till now, sir, have I had any traveler here who stayed so long in
the inn as you have, and to whom I could tell the history of the fifteen
thousand francs----'
"'My dear Madame Lepas, if there is anything in your story of a nature
to compromise me,' I said, interrupting the flow of her words, 'I would
not hear it for all the world.'
"'You need have no fears,' said she; 'you will see.'
"Her eagerness made me suspect that I was not the only person to whom
my worthy landlady had communicated the secret of which I was to be the
sole possessor, but I listened.
"'Monsieur,' said she, 'when the Emperor sent the Spaniards here,
prisoners of war and others, I was required to lodge at the charge
of the Government a young Spaniard sent to Vendome on parole.
Notwithstanding his parole, he had to show himself every day to the
sub-prefect. He was a Spanish grandee--neither more nor less. He had
a name in _os_ and _dia_, something like Bagos de Feredia. I wrote his
name down in my books, and you may see it if you like. Ah! he was a
handsome young fellow for a Spaniard, who are all ugly they say. He was
not more than five feet two or three in height, but so well made; and he
had little hands that he kept so beautifully! Ah! you should have
seen them. He had as many brushes for his hands as a woman has for her
toilet. He had thick, black hair, a flame in his eye, a somewhat coppery
complexion, but which I admired all the same. He wore the finest linen
I have ever seen, though I have had princesses to lodge here, and, among
others, General Bertrand, the Duc and Duchesse d'Abrantes, Monsieur
Descazes, and the King of Spain. He did not eat much, but he had such
polite and amiable ways that it was impossible to owe him a grudge for
that. Oh! I was very fond of him, though he did not say four words to me
in a day, and it was impossible to have the least bit of talk with him;
if he was spoken to, he did not answer; it is a way, a mania they all
have, it would seem.
"'He read his breviary like a priest, and went to mass and all the
services quite regularly. And where did he post himself?--we found this
out later.--Within two yards of Madame de Merret's chapel. As he took
that place the very first time he entered the church, no one imagined
that there was any purpose in it. Besides, he never raised his nose
above his book, poor young man! And then, monsieur, of an evening he
went for a walk on the hill among the ruins of the old castle. It was
his only amusement, poor man; it reminded him of his native land. They
say that Spain is all hills!
"'One evening, a few days after he was sent here, he was out very late.
I was rather uneasy when he did not come in till just on the stroke of
midnight; but we all got used to his whims; he took the key of the door,
and we never sat up for him. He lived in a house belonging to us in the
Rue des Casernes. Well, then, one of our stable-boys told us one evening
that, going down to wash the horses in the river, he fancied he had seen
the Spanish Grandee swimming some little way off, just like a fish. When
he came in, I told him to be careful of the weeds, and he seemed put out
at having been seen in the water.
"'At last, monsieur, one day, or rather one morning, we did not find
him in his room; he had not come back. By hunting through his things, I
found a written paper in the drawer of his table, with fifty pieces of
Spanish gold of the kind they call doubloons, worth about five thousand
francs; and in a little sealed box ten thousand francs worth of
diamonds. The paper said that in case he should not return, he left us
this money and these diamonds in trust to found masses to thank God for
his escape and for his salvation.
"'At that time I still had my husband, who ran off in search of him.
And this is the queer part of the story: he brought back the Spaniard's
clothes, which he had found under a big stone on a sort of breakwater
along the river bank, nearly opposite la Grande Breteche. My husband
went so early that no one saw him. After reading the letter, he burnt
the clothes, and, in obedience to Count Feredia's wish, we announced
that he had escaped.
"'The sub-prefect set all the constabulary at his heels; but, pshaw! he
was never caught. Lepas believed that the Spaniard had drowned himself.
I, sir, have never thought so; I believe, on the contrary, that he had
something to do with the business about Madame de Merret, seeing that
Rosalie told me that the crucifix her mistress was so fond of that she
had it buried with her, was made of ebony and silver; now in the early
days of his stay here, Monsieur Feredia had one of ebony and silver
which I never saw later.--And now, monsieur, do not you say that I need
have no remorse about the Spaniard's fifteen thousand francs? Are they
not really and truly mine?'
"'Certainly.--But have you never tried to question Rosalie?' said I.
"'Oh, to be sure I have, sir. But what is to be done? That girl is like
a wall. She knows something, but it is impossible to make her talk.'
"After chatting with me for a few minutes, my hostess left me a prey
to vague and sinister thoughts, to romantic curiosity, and a religious
dread, not unlike the deep emotion which comes upon us when we go into a
dark church at night and discern a feeble light glimmering under a lofty
vault--a dim figure glides across--the sweep of a gown or of a priest's
cassock is audible--and we shiver! La Grande Breteche, with its rank
grasses, its shuttered windows, its rusty iron-work, its locked doors,
its deserted rooms, suddenly rose before me in fantastic vividness. I
tried to get into the mysterious dwelling to search out the heart of
this solemn story, this drama which had killed three persons.
"Rosalie became in my eyes the most interesting being in Vendome. As
I studied her, I detected signs of an inmost thought, in spite of the
blooming health that glowed in her dimpled face. There was in her soul
some element of ruth or of hope; her manner suggested a secret, like
the expression of devout souls who pray in excess, or of a girl who has
killed her child and for ever hears its last cry. Nevertheless, she was
simple and clumsy in her ways; her vacant smile had nothing criminal
in it, and you would have pronounced her innocent only from seeing the
large red and blue checked kerchief that covered her stalwart bust,
tucked into the tight-laced bodice of a lilac- and white-striped gown.
'No,' said I to myself, 'I will not quit Vendome without knowing the
whole history of la Grande Breteche. To achieve this end, I will make
love to Rosalie if it proves necessary.'
"'Rosalie!' said I one evening.
"'Your servant, sir?'
"'You are not married?' She started a little.
"'Oh! there is no lack of men if ever I take a fancy to be miserable!'
she replied, laughing. She got over her agitation at once; for every
woman, from the highest lady to the inn-servant inclusive, has a native
presence of mind.
"'Yes; you are fresh and good-looking enough never to lack lovers! But
tell me, Rosalie, why did you become an inn-servant on leaving Madame de
Merret? Did she not leave you some little annuity?'
"'Oh yes, sir. But my place here is the best in all the town of
Vendome.'
"This reply was such an one as judges and attorneys call evasive.
Rosalie, as it seemed to me, held in this romantic affair the place of
the middle square of the chess-board: she was at the very centre of the
interest and of the truth; she appeared to me to be tied into the knot
of it. It was not a case for ordinary love-making; this girl contained
the last chapter of a romance, and from that moment all my attentions
were devoted to Rosalie. By dint of studying the girl, I observed in
her, as in every woman whom we make our ruling thought, a variety of
good qualities; she was clean and neat; she was handsome, I need not
say; she soon was possessed of every charm that desire can lend to a
woman in whatever rank of life. A fortnight after the notary's visit,
one evening, or rather one morning, in the small hours, I said to
Rosalie:
"'Come, tell me all you know about Madame de Merret.'
"'Oh!' she said, 'I will tell you; but keep the secret carefully.'
"'All right, my child; I will keep all your secrets with a thief's
honor, which is the most loyal known.'
"'If it is all the same to you,' said she, 'I would rather it should be
with your own.'
"Thereupon she set her head-kerchief straight, and settled herself to
tell the tale; for there is no doubt a particular attitude of confidence
and security is necessary to the telling of a narrative. The best tales
are told at a certain hour--just as we are all here at table. No one
ever told a story well standing up, or fasting.
"If I were to reproduce exactly Rosalie's diffuse eloquence, a whole
volume would scarcely contain it. Now, as the event of which she gave me
a confused account stands exactly midway between the notary's gossip and
that of Madame Lepas, as precisely as the middle term of a rule-of-three
sum stands between the first and third, I have only to relate it in as
few words as may be. I shall therefore be brief.
"The room at la Grande Breteche in which Madame de Merret slept was on
the ground floor; a little cupboard in the wall, about four feet deep,
served her to hang her dresses in. Three months before the evening of
which I have to relate the events, Madame de Merret had been seriously
ailing, so much so that her husband had left her to herself, and had his
own bedroom on the first floor. By one of those accidents which it is
impossible to foresee, he came in that evening two hours later than
usual from the club, where he went to read the papers and talk politics
with the residents in the neighborhood. His wife supposed him to have
come in, to be in bed and asleep. But the invasion of France had been
the subject of a very animated discussion; the game of billiards had
waxed vehement; he had lost forty francs, an enormous sum at Vendome,
where everybody is thrifty, and where social habits are restrained
within the bounds of a simplicity worthy of all praise, and the
foundation perhaps of a form of true happiness which no Parisian would
care for.
"For some time past Monsieur de Merret had been satisfied to ask Rosalie
whether his wife was in bed; on the girl's replying always in the
affirmative, he at once went to his own room, with the good faith that
comes of habit and confidence. But this evening, on coming in, he took
it into his head to go to see Madame de Merret, to tell her of his
ill-luck, and perhaps to find consolation. During dinner he had observed
that his wife was very becomingly dressed; he reflected as he came
home from the club that his wife was certainly much better, that
convalescence had improved her beauty, discovering it, as husbands
discover everything, a little too late. Instead of calling Rosalie,
who was in the kitchen at the moment watching the cook and the coachman
playing a puzzling hand at cards, Monsieur de Merret made his way to his
wife's room by the light of his lantern, which he set down at the lowest
step of the stairs. His step, easy to recognize, rang under the vaulted
passage.
"At the instant when the gentleman turned the key to enter his wife's
room, he fancied he heard the door shut of the closet of which I have
spoken; but when he went in, Madame de Merret was alone, standing in
front of the fireplace. The unsuspecting husband fancied that Rosalie
was in the cupboard; nevertheless, a doubt, ringing in his ears like a
peal of bells, put him on his guard; he looked at his wife, and read in
her eyes an indescribably anxious and haunted expression.
"'You are very late,' said she.--Her voice, usually so clear and sweet,
struck him as being slightly husky.
"Monsieur de Merret made no reply, for at this moment Rosalie came in.
This was like a thunder-clap. He walked up and down the room, going from
one window to another at a regular pace, his arms folded.
"'Have you had bad news, or are you ill?' his wife asked him timidly,
while Rosalie helped her to undress. He made no reply.
"'You can go, Rosalie,' said Madame de Merret to her maid; 'I can put in
my curl-papers myself.'--She scented disaster at the mere aspect of her
husband's face, and wished to be alone with him. As soon as Rosalie
was gone, or supposed to be gone, for she lingered a few minutes in the
passage, Monsieur de Merret came and stood facing his wife, and said
coldly, 'Madame, there is some one in your cupboard!' She looked at her
husband calmly, and replied quite simply, 'No, monsieur.'
"This 'No' wrung Monsieur de Merret's heart; he did not believe it; and
yet his wife had never appeared purer or more saintly than she seemed
to be at this moment. He rose to go and open the closet door. Madame de
Merret took his hand, stopped him, looked at him sadly, and said in a
voice of strange emotion, 'Remember, if you should find no one there,
everything must be at an end between you and me.'
"The extraordinary dignity of his wife's attitude filled him with deep
esteem for her, and inspired him with one of those resolves which need
only a grander stage to become immortal.
"'No, Josephine,' he said, 'I will not open it. In either event we
should be parted for ever. Listen; I know all the purity of your soul, I
know you lead a saintly life, and would not commit a deadly sin to save
your life.'--At these words Madame de Merret looked at her husband with
a haggard stare.--'See, here is your crucifix,' he went on. 'Swear to
me before God that there is no one in there; I will believe you--I will
never open that door.'
"Madame de Merret took up the crucifix and said, 'I swear it.'
"'Louder,' said her husband; 'and repeat: "I swear before God that there
is nobody in that closet."' She repeated the words without flinching.
"'That will do,' said Monsieur de Merret coldly. After a moment's
silence: 'You have there a fine piece of work which I never saw before,'
said he, examining the crucifix of ebony and silver, very artistically
wrought.
"'I found it at Duvivier's; last year when that troop of Spanish
prisoners came through Vendome, he bought it of a Spanish monk.'
"'Indeed,' said Monsieur de Merret, hanging the crucifix on its nail;
and he rang the bell.
"He had to wait for Rosalie. Monsieur de Merret went forward quickly
to meet her, led her into the bay of the window that looked on to the
garden, and said to her in an undertone:
"'I know that Gorenflot wants to marry you, that poverty alone prevents
your setting up house, and that you told him you would not be his wife
till he found means to become a master mason.--Well, go and fetch him;
tell him to come here with his trowel and tools. Contrive to wake no one
in his house but himself. His reward will be beyond your wishes. Above
all, go out without saying a word--or else!' and he frowned.
"Rosalie was going, and he called her back. 'Here, take my latch-key,'
said he.
"'Jean!' Monsieur de Merret called in a voice of thunder down the
passage. Jean, who was both coachman and confidential servant, left his
cards and came.
"'Go to bed, all of you,' said his master, beckoning him to come close;
and the gentleman added in a whisper, 'When they are all asleep--mind,
_asleep_--you understand?--come down and tell me.'
"Monsieur de Merret, who had never lost sight of his wife while giving
his orders, quietly came back to her at the fireside, and began to tell
her the details of the game of billiards and the discussion at the club.
When Rosalie returned she found Monsieur and Madame de Merret conversing
amiably.
"Not long before this Monsieur de Merret had had new ceilings made to
all the reception-rooms on the ground floor. Plaster is very scarce at
Vendome; the price is enhanced by the cost of carriage; the gentleman
had therefore had a considerable quantity delivered to him, knowing
that he could always find purchasers for what might be left. It was this
circumstance which suggested the plan he carried out.
"'Gorenflot is here, sir,' said Rosalie in a whisper.
"'Tell him to come in,' said her master aloud.
"Madame de Merret turned paler when she saw the mason.
"'Gorenflot,' said her husband, 'go and fetch some bricks from the
coach-house; bring enough to wall up the door of this cupboard; you can
use the plaster that is left for cement.' Then, dragging Rosalie and the
workman close to him--'Listen, Gorenflot,' said he, in a low voice,
'you are to sleep here to-night; but to-morrow morning you shall have a
passport to take you abroad to a place I will tell you of. I will give
you six thousand francs for your journey. You must live in that town for
ten years; if you find you do not like it, you may settle in another,
but it must be in the same country. Go through Paris and wait there till
I join you. I will there give you an agreement for six thousand francs
more, to be paid to you on your return, provided you have carried out
the conditions of the bargain. For that price you are to keep perfect
silence as to what you have to do this night. To you, Rosalie, I will
secure ten thousand francs, which will not be paid to you till your
wedding day, and on condition of your marrying Gorenflot; but, to get
married, you must hold your tongue. If not, no wedding gift!'
"'Rosalie,' said Madame de Merret, 'come and brush my hair.'
"Her husband quietly walked up and down the room, keeping an eye on the
door, on the mason, and on his wife, but without any insulting display
of suspicion. Gorenflot could not help making some noise. Madame de
Merret seized a moment when he was unloading some bricks, and when her
husband was at the other end of the room to say to Rosalie: 'My dear
child, I will give you a thousand francs a year if only you will tell
Gorenflot to leave a crack at the bottom.' Then she added aloud quite
coolly: 'You had better help him.'
"Monsieur and Madame de Merret were silent all the time while Gorenflot
was walling up the door. This silence was intentional on the husband's
part; he did not wish to give his wife the opportunity of saying
anything with a double meaning. On Madame de Merret's side it was pride
or prudence. When the wall was half built up the cunning mason took
advantage of his master's back being turned to break one of the two
panes in the top of the door with a blow of his pick. By this Madame de
Merret understood that Rosalie had spoken to Gorenflot. They all three
then saw the face of a dark, gloomy-looking man, with black hair and
flaming eyes.
"Before her husband turned round again the poor woman had nodded to the
stranger, to whom the signal was meant to convey, 'Hope.'
"At four o'clock, as the day was dawning, for it was the month of
September, the work was done. The mason was placed in charge of Jean,
and Monsieur de Merret slept in his wife's room.
"Next morning when he got up he said with apparent carelessness, 'Oh,
by the way, I must go to the Maire for the passport.' He put on his hat,
took two or three steps towards the door, paused, and took the crucifix.
His wife was trembling with joy.
"'He will go to Duvivier's,' thought she.
"As soon as he had left, Madame de Merret rang for Rosalie, and then in
a terrible voice she cried: 'The pick! Bring the pick! and set to work.
I saw how Gorenflot did it yesterday; we shall have time to make a gap
and build it up again.'
"In an instant Rosalie had brought her mistress a sort of cleaver; she,
with a vehemence of which no words can give an idea, set to work to
demolish the wall. She had already got out a few bricks, when, turning
to deal a stronger blow than before, she saw behind her Monsieur de
Merret. She fainted away.
"'Lay madame on her bed,' said he coldly.
"Foreseeing what would certainly happen in his absence, he had laid
this trap for his wife; he had merely written to the Maire and sent for
Duvivier. The jeweler arrived just as the disorder in the room had been
repaired.
"'Duvivier,' asked Monsieur de Merret, 'did not you buy some crucifixes
of the Spaniards who passed through the town?'
"'No, monsieur.'
"'Very good; thank you,' said he, flashing a tiger's glare at his wife.
'Jean,' he added, turning to his confidential valet, 'you can serve my
meals here in Madame de Merret's room. She is ill, and I shall not leave
her till she recovers.'
"The cruel man remained in his wife's room for twenty days. During
the earlier time, when there was some little noise in the closet,
and Josephine wanted to intercede for the dying man, he said, without
allowing her to utter a word, 'You swore on the Cross that there was no
one there.'"
After this story all the ladies rose from table, and thus the spell
under which Bianchon had held them was broken. But there were some among
them who had almost shivered at the last words.
ADDENDUM
The following personage appears in other stories of the Human Comedy.
Bianchon, Horace
Father Goriot
The Atheist's Mass
Cesar Birotteau
The Commission in Lunacy
Lost Illusions
A Distinguished Provincial at Paris
A Bachelor's Establishment
The Secrets of a Princess
The Government Clerks
Pierrette
A Study of Woman
Scenes from a Courtesan's Life
Honorine
The Seamy Side of History
The Magic Skin
A Second Home
A Prince of Bohemia
Letters of Two Brides
The Muse of the Department
The Imaginary Mistress
The Middle Classes
Cousin Betty
The Country Parson
In addition, M. Bianchon narrated the following:
Another Study of Woman
End of the Project Gutenberg EBook of La Grande Breteche, by Honore de Balzac
Produced by John Bickers and Dagny
PIERRE GRASSOU
By Honore De Balzac
Translated by Katharine Prescott Wormeley
Dedication
To The Lieutenant-Colonel of Artillery, Periollas, As a Testimony of the
Affectionate Esteem of the Author,
De Balzac
PIERRE GRASSOU
Whenever you have gone to take a serious look at the exhibition of works
of sculpture and painting, such as it has been since the revolution
of 1830, have you not been seized by a sense of uneasiness, weariness,
sadness, at the sight of those long and over-crowded galleries? Since
1830, the true Salon no longer exists. The Louvre has again been taken
by assault,--this time by a populace of artists who have maintained
themselves in it.
In other days, when the Salon presented only the choicest works of art,
it conferred the highest honor on the creations there exhibited. Among
the two hundred selected paintings, the public could still choose: a
crown was awarded to the masterpiece by hands unseen. Eager, impassioned
discussions arose about some picture. The abuse showered on Delacroix,
on Ingres, contributed no less to their fame than the praises and
fanaticism of their adherents. To-day, neither the crowd nor the
criticism grows impassioned about the products of that bazaar. Forced to
make the selection for itself, which in former days the examining
jury made for it, the attention of the public is soon wearied and the
exhibition closes. Before the year 1817 the pictures admitted never went
beyond the first two columns of the long gallery of the old masters; but
in that year, to the great astonishment of the public, they filled the
whole space. Historical, high-art, genre paintings, easel pictures,
landscapes, flowers, animals, and water-colors,--these eight specialties
could surely not offer more than twenty pictures in one year worthy of
the eyes of the public, which, indeed, cannot give its attention to a
greater number of such works. The more the number of artists increases,
the more careful and exacting the jury of admission ought to be.
The true character of the Salon was lost as soon as it spread along
the galleries. The Salon should have remained within fixed limits of
inflexible proportions, where each distinct specialty could show its
masterpieces only. An experience of ten years has shown the excellence
of the former institution. Now, instead of a tournament, we have a mob;
instead of a noble exhibition, we have a tumultuous bazaar; instead of
a choice selection we have a chaotic mass. What is the result? A great
artist is swamped. Decamps' "Turkish Cafe," "Children at a Fountain,"
"Joseph," and "The Torture," would have redounded far more to his credit
if the four pictures had been exhibited in the great Salon with the
hundred good pictures of that year, than his twenty pictures could,
among three thousand others, jumbled together in six galleries.
By some strange contradiction, ever since the doors have been open to every
one, there has been much talk of unknown and unrecognized genius. When,
twelve years earlier, Ingres' "Courtesan," and that of Sigalon, the
"Medusa" of Gericault, the "Massacre of Scio" by Delacroix, the "Baptism
of Henri IV." by Eugene Deveria, admitted by celebrated artists accused
of jealousy, showed the world, in spite of the denials of criticism,
that young and vigorous palettes existed, no such complaint was made.
Now, when the veriest dauber of canvas can send in his work, the whole
talk is of genius neglected! Where judgment no longer exists, there is
no longer anything judged. But whatever artists may be doing now, they
will come back in time to the examination and selection which presents
their works to the admiration of the crowd for whom they work. Without
selection by the Academy there will be no Salon, and without the Salon
art may perish.
Ever since the catalogue has grown into a book, many names have appeared
in it which still remain in their native obscurity, in spite of the ten
or a dozen pictures attached to them. Among these names perhaps the most
unknown to fame is that of an artist named Pierre Grassou, coming from
Fougeres, and called simply "Fougeres" among his brother-artists, who,
at the present moment holds a place, as the saying is, "in the sun," and
who suggested the rather bitter reflections by which this sketch of
his life is introduced,--reflections that are applicable to many other
individuals of the tribe of artists.
In 1832, Fougeres lived in the rue de Navarin, on the fourth floor of
one of those tall, narrow houses which resemble the obelisk of Luxor,
and possess an alley, a dark little stairway with dangerous turnings,
three windows only on each floor, and, within the building, a courtyard,
or, to speak more correctly, a square pit or well. Above the three or
four rooms occupied by Grassou of Fougeres was his studio, looking over
to Montmartre. This studio was painted in brick-color, for a background;
the floor was tinted brown and well frotted; each chair was furnished
with a bit of carpet bound round the edges; the sofa, simple enough, was
clean as that in the bedroom of some worthy bourgeoise. All these things
denoted the tidy ways of a small mind and the thrift of a poor man. A
bureau was there, in which to put away the studio implements, a table
for breakfast, a sideboard, a secretary; in short, all the articles
necessary to a painter, neatly arranged and very clean. The stove
participated in this Dutch cleanliness, which was all the more visible
because the pure and little changing light from the north flooded with
its cold clear beams the vast apartment. Fougeres, being merely a genre
painter, does not need the immense machinery and outfit which ruin
historical painters; he has never recognized within himself sufficient
faculty to attempt high-art, and he therefore clings to easel painting.
At the beginning of the month of December of that year, a season at
which the bourgeois of Paris conceive, periodically, the burlesque idea
of perpetuating their forms and figures already too bulky in themselves,
Pierre Grassou, who had risen early, prepared his palette, and lighted
his stove, was eating a roll steeped in milk, and waiting till the frost
on his windows had melted sufficiently to let the full light in. The
weather was fine and dry. At this moment the artist, who ate his bread
with that patient, resigned air that tells so much, heard and recognized
the step of a man who had upon his life the influence such men have
on the lives of nearly all artists,--the step of Elie Magus, a
picture-dealer, a usurer in canvas. The next moment Elie Magus entered
and found the painter in the act of beginning his work in the tidy
studio.
"How are you, old rascal?" said the painter.
Fougeres had the cross of the Legion of honor, and Elie Magus bought his
pictures at two and three hundred francs apiece, so he gave himself the
airs of a fine artist.
"Business is very bad," replied Elie. "You artists have such
pretensions! You talk of two hundred francs when you haven't put six
sous' worth of color on a canvas. However, you are a good fellow, I'll
say that. You are steady; and I've come to put a good bit of business in
your way."
"Timeo Danaos et dona ferentes," said Fougeres. "Do you know Latin?"
"No."
"Well, it means that the Greeks never proposed a good bit of business
to the Trojans without getting their fair share of it. In the olden time
they used to say, 'Take my horse.' Now we say, 'Take my bear.' Well,
what do you want, Ulysses-Lagingeole-Elie Magus?"
These words will give an idea of the mildness and wit with which
Fougeres employed what painters call studio fun.
"Well, I don't deny that you are to paint me two pictures for nothing."
"Oh! oh!"
"I'll leave you to do it, or not; I don't ask it. But you're an honest
man."
"Come, out with it!"
"Well, I'm prepared to bring you a father, mother, and only daughter."
"All for me?"
"Yes--they want their portraits taken. These bourgeois--they are crazy
about art--have never dared to enter a studio. The girl has a 'dot' of a
hundred thousand francs. You can paint all three,--perhaps they'll turn
out family portraits."
And with that the old Dutch log of wood who passed for a man and who was
called Elie Magus, interrupted himself to laugh an uncanny laugh which
frightened the painter. He fancied he heard Mephistopheles talking
marriage.
"Portraits bring five hundred francs apiece," went on Elie; "so you can
very well afford to paint me three pictures."
"True for you!" cried Fougeres, gleefully.
"And if you marry the girl, you won't forget me."
"Marry! I?" cried Pierre Grassou,--"I, who have a habit of sleeping
alone; and get up at cock-crow, and all my life arranged--"
"One hundred thousand francs," said Magus, "and a quiet girl, full of
golden tones, as you call 'em, like a Titian."
"What class of people are they?"
"Retired merchants; just now in love with art; have a country-house at
Ville d'Avray, and ten or twelve thousand francs a year."
"What business did they do?"
"Bottles."
"Now don't say that word; it makes me think of corks and sets my teeth
on edge."
"Am I to bring them?"
"Three portraits--I could put them in the Salon; I might go in for
portrait-painting. Well, yes!"
Old Elie descended the staircase to go in search of the Vervelle family.
To know to what extent this proposition would act upon the painter, and
what effect would be produced upon him by the Sieur and Dame Vervelle,
adorned by their only daughter, it is necessary to cast an eye on the
anterior life of Pierre Grassou of Fougeres.
When a pupil, Fougeres had studied drawing with Servin, who was
thought a great draughtsman in academic circles. After that he went to
Schinner's, to learn the secrets of the powerful and magnificent color
which distinguishes that master. Master and scholars were all discreet;
at any rate Pierre discovered none of their secrets. From there he went
to Sommervieux' atelier, to acquire that portion of the art of painting
which is called composition, but composition was shy and distant to him.
Then he tried to snatch from Decamps and Granet the mystery of their
interior effects. The two masters were not robbed. Finally Fougeres
ended his education with Duval-Lecamus. During these studies and
these different transformations Fougeres' habits and ways of life were
tranquil and moral to a degree that furnished matter of jesting to the
various ateliers where he sojourned; but everywhere he disarmed his
comrades by his modesty and by the patience and gentleness of a lamblike
nature. The masters, however, had no sympathy for the good lad; masters
prefer bright fellows, eccentric spirits, droll or fiery, or else gloomy
and deeply reflective, which argue future talent. Everything about
Pierre Grassou smacked of mediocrity. His nickname "Fougeres" (that
of the painter in the play of "The Eglantine") was the source of much
teasing; but, by force of circumstances, he accepted the name of the
town in which he had first seen light.
Grassou of Fougeres resembled his name. Plump and of medium height, he
had a dull complexion, brown eyes, black hair, a turned-up nose, rather
wide mouth, and long ears. His gentle, passive, and resigned air gave a
certain relief to these leading features of a physiognomy that was full
of health, but wanting in action. This young man, born to be a virtuous
bourgeois, having left his native place and come to Paris to be clerk
with a color-merchant (formerly of Mayenne and a distant connection of
the Orgemonts) made himself a painter simply by the fact of an obstinacy
which constitutes the Breton character. What he suffered, the manner in
which he lived during those years of study, God only knows. He suffered
as much as great men suffer when they are hounded by poverty and hunted
like wild beasts by the pack of commonplace minds and by troops of
vanities athirst for vengeance.
As soon as he thought himself able to fly on his own wings, Fougeres
took a studio in the upper part of the rue des Martyrs, where he began
to delve his way. He made his first appearance in 1819. The first
picture he presented to the jury of the Exhibition at the Louvre
represented a village wedding rather laboriously copied from Greuze's
picture. It was rejected. When Fougeres heard of the fatal decision,
he did not fall into one of those fits of epileptic self-love to which
strong natures give themselves up, and which sometimes end in challenges
sent to the director or the secretary of the Museum, or even by threats
of assassination. Fougeres quietly fetched his canvas, wrapped it in
a handkerchief, and brought it home, vowing in his heart that he would
still make himself a great painter. He placed his picture on the easel,
and went to one of his former masters, a man of immense talent,--to
Schinner, a kind and patient artist, whose triumph at that year's Salon
was complete. Fougeres asked him to come and criticise the rejected
work. The great painter left everything and went at once. When poor
Fougeres had placed the work before him Schinner, after a glance,
pressed Fougeres' hand.
"You are a fine fellow," he said; "you've a heart of gold, and I must
not deceive you. Listen; you are fulfilling all the promises you made in
the studios. When you find such things as that at the tip of your brush,
my good Fougeres, you had better leave colors with Brullon, and not take
the canvas of others. Go home early, put on your cotton night-cap, and
be in bed by nine o'clock. The next morning early go to some government
office, ask for a place, and give up art."
"My dear friend," said Fougeres, "my picture is already condemned; it is
not a verdict that I want of you, but the cause of that verdict."
"Well--you paint gray and sombre; you see nature being a crape veil;
your drawing is heavy, pasty; your composition is a medley of Greuze,
who only redeemed his defects by the qualities which you lack."
While detailing these faults of the picture Schinner saw on Fougeres'
face so deep an expression of sadness that he carried him off to dinner
and tried to console him. The next morning at seven o'clock Fougeres was
at his easel working over the rejected picture; he warmed the colors; he
made the corrections suggested by Schinner, he touched up his figures.
Then, disgusted with such patching, he carried the picture to Elie
Magus. Elie Magus, a sort of Dutch-Flemish-Belgian, had three reasons
for being what he became,--rich and avaricious. Coming last from
Bordeaux, he was just starting in Paris, selling old pictures and living
on the boulevard Bonne-Nouvelle. Fougeres, who relied on his palette
to go to the baker's, bravely ate bread and nuts, or bread and milk, or
bread and cherries, or bread and cheese, according to the seasons. Elie
Magus, to whom Pierre offered his first picture, eyed it for some time
and then gave him fifteen francs.
"With fifteen francs a year coming in, and a thousand francs for
expenses," said Fougeres, smiling, "a man will go fast and far."
Elie Magus made a gesture; he bit his thumbs, thinking that he might
have had that picture for five francs.
For several days Pierre walked down from the rue des Martyrs and
stationed himself at the corner of the boulevard opposite to Elie's
shop, whence his eye could rest upon his picture, which did not obtain
any notice from the eyes of the passers along the street. At the end of
a week the picture disappeared; Fougeres walked slowly up and approached
the dealer's shop in a lounging manner. The Jew was at his door.
"Well, I see you have sold my picture."
"No, here it is," said Magus; "I've framed it, to show it to some one
who fancies he knows about painting."
Fougeres had not the heart to return to the boulevard. He set about
another picture, and spent two months upon it,--eating mouse's meals and
working like a galley-slave.
One evening he went to the boulevard, his feet leading him fatefully to
the dealer's shop. His picture was not to be seen.
"I've sold your picture," said Elie Magus, seeing him.
"For how much?"
"I got back what I gave and a small interest. Make me some Flemish
interiors, a lesson of anatomy, landscapes, and such like, and I'll buy
them of you," said Elie.
Fougeres would fain have taken old Magus in his arms; he regarded him as
a father. He went home with joy in his heart; the great painter Schinner
was mistaken after all! In that immense city of Paris there were some
hearts that beat in unison with Pierre's; his talent was understood and
appreciated. The poor fellow of twenty-seven had the innocence of a lad
of sixteen. Another man, one of those distrustful, surly artists, would
have noticed the diabolical look on Elie's face and seen the twitching
of the hairs of his beard, the irony of his moustache, and the movement
of his shoulders which betrayed the satisfaction of Walter Scott's Jew
in swindling a Christian.
Fougeres marched along the boulevard in a state of joy which gave to his
honest face an expression of pride. He was like a schoolboy protecting
a woman. He met Joseph Bridau, one of his comrades, and one of those
eccentric geniuses destined to fame and sorrow. Joseph Bridau, who had,
to use his own expression, a few sous in his pocket, took Fougeres to
the Opera. But Fougeres didn't see the ballet, didn't hear the music; he
was imagining pictures, he was painting. He left Joseph in the middle
of the evening, and ran home to make sketches by lamp-light. He invented
thirty pictures, all reminiscence, and felt himself a man of genius. The
next day he bought colors, and canvases of various dimensions; he piled
up bread and cheese on his table, he filled a water-pot with water,
he laid in a provision of wood for his stove; then, to use a studio
expression, he dug at his pictures. He hired several models and Magus
lent him stuffs.
After two months' seclusion the Breton had finished four pictures. Again
he asked counsel of Schinner, this time adding Bridau to the invitation.
The two painters saw in three of these pictures a servile imitation
of Dutch landscapes and interiors by Metzu, in the fourth a copy of
Rembrandt's "Lesson of Anatomy."
"Still imitating!" said Schinner. "Ah! Fougeres can't manage to be
original."
"You ought to do something else than painting," said Bridau.
"What?" asked Fougeres.
"Fling yourself into literature."
Fougeres lowered his head like a sheep when it rains. Then he asked and
obtained certain useful advice, and retouched his pictures before taking
them to Elie Magus. Elie paid him twenty-five francs apiece. At that
price of course Fougeres earned nothing; neither did he lose, thanks to
his sober living. He made a few excursions to the boulevard to see what
became of his pictures, and there he underwent a singular hallucination.
His neat, clean paintings, hard as tin and shiny as porcelain, were
covered with a sort of mist; they looked like old daubs. Magus was out,
and Pierre could obtain no information on this phenomenon. He fancied
something was wrong with his eyes.
The painter went back to his studio and made more pictures. After seven
years of continued toil Fougeres managed to compose and execute quite
passable work. He did as well as any artist of the second class.
Elie bought and sold all the paintings of the poor Breton, who earned
laboriously about two thousand francs a year while he spent but twelve
hundred.
At the Exhibition of 1829, Leon de Lora, Schinner, and Bridau, who all
three occupied a great position and were, in fact, at the head of the
art movement, were filled with pity for the perseverance and the poverty
of their old friend; and they caused to be admitted into the grand salon
of the Exhibition, a picture by Fougeres. This picture, powerful in
interest but derived from Vigneron as to sentiment and from Dubufe's
first manner as to execution, represented a young man in prison, whose
hair was being cut around the nape of the neck. On one side was
a priest, on the other two women, one old, one young, in tears. A
sheriff's clerk was reading aloud a document. On a wretched table was a
meal, untouched. The light came in through the bars of a window near
the ceiling. It was a picture fit to make the bourgeois shudder, and
the bourgeois shuddered. Fougeres had simply been inspired by the
masterpiece of Gerard Douw; he had turned the group of the "Dropsical
Woman" toward the window, instead of presenting it full front. The
condemned man was substituted for the dying woman--same pallor, same
glance, same appeal to God. Instead of the Dutch doctor, he had painted
the cold, official figure of the sheriff's clerk attired in black; but
he had added an old woman to the young one of Gerard Douw. The cruelly
simple and good-humored face of the executioner completed and dominated
the group. This plagiarism, very cleverly disguised, was not discovered.
The catalogue contained the following:--
510. Grassou de Fougeres (Pierre), rue de Navarin, 2.
Death-toilet of a Chouan, condemned to execution in 1809.
Though wholly second-rate, the picture had immense success, for it
recalled the affair of the "chauffeurs," of Mortagne. A crowd collected
every day before the now fashionable canvas; even Charles X. paused to
look at it. "Madame," being told of the patient life of the poor Breton,
became enthusiastic over him. The Duc d'Orleans asked the price of
the picture. The clergy told Madame la Dauphine that the subject was
suggestive of good thoughts; and there was, in truth, a most satisfying
religious tone about it. Monseigneur the Dauphin admired the dust on
the stone-floor,--a huge blunder, by the way, for Fougeres had painted
greenish tones suggestive of mildew along the base of the walls.
"Madame" finally bought the picture for a thousand francs, and the
Dauphin ordered another like it. Charles X. gave the cross of the Legion
of honor to this son of a peasant who had fought for the royal cause
in 1799. (Joseph Bridau, the great painter, was not yet decorated.) The
minister of the Interior ordered two church pictures of Fougeres.
This Salon of 1829 was to Pierre Grassou his whole fortune, fame,
future, and life. Be original, invent, and you die by inches; copy,
imitate, and you'll live. After this discovery of a gold mine, Grassou
de Fougeres obtained the benefit of the fatal principle to which society
owes the wretched mediocrities to whom are intrusted in these days the
election of leaders in all social classes; who proceed, naturally, to
elect themselves and who wage a bitter war against all true talent. The
principle of election applied indiscriminately is false, and France will
some day abandon it.
Nevertheless the modesty, simplicity, and genuine surprise of the good
and gentle Fougeres silenced all envy and all recriminations. Besides,
he had on his side all of his clan who had succeeded, and all who
expected to succeed. Some persons, touched by the persistent energy of a
man whom nothing had discouraged, talked of Domenichino and said:--
"Perseverance in the arts should be rewarded. Grassou hasn't stolen his
successes; he has delved for ten years, the poor dear man!"
That exclamation of "poor dear man!" counted for half in the support
and the congratulations which the painter received. Pity sets up
mediocrities as envy pulls down great talents, and in equal numbers.
The newspapers, it is true, did not spare criticism, but the chevalier
Fougeres digested them as he had digested the counsel of his friends,
with angelic patience.
Possessing, by this time, fifteen thousand francs, laboriously earned,
he furnished an apartment and studio in the rue de Navarin, and painted
the picture ordered by Monseigneur the Dauphin, also the two church
pictures, and delivered them at the time agreed on, with a punctuality
that was very discomforting to the exchequer of the ministry, accustomed
to a different course of action. But--admire the good fortune of men who
are methodical--if Grassou, belated with his work, had been caught by
the revolution of July he would not have got his money.
By the time he was thirty-seven Fougeres had manufactured for Elie Magus
some two hundred pictures, all of them utterly unknown, by the help of
which he had attained to that satisfying manner, that point of execution
before which the true artist shrugs his shoulders and the bourgeoisie
worships. Fougeres was dear to friends for rectitude of ideas, for
steadiness of sentiment, absolute kindliness, and great loyalty; though
they had no esteem for his palette, they loved the man who held it.
"What a misfortune it is that Fougeres has the vice of painting!" said
his comrades.
But for all this, Grassou gave excellent counsel, like those
feuilletonists incapable of writing a book who know very well where a
book is wanting. There was this difference, however, between literary
critics and Fougeres; he was eminently sensitive to beauties; he felt
them, he acknowledged them, and his advice was instinct with a spirit
of justice that made the justness of his remarks acceptable. After
the revolution of July, Fougeres sent about ten pictures a year to the
Salon, of which the jury admitted four or five. He lived with the most
rigid economy, his household being managed solely by an old charwoman.
For all amusement he visited his friends, he went to see works of art,
he allowed himself a few little trips about France, and he planned to go
to Switzerland in search of inspiration. This detestable artist was an
excellent citizen; he mounted guard duly, went to reviews, and paid his
rent and provision-bills with bourgeois punctuality.
Having lived all his life in toil and poverty, he had never had the time
to love. Poor and a bachelor, until now he did not desire to complicate
his simple life. Incapable of devising any means of increasing his
little fortune, he carried, every three months, to his notary, Cardot,
his quarterly earnings and economies. When the notary had received
about three thousand francs he invested them in some first mortgage, the
interest of which he drew himself and added to the quarterly payments
made to him by Fougeres. The painter was awaiting the fortunate moment
when his property thus laid by would give him the imposing income of two
thousand francs, to allow himself the otium cum dignitate of the
artist and paint pictures; but oh! what pictures! true pictures! each a
finished picture! chouette, Koxnoff, chocnosoff! His future, his dreams
of happiness, the superlative of his hopes--do you know what it was?
To enter the Institute and obtain the grade of officer of the Legion
of honor; to sit down beside Schinner and Leon de Lora, to reach the
Academy before Bridau, to wear a rosette in his buttonhole! What a
dream! It is only commonplace men who think of everything.
Hearing the sound of several steps on the staircase, Fougeres rubbed up
his hair, buttoned his jacket of bottle-green velveteen, and was not a
little amazed to see, entering his doorway, a simpleton face vulgarly
called in studio slang a "melon." This fruit surmounted a pumpkin,
clothed in blue cloth adorned with a bunch of tintinnabulating baubles.
The melon puffed like a walrus; the pumpkin advanced on turnips,
improperly called legs. A true painter would have turned the little
bottle-vendor off at once, assuring him that he didn't paint vegetables.
This painter looked at his client without a smile, for Monsieur Vervelle
wore a three-thousand-franc diamond in the bosom of his shirt.
Fougeres glanced at Magus and said: "There's fat in it!" using a slang
term then much in vogue in the studios.
Hearing those words Monsieur Vervelle frowned. The worthy bourgeois drew
after him another complication of vegetables in the persons of his wife
and daughter. The wife had a fine veneer of mahogany on her face, and
in figure she resembled a cocoa-nut, surmounted by a head and tied in
around the waist. She pivoted on her legs, which were tap-rooted,
and her gown was yellow with black stripes. She proudly exhibited
unutterable mittens on a puffy pair of hands; the plumes of a
first-class funeral floated on an over-flowing bonnet; laces adorned
her shoulders, as round behind as they were before; consequently, the
spherical form of the cocoa-nut was perfect. Her feet, of a kind that
painters call abatis, rose above the varnished leather of the shoes in a
swelling that was some inches high. How the feet were ever got into the
shoes, no one knows.
Following these vegetable parents was a young asparagus, who presented
a tiny head with smoothly banded hair of the yellow-carroty tone that a
Roman adores, long, stringy arms, a fairly white skin with reddish spots
upon it, large innocent eyes, and white lashes, scarcely any brows, a
leghorn bonnet bound with white satin and adorned with two honest bows
of the same satin, hands virtuously red, and the feet of her mother. The
faces of these three beings wore, as they looked round the studio, an
air of happiness which bespoke in them a respectable enthusiasm for Art.
"So it is you, monsieur, who are going to take our likenesses?" said the
father, assuming a jaunty air.
"Yes, monsieur," replied Grassou.
"Vervelle, he has the cross!" whispered the wife to the husband while
the painter's back was turned.
"Should I be likely to have our portraits painted by an artist who
wasn't decorated?" returned the former bottle-dealer.
Elie Magus here bowed to the Vervelle family and went away. Grassou
accompanied him to the landing.
"There's no one but you who would fish up such whales."
"One hundred thousand francs of 'dot'!"
"Yes, but what a family!"
"Three hundred thousand francs of expectations, a house in the rue
Boucherat, and a country-house at Ville d'Avray!"
"Bottles and corks! bottles and corks!" said the painter; "they set my
teeth on edge."
"Safe from want for the rest of your days," said Elie Magus as he
departed.
That idea entered the head of Pierre Grassou as the daylight had burst
into his garret that morning.
While he posed the father of the young person, he thought the
bottle-dealer had a good countenance, and he admired the face full
of violent tones. The mother and daughter hovered about the easel,
marvelling at all his preparations; they evidently thought him a
demigod. This visible admiration pleased Fougeres. The golden calf threw
upon the family its fantastic reflections.
"You must earn lots of money; but of course you don't spend it as you
get it," said the mother.
"No, madame," replied the painter; "I don't spend it; I have not the
means to amuse myself. My notary invests my money; he knows what I have;
as soon as I have taken him the money I never think of it again."
"I've always been told," cried old Vervelle, "that artists were baskets
with holes in them."
"Who is your notary--if it is not indiscreet to ask?" said Madame
Vervelle.
"A good fellow, all round," replied Grassou. "His name is Cardot."
"Well, well! if that isn't a joke!" exclaimed Vervelle. "Cardot is our
notary too."
"Take care! don't move," said the painter.
"Do pray hold still, Antenor," said the wife. "If you move about you'll
make monsieur miss; you should just see him working, and then you'd
understand."
"Oh! why didn't you have me taught the arts?" said Mademoiselle Vervelle
to her parents.
"Virginie," said her mother, "a young person ought not to learn certain
things. When you are married--well, till then, keep quiet."
During this first sitting the Vervelle family became almost intimate
with the worthy artist. They were to come again two days later. As they
went away the father told Virginie to walk in front; but in spite of
this separation, she overheard the following words, which naturally
awakened her curiosity.
"Decorated--thirty-seven years old--an artist who gets orders--puts his
money with our notary. We'll consult Cardot. Hein! Madame de Fougeres!
not a bad name--doesn't look like a bad man either! One might prefer a
merchant; but before a merchant retires from business one can never know
what one's daughter may come to; whereas an economical artist--and then
you know we love Art--Well, we'll see!"
While the Vervelle family discussed Pierre Grassou, Pierre Grassou
discussed in his own mind the Vervelle family. He found it impossible to
stay peacefully in his studio, so he took a walk on the boulevard, and
looked at all the red-haired women who passed him. He made a series of
the oddest reasonings to himself: gold was the handsomest of metals; a
tawny yellow represented gold; the Romans were fond of red-haired women,
and he turned Roman, etc. After two years of marriage what man would
ever care about the color of his wife's hair? Beauty fades,--but
ugliness remains! Money is one-half of all happiness. That night when he
went to bed the painter had come to think Virginie Vervelle charming.
When the three Vervelles arrived on the day of the second sitting the
artist received them with smiles. The rascal had shaved and put on clean
linen; he had also arranged his hair in a pleasing manner, and chosen
a very becoming pair of trousers and red leather slippers with pointed
toes. The family replied with smiles as flattering as those of the
artist. Virginie became the color of her hair, lowered her eyes, and
turned aside her head to look at the sketches. Pierre Grassou thought
these little affectations charming; Virginie had such grace; happily she
didn't look like her father or her mother; but whom did she look like?
During this sitting there were little skirmishes between the family
and the painter, who had the audacity to call pere Vervelle witty. This
flattery brought the family on the double-quick to the heart of the
artist; he gave a drawing to the daughter, and a sketch to the mother.
"What! for nothing?" they said.
Pierre Grassou could not help smiling.
"You shouldn't give away your pictures in that way; they are money,"
said old Vervelle.
At the third sitting pere Vervelle mentioned a fine gallery of pictures
which he had in his country-house at Ville d'Avray--Rubens, Gerard Douw,
Mieris, Terburg, Rembrandt, Titian, Paul Potter, etc.
"Monsieur Vervelle has been very extravagant," said Madame Vervelle,
ostentatiously. "He has over one hundred thousand francs' worth of
pictures."
"I love Art," said the former bottle-dealer.
When Madame Vervelle's portrait was begun that of her husband was nearly
finished, and the enthusiasm of the family knew no bounds. The notary
had spoken in the highest praise of the painter. Pierre Grassou was, he
said, one of the most honest fellows on earth; he had laid by thirty-six
thousand francs; his days of poverty were over; he now saved about ten
thousand francs a year and capitalized the interest; in short, he was
incapable of making a woman unhappy. This last remark had enormous
weight in the scales. Vervelle's friends now heard of nothing but the
celebrated painter Fougeres.
The day on which Fougeres began the portrait of Mademoiselle Virginie,
he was virtually son-in-law to the Vervelle family. The three Vervelles
bloomed out in this studio, which they were now accustomed to consider
as one of their residences; there was to them an inexplicable attraction
in this clean, neat, pretty, and artistic abode. Abyssus abyssum, the
commonplace attracts the commonplace. Toward the end of the sitting the
stairway shook, the door was violently thrust open by Joseph Bridau; he
came like a whirlwind, his hair flying. He showed his grand haggard face
as he looked about him, casting everywhere the lightning of his glance;
then he walked round the whole studio, and returned abruptly to Grassou,
pulling his coat together over the gastric region, and endeavouring, but
in vain, to button it, the button mould having escaped from its capsule
of cloth.
"Wood is dear," he said to Grassou.
"Ah!"
"The British are after me" (slang term for creditors) "Gracious! do you
paint such things as that?"
"Hold your tongue!"
"Ah! to be sure, yes."
The Vervelle family, extremely shocked by this extraordinary apparition,
passed from its ordinary red to a cherry-red, two shades deeper.
"Brings in, hey?" continued Joseph. "Any shot in your locker?"
"How much do you want?"
"Five hundred. I've got one of those bull-dog dealers after me, and if
the fellow once gets his teeth in he won't let go while there's a bit of
me left. What a crew!"
"I'll write you a line for my notary."
"Have you got a notary?"
"Yes."
"That explains to me why you still make cheeks with pink tones like a
perfumer's sign."
Grassou could not help coloring, for Virginie was sitting.
"Take Nature as you find her," said the great painter, going on with his
lecture. "Mademoiselle is red-haired. Well, is that a sin? All things
are magnificent in painting. Put some vermillion on your palette, and
warm up those cheeks; touch in those little brown spots; come, butter it
well in. Do you pretend to have more sense than Nature?"
"Look here," said Fougeres, "take my place while I go and write that
note."
Vervelle rolled to the table and whispered in Grassou's ear:--
"Won't that country lout spoilt it?"
"If he would only paint the portrait of your Virginie it would be worth
a thousand times more than mine," replied Fougeres, vehemently.
Hearing that reply the bourgeois beat a quiet retreat to his wife, who
was stupefied by the invasion of this ferocious animal, and very uneasy
at his co-operation in her daughter's portrait.
"Here, follow these indications," said Bridau, returning the palette,
and taking the note. "I won't thank you. I can go back now to d'Arthez'
chateau, where I am doing a dining-room, and Leon de Lora the tops of
the doors--masterpieces! Come and see us."
And off he went without taking leave, having had enough of looking at
Virginie.
"Who is that man?" asked Madame Vervelle.
"A great artist," answered Grassou.
There was silence for a moment.
"Are you quite sure," said Virginie, "that he has done no harm to my
portrait? He frightened me."
"He has only done it good," replied Grassou.
"Well, if he is a great artist, I prefer a great artist like you," said
Madame Vervelle.
The ways of genius had ruffled up these orderly bourgeois.
The phase of autumn so pleasantly named "Saint Martin's summer" was
just beginning. With the timidity of a neophyte in presence of a man of
genius, Vervelle risked giving Fougeres an invitation to come out to
his country-house on the following Sunday. He knew, he said, how little
attraction a plain bourgeois family could offer to an artist.
"You artists," he continued, "want emotions, great scenes, and witty
talk; but you'll find good wines, and I rely on my collection of
pictures to compensate an artist like you for the bore of dining with
mere merchants."
This form of idolatry, which stroked his innocent self-love, was
charming to our poor Pierre Grassou, so little accustomed to such
compliments. The honest artist, that atrocious mediocrity, that heart
of gold, that loyal soul, that stupid draughtsman, that worthy fellow,
decorated by royalty itself with the Legion of honor, put himself under
arms to go out to Ville d'Avray and enjoy the last fine days of the
year. The painter went modestly by public conveyance, and he could not
but admire the beautiful villa of the bottle-dealer, standing in a park
of five acres at the summit of Ville d'Avray, commanding a noble view
of the landscape. Marry Virginie, and have that beautiful villa some day
for his own!
He was received by the Vervelles with an enthusiasm, a joy, a
kindliness, a frank bourgeois absurdity which confounded him. It was
indeed a day of triumph. The prospective son-in-law was marched about
the grounds on the nankeen-colored paths, all raked as they should be
for the steps of so great a man. The trees themselves looked brushed and
combed, and the lawns had just been mown. The pure country air wafted
to the nostrils a most enticing smell of cooking. All things about the
mansion seemed to say:
"We have a great artist among us."
Little old Vervelle himself rolled like an apple through his park, the
daughter meandered like an eel, the mother followed with dignified step.
These three beings never let go of Pierre Grassou for one moment in
seven hours. After dinner, the length of which equalled its
magnificence, Monsieur and Madame Vervelle reached the moment of their
grand theatrical effect,--the opening of the picture gallery illuminated
by lamps, the reflections of which were managed with the utmost care.
Three neighbours, also retired merchants, an old uncle (from whom were
expectations), an elderly Demoiselle Vervelle, and a number of other
guests invited to be present at this ovation to a great artist followed
Grassou into the picture gallery, all curious to hear his opinion of the
famous collection of pere Vervelle, who was fond of oppressing them with
the fabulous value of his paintings. The bottle-merchant seemed to have
the idea of competing with King Louis-Philippe and the galleries of
Versailles.
The pictures, magnificently framed, each bore a label on which was read
in black letters on a gold ground:
Rubens
Dance of fauns and nymphs
Rembrandt
Interior of a dissecting room. The physician van Tromp
instructing his pupils.
In all, there were one hundred and fifty pictures, varnished and dusted.
Some were covered with green baize curtains which were not undrawn in
presence of young ladies.
Pierre Grassou stood with arms pendent, gaping mouth, and no word upon
his lips as he recognized half his own pictures in these works of art.
He was Rubens, he was Rembrandt, Mieris, Metzu, Paul Potter, Gerard
Douw! He was twenty great masters all by himself.
"What is the matter? You've turned pale!"
"Daughter, a glass of water! quick!" cried Madame Vervelle. The painter
took pere Vervelle by the button of his coat and led him to a corner on
pretence of looking at a Murillo. Spanish pictures were then the rage.
"You bought your pictures from Elie Magus?"
"Yes, all originals."
"Between ourselves, tell me what he made you pay for those I shall point
out to you."
Together they walked round the gallery. The guests were amazed at the
gravity with which the artist proceeded, in company with the host, to
examine each picture.
"Three thousand francs," said Vervelle in a whisper, as they reached the
last, "but I tell everybody forty thousand."
"Forty thousand for a Titian!" said the artist, aloud. "Why, it is
nothing at all!"
"Didn't I tell you," said Vervelle, "that I had three hundred thousand
francs' worth of pictures?"
"I painted those pictures," said Pierre Grassou in Vervelle's ear, "and
I sold them one by one to Elie Magus for less than ten thousand francs
the whole lot."
"Prove it to me," said the bottle-dealer, "and I double my daughter's
'dot,' for if it is so, you are Rubens, Rembrandt, Titian, Gerard Douw!"
"And Magus is a famous picture-dealer!" said the painter, who now saw
the meaning of the misty and aged look imparted to his pictures in
Elie's shop, and the utility of the subjects the picture-dealer had
required of him.
Far from losing the esteem of his admiring bottle-merchant, Monsieur
de Fougeres (for so the family persisted in calling Pierre Grassou)
advanced so much that when the portraits were finished he presented them
gratuitously to his father-in-law, his mother-in-law and his wife.
At the present day, Pierre Grassou, who never misses exhibiting at the
Salon, passes in bourgeois regions for a fine portrait-painter. He earns
some twenty thousand francs a year and spoils a thousand francs' worth
of canvas. His wife has six thousand francs a year in dowry, and he
lives with his father-in-law. The Vervelles and the Grassous, who agree
delightfully, keep a carriage, and are the happiest people on earth.
Pierre Grassou never emerges from the bourgeois circle, in which he
is considered one of the greatest artists of the period. Not a family
portrait is painted between the barrier du Trone and the rue du Temple
that is not done by this great painter; none of them costs less than
five hundred francs. The great reason which the bourgeois families have
for employing him is this:--
"Say what you will of him, he lays by twenty thousand francs a year with
his notary."
As Grassou took a creditable part on the occasion of the riots of May
12th, he was appointed an officer of the Legion of honor. He is a major
in the National Guard. The Museum of Versailles felt it incumbent upon
itself to order a battle-piece from so excellent a citizen, who thereupon walked
about Paris to meet his old comrades and have the happiness of saying to
them:--
"The King has given me an order for the Museum of Versailles."
Madame de Fougeres adores her husband, to whom she has presented two
children. This painter, a good father and a good husband, is unable to
eradicate from his heart a fatal thought, namely, that artists laugh at
his work; that his name is a term of contempt in the studios; and that
the feuilletons take no notice of his pictures. But he still works on;
he aims for the Academy, which, undoubtedly, he will enter. And--oh!
vengeance which dilates his heart!--he buys the pictures of celebrated
artists who are pinched for means, and he substitutes these true works
of art that are not his own for the wretched daubs in the collection at
Ville d'Avray.
There are many mediocrities more aggressive and more mischievous than
that of Pierre Grassou, who is, moreover, anonymously benevolent and
truly obliging.
ADDENDUM
The following personages appear in other stories of the Human Comedy.
Bridau, Joseph
The Purse
A Bachelor's Establishment
A Distinguished Provincial at Paris
A Start in Life
Modeste Mignon
Another Study of Woman
Letters of Two Brides
Cousin Betty
The Member for Arcis
Cardot (Parisian notary)
The Muse of the Department
A Man of Business
Jealousies of a Country Town
The Middle Classes
Cousin Pons
Grassou, Pierre
A Bachelor's Establishment
Cousin Betty
The Middle Classes
Cousin Pons
Lora, Leon de
The Unconscious Humorists
A Bachelor's Establishment
A Start in Life
Honorine
Cousin Betty
Beatrix
Magus, Elie
The Vendetta
A Marriage Settlement
A Bachelor's Establishment
Cousin Pons
Schinner, Hippolyte
The Purse
A Bachelor's Establishment
A Start in Life
Albert Savarus
The Government Clerks
Modeste Mignon
The Imaginary Mistress
The Unconscious Humorists
End of the Project Gutenberg EBook of Pierre Grassou, by Honore de Balzac
of happiness, the superlative of his hopes--do you know what it was?
To enter the Institute and obtain the grade of officer of the Legion
of honor; to side down beside Schinner and Leon de Lora, to reach the
Academy before Bridau, to wear a rosette in his buttonhole! What a
dream! It is only commonplace men who think of everything.
Hearing the sound of several steps on the staircase, Fougeres rubbed up
his hair, buttoned his jacket of bottle-green velveteen, and was not a
little amazed to see, entering his doorway, a simpleton face vulgarly
called in studio slang a "melon." This fruit surmounted a pumpkin,
clothed in blue cloth adorned with a bunch of tintinnabulating baubles.
The melon puffed like a walrus; the pumpkin advanced on turnips,
improperly called legs. A true painter would have turned the little
bottle-vendor off at once, assuring him that he didn't paint vegetables.
This painter looked at his client without a smile, for Monsieur Vervelle
wore a three-thousand-franc diamond in the bosom of his shirt.
Fougeres glanced at Magus and said: "There's fat in it!" using a slang
term then much in vogue in the studios.
Hearing those words Monsieur Vervelle frowned. The worthy bourgeois drew
after him another complication of vegetables in the persons of his wife
and daughter. The wife had a fine veneer of mahogany on her face, and
in figure she resembled a cocoa-nut, surmounted by a head and tied in
around the waist. She pivoted on her legs, which were tap-rooted,
and her gown was yellow with black stripes. She proudly exhibited
unutterable mittens on a puffy pair of hands; the plumes of a
first-class funeral floated on an over-flowing bonnet; laces adorned
her shoulders, as round behind as they were before; consequently, the
spherical form of the cocoa-nut was perfect. Her feet, of a kind that
painters call abatis, rose above the varnished leather of the shoes in a
swelling that was some inches high. How the feet were ever got into the
shoes, no one knows.
Following these vegetable parents was a young asparagus, who presented
a tiny head with smoothly banded hair of the yellow-carroty tone that a
Roman adores, long, stringy arms, a fairly white skin with reddish spots
upon it, large innocent eyes, and white lashes, scarcely any brows, a
leghorn bonnet bound with white satin and adorned with two honest bows
of the same satin, hands virtuously red, and the feet of her mother. The
faces of these three beings wore, as they looked round the studio, an
air of happiness which bespoke in them a respectable enthusiasm for Art.
"So it is you, monsieur, who are going to take our likenesses?" said the
father, assuming a jaunty air.
"Yes, monsieur," replied Grassou.
"Vervelle, he has the cross!" whispered the wife to the husband while
the painter's back was turned.
"Should I be likely to have our portraits painted by an artist who
wasn't decorated?" returned the former bottle-dealer.
Elie Magus here bowed to the Vervelle family and went away. Grassou
accompanied him to the landing.
"There's no one but you who would fish up such whales."
"One hundred thousand francs of 'dot'!"
"Yes, but what a family!"
"Three hundred thousand francs of expectations, a house in the rue
Boucherat, and a country-house at Ville d'Avray!"
"Bottles and corks! bottles and corks!" said the painter; "they set my
teeth on edge."
"Safe from want for the rest of your days," said Elie Magus as he
departed.
That idea entered the head of Pierre Grassou as the daylight had burst
into his garret that morning.
While he posed the father of the young person, he thought the
bottle-dealer had a good countenance, and he admired the face full
of violent tones. The mother and daughter hovered about the easel,
marvelling at all his preparations; they evidently thought him a
demigod. This visible admiration pleased Fougeres. The golden calf threw
upon the family its fantastic reflections.
"You must earn lots of money; but of course you don't spend it as you
get it," said the mother.
"No, madame," replied the painter; "I don't spend it; I have not the
means to amuse myself. My notary invests my money; he knows what I have;
as soon as I have taken him the money I never think of it again."
"I've always been told," cried old Vervelle, "that artists were baskets
with holes in them."
"Who is your notary--if it is not indiscreet to ask?" said Madame
Vervelle.
"A good fellow, all round," replied Grassou. "His name is Cardot."
"Well, well! if that isn't a joke!" exclaimed Vervelle. "Cardot is our
notary too."
"Take care! don't move," said the painter.
"Do pray hold still, Antenor," said the wife. "If you move about you'll
make monsieur miss; you should just see him working, and then you'd
understand."
"Oh! why didn't you have me taught the arts?" said Mademoiselle Vervelle
to her parents.
"Virginie," said her mother, "a young person ought not to learn certain
things. When you are married--well, till then, keep quiet."
During this first sitting the Vervelle family became almost intimate
with the worthy artist. They were to come again two days later. As they
went away the father told Virginie to walk in front; but in spite of
this separation, she overheard the following words, which naturally
awakened her curiosity.
"Decorated--thirty-seven years old--an artist who gets orders--puts his
money with our notary. We'll consult Cardot. Hein! Madame de Fougeres!
not a bad name--doesn't look like a bad man either! One might prefer a
merchant; but before a merchant retires from business one can never know
what one's daughter may come to; whereas an economical artist--and then
you know we love Art--Well, we'll see!"
While the Vervelle family discussed Pierre Grassou, Pierre Grassou
discussed in his own mind the Vervelle family. He found it impossible to
stay peacefully in his studio, so he took a walk on the boulevard, and
looked at all the red-haired women who passed him. He made a series of
the oddest reasonings to himself: gold was the handsomest of metals; a
tawny yellow represented gold; the Romans were fond of red-haired women,
and he turned Roman, etc. After two years of marriage what man would
ever care about the color of his wife's hair? Beauty fades,--but
ugliness remains! Money is one-half of all happiness. That night when he
went to bed the painter had come to think Virginie Vervelle charming.
When the three Vervelles arrived on the day of the second sitting the
artist received them with smiles. The rascal had shaved and put on clean
linen; he had also arranged his hair in a pleasing manner, and chosen
a very becoming pair of trousers and red leather slippers with pointed
toes. The family replied with smiles as flattering as those of the
artist. Virginie became the color of her hair, lowered her eyes, and
turned aside her head to look at the sketches. Pierre Grassou thought
these little affectations charming, Virginie had such grace; happily she
didn't look like her father or her mother; but whom did she look like?
During this sitting there were little skirmishes between the family
and the painter, who had the audacity to call pere Vervelle witty. This
flattery brought the family on the double-quick to the heart of the
artist; he gave a drawing to the daughter, and a sketch to the mother.
"What! for nothing?" they said.
Pierre Grassou could not help smiling.
"You shouldn't give away your pictures in that way; they are money,"
said old Vervelle.
At the third sitting pere Vervelle mentioned a fine gallery of pictures
which he had in his country-house at Ville d'Avray--Rubens, Gerard Douw,
Mieris, Terburg, Rembrandt, Titian, Paul Potter, etc.
"Monsieur Vervelle has been very extravagant," said Madame Vervelle,
ostentatiously. "He has over one hundred thousand francs' worth of
pictures."
"I love Art," said the former bottle-dealer.
When Madame Vervelle's portrait was begun that of her husband was nearly
finished, and the enthusiasm of the family knew no bounds. The notary
had spoken in the highest praise of the painter. Pierre Grassou was, he
said, one of the most honest fellows on earth; he had laid by thirty-six
thousand francs; his days of poverty were over; he now saved about ten
thousand francs a year and capitalized the interest; in short, he was
incapable of making a woman unhappy. This last remark had enormous
weight in the scales. Vervelle's friends now heard of nothing but the
celebrated painter Fougeres.
The day on which Fougeres began the portrait of Mademoiselle Virginie,
he was virtually son-in-law to the Vervelle family. The three Vervelles
bloomed out in this studio, which they were now accustomed to consider
as one of their residences; there was to them an inexplicable attraction
in this clean, neat, pretty, and artistic abode. Abyssus abyssum, the
commonplace attracts the commonplace. Toward the end of the sitting the
stairway shook, the door was violently thrust open by Joseph Bridau; he
came like a whirlwind, his hair flying. He showed his grand haggard face
as he looked about him, casting everywhere the lightning of his glance;
then he walked round the whole studio, and returned abruptly to Grassou,
pulling his coat together over the gastric region, and endeavouring, but
in vain, to button it, the button mould having escaped from its capsule
of cloth.
"Wood is dear," he said to Grassou.
"Ah!"
"The British are after me" (slang term for creditors) "Gracious! do you
paint such things as that?"
"Hold your tongue!"
"Ah! to be sure, yes."
The Vervelle family, extremely shocked by this extraordinary apparition,
passed from its ordinary red to a cherry-red, two shades deeper.
"Brings in, hey?" continued Joseph. "Any shot in your locker?"
"How much do you want?"
"Five hundred. I've got one of those bull-dog dealers after me, and if
the fellow once gets his teeth in he won't let go while there's a bit of
me left. What a crew!"
"I'll write you a line for my notary."
"Have you got a notary?"
"Yes."
"That explains to me why you still make cheeks with pink tones like a
perfumer's sign."
Grassou could not help coloring, for Virginie was sitting.
"Take Nature as you find her," said the great painter, going on with his
lecture. "Mademoiselle is red-haired. Well, is that a sin? All things
are magnificent in painting. Put some vermillion on your palette, and
warm up those cheeks; touch in those little brown spots; come, butter it
well in. Do you pretend to have more sense than Nature?"
"Look here," said Fougeres, "take my place while I go and write that
note."
Vervelle rolled to the table and whispered in Grassou's ear:--
"Won't that country lout spoilt it?"
"If he would only paint the portrait of your Virginie it would be worth
a thousand times more than mine," replied Fougeres, vehemently.
Hearing that reply the bourgeois beat a quiet retreat to his wife, who
was stupefied by the invasion of this ferocious animal, and very uneasy
at his co-operation in her daughter's portrait.
"Here, follow these indications," said Bridau, returning the palette,
and taking the note. "I won't thank you. I can go back now to d'Arthez'
chateau, where I am doing a dining-room, and Leon de Lora the tops of
the doors--masterpieces! Come and see us."
And off he went without taking leave, having had enough of looking at
Virginie.
"Who is that man?" asked Madame Vervelle.
"A great artist," answered Grassou.
There was silence for a moment.
"Are you quite sure," said Virginie, "that he has done no harm to my
portrait? He frightened me."
"He has only done it good," replied Grassou.
"Well, if he is a great artist, I prefer a great artist like you," said
Madame Vervelle.
The ways of genius had ruffled up these orderly bourgeois.
The phase of autumn so pleasantly named "Saint Martin's summer" was
just beginning. With the timidity of a neophyte in presence of a man of
genius, Vervelle risked giving Fougeres an invitation to come out to
his country-house on the following Sunday. He knew, he said, how little
attraction a plain bourgeois family could offer to an artist.
"You artists," he continued, "want emotions, great scenes, and witty
talk; but you'll find good wines, and I rely on my collection of
pictures to compensate an artist like you for the bore of dining with
mere merchants."
This form of idolatry, which stroked his innocent self-love, was
charming to our poor Pierre Grassou, so little accustomed to such
compliments. The honest artist, that atrocious mediocrity, that heart
of gold, that loyal soul, that stupid draughtsman, that worthy fellow,
decorated by royalty itself with the Legion of honor, put himself under
arms to go out to Ville d'Avray and enjoy the last fine days of the
year. The painter went modestly by public conveyance, and he could not
but admire the beautiful villa of the bottle-dealer, standing in a park
of five acres at the summit of Ville d'Avray, commanding a noble view
of the landscape. Marry Virginie, and have that beautiful villa some day
for his own!
He was received by the Vervelles with an enthusiasm, a joy, a
kindliness, a frank bourgeois absurdity which confounded him. It was
indeed a day of triumph. The prospective son-in-law was marched about
the grounds on the nankeen-colored paths, all raked as they should be
for the steps of so great a man. The trees themselves looked brushed and
combed, and the lawns had just been mown. The pure country air wafted
to the nostrils a most enticing smell of cooking. All things about the
mansion seemed to say:
"We have a great artist among us."
Little old Vervelle himself rolled like an apple through his park, the
daughter meandered like an eel, the mother followed with dignified step.
These three beings never let go for one moment of Pierre Grassou
for seven hours. After dinner, the length of which equalled its
magnificence, Monsieur and Madame Vervelle reached the moment of their
grand theatrical effect,--the opening of the picture gallery illuminated
by lamps, the reflections of which were managed with the utmost care.
Three neighbours, also retired merchants, an old uncle (from whom were
expectations), an elderly Demoiselle Vervelle, and a number of other
guests invited to be present at this ovation to a great artist followed
Grassou into the picture gallery, all curious to hear his opinion of the
famous collection of pere Vervelle, who was fond of oppressing them with
the fabulous value of his paintings. The bottle-merchant seemed to have
the idea of competing with King Louis-Philippe and the galleries of
Versailles.
The pictures, magnificently framed, each bore labels on which was read
in black letters on a gold ground:
Rubens
Dance of fauns and nymphs
Rembrandt
Interior of a dissecting room. The physician van Tromp
instructing his pupils.
In all, there were one hundred and fifty pictures, varnished and dusted.
Some were covered with green baize curtains which were not undrawn in
presence of young ladies.
Pierre Grassou stood with arms pendent, gaping mouth, and no word upon
his lips as he recognized half his own pictures in these works of art.
He was Rubens, he was Rembrandt, Mieris, Metzu, Paul Potter, Gerard
Douw! He was twenty great masters all by himself.
"What is the matter? You've turned pale!"
"Daughter, a glass of water! quick!" cried Madame Vervelle. The painter
took pere Vervelle by the button of his coat and led him to a corner on
pretence of looking at a Murillo. Spanish pictures were then the rage.
"You bought your pictures from Elie Magus?"
"Yes, all originals."
"Between ourselves, tell me what he made you pay for those I shall point
out to you."
Together they walked round the gallery. The guests were amazed at the
gravity with which the artist proceeded, in company with the host, to
examine each picture.
"Three thousand francs," said Vervelle in a whisper, as they reached the
last, "but I tell everybody forty thousand."
"Forty thousand for a Titian!" said the artist, aloud. "Why, it is
nothing at all!"
"Didn't I tell you," said Vervelle, "that I had three hundred thousand
francs' worth of pictures?"
"I painted those pictures," said Pierre Grassou in Vervelle's ear, "and
I sold them one by one to Elie Magus for less than ten thousand francs
the whole lot."
"Prove it to me," said the bottle-dealer, "and I double my daughter's
'dot,' for if it is so, you are Rubens, Rembrandt, Titian, Gerard Douw!"
"And Magus is a famous picture-dealer!" said the painter, who now saw
the meaning of the misty and aged look imparted to his pictures in
Elie's shop, and the utility of the subjects the picture-dealer had
required of him.
Far from losing the esteem of his admiring bottle-merchant, Monsieur
de Fougeres (for so the family persisted in calling Pierre Grassou)
advanced so much that when the portraits were finished he presented them
gratuitously to his father-in-law, his mother-in-law and his wife.
At the present day, Pierre Grassou, who never misses exhibiting at the
Salon, passes in bourgeois regions for a fine portrait-painter. He earns
some twenty thousand francs a year and spoils a thousand francs' worth
of canvas. His wife has six thousand francs a year in dowry, and he
lives with his father-in-law. The Vervelles and the Grassous, who agree
delightfully, keep a carriage, and are the happiest people on earth.
Pierre Grassou never emerges from the bourgeois circle, in which he
is considered one of the greatest artists of the period. Not a family
portrait is painted between the barrier du Trone and the rue du Temple
that is not done by this great painter; none of them costs less than
five hundred francs. The great reason which the bourgeois families have
for employing him is this:--
"Say what you will of him, he lays by twenty thousand francs a year with
his notary."
As Grassou took a creditable part on the occasion of the riots of May
12th, he was appointed an officer of the Legion of honor. He is a major
in the National Guard. The Museum of Versailles felt it incumbent on
itself to order a battle-piece from so excellent a citizen, who thereupon walked
about Paris to meet his old comrades and have the happiness of saying to
them:--
"The King has given me an order for the Museum of Versailles."
Madame de Fougeres adores her husband, to whom she has presented two
children. This painter, a good father and a good husband, is unable to
eradicate from his heart a fatal thought, namely, that artists laugh at
his work; that his name is a term of contempt in the studios; and that
the feuilletons take no notice of his pictures. But he still works on;
he aims for the Academy, where, undoubtedly, he will enter. And--oh!
vengeance which dilates his heart!--he buys the pictures of celebrated
artists who are pinched for means, and he substitutes these true works
of art that are not his own for the wretched daubs in the collection at
Ville d'Avray.
There are many mediocrities more aggressive and more mischievous than
that of Pierre Grassou, who is, moreover, anonymously benevolent and
truly obliging.
ADDENDUM
The following personages appear in other stories of the Human Comedy.
Bridau, Joseph
The Purse
A Bachelor's Establishment
A Distinguished Provincial at Paris
A Start in Life
Modeste Mignon
Another Study of Woman
Letters of Two Brides
Cousin Betty
The Member for Arcis
Cardot (Parisian notary)
The Muse of the Department
A Man of Business
Jealousies of a Country Town
The Middle Classes
Cousin Pons
Grassou, Pierre
A Bachelor's Establishment
Cousin Betty
The Middle Classes
Cousin Pons
Lora, Leon de
The Unconscious Humorists
A Bachelor's Establishment
A Start in Life
Honorine
Cousin Betty
Beatrix
Magus, Elie
The Vendetta
A Marriage Settlement
A Bachelor's Establishment
Cousin Pons
Schinner, Hippolyte
The Purse
A Bachelor's Establishment
A Start in Life
Albert Savarus
The Government Clerks
Modeste Mignon
The Imaginary Mistress
The Unconscious Humorists
End of the Project Gutenberg EBook of Pierre Grassou, by Honore de Balzac
This etext was prepared by Sue Asscher <[email protected]>
CRITO
by Plato
Translated by Benjamin Jowett
INTRODUCTION.
The Crito seems intended to exhibit the character of Socrates in one light
only, not as the philosopher, fulfilling a divine mission and trusting in
the will of heaven, but simply as the good citizen, who having been
unjustly condemned is willing to give up his life in obedience to the laws
of the state...
The days of Socrates are drawing to a close; the fatal ship has been seen
off Sunium, as he is informed by his aged friend and contemporary Crito,
who visits him before the dawn has broken; he himself has been warned in a
dream that on the third day he must depart. Time is precious, and Crito
has come early in order to gain his consent to a plan of escape. This can
be easily accomplished by his friends, who will incur no danger in making
the attempt to save him, but will be disgraced for ever if they allow him
to perish. He should think of his duty to his children, and not play into
the hands of his enemies. Money is already provided by Crito as well as by
Simmias and others, and he will have no difficulty in finding friends in
Thessaly and other places.
Socrates is afraid that Crito is but pressing upon him the opinions of the
many: whereas, all his life long he has followed the dictates of reason
only and the opinion of the one wise or skilled man. There was a time when
Crito himself had allowed the propriety of this. And although some one
will say 'the many can kill us,' that makes no difference; but a good life,
in other words, a just and honourable life, is alone to be valued. All
considerations of loss of reputation or injury to his children should be
dismissed: the only question is whether he would be right in attempting to
escape. Crito, who is a disinterested person not having the fear of death
before his eyes, shall answer this for him. Before he was condemned they
had often held discussions, in which they agreed that no man should either
do evil, or return evil for evil, or betray the right. Are these
principles to be altered because the circumstances of Socrates are altered?
Crito admits that they remain the same. Then is his escape consistent with
the maintenance of them? To this Crito is unable or unwilling to reply.
Socrates proceeds:--Suppose the Laws of Athens to come and remonstrate with
him: they will ask 'Why does he seek to overturn them?' and if he replies,
'they have injured him,' will not the Laws answer, 'Yes, but was that the
agreement? Has he any objection to make to them which would justify him in
overturning them? Was he not brought into the world and educated by their
help, and are they not his parents? He might have left Athens and gone
where he pleased, but he has lived there for seventy years more constantly
than any other citizen.' Thus he has clearly shown that he acknowledged
the agreement, which he cannot now break without dishonour to himself and
danger to his friends. Even in the course of the trial he might have
proposed exile as the penalty, but then he declared that he preferred death
to exile. And whither will he direct his footsteps? In any well-ordered
state the Laws will consider him as an enemy. Possibly in a land of
misrule like Thessaly he may be welcomed at first, and the unseemly
narrative of his escape will be regarded by the inhabitants as an amusing
tale. But if he offends them he will have to learn another sort of lesson.
Will he continue to give lectures in virtue? That would hardly be decent.
And how will his children be the gainers if he takes them into Thessaly,
and deprives them of Athenian citizenship? Or if he leaves them behind,
does he expect that they will be better taken care of by his friends
because he is in Thessaly? Will not true friends care for them equally
whether he is alive or dead?
Finally, they exhort him to think of justice first, and of life and
children afterwards. He may now depart in peace and innocence, a sufferer
and not a doer of evil. But if he breaks agreements, and returns evil for
evil, they will be angry with him while he lives; and their brethren the
Laws of the world below will receive him as an enemy. Such is the mystic
voice which is always murmuring in his ears.
That Socrates was not a good citizen was a charge made against him during
his lifetime, which has been often repeated in later ages. The crimes of
Alcibiades, Critias, and Charmides, who had been his pupils, were still
recent in the memory of the now restored democracy. The fact that he had
been neutral in the death-struggle of Athens was not likely to conciliate
popular good-will. Plato, writing probably in the next generation,
undertakes the defence of his friend and master in this particular, not to
the Athenians of his day, but to posterity and the world at large.
Whether such an incident ever really occurred as the visit of Crito and the
proposal of escape is uncertain: Plato could easily have invented far more
than that (Phaedr.); and in the selection of Crito, the aged friend, as the
fittest person to make the proposal to Socrates, we seem to recognize the
hand of the artist. Whether any one who has been subjected by the laws of
his country to an unjust judgment is right in attempting to escape, is a
thesis about which casuists might disagree. Shelley (Prose Works) is of
opinion that Socrates 'did well to die,' but not for the 'sophistical'
reasons which Plato has put into his mouth. And there would be no
difficulty in arguing that Socrates should have lived and preferred to a
glorious death the good which he might still be able to perform. 'A
rhetorician would have had much to say upon that point.' It may be
observed however that Plato never intended to answer the question of
casuistry, but only to exhibit the ideal of patient virtue which refuses to
do the least evil in order to avoid the greatest, and to show his master
maintaining in death the opinions which he had professed in his life. Not
'the world,' but the 'one wise man,' is still the paradox of Socrates in
his last hours. He must be guided by reason, although her conclusions may
be fatal to him. The remarkable sentiment that the wicked can do neither
good nor evil is true, if taken in the sense, which he means, of moral
evil; in his own words, 'they cannot make a man wise or foolish.'
This little dialogue is a perfect piece of dialectic, in which granting the
'common principle,' there is no escaping from the conclusion. It is
anticipated at the beginning by the dream of Socrates and the parody of
Homer. The personification of the Laws, and of their brethren the Laws in
the world below, is one of the noblest and boldest figures of speech which
occur in Plato.
CRITO
by
Plato
Translated by Benjamin Jowett
PERSONS OF THE DIALOGUE: Socrates, Crito.
SCENE: The Prison of Socrates.
SOCRATES: Why have you come at this hour, Crito? it must be quite early.
CRITO: Yes, certainly.
SOCRATES: What is the exact time?
CRITO: The dawn is breaking.
SOCRATES: I wonder that the keeper of the prison would let you in.
CRITO: He knows me because I often come, Socrates; moreover, I have done
him a kindness.
SOCRATES: And are you only just arrived?
CRITO: No, I came some time ago.
SOCRATES: Then why did you sit and say nothing, instead of at once
awakening me?
CRITO: I should not have liked myself, Socrates, to be in such great
trouble and unrest as you are--indeed I should not: I have been watching
with amazement your peaceful slumbers; and for that reason I did not awake
you, because I wished to minimize the pain. I have always thought you to
be of a happy disposition; but never did I see anything like the easy,
tranquil manner in which you bear this calamity.
SOCRATES: Why, Crito, when a man has reached my age he ought not to be
repining at the approach of death.
CRITO: And yet other old men find themselves in similar misfortunes, and
age does not prevent them from repining.
SOCRATES: That is true. But you have not told me why you come at this
early hour.
CRITO: I come to bring you a message which is sad and painful; not, as I
believe, to yourself, but to all of us who are your friends, and saddest of
all to me.
SOCRATES: What? Has the ship come from Delos, on the arrival of which I
am to die?
CRITO: No, the ship has not actually arrived, but she will probably be
here to-day, as persons who have come from Sunium tell me that they have
left her there; and therefore to-morrow, Socrates, will be the last day of
your life.
SOCRATES: Very well, Crito; if such is the will of God, I am willing; but
my belief is that there will be a delay of a day.
CRITO: Why do you think so?
SOCRATES: I will tell you. I am to die on the day after the arrival of
the ship?
CRITO: Yes; that is what the authorities say.
SOCRATES: But I do not think that the ship will be here until to-morrow;
this I infer from a vision which I had last night, or rather only just now,
when you fortunately allowed me to sleep.
CRITO: And what was the nature of the vision?
SOCRATES: There appeared to me the likeness of a woman, fair and comely,
clothed in bright raiment, who called to me and said: O Socrates,
'The third day hence to fertile Phthia shalt thou go.' (Homer, Il.)
CRITO: What a singular dream, Socrates!
SOCRATES: There can be no doubt about the meaning, Crito, I think.
CRITO: Yes; the meaning is only too clear. But, oh! my beloved Socrates,
let me entreat you once more to take my advice and escape. For if you die
I shall not only lose a friend who can never be replaced, but there is
another evil: people who do not know you and me will believe that I might
have saved you if I had been willing to give money, but that I did not
care. Now, can there be a worse disgrace than this--that I should be
thought to value money more than the life of a friend? For the many will
not be persuaded that I wanted you to escape, and that you refused.
SOCRATES: But why, my dear Crito, should we care about the opinion of the
many? Good men, and they are the only persons who are worth considering,
will think of these things truly as they occurred.
CRITO: But you see, Socrates, that the opinion of the many must be
regarded, for what is now happening shows that they can do the greatest
evil to any one who has lost their good opinion.
SOCRATES: I only wish it were so, Crito; and that the many could do the
greatest evil; for then they would also be able to do the greatest good--
and what a fine thing this would be! But in reality they can do neither;
for they cannot make a man either wise or foolish; and whatever they do is
the result of chance.
CRITO: Well, I will not dispute with you; but please to tell me, Socrates,
whether you are not acting out of regard to me and your other friends: are
you not afraid that if you escape from prison we may get into trouble with
the informers for having stolen you away, and lose either the whole or a
great part of our property; or that even a worse evil may happen to us?
Now, if you fear on our account, be at ease; for in order to save you, we
ought surely to run this, or even a greater risk; be persuaded, then, and
do as I say.
SOCRATES: Yes, Crito, that is one fear which you mention, but by no means
the only one.
CRITO: Fear not--there are persons who are willing to get you out of
prison at no great cost; and as for the informers they are far from being
exorbitant in their demands--a little money will satisfy them. My means,
which are certainly ample, are at your service, and if you have a scruple
about spending all mine, here are strangers who will give you the use of
theirs; and one of them, Simmias the Theban, has brought a large sum of
money for this very purpose; and Cebes and many others are prepared to
spend their money in helping you to escape. I say, therefore, do not
hesitate on our account, and do not say, as you did in the court (compare
Apol.), that you will have a difficulty in knowing what to do with yourself
anywhere else. For men will love you in other places to which you may go,
and not in Athens only; there are friends of mine in Thessaly, if you like
to go to them, who will value and protect you, and no Thessalian will give
you any trouble. Nor can I think that you are at all justified, Socrates,
in betraying your own life when you might be saved; in acting thus you are
playing into the hands of your enemies, who are hurrying on your
destruction. And further I should say that you are deserting your own
children; for you might bring them up and educate them; instead of which
you go away and leave them, and they will have to take their chance; and if
they do not meet with the usual fate of orphans, there will be small thanks
to you. No man should bring children into the world who is unwilling to
persevere to the end in their nurture and education. But you appear to be
choosing the easier part, not the better and manlier, which would have been
more becoming in one who professes to care for virtue in all his actions,
like yourself. And indeed, I am ashamed not only of you, but of us who are
your friends, when I reflect that the whole business will be attributed
entirely to our want of courage. The trial need never have come on, or
might have been managed differently; and this last act, or crowning folly,
will seem to have occurred through our negligence and cowardice, who might
have saved you, if we had been good for anything; and you might have saved
yourself, for there was no difficulty at all. See now, Socrates, how sad
and discreditable are the consequences, both to us and you. Make up your
mind then, or rather have your mind already made up, for the time of
deliberation is over, and there is only one thing to be done, which must be
done this very night, and if we delay at all will be no longer practicable
or possible; I beseech you therefore, Socrates, be persuaded by me, and do
as I say.
SOCRATES: Dear Crito, your zeal is invaluable, if a right one; but if
wrong, the greater the zeal the greater the danger; and therefore we ought
to consider whether I shall or shall not do as you say. For I am and
always have been one of those natures who must be guided by reason,
whatever the reason may be which upon reflection appears to me to be the
best; and now that this chance has befallen me, I cannot repudiate my own
words: the principles which I have hitherto honoured and revered I still
honour, and unless we can at once find other and better principles, I am
certain not to agree with you; no, not even if the power of the multitude
could inflict many more imprisonments, confiscations, deaths, frightening
us like children with hobgoblin terrors (compare Apol.). What will be the
fairest way of considering the question? Shall I return to your old
argument about the opinions of men?--we were saying that some of them are
to be regarded, and others not. Now were we right in maintaining this
before I was condemned? And has the argument which was once good now
proved to be talk for the sake of talking--mere childish nonsense? That is
what I want to consider with your help, Crito:--whether, under my present
circumstances, the argument appears to be in any way different or not; and
is to be allowed by me or disallowed. That argument, which, as I believe,
is maintained by many persons of authority, was to the effect, as I was
saying, that the opinions of some men are to be regarded, and of other men
not to be regarded. Now you, Crito, are not going to die to-morrow--at
least, there is no human probability of this, and therefore you are
disinterested and not liable to be deceived by the circumstances in which
you are placed. Tell me then, whether I am right in saying that some
opinions, and the opinions of some men only, are to be valued, and that
other opinions, and the opinions of other men, are not to be valued. I ask
you whether I was right in maintaining this?
CRITO: Certainly.
SOCRATES: The good are to be regarded, and not the bad?
CRITO: Yes.
SOCRATES: And the opinions of the wise are good, and the opinions of the
unwise are evil?
CRITO: Certainly.
SOCRATES: And what was said about another matter? Is the pupil who
devotes himself to the practice of gymnastics supposed to attend to the
praise and blame and opinion of every man, or of one man only--his
physician or trainer, whoever he may be?
CRITO: Of one man only.
SOCRATES: And he ought to fear the censure and welcome the praise of that
one only, and not of the many?
CRITO: Clearly so.
SOCRATES: And he ought to act and train, and eat and drink in the way
which seems good to his single master who has understanding, rather than
according to the opinion of all other men put together?
CRITO: True.
SOCRATES: And if he disobeys and disregards the opinion and approval of
the one, and regards the opinion of the many who have no understanding,
will he not suffer evil?
CRITO: Certainly he will.
SOCRATES: And what will the evil be, whither tending and what affecting,
in the disobedient person?
CRITO: Clearly, affecting the body; that is what is destroyed by the evil.
SOCRATES: Very good; and is not this true, Crito, of other things which we
need not separately enumerate? In questions of just and unjust, fair and
foul, good and evil, which are the subjects of our present consultation,
ought we to follow the opinion of the many and to fear them; or the opinion
of the one man who has understanding? ought we not to fear and reverence
him more than all the rest of the world: and if we desert him shall we not
destroy and injure that principle in us which may be assumed to be improved
by justice and deteriorated by injustice;--there is such a principle?
CRITO: Certainly there is, Socrates.
SOCRATES: Take a parallel instance:--if, acting under the advice of those
who have no understanding, we destroy that which is improved by health and
is deteriorated by disease, would life be worth having? And that which has
been destroyed is--the body?
CRITO: Yes.
SOCRATES: Could we live, having an evil and corrupted body?
CRITO: Certainly not.
SOCRATES: And will life be worth having, if that higher part of man be
destroyed, which is improved by justice and depraved by injustice? Do we
suppose that principle, whatever it may be in man, which has to do with
justice and injustice, to be inferior to the body?
CRITO: Certainly not.
SOCRATES: More honourable than the body?
CRITO: Far more.
SOCRATES: Then, my friend, we must not regard what the many say of us:
but what he, the one man who has understanding of just and unjust, will
say, and what the truth will say. And therefore you begin in error when
you advise that we should regard the opinion of the many about just and
unjust, good and evil, honorable and dishonorable.--'Well,' some one will
say, 'but the many can kill us.'
CRITO: Yes, Socrates; that will clearly be the answer.
SOCRATES: And it is true; but still I find with surprise that the old
argument is unshaken as ever. And I should like to know whether I may say
the same of another proposition--that not life, but a good life, is to be
chiefly valued?
CRITO: Yes, that also remains unshaken.
SOCRATES: And a good life is equivalent to a just and honorable one--that
holds also?
CRITO: Yes, it does.
SOCRATES: From these premisses I proceed to argue the question whether I
ought or ought not to try and escape without the consent of the Athenians:
and if I am clearly right in escaping, then I will make the attempt; but if
not, I will abstain. The other considerations which you mention, of money
and loss of character and the duty of educating one's children, are, I
fear, only the doctrines of the multitude, who would be as ready to restore
people to life, if they were able, as they are to put them to death--and
with as little reason. But now, since the argument has thus far prevailed,
the only question which remains to be considered is, whether we shall do
rightly either in escaping or in suffering others to aid in our escape and
paying them in money and thanks, or whether in reality we shall not do
rightly; and if the latter, then death or any other calamity which may
ensue on my remaining here must not be allowed to enter into the
calculation.
CRITO: I think that you are right, Socrates; how then shall we proceed?
SOCRATES: Let us consider the matter together, and do you either refute me
if you can, and I will be convinced; or else cease, my dear friend, from
repeating to me that I ought to escape against the wishes of the Athenians:
for I highly value your attempts to persuade me to do so, but I may not be
persuaded against my own better judgment. And now please to consider my
first position, and try how you can best answer me.
CRITO: I will.
SOCRATES: Are we to say that we are never intentionally to do wrong, or
that in one way we ought and in another way we ought not to do wrong, or is
doing wrong always evil and dishonorable, as I was just now saying, and as
has been already acknowledged by us? Are all our former admissions which
were made within a few days to be thrown away? And have we, at our age,
been earnestly discoursing with one another all our life long only to
discover that we are no better than children? Or, in spite of the opinion
of the many, and in spite of consequences whether better or worse, shall we
insist on the truth of what was then said, that injustice is always an evil
and dishonour to him who acts unjustly? Shall we say so or not?
CRITO: Yes.
SOCRATES: Then we must do no wrong?
CRITO: Certainly not.
SOCRATES: Nor when injured injure in return, as the many imagine; for we
must injure no one at all? (E.g. compare Rep.)
CRITO: Clearly not.
SOCRATES: Again, Crito, may we do evil?
CRITO: Surely not, Socrates.
SOCRATES: And what of doing evil in return for evil, which is the morality
of the many--is that just or not?
CRITO: Not just.
SOCRATES: For doing evil to another is the same as injuring him?
CRITO: Very true.
SOCRATES: Then we ought not to retaliate or render evil for evil to any
one, whatever evil we may have suffered from him. But I would have you
consider, Crito, whether you really mean what you are saying. For this
opinion has never been held, and never will be held, by any considerable
number of persons; and those who are agreed and those who are not agreed
upon this point have no common ground, and can only despise one another
when they see how widely they differ. Tell me, then, whether you agree
with and assent to my first principle, that neither injury nor retaliation
nor warding off evil by evil is ever right. And shall that be the premiss
of our argument? Or do you decline and dissent from this? For so I have
ever thought, and continue to think; but, if you are of another opinion,
let me hear what you have to say. If, however, you remain of the same mind
as formerly, I will proceed to the next step.
CRITO: You may proceed, for I have not changed my mind.
SOCRATES: Then I will go on to the next point, which may be put in the
form of a question:--Ought a man to do what he admits to be right, or ought
he to betray the right?
CRITO: He ought to do what he thinks right.
SOCRATES: But if this is true, what is the application? In leaving the
prison against the will of the Athenians, do I wrong any? or rather do I
not wrong those whom I ought least to wrong? Do I not desert the
principles which were acknowledged by us to be just--what do you say?
CRITO: I cannot tell, Socrates, for I do not know.
SOCRATES: Then consider the matter in this way:--Imagine that I am about
to play truant (you may call the proceeding by any name which you like),
and the laws and the government come and interrogate me: 'Tell us,
Socrates,' they say; 'what are you about? are you not going by an act of
yours to overturn us--the laws, and the whole state, as far as in you lies?
Do you imagine that a state can subsist and not be overthrown, in which the
decisions of law have no power, but are set aside and trampled upon by
individuals?' What will be our answer, Crito, to these and the like words?
Any one, and especially a rhetorician, will have a good deal to say on
behalf of the law which requires a sentence to be carried out. He will
argue that this law should not be set aside; and shall we reply, 'Yes; but
the state has injured us and given an unjust sentence.' Suppose I say
that?
CRITO: Very good, Socrates.
SOCRATES: 'And was that our agreement with you?' the law would answer; 'or
were you to abide by the sentence of the state?' And if I were to express
my astonishment at their words, the law would probably add: 'Answer,
Socrates, instead of opening your eyes--you are in the habit of asking and
answering questions. Tell us,--What complaint have you to make against us
which justifies you in attempting to destroy us and the state? In the
first place did we not bring you into existence? Your father married your
mother by our aid and begat you. Say whether you have any objection to
urge against those of us who regulate marriage?' None, I should reply.
'Or against those of us who after birth regulate the nurture and education
of children, in which you also were trained? Were not the laws, which have
the charge of education, right in commanding your father to train you in
music and gymnastic?' Right, I should reply. 'Well then, since you were
brought into the world and nurtured and educated by us, can you deny in the
first place that you are our child and slave, as your fathers were before
you? And if this is true you are not on equal terms with us; nor can you
think that you have a right to do to us what we are doing to you. Would
you have any right to strike or revile or do any other evil to your father
or your master, if you had one, because you have been struck or reviled by
him, or received some other evil at his hands?--you would not say this?
And because we think right to destroy you, do you think that you have any
right to destroy us in return, and your country as far as in you lies?
Will you, O professor of true virtue, pretend that you are justified in
this? Has a philosopher like you failed to discover that our country is
more to be valued and higher and holier far than mother or father or any
ancestor, and more to be regarded in the eyes of the gods and of men of
understanding? also to be soothed, and gently and reverently entreated when
angry, even more than a father, and either to be persuaded, or if not
persuaded, to be obeyed? And when we are punished by her, whether with
imprisonment or stripes, the punishment is to be endured in silence; and if
she lead us to wounds or death in battle, thither we follow as is right;
neither may any one yield or retreat or leave his rank, but whether in
battle or in a court of law, or in any other place, he must do what his
city and his country order him; or he must change their view of what is
just: and if he may do no violence to his father or mother, much less may
he do violence to his country.' What answer shall we make to this, Crito?
Do the laws speak truly, or do they not?
CRITO: I think that they do.
SOCRATES: Then the laws will say: 'Consider, Socrates, if we are speaking
truly that in your present attempt you are going to do us an injury. For,
having brought you into the world, and nurtured and educated you, and given
you and every other citizen a share in every good which we had to give, we
further proclaim to any Athenian by the liberty which we allow him, that if
he does not like us when he has become of age and has seen the ways of the
city, and made our acquaintance, he may go where he pleases and take his
goods with him. None of us laws will forbid him or interfere with him.
Any one who does not like us and the city, and who wants to emigrate to a
colony or to any other city, may go where he likes, retaining his property.
But he who has experience of the manner in which we order justice and
administer the state, and still remains, has entered into an implied
contract that he will do as we command him. And he who disobeys us is, as
we maintain, thrice wrong: first, because in disobeying us he is
disobeying his parents; secondly, because we are the authors of his
education; thirdly, because he has made an agreement with us that he will
duly obey our commands; and he neither obeys them nor convinces us that our
commands are unjust; and we do not rudely impose them, but give him the
alternative of obeying or convincing us;--that is what we offer, and he
does neither.
'These are the sort of accusations to which, as we were saying, you,
Socrates, will be exposed if you accomplish your intentions; you, above all
other Athenians.' Suppose now I ask, why I rather than anybody else? they
will justly retort upon me that I above all other men have acknowledged the
agreement. 'There is clear proof,' they will say, 'Socrates, that we and
the city were not displeasing to you. Of all Athenians you have been the
most constant resident in the city, which, as you never leave, you may be
supposed to love (compare Phaedr.). For you never went out of the city
either to see the games, except once when you went to the Isthmus, or to
any other place unless when you were on military service; nor did you
travel as other men do. Nor had you any curiosity to know other states or
their laws: your affections did not go beyond us and our state; we were
your especial favourites, and you acquiesced in our government of you; and
here in this city you begat your children, which is a proof of your
satisfaction. Moreover, you might in the course of the trial, if you had
liked, have fixed the penalty at banishment; the state which refuses to let
you go now would have let you go then. But you pretended that you
preferred death to exile (compare Apol.), and that you were not unwilling
to die. And now you have forgotten these fine sentiments, and pay no
respect to us the laws, of whom you are the destroyer; and are doing what
only a miserable slave would do, running away and turning your back upon
the compacts and agreements which you made as a citizen. And first of all
answer this very question: Are we right in saying that you agreed to be
governed according to us in deed, and not in word only? Is that true or
not?' How shall we answer, Crito? Must we not assent?
CRITO: We cannot help it, Socrates.
SOCRATES: Then will they not say: 'You, Socrates, are breaking the
covenants and agreements which you made with us at your leisure, not in any
haste or under any compulsion or deception, but after you have had seventy
years to think of them, during which time you were at liberty to leave the
city, if we were not to your mind, or if our covenants appeared to you to
be unfair. You had your choice, and might have gone either to Lacedaemon
or Crete, both which states are often praised by you for their good
government, or to some other Hellenic or foreign state. Whereas you, above
all other Athenians, seemed to be so fond of the state, or, in other words,
of us her laws (and who would care about a state which has no laws?), that
you never stirred out of her; the halt, the blind, the maimed, were not
more stationary in her than you were. And now you run away and forsake
your agreements. Not so, Socrates, if you will take our advice; do not
make yourself ridiculous by escaping out of the city.
'For just consider, if you transgress and err in this sort of way, what
good will you do either to yourself or to your friends? That your friends
will be driven into exile and deprived of citizenship, or will lose their
property, is tolerably certain; and you yourself, if you fly to one of the
neighbouring cities, as, for example, Thebes or Megara, both of which are
well governed, will come to them as an enemy, Socrates, and their
government will be against you, and all patriotic citizens will cast an
evil eye upon you as a subverter of the laws, and you will confirm in the
minds of the judges the justice of their own condemnation of you. For he
who is a corrupter of the laws is more than likely to be a corrupter of the
young and foolish portion of mankind. Will you then flee from well-ordered
cities and virtuous men? and is existence worth having on these terms? Or
will you go to them without shame, and talk to them, Socrates? And what
will you say to them? What you say here about virtue and justice and
institutions and laws being the best things among men? Would that be
decent of you? Surely not. But if you go away from well-governed states
to Crito's friends in Thessaly, where there is great disorder and licence,
they will be charmed to hear the tale of your escape from prison, set off
with ludicrous particulars of the manner in which you were wrapped in a
goatskin or some other disguise, and metamorphosed as the manner is of
runaways; but will there be no one to remind you that in your old age you
were not ashamed to violate the most sacred laws from a miserable desire of
a little more life? Perhaps not, if you keep them in a good temper; but if
they are out of temper you will hear many degrading things; you will live,
but how?--as the flatterer of all men, and the servant of all men; and
doing what?--eating and drinking in Thessaly, having gone abroad in order
that you may get a dinner. And where will be your fine sentiments about
justice and virtue? Say that you wish to live for the sake of your
children--you want to bring them up and educate them--will you take them
into Thessaly and deprive them of Athenian citizenship? Is this the
benefit which you will confer upon them? Or are you under the impression
that they will be better cared for and educated here if you are still
alive, although absent from them; for your friends will take care of them?
Do you fancy that if you are an inhabitant of Thessaly they will take care
of them, and if you are an inhabitant of the other world that they will not
take care of them? Nay; but if they who call themselves friends are good
for anything, they will--to be sure they will.
'Listen, then, Socrates, to us who have brought you up. Think not of life
and children first, and of justice afterwards, but of justice first, that
you may be justified before the princes of the world below. For neither
will you nor any that belong to you be happier or holier or juster in this
life, or happier in another, if you do as Crito bids. Now you depart in
innocence, a sufferer and not a doer of evil; a victim, not of the laws,
but of men. But if you go forth, returning evil for evil, and injury for
injury, breaking the covenants and agreements which you have made with us,
and wronging those whom you ought least of all to wrong, that is to say,
yourself, your friends, your country, and us, we shall be angry with you
while you live, and our brethren, the laws in the world below, will receive
you as an enemy; for they will know that you have done your best to destroy
us. Listen, then, to us and not to Crito.'
This, dear Crito, is the voice which I seem to hear murmuring in my ears,
like the sound of the flute in the ears of the mystic; that voice, I say,
is humming in my ears, and prevents me from hearing any other. And I know
that anything more which you may say will be vain. Yet speak, if you have
anything to say.
CRITO: I have nothing to say, Socrates.
SOCRATES: Leave me then, Crito, to fulfil the will of God, and to follow
whither he leads.
This etext was prepared by Sue Asscher <[email protected]>
CRITO
by Plato
Translated by Benjamin Jowett
INTRODUCTION.
The Crito seems intended to exhibit the character of Socrates in one light
only, not as the philosopher, fulfilling a divine mission and trusting in
the will of heaven, but simply as the good citizen, who having been
unjustly condemned is willing to give up his life in obedience to the laws
of the state...
The days of Socrates are drawing to a close; the fatal ship has been seen
off Sunium, as he is informed by his aged friend and contemporary Crito,
who visits him before the dawn has broken; he himself has been warned in a
dream that on the third day he must depart. Time is precious, and Crito
has come early in order to gain his consent to a plan of escape. This can
be easily accomplished by his friends, who will incur no danger in making
the attempt to save him, but will be disgraced for ever if they allow him
to perish. He should think of his duty to his children, and not play into
the hands of his enemies. Money is already provided by Crito as well as by
Simmias and others, and he will have no difficulty in finding friends in
Thessaly and other places.
Socrates is afraid that Crito is but pressing upon him the opinions of the
many: whereas, all his life long he has followed the dictates of reason
only and the opinion of the one wise or skilled man. There was a time when
Crito himself had allowed the propriety of this. And although some one
will say 'the many can kill us,' that makes no difference; but a good life,
in other words, a just and honourable life, is alone to be valued. All
considerations of loss of reputation or injury to his children should be
dismissed: the only question is whether he would be right in attempting to
escape. Crito, who is a disinterested person not having the fear of death
before his eyes, shall answer this for him. Before he was condemned they
had often held discussions, in which they agreed that no man should either
do evil, or return evil for evil, or betray the right. Are these
principles to be altered because the circumstances of Socrates are altered?
Crito admits that they remain the same. Then is his escape consistent with
the maintenance of them? To this Crito is unable or unwilling to reply.
Socrates proceeds:--Suppose the Laws of Athens to come and remonstrate with
him: they will ask 'Why does he seek to overturn them?' and if he replies,
'they have injured him,' will not the Laws answer, 'Yes, but was that the
agreement? Has he any objection to make to them which would justify him in
overturning them? Was he not brought into the world and educated by their
help, and are they not his parents? He might have left Athens and gone
where he pleased, but he has lived there for seventy years more constantly
than any other citizen.' Thus he has clearly shown that he acknowledged
the agreement, which he cannot now break without dishonour to himself and
danger to his friends. Even in the course of the trial he might have
proposed exile as the penalty, but then he declared that he preferred death
to exile. And whither will he direct his footsteps? In any well-ordered
state the Laws will consider him as an enemy. Possibly in a land of
misrule like Thessaly he may be welcomed at first, and the unseemly
narrative of his escape will be regarded by the inhabitants as an amusing
tale. But if he offends them he will have to learn another sort of lesson.
Will he continue to give lectures in virtue? That would hardly be decent.
And how will his children be the gainers if he takes them into Thessaly,
and deprives them of Athenian citizenship? Or if he leaves them behind,
does he expect that they will be better taken care of by his friends
because he is in Thessaly? Will not true friends care for them equally
whether he is alive or dead?
Finally, they exhort him to think of justice first, and of life and
children afterwards. He may now depart in peace and innocence, a sufferer
and not a doer of evil. But if he breaks agreements, and returns evil for
evil, they will be angry with him while he lives; and their brethren the
Laws of the world below will receive him as an enemy. Such is the mystic
voice which is always murmuring in his ears.
That Socrates was not a good citizen was a charge made against him during
his lifetime, which has been often repeated in later ages. The crimes of
Alcibiades, Critias, and Charmides, who had been his pupils, were still
recent in the memory of the now restored democracy. The fact that he had
been neutral in the death-struggle of Athens was not likely to conciliate
popular good-will. Plato, writing probably in the next generation,
undertakes the defence of his friend and master in this particular, not to
the Athenians of his day, but to posterity and the world at large.
Whether such an incident ever really occurred as the visit of Crito and the
proposal of escape is uncertain: Plato could easily have invented far more
than that (Phaedr.); and in the selection of Crito, the aged friend, as the
fittest person to make the proposal to Socrates, we seem to recognize the
hand of the artist. Whether any one who has been subjected by the laws of
his country to an unjust judgment is right in attempting to escape, is a
thesis about which casuists might disagree. Shelley (Prose Works) is of
opinion that Socrates 'did well to die,' but not for the 'sophistical'
reasons which Plato has put into his mouth. And there would be no
difficulty in arguing that Socrates should have lived and preferred to a
glorious death the good which he might still be able to perform. 'A
rhetorician would have had much to say upon that point.' It may be
observed however that Plato never intended to answer the question of
casuistry, but only to exhibit the ideal of patient virtue which refuses to
do the least evil in order to avoid the greatest, and to show his master
maintaining in death the opinions which he had professed in his life. Not
'the world,' but the 'one wise man,' is still the paradox of Socrates in
his last hours. He must be guided by reason, although her conclusions may
be fatal to him. The remarkable sentiment that the wicked can do neither
good nor evil is true, if taken in the sense, which he means, of moral
evil; in his own words, 'they cannot make a man wise or foolish.'
This little dialogue is a perfect piece of dialectic, in which granting the
'common principle,' there is no escaping from the conclusion. It is
anticipated at the beginning by the dream of Socrates and the parody of
Homer. The personification of the Laws, and of their brethren the Laws in
the world below, is one of the noblest and boldest figures of speech which
occur in Plato.
CRITO
by
Plato
Translated by Benjamin Jowett
PERSONS OF THE DIALOGUE: Socrates, Crito.
SCENE: The Prison of Socrates.
SOCRATES: Why have you come at this hour, Crito? it must be quite early.
CRITO: Yes, certainly.
SOCRATES: What is the exact time?
CRITO: The dawn is breaking.
SOCRATES: I wonder that the keeper of the prison would let you in.
CRITO: He knows me because I often come, Socrates; moreover. I have done
him a kindness.
SOCRATES: And are you only just arrived?
CRITO: No, I came some time ago.
SOCRATES: Then why did you sit and say nothing, instead of at once
awakening me?
CRITO: I should not have liked myself, Socrates, to be in such great
trouble and unrest as you are--indeed I should not: I have been watching
with amazement your peaceful slumbers; and for that reason I did not awake
you, because I wished to minimize the pain. I have always thought you to
be of a happy disposition; but never did I see anything like the easy,
tranquil manner in which you bear this calamity.
SOCRATES: Why, Crito, when a man has reached my age he ought not to be
repining at the approach of death.
CRITO: And yet other old men find themselves in similar misfortunes, and
age does not prevent them from repining.
SOCRATES: That is true. But you have not told me why you come at this
early hour.
CRITO: I come to bring you a message which is sad and painful; not, as I
believe, to yourself, but to all of us who are your friends, and saddest of
all to me.
SOCRATES: What? Has the ship come from Delos, on the arrival of which I
am to die?
CRITO: No, the ship has not actually arrived, but she will probably be
here to-day, as persons who have come from Sunium tell me that they have
left her there; and therefore to-morrow, Socrates, will be the last day of
your life.
SOCRATES: Very well, Crito; if such is the will of God, I am willing; but
my belief is that there will be a delay of a day.
CRITO: Why do you think so?
SOCRATES: I will tell you. I am to die on the day after the arrival of
the ship?
CRITO: Yes; that is what the authorities say.
SOCRATES: But I do not think that the ship will be here until to-morrow;
this I infer from a vision which I had last night, or rather only just now,
when you fortunately allowed me to sleep.
CRITO: And what was the nature of the vision?
SOCRATES: There appeared to me the likeness of a woman, fair and comely,
clothed in bright raiment, who called to me and said: O Socrates,
'The third day hence to fertile Phthia shalt thou go.' (Homer, Il.)
CRITO: What a singular dream, Socrates!
SOCRATES: There can be no doubt about the meaning, Crito, I think.
CRITO: Yes; the meaning is only too clear. But, oh! my beloved Socrates,
let me entreat you once more to take my advice and escape. For if you die
I shall not only lose a friend who can never be replaced, but there is
another evil: people who do not know you and me will believe that I might
have saved you if I had been willing to give money, but that I did not
care. Now, can there be a worse disgrace than this--that I should be
thought to value money more than the life of a friend? For the many will
not be persuaded that I wanted you to escape, and that you refused.
SOCRATES: But why, my dear Crito, should we care about the opinion of the
many? Good men, and they are the only persons who are worth considering,
will think of these things truly as they occurred.
CRITO: But you see, Socrates, that the opinion of the many must be
regarded, for what is now happening shows that they can do the greatest
evil to any one who has lost their good opinion.
SOCRATES: I only wish it were so, Crito; and that the many could do the
greatest evil; for then they would also be able to do the greatest good--
and what a fine thing this would be! But in reality they can do neither;
for they cannot make a man either wise or foolish; and whatever they do is
the result of chance.
CRITO: Well, I will not dispute with you; but please to tell me, Socrates,
whether you are not acting out of regard to me and your other friends: are
you not afraid that if you escape from prison we may get into trouble with
the informers for having stolen you away, and lose either the whole or a
great part of our property; or that even a worse evil may happen to us?
Now, if you fear on our account, be at ease; for in order to save you, we
ought surely to run this, or even a greater risk; be persuaded, then, and
do as I say.
SOCRATES: Yes, Crito, that is one fear which you mention, but by no means
the only one.
CRITO: Fear not--there are persons who are willing to get you out of
prison at no great cost; and as for the informers they are far from being
exorbitant in their demands--a little money will satisfy them. My means,
which are certainly ample, are at your service, and if you have a scruple
about spending all mine, here are strangers who will give you the use of
theirs; and one of them, Simmias the Theban, has brought a large sum of
money for this very purpose; and Cebes and many others are prepared to
spend their money in helping you to escape. I say, therefore, do not
hesitate on our account, and do not say, as you did in the court (compare
Apol.), that you will have a difficulty in knowing what to do with yourself
anywhere else. For men will love you in other places to which you may go,
and not in Athens only; there are friends of mine in Thessaly, if you like
to go to them, who will value and protect you, and no Thessalian will give
you any trouble. Nor can I think that you are at all justified, Socrates,
in betraying your own life when you might be saved; in acting thus you are
playing into the hands of your enemies, who are hurrying on your
destruction. And further I should say that you are deserting your own
children; for you might bring them up and educate them; instead of which
you go away and leave them, and they will have to take their chance; and if
they do not meet with the usual fate of orphans, there will be small thanks
to you. No man should bring children into the world who is unwilling to
persevere to the end in their nurture and education. But you appear to be
choosing the easier part, not the better and manlier, which would have been
more becoming in one who professes to care for virtue in all his actions,
like yourself. And indeed, I am ashamed not only of you, but of us who are
your friends, when I reflect that the whole business will be attributed
entirely to our want of courage. The trial need never have come on, or
might have been managed differently; and this last act, or crowning folly,
will seem to have occurred through our negligence and cowardice, who might
have saved you, if we had been good for anything; and you might have saved
yourself, for there was no difficulty at all. See now, Socrates, how sad
and discreditable are the consequences, both to us and you. Make up your
mind then, or rather have your mind already made up, for the time of
deliberation is over, and there is only one thing to be done, which must be
done this very night, and if we delay at all will be no longer practicable
or possible; I beseech you therefore, Socrates, be persuaded by me, and do
as I say.
SOCRATES: Dear Crito, your zeal is invaluable, if a right one; but if
wrong, the greater the zeal the greater the danger; and therefore we ought
to consider whether I shall or shall not do as you say. For I am and
always have been one of those natures who must be guided by reason,
whatever the reason may be which upon reflection appears to me to be the
best; and now that this chance has befallen me, I cannot repudiate my own
words: the principles which I have hitherto honoured and revered I still
honour, and unless we can at once find other and better principles, I am
certain not to agree with you; no, not even if the power of the multitude
could inflict many more imprisonments, confiscations, deaths, frightening
us like children with hobgoblin terrors (compare Apol.). What will be the
fairest way of considering the question? Shall I return to your old
argument about the opinions of men?--we were saying that some of them are
to be regarded, and others not. Now were we right in maintaining this
before I was condemned? And has the argument which was once good now
proved to be talk for the sake of talking--mere childish nonsense? That is
what I want to consider with your help, Crito:--whether, under my present
circumstances, the argument appears to be in any way different or not; and
is to be allowed by me or disallowed. That argument, which, as I believe,
is maintained by many persons of authority, was to the effect, as I was
saying, that the opinions of some men are to be regarded, and of other men
not to be regarded. Now you, Crito, are not going to die to-morrow--at
least, there is no human probability of this, and therefore you are
disinterested and not liable to be deceived by the circumstances in which
you are placed. Tell me then, whether I am right in saying that some
opinions, and the opinions of some men only, are to be valued, and that
other opinions, and the opinions of other men, are not to be valued. I ask
you whether I was right in maintaining this?
CRITO: Certainly.
SOCRATES: The good are to be regarded, and not the bad?
CRITO: Yes.
SOCRATES: And the opinions of the wise are good, and the opinions of the
unwise are evil?
CRITO: Certainly.
SOCRATES: And what was said about another matter? Is the pupil who
devotes himself to the practice of gymnastics supposed to attend to the
praise and blame and opinion of every man, or of one man only--his
physician or trainer, whoever he may be?
CRITO: Of one man only.
SOCRATES: And he ought to fear the censure and welcome the praise of that
one only, and not of the many?
CRITO: Clearly so.
SOCRATES: And he ought to act and train, and eat and drink in the way
which seems good to his single master who has understanding, rather than
according to the opinion of all other men put together?
CRITO: True.
SOCRATES: And if he disobeys and disregards the opinion and approval of
the one, and regards the opinion of the many who have no understanding,
will he not suffer evil?
CRITO: Certainly he will.
SOCRATES: And what will the evil be, whither tending and what affecting,
in the disobedient person?
CRITO: Clearly, affecting the body; that is what is destroyed by the evil.
SOCRATES: Very good; and is not this true, Crito, of other things which we
need not separately enumerate? In questions of just and unjust, fair and
foul, good and evil, which are the subjects of our present consultation,
ought we to follow the opinion of the many and to fear them; or the opinion
of the one man who has understanding? ought we not to fear and reverence
him more than all the rest of the world: and if we desert him shall we not
destroy and injure that principle in us which may be assumed to be improved
by justice and deteriorated by injustice;--there is such a principle?
CRITO: Certainly there is, Socrates.
SOCRATES: Take a parallel instance:--if, acting under the advice of those
who have no understanding, we destroy that which is improved by health and
is deteriorated by disease, would life be worth having? And that which has
been destroyed is--the body?
CRITO: Yes.
SOCRATES: Could we live, having an evil and corrupted body?
CRITO: Certainly not.
SOCRATES: And will life be worth having, if that higher part of man be
destroyed, which is improved by justice and depraved by injustice? Do we
suppose that principle, whatever it may be in man, which has to do with
justice and injustice, to be inferior to the body?
CRITO: Certainly not.
SOCRATES: More honourable than the body?
CRITO: Far more.
SOCRATES: Then, my friend, we must not regard what the many say of us:
but what he, the one man who has understanding of just and unjust, will
say, and what the truth will say. And therefore you begin in error when
you advise that we should regard the opinion of the many about just and
unjust, good and evil, honourable and dishonourable.--'Well,' some one will
say, 'but the many can kill us.'
CRITO: Yes, Socrates; that will clearly be the answer.
SOCRATES: And it is true; but still I find with surprise that the old
argument is unshaken as ever. And I should like to know whether I may say
the same of another proposition--that not life, but a good life, is to be
chiefly valued?
CRITO: Yes, that also remains unshaken.
SOCRATES: And a good life is equivalent to a just and honourable one--that
holds also?
CRITO: Yes, it does.
SOCRATES: From these premisses I proceed to argue the question whether I
ought or ought not to try and escape without the consent of the Athenians:
and if I am clearly right in escaping, then I will make the attempt; but if
not, I will abstain. The other considerations which you mention, of money
and loss of character and the duty of educating one's children, are, I
fear, only the doctrines of the multitude, who would be as ready to restore
people to life, if they were able, as they are to put them to death--and
with as little reason. But now, since the argument has thus far prevailed,
the only question which remains to be considered is, whether we shall do
rightly either in escaping or in suffering others to aid in our escape and
paying them in money and thanks, or whether in reality we shall not do
rightly; and if the latter, then death or any other calamity which may
ensue on my remaining here must not be allowed to enter into the
calculation.
CRITO: I think that you are right, Socrates; how then shall we proceed?
SOCRATES: Let us consider the matter together, and do you either refute me
if you can, and I will be convinced; or else cease, my dear friend, from
repeating to me that I ought to escape against the wishes of the Athenians:
for I highly value your attempts to persuade me to do so, but I may not be
persuaded against my own better judgment. And now please to consider my
first position, and try how you can best answer me.
CRITO: I will.
SOCRATES: Are we to say that we are never intentionally to do wrong, or
that in one way we ought and in another way we ought not to do wrong, or is
doing wrong always evil and dishonourable, as I was just now saying, and as
has been already acknowledged by us? Are all our former admissions which
were made within a few days to be thrown away? And have we, at our age,
been earnestly discoursing with one another all our life long only to
discover that we are no better than children? Or, in spite of the opinion
of the many, and in spite of consequences whether better or worse, shall we
insist on the truth of what was then said, that injustice is always an evil
and dishonour to him who acts unjustly? Shall we say so or not?
CRITO: Yes.
SOCRATES: Then we must do no wrong?
CRITO: Certainly not.
SOCRATES: Nor when injured injure in return, as the many imagine; for we
must injure no one at all? (E.g. compare Rep.)
CRITO: Clearly not.
SOCRATES: Again, Crito, may we do evil?
CRITO: Surely not, Socrates.
SOCRATES: And what of doing evil in return for evil, which is the morality
of the many--is that just or not?
CRITO: Not just.
SOCRATES: For doing evil to another is the same as injuring him?
CRITO: Very true.
SOCRATES: Then we ought not to retaliate or render evil for evil to any
one, whatever evil we may have suffered from him. But I would have you
consider, Crito, whether you really mean what you are saying. For this
opinion has never been held, and never will be held, by any considerable
number of persons; and those who are agreed and those who are not agreed
upon this point have no common ground, and can only despise one another
when they see how widely they differ. Tell me, then, whether you agree
with and assent to my first principle, that neither injury nor retaliation
nor warding off evil by evil is ever right. And shall that be the premiss
of our argument? Or do you decline and dissent from this? For so I have
ever thought, and continue to think; but, if you are of another opinion,
let me hear what you have to say. If, however, you remain of the same mind
as formerly, I will proceed to the next step.
CRITO: You may proceed, for I have not changed my mind.
SOCRATES: Then I will go on to the next point, which may be put in the
form of a question:--Ought a man to do what he admits to be right, or ought
he to betray the right?
CRITO: He ought to do what he thinks right.
SOCRATES: But if this is true, what is the application? In leaving the
prison against the will of the Athenians, do I wrong any? or rather do I
not wrong those whom I ought least to wrong? Do I not desert the
principles which were acknowledged by us to be just--what do you say?
CRITO: I cannot tell, Socrates, for I do not know.
SOCRATES: Then consider the matter in this way:--Imagine that I am about
to play truant (you may call the proceeding by any name which you like),
and the laws and the government come and interrogate me: 'Tell us,
Socrates,' they say; 'what are you about? are you not going by an act of
yours to overturn us--the laws, and the whole state, as far as in you lies?
Do you imagine that a state can subsist and not be overthrown, in which the
decisions of law have no power, but are set aside and trampled upon by
individuals?' What will be our answer, Crito, to these and the like words?
Any one, and especially a rhetorician, will have a good deal to say on
behalf of the law which requires a sentence to be carried out. He will
argue that this law should not be set aside; and shall we reply, 'Yes; but
the state has injured us and given an unjust sentence.' Suppose I say
that?
CRITO: Very good, Socrates.
SOCRATES: 'And was that our agreement with you?' the law would answer; 'or
were you to abide by the sentence of the state?' And if I were to express
my astonishment at their words, the law would probably add: 'Answer,
Socrates, instead of opening your eyes--you are in the habit of asking and
answering questions. Tell us,--What complaint have you to make against us
which justifies you in attempting to destroy us and the state? In the
first place did we not bring you into existence? Your father married your
mother by our aid and begat you. Say whether you have any objection to
urge against those of us who regulate marriage?' None, I should reply.
'Or against those of us who after birth regulate the nurture and education
of children, in which you also were trained? Were not the laws, which have
the charge of education, right in commanding your father to train you in
music and gymnastic?' Right, I should reply. 'Well then, since you were
brought into the world and nurtured and educated by us, can you deny in the
first place that you are our child and slave, as your fathers were before
you? And if this is true you are not on equal terms with us; nor can you
think that you have a right to do to us what we are doing to you. Would
you have any right to strike or revile or do any other evil to your father
or your master, if you had one, because you have been struck or reviled by
him, or received some other evil at his hands?--you would not say this?
And because we think right to destroy you, do you think that you have any
right to destroy us in return, and your country as far as in you lies?
Will you, O professor of true virtue, pretend that you are justified in
this? Has a philosopher like you failed to discover that our country is
more to be valued and higher and holier far than mother or father or any
ancestor, and more to be regarded in the eyes of the gods and of men of
understanding? also to be soothed, and gently and reverently entreated when
angry, even more than a father, and either to be persuaded, or if not
persuaded, to be obeyed? And when we are punished by her, whether with
imprisonment or stripes, the punishment is to be endured in silence; and if
she lead us to wounds or death in battle, thither we follow as is right;
neither may any one yield or retreat or leave his rank, but whether in
battle or in a court of law, or in any other place, he must do what his
city and his country order him; or he must change their view of what is
just: and if he may do no violence to his father or mother, much less may
he do violence to his country.' What answer shall we make to this, Crito?
Do the laws speak truly, or do they not?
CRITO: I think that they do.
SOCRATES: Then the laws will say: 'Consider, Socrates, if we are speaking
truly that in your present attempt you are going to do us an injury. For,
having brought you into the world, and nurtured and educated you, and given
you and every other citizen a share in every good which we had to give, we
further proclaim to any Athenian by the liberty which we allow him, that if
he does not like us when he has become of age and has seen the ways of the
city, and made our acquaintance, he may go where he pleases and take his
goods with him. None of us laws will forbid him or interfere with him.
Any one who does not like us and the city, and who wants to emigrate to a
colony or to any other city, may go where he likes, retaining his property.
But he who has experience of the manner in which we order justice and
administer the state, and still remains, has entered into an implied
contract that he will do as we command him. And he who disobeys us is, as
we maintain, thrice wrong: first, because in disobeying us he is
disobeying his parents; secondly, because we are the authors of his
education; thirdly, because he has made an agreement with us that he will
duly obey our commands; and he neither obeys them nor convinces us that our
commands are unjust; and we do not rudely impose them, but give him the
alternative of obeying or convincing us;--that is what we offer, and he
does neither.
'These are the sort of accusations to which, as we were saying, you,
Socrates, will be exposed if you accomplish your intentions; you, above all
other Athenians.' Suppose now I ask, why I rather than anybody else? they
will justly retort upon me that I above all other men have acknowledged the
agreement. 'There is clear proof,' they will say, 'Socrates, that we and
the city were not displeasing to you. Of all Athenians you have been the
most constant resident in the city, which, as you never leave, you may be
supposed to love (compare Phaedr.). For you never went out of the city
either to see the games, except once when you went to the Isthmus, or to
any other place unless when you were on military service; nor did you
travel as other men do. Nor had you any curiosity to know other states or
their laws: your affections did not go beyond us and our state; we were
your especial favourites, and you acquiesced in our government of you; and
here in this city you begat your children, which is a proof of your
satisfaction. Moreover, you might in the course of the trial, if you had
liked, have fixed the penalty at banishment; the state which refuses to let
you go now would have let you go then. But you pretended that you
preferred death to exile (compare Apol.), and that you were not unwilling
to die. And now you have forgotten these fine sentiments, and pay no
respect to us the laws, of whom you are the destroyer; and are doing what
only a miserable slave would do, running away and turning your back upon
the compacts and agreements which you made as a citizen. And first of all
answer this very question: Are we right in saying that you agreed to be
governed according to us in deed, and not in word only? Is that true or
not?' How shall we answer, Crito? Must we not assent?
CRITO: We cannot help it, Socrates.
SOCRATES: Then will they not say: 'You, Socrates, are breaking the
covenants and agreements which you made with us at your leisure, not in any
haste or under any compulsion or deception, but after you have had seventy
years to think of them, during which time you were at liberty to leave the
city, if we were not to your mind, or if our covenants appeared to you to
be unfair. You had your choice, and might have gone either to Lacedaemon
or Crete, both which states are often praised by you for their good
government, or to some other Hellenic or foreign state. Whereas you, above
all other Athenians, seemed to be so fond of the state, or, in other words,
of us her laws (and who would care about a state which has no laws?), that
you never stirred out of her; the halt, the blind, the maimed, were not
more stationary in her than you were. And now you run away and forsake
your agreements. Not so, Socrates, if you will take our advice; do not
make yourself ridiculous by escaping out of the city.
'For just consider, if you transgress and err in this sort of way, what
good will you do either to yourself or to your friends? That your friends
will be driven into exile and deprived of citizenship, or will lose their
property, is tolerably certain; and you yourself, if you fly to one of the
neighbouring cities, as, for example, Thebes or Megara, both of which are
well governed, will come to them as an enemy, Socrates, and their
government will be against you, and all patriotic citizens will cast an
evil eye upon you as a subverter of the laws, and you will confirm in the
minds of the judges the justice of their own condemnation of you. For he
who is a corrupter of the laws is more than likely to be a corrupter of the
young and foolish portion of mankind. Will you then flee from well-ordered
cities and virtuous men? and is existence worth having on these terms? Or
will you go to them without shame, and talk to them, Socrates? And what
will you say to them? What you say here about virtue and justice and
institutions and laws being the best things among men? Would that be
decent of you? Surely not. But if you go away from well-governed states
to Crito's friends in Thessaly, where there is great disorder and licence,
they will be charmed to hear the tale of your escape from prison, set off
with ludicrous particulars of the manner in which you were wrapped in a
goatskin or some other disguise, and metamorphosed as the manner is of
runaways; but will there be no one to remind you that in your old age you
were not ashamed to violate the most sacred laws from a miserable desire of
a little more life? Perhaps not, if you keep them in a good temper; but if
they are out of temper you will hear many degrading things; you will live,
but how?--as the flatterer of all men, and the servant of all men; and
doing what?--eating and drinking in Thessaly, having gone abroad in order
that you may get a dinner. And where will be your fine sentiments about
justice and virtue? Say that you wish to live for the sake of your
children--you want to bring them up and educate them--will you take them
into Thessaly and deprive them of Athenian citizenship? Is this the
benefit which you will confer upon them? Or are you under the impression
that they will be better cared for and educated here if you are still
alive, although absent from them; for your friends will take care of them?
Do you fancy that if you are an inhabitant of Thessaly they will take care
of them, and if you are an inhabitant of the other world that they will not
take care of them? Nay; but if they who call themselves friends are good
for anything, they will--to be sure they will.
'Listen, then, Socrates, to us who have brought you up. Think not of life
and children first, and of justice afterwards, but of justice first, that
you may be justified before the princes of the world below. For neither
will you nor any that belong to you be happier or holier or juster in this
life, or happier in another, if you do as Crito bids. Now you depart in
innocence, a sufferer and not a doer of evil; a victim, not of the laws,
but of men. But if you go forth, returning evil for evil, and injury for
injury, breaking the covenants and agreements which you have made with us,
and wronging those whom you ought least of all to wrong, that is to say,
yourself, your friends, your country, and us, we shall be angry with you
while you live, and our brethren, the laws in the world below, will receive
you as an enemy; for they will know that you have done your best to destroy
us. Listen, then, to us and not to Crito.'
This, dear Crito, is the voice which I seem to hear murmuring in my ears,
like the sound of the flute in the ears of the mystic; that voice, I say,
is humming in my ears, and prevents me from hearing any other. And I know
that anything more which you may say will be vain. Yet speak, if you have
anything to say.
CRITO: I have nothing to say, Socrates.
SOCRATES: Leave me then, Crito, to fulfil the will of God, and to follow
whither he leads.
This etext was prepared by Sue Asscher <[email protected]>
CRITO
by Plato
Translated by Benjamin Jowett
INTRODUCTION.
The Crito seems intended to exhibit the character of Socrates in one light
only, not as the philosopher, fulfilling a divine mission and trusting in
the will of heaven, but simply as the good citizen, who having been
unjustly condemned is willing to give up his life in obedience to the laws
of the state...
The days of Socrates are drawing to a close; the fatal ship has been seen
off Sunium, as he is informed by his aged friend and contemporary Crito,
who visits him before the dawn has broken; he himself has been warned in a
dream that on the third day he must depart. Time is precious, and Crito
has come early in order to gain his consent to a plan of escape. This can
be easily accomplished by his friends, who will incur no danger in making
the attempt to save him, but will be disgraced for ever if they allow him
to perish. He should think of his duty to his children, and not play into
the hands of his enemies. Money is already provided by Crito as well as by
Simmias and others, and he will have no difficulty in finding friends in
Thessaly and other places.
Socrates is afraid that Crito is but pressing upon him the opinions of the
many: whereas, all his life long he has followed the dictates of reason
only and the opinion of the one wise or skilled man. There was a time when
Crito himself had allowed the propriety of this. And although some one
will say 'the many can kill us,' that makes no difference; but a good life,
in other words, a just and honourable life, is alone to be valued. All
considerations of loss of reputation or injury to his children should be
dismissed: the only question is whether he would be right in attempting to
escape. Crito, who is a disinterested person not having the fear of death
before his eyes, shall answer this for him. Before he was condemned they
had often held discussions, in which they agreed that no man should either
do evil, or return evil for evil, or betray the right. Are these
principles to be altered because the circumstances of Socrates are altered?
Crito admits that they remain the same. Then is his escape consistent with
the maintenance of them? To this Crito is unable or unwilling to reply.
Socrates proceeds:--Suppose the Laws of Athens to come and remonstrate with
him: they will ask 'Why does he seek to overturn them?' and if he replies,
'they have injured him,' will not the Laws answer, 'Yes, but was that the
agreement? Has he any objection to make to them which would justify him in
overturning them? Was he not brought into the world and educated by their
help, and are they not his parents? He might have left Athens and gone
where he pleased, but he has lived there for seventy years more constantly
than any other citizen.' Thus he has clearly shown that he acknowledged
the agreement, which he cannot now break without dishonour to himself and
danger to his friends. Even in the course of the trial he might have
proposed exile as the penalty, but then he declared that he preferred death
to exile. And whither will he direct his footsteps? In any well-ordered
state the Laws will consider him as an enemy. Possibly in a land of
misrule like Thessaly he may be welcomed at first, and the unseemly
narrative of his escape will be regarded by the inhabitants as an amusing
tale. But if he offends them he will have to learn another sort of lesson.
Will he continue to give lectures in virtue? That would hardly be decent.
And how will his children be the gainers if he takes them into Thessaly,
and deprives them of Athenian citizenship? Or if he leaves them behind,
does he expect that they will be better taken care of by his friends
because he is in Thessaly? Will not true friends care for them equally
whether he is alive or dead?
Finally, they exhort him to think of justice first, and of life and
children afterwards. He may now depart in peace and innocence, a sufferer
and not a doer of evil. But if he breaks agreements, and returns evil for
evil, they will be angry with him while he lives; and their brethren the
Laws of the world below will receive him as an enemy. Such is the mystic
voice which is always murmuring in his ears.
That Socrates was not a good citizen was a charge made against him during
his lifetime, which has been often repeated in later ages. The crimes of
Alcibiades, Critias, and Charmides, who had been his pupils, were still
recent in the memory of the now restored democracy. The fact that he had
been neutral in the death-struggle of Athens was not likely to conciliate
popular good-will. Plato, writing probably in the next generation,
undertakes the defence of his friend and master in this particular, not to
the Athenians of his day, but to posterity and the world at large.
Whether such an incident ever really occurred as the visit of Crito and the
proposal of escape is uncertain: Plato could easily have invented far more
than that (Phaedr.); and in the selection of Crito, the aged friend, as the
fittest person to make the proposal to Socrates, we seem to recognize the
hand of the artist. Whether any one who has been subjected by the laws of
his country to an unjust judgment is right in attempting to escape, is a
thesis about which casuists might disagree. Shelley (Prose Works) is of
opinion that Socrates 'did well to die,' but not for the 'sophistical'
reasons which Plato has put into his mouth. And there would be no
difficulty in arguing that Socrates should have lived and preferred to a
glorious death the good which he might still be able to perform. 'A
rhetorician would have had much to say upon that point.' It may be
observed however that Plato never intended to answer the question of
casuistry, but only to exhibit the ideal of patient virtue which refuses to
do the least evil in order to avoid the greatest, and to show his master
maintaining in death the opinions which he had professed in his life. Not
'the world,' but the 'one wise man,' is still the paradox of Socrates in
his last hours. He must be guided by reason, although her conclusions may
be fatal to him. The remarkable sentiment that the wicked can do neither
good nor evil is true, if taken in the sense, which he means, of moral
evil; in his own words, 'they cannot make a man wise or foolish.'
This little dialogue is a perfect piece of dialectic, in which granting the
'common principle,' there is no escaping from the conclusion. It is
anticipated at the beginning by the dream of Socrates and the parody of
Homer. The personification of the Laws, and of their brethren the Laws in
the world below, is one of the noblest and boldest figures of speech which
occur in Plato.
CRITO
by
Plato
Translated by Benjamin Jowett
PERSONS OF THE DIALOGUE: Socrates, Crito.
SCENE: The Prison of Socrates.
SOCRATES: Why have you come at this hour, Crito? it must be quite early.
CRITO: Yes, certainly.
SOCRATES: What is the exact time?
CRITO: The dawn is breaking.
SOCRATES: I wonder that the keeper of the prison would let you in.
CRITO: He knows me because I often come, Socrates; moreover, I have done
him a kindness.
SOCRATES: And are you only just arrived?
CRITO: No, I came some time ago.
SOCRATES: Then why did you sit and say nothing, instead of at once
awakening me?
CRITO: I should not have liked myself, Socrates, to be in such great
trouble and unrest as you are--indeed I should not: I have been watching
with amazement your peaceful slumbers; and for that reason I did not awake
you, because I wished to minimize the pain. I have always thought you to
be of a happy disposition; but never did I see anything like the easy,
tranquil manner in which you bear this calamity.
SOCRATES: Why, Crito, when a man has reached my age he ought not to be
repining at the approach of death.
CRITO: And yet other old men find themselves in similar misfortunes, and
age does not prevent them from repining.
SOCRATES: That is true. But you have not told me why you come at this
early hour.
CRITO: I come to bring you a message which is sad and painful; not, as I
believe, to yourself, but to all of us who are your friends, and saddest of
all to me.
SOCRATES: What? Has the ship come from Delos, on the arrival of which I
am to die?
CRITO: No, the ship has not actually arrived, but she will probably be
here to-day, as persons who have come from Sunium tell me that they have
left her there; and therefore to-morrow, Socrates, will be the last day of
your life.
SOCRATES: Very well, Crito; if such is the will of God, I am willing; but
my belief is that there will be a delay of a day.
CRITO: Why do you think so?
SOCRATES: I will tell you. I am to die on the day after the arrival of
the ship?
CRITO: Yes; that is what the authorities say.
SOCRATES: But I do not think that the ship will be here until to-morrow;
this I infer from a vision which I had last night, or rather only just now,
when you fortunately allowed me to sleep.
CRITO: And what was the nature of the vision?
SOCRATES: There appeared to me the likeness of a woman, fair and comely,
clothed in bright raiment, who called to me and said: O Socrates,
'The third day hence to fertile Phthia shalt thou go.' (Homer, Il.)
CRITO: What a singular dream, Socrates!
SOCRATES: There can be no doubt about the meaning, Crito, I think.
CRITO: Yes; the meaning is only too clear. But, oh! my beloved Socrates,
let me entreat you once more to take my advice and escape. For if you die
I shall not only lose a friend who can never be replaced, but there is
another evil: people who do not know you and me will believe that I might
have saved you if I had been willing to give money, but that I did not
care. Now, can there be a worse disgrace than this--that I should be
thought to value money more than the life of a friend? For the many will
not be persuaded that I wanted you to escape, and that you refused.
SOCRATES: But why, my dear Crito, should we care about the opinion of the
many? Good men, and they are the only persons who are worth considering,
will think of these things truly as they occurred.
CRITO: But you see, Socrates, that the opinion of the many must be
regarded, for what is now happening shows that they can do the greatest
evil to any one who has lost their good opinion.
SOCRATES: I only wish it were so, Crito; and that the many could do the
greatest evil; for then they would also be able to do the greatest good--
and what a fine thing this would be! But in reality they can do neither;
for they cannot make a man either wise or foolish; and whatever they do is
the result of chance.
CRITO: Well, I will not dispute with you; but please to tell me, Socrates,
whether you are not acting out of regard to me and your other friends: are
you not afraid that if you escape from prison we may get into trouble with
the informers for having stolen you away, and lose either the whole or a
great part of our property; or that even a worse evil may happen to us?
Now, if you fear on our account, be at ease; for in order to save you, we
ought surely to run this, or even a greater risk; be persuaded, then, and
do as I say.
SOCRATES: Yes, Crito, that is one fear which you mention, but by no means
the only one.
CRITO: Fear not--there are persons who are willing to get you out of
prison at no great cost; and as for the informers they are far from being
exorbitant in their demands--a little money will satisfy them. My means,
which are certainly ample, are at your service, and if you have a scruple
about spending all mine, here are strangers who will give you the use of
theirs; and one of them, Simmias the Theban, has brought a large sum of
money for this very purpose; and Cebes and many others are prepared to
spend their money in helping you to escape. I say, therefore, do not
hesitate on our account, and do not say, as you did in the court (compare
Apol.), that you will have a difficulty in knowing what to do with yourself
anywhere else. For men will love you in other places to which you may go,
and not in Athens only; there are friends of mine in Thessaly, if you like
to go to them, who will value and protect you, and no Thessalian will give
you any trouble. Nor can I think that you are at all justified, Socrates,
in betraying your own life when you might be saved; in acting thus you are
playing into the hands of your enemies, who are hurrying on your
destruction. And further I should say that you are deserting your own
children; for you might bring them up and educate them; instead of which
you go away and leave them, and they will have to take their chance; and if
they do not meet with the usual fate of orphans, there will be small thanks
to you. No man should bring children into the world who is unwilling to
persevere to the end in their nurture and education. But you appear to be
choosing the easier part, not the better and manlier, which would have been
more becoming in one who professes to care for virtue in all his actions,
like yourself. And indeed, I am ashamed not only of you, but of us who are
your friends, when I reflect that the whole business will be attributed
entirely to our want of courage. The trial need never have come on, or
might have been managed differently; and this last act, or crowning folly,
will seem to have occurred through our negligence and cowardice, who might
have saved you, if we had been good for anything; and you might have saved
yourself, for there was no difficulty at all. See now, Socrates, how sad
and discreditable are the consequences, both to us and you. Make up your
mind then, or rather have your mind already made up, for the time of
deliberation is over, and there is only one thing to be done, which must be
done this very night, and if we delay at all will be no longer practicable
or possible; I beseech you therefore, Socrates, be persuaded by me, and do
as I say.
SOCRATES: Dear Crito, your zeal is invaluable, if a right one; but if
wrong, the greater the zeal the greater the danger; and therefore we ought
to consider whether I shall or shall not do as you say. For I am and
always have been one of those natures who must be guided by reason,
whatever the reason may be which upon reflection appears to me to be the
best; and now that this chance has befallen me, I cannot repudiate my own
words: the principles which I have hitherto honoured and revered I still
honour, and unless we can at once find other and better principles, I am
certain not to agree with you; no, not even if the power of the multitude
could inflict many more imprisonments, confiscations, deaths, frightening
us like children with hobgoblin terrors (compare Apol.). What will be the
fairest way of considering the question? Shall I return to your old
argument about the opinions of men?--we were saying that some of them are
to be regarded, and others not. Now were we right in maintaining this
before I was condemned? And has the argument which was once good now
proved to be talk for the sake of talking--mere childish nonsense? That is
what I want to consider with your help, Crito:--whether, under my present
circumstances, the argument appears to be in any way different or not; and
is to be allowed by me or disallowed. That argument, which, as I believe,
is maintained by many persons of authority, was to the effect, as I was
saying, that the opinions of some men are to be regarded, and of other men
not to be regarded. Now you, Crito, are not going to die to-morrow--at
least, there is no human probability of this, and therefore you are
disinterested and not liable to be deceived by the circumstances in which
you are placed. Tell me then, whether I am right in saying that some
opinions, and the opinions of some men only, are to be valued, and that
other opinions, and the opinions of other men, are not to be valued. I ask
you whether I was right in maintaining this?
CRITO: Certainly.
SOCRATES: The good are to be regarded, and not the bad?
CRITO: Yes.
SOCRATES: And the opinions of the wise are good, and the opinions of the
unwise are evil?
CRITO: Certainly.
SOCRATES: And what was said about another matter? Is the pupil who
devotes himself to the practice of gymnastics supposed to attend to the
praise and blame and opinion of every man, or of one man only--his
physician or trainer, whoever he may be?
CRITO: Of one man only.
SOCRATES: And he ought to fear the censure and welcome the praise of that
one only, and not of the many?
CRITO: Clearly so.
SOCRATES: And he ought to act and train, and eat and drink in the way
which seems good to his single master who has understanding, rather than
according to the opinion of all other men put together?
CRITO: True.
SOCRATES: And if he disobeys and disregards the opinion and approval of
the one, and regards the opinion of the many who have no understanding,
will he not suffer evil?
CRITO: Certainly he will.
SOCRATES: And what will the evil be, whither tending and what affecting,
in the disobedient person?
CRITO: Clearly, affecting the body; that is what is destroyed by the evil.
SOCRATES: Very good; and is not this true, Crito, of other things which we
need not separately enumerate? In questions of just and unjust, fair and
foul, good and evil, which are the subjects of our present consultation,
ought we to follow the opinion of the many and to fear them; or the opinion
of the one man who has understanding? ought we not to fear and reverence
him more than all the rest of the world: and if we desert him shall we not
destroy and injure that principle in us which may be assumed to be improved
by justice and deteriorated by injustice;--there is such a principle?
CRITO: Certainly there is, Socrates.
SOCRATES: Take a parallel instance:--if, acting under the advice of those
who have no understanding, we destroy that which is improved by health and
is deteriorated by disease, would life be worth having? And that which has
been destroyed is--the body?
CRITO: Yes.
SOCRATES: Could we live, having an evil and corrupted body?
CRITO: Certainly not.
SOCRATES: And will life be worth having, if that higher part of man be
destroyed, which is improved by justice and depraved by injustice? Do we
suppose that principle, whatever it may be in man, which has to do with
justice and injustice, to be inferior to the body?
CRITO: Certainly not.
SOCRATES: More honourable than the body?
CRITO: Far more.
SOCRATES: Then, my friend, we must not regard what the many say of us:
but what he, the one man who has understanding of just and unjust, will
say, and what the truth will say. And therefore you begin in error when
you advise that we should regard the opinion of the many about just and
unjust, good and evil, honorable and dishonorable.--'Well,' some one will
say, 'but the many can kill us.'
CRITO: Yes, Socrates; that will clearly be the answer.
SOCRATES: And it is true; but still I find with surprise that the old
argument is unshaken as ever. And I should like to know whether I may say
the same of another proposition--that not life, but a good life, is to be
chiefly valued?
CRITO: Yes, that also remains unshaken.
SOCRATES: And a good life is equivalent to a just and honorable one--that
holds also?
CRITO: Yes, it does.
SOCRATES: From these premisses I proceed to argue the question whether I
ought or ought not to try and escape without the consent of the Athenians:
and if I am clearly right in escaping, then I will make the attempt; but if
not, I will abstain. The other considerations which you mention, of money
and loss of character and the duty of educating one's children, are, I
fear, only the doctrines of the multitude, who would be as ready to restore
people to life, if they were able, as they are to put them to death--and
with as little reason. But now, since the argument has thus far prevailed,
the only question which remains to be considered is, whether we shall do
rightly either in escaping or in suffering others to aid in our escape and
paying them in money and thanks, or whether in reality we shall not do
rightly; and if the latter, then death or any other calamity which may
ensue on my remaining here must not be allowed to enter into the
calculation.
CRITO: I think that you are right, Socrates; how then shall we proceed?
SOCRATES: Let us consider the matter together, and do you either refute me
if you can, and I will be convinced; or else cease, my dear friend, from
repeating to me that I ought to escape against the wishes of the Athenians:
for I highly value your attempts to persuade me to do so, but I may not be
persuaded against my own better judgment. And now please to consider my
first position, and try how you can best answer me.
CRITO: I will.
SOCRATES: Are we to say that we are never intentionally to do wrong, or
that in one way we ought and in another way we ought not to do wrong, or is
doing wrong always evil and dishonorable, as I was just now saying, and as
has been already acknowledged by us? Are all our former admissions which
were made within a few days to be thrown away? And have we, at our age,
been earnestly discoursing with one another all our life long only to
discover that we are no better than children? Or, in spite of the opinion
of the many, and in spite of consequences whether better or worse, shall we
insist on the truth of what was then said, that injustice is always an evil
and dishonour to him who acts unjustly? Shall we say so or not?
CRITO: Yes.
SOCRATES: Then we must do no wrong?
CRITO: Certainly not.
SOCRATES: Nor when injured injure in return, as the many imagine; for we
must injure no one at all? (E.g. compare Rep.)
CRITO: Clearly not.
SOCRATES: Again, Crito, may we do evil?
CRITO: Surely not, Socrates.
SOCRATES: And what of doing evil in return for evil, which is the morality
of the many--is that just or not?
CRITO: Not just.
SOCRATES: For doing evil to another is the same as injuring him?
CRITO: Very true.
SOCRATES: Then we ought not to retaliate or render evil for evil to any
one, whatever evil we may have suffered from him. But I would have you
consider, Crito, whether you really mean what you are saying. For this
opinion has never been held, and never will be held, by any considerable
number of persons; and those who are agreed and those who are not agreed
upon this point have no common ground, and can only despise one another
when they see how widely they differ. Tell me, then, whether you agree
with and assent to my first principle, that neither injury nor retaliation
nor warding off evil by evil is ever right. And shall that be the premiss
of our argument? Or do you decline and dissent from this? For so I have
ever thought, and continue to think; but, if you are of another opinion,
let me hear what you have to say. If, however, you remain of the same mind
as formerly, I will proceed to the next step.
CRITO: You may proceed, for I have not changed my mind.
SOCRATES: Then I will go on to the next point, which may be put in the
form of a question:--Ought a man to do what he admits to be right, or ought
he to betray the right?
CRITO: He ought to do what he thinks right.
SOCRATES: But if this is true, what is the application? In leaving the
prison against the will of the Athenians, do I wrong any? or rather do I
not wrong those whom I ought least to wrong? Do I not desert the
principles which were acknowledged by us to be just--what do you say?
CRITO: I cannot tell, Socrates, for I do not know.
SOCRATES: Then consider the matter in this way:--Imagine that I am about
to play truant (you may call the proceeding by any name which you like),
and the laws and the government come and interrogate me: 'Tell us,
Socrates,' they say; 'what are you about? are you not going by an act of
yours to overturn us--the laws, and the whole state, as far as in you lies?
Do you imagine that a state can subsist and not be overthrown, in which the
decisions of law have no power, but are set aside and trampled upon by
individuals?' What will be our answer, Crito, to these and the like words?
Any one, and especially a rhetorician, will have a good deal to say on
behalf of the law which requires a sentence to be carried out. He will
argue that this law should not be set aside; and shall we reply, 'Yes; but
the state has injured us and given an unjust sentence.' Suppose I say
that?
CRITO: Very good, Socrates.
SOCRATES: 'And was that our agreement with you?' the law would answer; 'or
were you to abide by the sentence of the state?' And if I were to express
my astonishment at their words, the law would probably add: 'Answer,
Socrates, instead of opening your eyes--you are in the habit of asking and
answering questions. Tell us,--What complaint have you to make against us
which justifies you in attempting to destroy us and the state? In the
first place did we not bring you into existence? Your father married your
mother by our aid and begat you. Say whether you have any objection to
urge against those of us who regulate marriage?' None, I should reply.
'Or against those of us who after birth regulate the nurture and education
of children, in which you also were trained? Were not the laws, which have
the charge of education, right in commanding your father to train you in
music and gymnastic?' Right, I should reply. 'Well then, since you were
brought into the world and nurtured and educated by us, can you deny in the
first place that you are our child and slave, as your fathers were before
you? And if this is true you are not on equal terms with us; nor can you
think that you have a right to do to us what we are doing to you. Would
you have any right to strike or revile or do any other evil to your father
or your master, if you had one, because you have been struck or reviled by
him, or received some other evil at his hands?--you would not say this?
And because we think right to destroy you, do you think that you have any
right to destroy us in return, and your country as far as in you lies?
Will you, O professor of true virtue, pretend that you are justified in
this? Has a philosopher like you failed to discover that our country is
more to be valued and higher and holier far than mother or father or any
ancestor, and more to be regarded in the eyes of the gods and of men of
understanding? also to be soothed, and gently and reverently entreated when
angry, even more than a father, and either to be persuaded, or if not
persuaded, to be obeyed? And when we are punished by her, whether with
imprisonment or stripes, the punishment is to be endured in silence; and if
she lead us to wounds or death in battle, thither we follow as is right;
neither may any one yield or retreat or leave his rank, but whether in
battle or in a court of law, or in any other place, he must do what his
city and his country order him; or he must change their view of what is
just: and if he may do no violence to his father or mother, much less may
he do violence to his country.' What answer shall we make to this, Crito?
Do the laws speak truly, or do they not?
CRITO: I think that they do.
SOCRATES: Then the laws will say: 'Consider, Socrates, if we are speaking
truly that in your present attempt you are going to do us an injury. For,
having brought you into the world, and nurtured and educated you, and given
you and every other citizen a share in every good which we had to give, we
further proclaim to any Athenian by the liberty which we allow him, that if
he does not like us when he has become of age and has seen the ways of the
city, and made our acquaintance, he may go where he pleases and take his
goods with him. None of us laws will forbid him or interfere with him.
Any one who does not like us and the city, and who wants to emigrate to a
colony or to any other city, may go where he likes, retaining his property.
But he who has experience of the manner in which we order justice and
administer the state, and still remains, has entered into an implied
contract that he will do as we command him. And he who disobeys us is, as
we maintain, thrice wrong: first, because in disobeying us he is
disobeying his parents; secondly, because we are the authors of his
education; thirdly, because he has made an agreement with us that he will
duly obey our commands; and he neither obeys them nor convinces us that our
commands are unjust; and we do not rudely impose them, but give him the
alternative of obeying or convincing us;--that is what we offer, and he
does neither.
'These are the sort of accusations to which, as we were saying, you,
Socrates, will be exposed if you accomplish your intentions; you, above all
other Athenians.' Suppose now I ask, why I rather than anybody else? they
will justly retort upon me that I above all other men have acknowledged the
agreement. 'There is clear proof,' they will say, 'Socrates, that we and
the city were not displeasing to you. Of all Athenians you have been the
most constant resident in the city, which, as you never leave, you may be
supposed to love (compare Phaedr.). For you never went out of the city
either to see the games, except once when you went to the Isthmus, or to
any other place unless when you were on military service; nor did you
travel as other men do. Nor had you any curiosity to know other states or
their laws: your affections did not go beyond us and our state; we were
your especial favourites, and you acquiesced in our government of you; and
here in this city you begat your children, which is a proof of your
satisfaction. Moreover, you might in the course of the trial, if you had
liked, have fixed the penalty at banishment; the state which refuses to let
you go now would have let you go then. But you pretended that you
preferred death to exile (compare Apol.), and that you were not unwilling
to die. And now you have forgotten these fine sentiments, and pay no
respect to us the laws, of whom you are the destroyer; and are doing what
only a miserable slave would do, running away and turning your back upon
the compacts and agreements which you made as a citizen. And first of all
answer this very question: Are we right in saying that you agreed to be
governed according to us in deed, and not in word only? Is that true or
not?' How shall we answer, Crito? Must we not assent?
CRITO: We cannot help it, Socrates.
SOCRATES: Then will they not say: 'You, Socrates, are breaking the
covenants and agreements which you made with us at your leisure, not in any
haste or under any compulsion or deception, but after you have had seventy
years to think of them, during which time you were at liberty to leave the
city, if we were not to your mind, or if our covenants appeared to you to
be unfair. You had your choice, and might have gone either to Lacedaemon
or Crete, both which states are often praised by you for their good
government, or to some other Hellenic or foreign state. Whereas you, above
all other Athenians, seemed to be so fond of the state, or, in other words,
of us her laws (and who would care about a state which has no laws?), that
you never stirred out of her; the halt, the blind, the maimed, were not
more stationary in her than you were. And now you run away and forsake
your agreements. Not so, Socrates, if you will take our advice; do not
make yourself ridiculous by escaping out of the city.
'For just consider, if you transgress and err in this sort of way, what
good will you do either to yourself or to your friends? That your friends
will be driven into exile and deprived of citizenship, or will lose their
property, is tolerably certain; and you yourself, if you fly to one of the
neighbouring cities, as, for example, Thebes or Megara, both of which are
well governed, will come to them as an enemy, Socrates, and their
government will be against you, and all patriotic citizens will cast an
evil eye upon you as a subverter of the laws, and you will confirm in the
minds of the judges the justice of their own condemnation of you. For he
who is a corrupter of the laws is more than likely to be a corrupter of the
young and foolish portion of mankind. Will you then flee from well-ordered
cities and virtuous men? and is existence worth having on these terms? Or
will you go to them without shame, and talk to them, Socrates? And what
will you say to them? What you say here about virtue and justice and
institutions and laws being the best things among men? Would that be
decent of you? Surely not. But if you go away from well-governed states
to Crito's friends in Thessaly, where there is great disorder and licence,
they will be charmed to hear the tale of your escape from prison, set off
with ludicrous particulars of the manner in which you were wrapped in a
goatskin or some other disguise, and metamorphosed as the manner is of
runaways; but will there be no one to remind you that in your old age you
were not ashamed to violate the most sacred laws from a miserable desire of
a little more life? Perhaps not, if you keep them in a good temper; but if
they are out of temper you will hear many degrading things; you will live,
but how?--as the flatterer of all men, and the servant of all men; and
doing what?--eating and drinking in Thessaly, having gone abroad in order
that you may get a dinner. And where will be your fine sentiments about
justice and virtue? Say that you wish to live for the sake of your
children--you want to bring them up and educate them--will you take them
into Thessaly and deprive them of Athenian citizenship? Is this the
benefit which you will confer upon them? Or are you under the impression
that they will be better cared for and educated here if you are still
alive, although absent from them; for your friends will take care of them?
Do you fancy that if you are an inhabitant of Thessaly they will take care
of them, and if you are an inhabitant of the other world that they will not
take care of them? Nay; but if they who call themselves friends are good
for anything, they will--to be sure they will.
'Listen, then, Socrates, to us who have brought you up. Think not of life
and children first, and of justice afterwards, but of justice first, that
you may be justified before the princes of the world below. For neither
will you nor any that belong to you be happier or holier or juster in this
life, or happier in another, if you do as Crito bids. Now you depart in
innocence, a sufferer and not a doer of evil; a victim, not of the laws,
but of men. But if you go forth, returning evil for evil, and injury for
injury, breaking the covenants and agreements which you have made with us,
and wronging those whom you ought least of all to wrong, that is to say,
yourself, your friends, your country, and us, we shall be angry with you
while you live, and our brethren, the laws in the world below, will receive
you as an enemy; for they will know that you have done your best to destroy
us. Listen, then, to us and not to Crito.'
This, dear Crito, is the voice which I seem to hear murmuring in my ears,
like the sound of the flute in the ears of the mystic; that voice, I say,
is humming in my ears, and prevents me from hearing any other. And I know
that anything more which you may say will be vain. Yet speak, if you have
anything to say.
CRITO: I have nothing to say, Socrates.
SOCRATES: Leave me then, Crito, to fulfil the will of God, and to follow
whither he leads.
Question: Where does this story take place?
Answer: In Socrates' prison cell in Athens.
The Witch of Atlas
by
Percy Bysshe Shelley
TO MARY
(ON HER OBJECTING TO THE FOLLOWING POEM, UPON THE
SCORE OF ITS CONTAINING NO HUMAN INTEREST).
1.
How, my dear Mary,--are you critic-bitten
(For vipers kill, though dead) by some review,
That you condemn these verses I have written,
Because they tell no story, false or true?
What, though no mice are caught by a young kitten, _5
May it not leap and play as grown cats do,
Till its claws come? Prithee, for this one time,
Content thee with a visionary rhyme.
2.
What hand would crush the silken-winged fly,
The youngest of inconstant April's minions, _10
Because it cannot climb the purest sky,
Where the swan sings, amid the sun's dominions?
Not thine. Thou knowest 'tis its doom to die,
When Day shall hide within her twilight pinions
The lucent eyes, and the eternal smile, _15
Serene as thine, which lent it life awhile.
3.
To thy fair feet a winged Vision came,
Whose date should have been longer than a day,
And o'er thy head did beat its wings for fame,
And in thy sight its fading plumes display; _20
The watery bow burned in the evening flame.
But the shower fell, the swift Sun went his way--
And that is dead.--O, let me not believe
That anything of mine is fit to live!
4.
Wordsworth informs us he was nineteen years _25
Considering and retouching Peter Bell;
Watering his laurels with the killing tears
Of slow, dull care, so that their roots to Hell
Might pierce, and their wide branches blot the spheres
Of Heaven, with dewy leaves and flowers; this well _30
May be, for Heaven and Earth conspire to foil
The over-busy gardener's blundering toil.
5.
My Witch indeed is not so sweet a creature
As Ruth or Lucy, whom his graceful praise
Clothes for our grandsons--but she matches Peter, _35
Though he took nineteen years, and she three days
In dressing. Light the vest of flowing metre
She wears; he, proud as dandy with his stays,
Has hung upon his wiry limbs a dress
Like King Lear's 'looped and windowed raggedness.' _40
6.
If you strip Peter, you will see a fellow
Scorched by Hell's hyperequatorial climate
Into a kind of a sulphureous yellow:
A lean mark, hardly fit to fling a rhyme at;
In shape a Scaramouch, in hue Othello. _45
If you unveil my Witch, no priest nor primate
Can shrive you of that sin,--if sin there be
In love, when it becomes idolatry.
THE WITCH OF ATLAS.
1.
Before those cruel Twins, whom at one birth
Incestuous Change bore to her father Time, _50
Error and Truth, had hunted from the Earth
All those bright natures which adorned its prime,
And left us nothing to believe in, worth
The pains of putting into learned rhyme,
A lady-witch there lived on Atlas' mountain _55
Within a cavern, by a secret fountain.
2.
Her mother was one of the Atlantides:
The all-beholding Sun had ne'er beholden
In his wide voyage o'er continents and seas
So fair a creature, as she lay enfolden _60
In the warm shadow of her loveliness;--
He kissed her with his beams, and made all golden
The chamber of gray rock in which she lay--
She, in that dream of joy, dissolved away.
3.
'Tis said, she first was changed into a vapour, _65
And then into a cloud, such clouds as flit,
Like splendour-winged moths about a taper,
Round the red west when the sun dies in it:
And then into a meteor, such as caper
On hill-tops when the moon is in a fit: _70
Then, into one of those mysterious stars
Which hide themselves between the Earth and Mars.
4.
Ten times the Mother of the Months had bent
Her bow beside the folding-star, and bidden
With that bright sign the billows to indent _75
The sea-deserted sand--like children chidden,
At her command they ever came and went--
Since in that cave a dewy splendour hidden
Took shape and motion: with the living form
Of this embodied Power, the cave grew warm. _80
5.
A lovely lady garmented in light
From her own beauty--deep her eyes, as are
Two openings of unfathomable night
Seen through a Temple's cloven roof--her hair
Dark--the dim brain whirls dizzy with delight, _85
Picturing her form; her soft smiles shone afar,
And her low voice was heard like love, and drew
All living things towards this wonder new.
6.
And first the spotted cameleopard came,
And then the wise and fearless elephant; _90
Then the sly serpent, in the golden flame
Of his own volumes intervolved;--all gaunt
And sanguine beasts her gentle looks made tame.
They drank before her at her sacred fount;
And every beast of beating heart grew bold, _95
Such gentleness and power even to behold.
7.
The brinded lioness led forth her young,
That she might teach them how they should forego
Their inborn thirst of death; the pard unstrung
His sinews at her feet, and sought to know _100
With looks whose motions spoke without a tongue
How he might be as gentle as the doe.
The magic circle of her voice and eyes
All savage natures did imparadise.
8.
And old Silenus, shaking a green stick _105
Of lilies, and the wood-gods in a crew
Came, blithe, as in the olive copses thick
Cicadae are, drunk with the noonday dew:
And Dryope and Faunus followed quick,
Teasing the God to sing them something new; _110
Till in this cave they found the lady lone,
Sitting upon a seat of emerald stone.
9.
And universal Pan, 'tis said, was there,
And though none saw him,--through the adamant
Of the deep mountains, through the trackless air, _115
And through those living spirits, like a want,
He passed out of his everlasting lair
Where the quick heart of the great world doth pant,
And felt that wondrous lady all alone,--
And she felt him, upon her emerald throne. _120
10.
And every nymph of stream and spreading tree,
And every shepherdess of Ocean's flocks,
Who drives her white waves over the green sea,
And Ocean with the brine on his gray locks,
And quaint Priapus with his company, _125
All came, much wondering how the enwombed rocks
Could have brought forth so beautiful a birth;--
Her love subdued their wonder and their mirth.
11.
The herdsmen and the mountain maidens came,
And the rude kings of pastoral Garamant-- _130
Their spirits shook within them, as a flame
Stirred by the air under a cavern gaunt:
Pigmies, and Polyphemes, by many a name,
Centaurs, and Satyrs, and such shapes as haunt
Wet clefts,--and lumps neither alive nor dead, _135
Dog-headed, bosom-eyed, and bird-footed.
12.
For she was beautiful--her beauty made
The bright world dim, and everything beside
Seemed like the fleeting image of a shade:
No thought of living spirit could abide, _140
Which to her looks had ever been betrayed,
On any object in the world so wide,
On any hope within the circling skies,
But on her form, and in her inmost eyes.
13.
Which when the lady knew, she took her spindle _145
And twined three threads of fleecy mist, and three
Long lines of light, such as the dawn may kindle
The clouds and waves and mountains with; and she
As many star-beams, ere their lamps could dwindle
In the belated moon, wound skilfully; _150
And with these threads a subtle veil she wove--
A shadow for the splendour of her love.
14.
The deep recesses of her odorous dwelling
Were stored with magic treasures--sounds of air,
Which had the power all spirits of compelling, _155
Folded in cells of crystal silence there;
Such as we hear in youth, and think the feeling
Will never die--yet ere we are aware,
The feeling and the sound are fled and gone,
And the regret they leave remains alone. _160
15.
And there lay Visions swift, and sweet, and quaint,
Each in its thin sheath, like a chrysalis,
Some eager to burst forth, some weak and faint
With the soft burthen of intensest bliss.
It was its work to bear to many a saint _165
Whose heart adores the shrine which holiest is,
Even Love's:--and others white, green, gray, and black,
And of all shapes--and each was at her beck.
16.
And odours in a kind of aviary
Of ever-blooming Eden-trees she kept, _170
Clipped in a floating net, a love-sick Fairy
Had woven from dew-beams while the moon yet slept;
As bats at the wired window of a dairy,
They beat their vans; and each was an adept,
When loosed and missioned, making wings of winds, _175
To stir sweet thoughts or sad, in destined minds.
17.
And liquors clear and sweet, whose healthful might
Could medicine the sick soul to happy sleep,
And change eternal death into a night
Of glorious dreams--or if eyes needs must weep, _180
Could make their tears all wonder and delight,
She in her crystal vials did closely keep:
If men could drink of those clear vials, 'tis said
The living were not envied of the dead.
18.
Her cave was stored with scrolls of strange device, _185
The works of some Saturnian Archimage,
Which taught the expiations at whose price
Men from the Gods might win that happy age
Too lightly lost, redeeming native vice;
And which might quench the Earth-consuming rage _190
Of gold and blood--till men should live and move
Harmonious as the sacred stars above;
19.
And how all things that seem untameable,
Not to be checked and not to be confined,
Obey the spells of Wisdom's wizard skill; _195
Time, earth, and fire--the ocean and the wind,
And all their shapes--and man's imperial will;
And other scrolls whose writings did unbind
The inmost lore of Love--let the profane
Tremble to ask what secrets they contain. _200
20.
And wondrous works of substances unknown,
To which the enchantment of her father's power
Had changed those ragged blocks of savage stone,
Were heaped in the recesses of her bower;
Carved lamps and chalices, and vials which shone _205
In their own golden beams--each like a flower,
Out of whose depth a fire-fly shakes his light
Under a cypress in a starless night.
21.
At first she lived alone in this wild home,
And her own thoughts were each a minister, _210
Clothing themselves, or with the ocean foam,
Or with the wind, or with the speed of fire,
To work whatever purposes might come
Into her mind; such power her mighty Sire
Had girt them with, whether to fly or run, _215
Through all the regions which he shines upon.
22.
The Ocean-nymphs and Hamadryades,
Oreads and Naiads, with long weedy locks,
Offered to do her bidding through the seas,
Under the earth, and in the hollow rocks, _220
And far beneath the matted roots of trees,
And in the gnarled heart of stubborn oaks,
So they might live for ever in the light
Of her sweet presence--each a satellite.
23.
'This may not be,' the wizard maid replied; _225
'The fountains where the Naiades bedew
Their shining hair, at length are drained and dried;
The solid oaks forget their strength, and strew
Their latest leaf upon the mountains wide;
The boundless ocean like a drop of dew _230
Will be consumed--the stubborn centre must
Be scattered, like a cloud of summer dust.
24.
'And ye with them will perish, one by one;--
If I must sigh to think that this shall be,
If I must weep when the surviving Sun _235
Shall smile on your decay--oh, ask not me
To love you till your little race is run;
I cannot die as ye must--over me
Your leaves shall glance--the streams in which ye dwell
Shall be my paths henceforth, and so--farewell!'-- _240
25.
She spoke and wept:--the dark and azure well
Sparkled beneath the shower of her bright tears,
And every little circlet where they fell
Flung to the cavern-roof inconstant spheres
And intertangled lines of light:--a knell _245
Of sobbing voices came upon her ears
From those departing Forms, o'er the serene
Of the white streams and of the forest green.
26.
All day the wizard lady sate aloof,
Spelling out scrolls of dread antiquity, _250
Under the cavern's fountain-lighted roof;
Or broidering the pictured poesy
Of some high tale upon her growing woof,
Which the sweet splendour of her smiles could dye
In hues outshining heaven--and ever she _255
Added some grace to the wrought poesy.
27.
While on her hearth lay blazing many a piece
Of sandal wood, rare gums, and cinnamon;
Men scarcely know how beautiful fire is--
Each flame of it is as a precious stone _260
Dissolved in ever-moving light, and this
Belongs to each and all who gaze upon.
The Witch beheld it not, for in her hand
She held a woof that dimmed the burning brand.
28.
This lady never slept, but lay in trance _265
All night within the fountain--as in sleep.
Its emerald crags glowed in her beauty's glance;
Through the green splendour of the water deep
She saw the constellations reel and dance
Like fire-flies--and withal did ever keep _270
The tenour of her contemplations calm,
With open eyes, closed feet, and folded palm.
29.
And when the whirlwinds and the clouds descended
From the white pinnacles of that cold hill,
She passed at dewfall to a space extended, _275
Where in a lawn of flowering asphodel
Amid a wood of pines and cedars blended,
There yawned an inextinguishable well
Of crimson fire--full even to the brim,
And overflowing all the margin trim. _280
30.
Within the which she lay when the fierce war
Of wintry winds shook that innocuous liquor
In many a mimic moon and bearded star
O'er woods and lawns;--the serpent heard it flicker
In sleep, and dreaming still, he crept afar-- _285
And when the windless snow descended thicker
Than autumn leaves, she watched it as it came
Melt on the surface of the level flame.
31.
She had a boat, which some say Vulcan wrought
For Venus, as the chariot of her star; _290
But it was found too feeble to be fraught
With all the ardours in that sphere which are,
And so she sold it, and Apollo bought
And gave it to this daughter: from a car
Changed to the fairest and the lightest boat _295
Which ever upon mortal stream did float.
32.
And others say, that, when but three hours old,
The first-born Love out of his cradle lept,
And clove dun Chaos with his wings of gold,
And like a horticultural adept, _300
Stole a strange seed, and wrapped it up in mould,
And sowed it in his mother's star, and kept
Watering it all the summer with sweet dew,
And with his wings fanning it as it grew.
33.
The plant grew strong and green, the snowy flower _305
Fell, and the long and gourd-like fruit began
To turn the light and dew by inward power
To its own substance; woven tracery ran
Of light firm texture, ribbed and branching, o'er
The solid rind, like a leaf's veined fan-- _310
Of which Love scooped this boat--and with soft motion
Piloted it round the circumfluous ocean.
34.
This boat she moored upon her fount, and lit
A living spirit within all its frame,
Breathing the soul of swiftness into it. _315
Couched on the fountain like a panther tame,
One of the twain at Evan's feet that sit--
Or as on Vesta's sceptre a swift flame--
Or on blind Homer's heart a winged thought,--
In joyous expectation lay the boat. _320
35.
Then by strange art she kneaded fire and snow
Together, tempering the repugnant mass
With liquid love--all things together grow
Through which the harmony of love can pass;
And a fair Shape out of her hands did flow-- _325
A living Image, which did far surpass
In beauty that bright shape of vital stone
Which drew the heart out of Pygmalion.
36.
A sexless thing it was, and in its growth
It seemed to have developed no defect _330
Of either sex, yet all the grace of both,--
In gentleness and strength its limbs were decked;
The bosom swelled lightly with its full youth,
The countenance was such as might select
Some artist that his skill should never die, _335
Imaging forth such perfect purity.
37.
From its smooth shoulders hung two rapid wings,
Fit to have borne it to the seventh sphere,
Tipped with the speed of liquid lightenings,
Dyed in the ardours of the atmosphere: _340
She led her creature to the boiling springs
Where the light boat was moored, and said: 'Sit here!'
And pointed to the prow, and took her seat
Beside the rudder, with opposing feet.
38.
And down the streams which clove those mountains vast, _345
Around their inland islets, and amid
The panther-peopled forests whose shade cast
Darkness and odours, and a pleasure hid
In melancholy gloom, the pinnace passed;
By many a star-surrounded pyramid _350
Of icy crag cleaving the purple sky,
And caverns yawning round unfathomably.
39.
The silver noon into that winding dell,
With slanted gleam athwart the forest tops,
Tempered like golden evening, feebly fell; _355
A green and glowing light, like that which drops
From folded lilies in which glow-worms dwell,
When Earth over her face Night's mantle wraps;
Between the severed mountains lay on high,
Over the stream, a narrow rift of sky. _360
40.
And ever as she went, the Image lay
With folded wings and unawakened eyes;
And o'er its gentle countenance did play
The busy dreams, as thick as summer flies,
Chasing the rapid smiles that would not stay, _365
And drinking the warm tears, and the sweet sighs
Inhaling, which, with busy murmur vain,
They had aroused from that full heart and brain.
41.
And ever down the prone vale, like a cloud
Upon a stream of wind, the pinnace went: _370
Now lingering on the pools, in which abode
The calm and darkness of the deep content
In which they paused; now o'er the shallow road
Of white and dancing waters, all besprent
With sand and polished pebbles:--mortal boat _375
In such a shallow rapid could not float.
42.
And down the earthquaking cataracts which shiver
Their snow-like waters into golden air,
Or under chasms unfathomable ever
Sepulchre them, till in their rage they tear _380
A subterranean portal for the river,
It fled--the circling sunbows did upbear
Its fall down the hoar precipice of spray,
Lighting it far upon its lampless way.
43.
And when the wizard lady would ascend _385
The labyrinths of some many-winding vale,
Which to the inmost mountain upward tend--
She called 'Hermaphroditus!'--and the pale
And heavy hue which slumber could extend
Over its lips and eyes, as on the gale _390
A rapid shadow from a slope of grass,
Into the darkness of the stream did pass.
44.
And it unfurled its heaven-coloured pinions,
With stars of fire spotting the stream below;
And from above into the Sun's dominions _395
Flinging a glory, like the golden glow
In which Spring clothes her emerald-winged minions,
All interwoven with fine feathery snow
And moonlight splendour of intensest rime,
With which frost paints the pines in winter time. _400
45.
And then it winnowed the Elysian air
Which ever hung about that lady bright,
With its aethereal vans--and speeding there,
Like a star up the torrent of the night,
Or a swift eagle in the morning glare _405
Breasting the whirlwind with impetuous flight,
The pinnace, oared by those enchanted wings,
Clove the fierce streams towards their upper springs.
46.
The water flashed, like sunlight by the prow
Of a noon-wandering meteor flung to Heaven; _410
The still air seemed as if its waves did flow
In tempest down the mountains; loosely driven
The lady's radiant hair streamed to and fro:
Beneath, the billows having vainly striven
Indignant and impetuous, roared to feel _415
The swift and steady motion of the keel.
47.
Or, when the weary moon was in the wane,
Or in the noon of interlunar night,
The lady-witch in visions could not chain
Her spirit; but sailed forth under the light _420
Of shooting stars, and bade extend amain
Its storm-outspeeding wings, the Hermaphrodite;
She to the Austral waters took her way,
Beyond the fabulous Thamondocana,--
48.
Where, like a meadow which no scythe has shaven, _425
Which rain could never bend, or whirl-blast shake,
With the Antarctic constellations paven,
Canopus and his crew, lay the Austral lake--
There she would build herself a windless haven
Out of the clouds whose moving turrets make _430
The bastions of the storm, when through the sky
The spirits of the tempest thundered by:
49.
A haven beneath whose translucent floor
The tremulous stars sparkled unfathomably,
And around which the solid vapours hoar, _435
Based on the level waters, to the sky
Lifted their dreadful crags, and like a shore
Of wintry mountains, inaccessibly
Hemmed in with rifts and precipices gray,
And hanging crags, many a cove and bay. _440
50.
And whilst the outer lake beneath the lash
Of the wind's scourge, foamed like a wounded thing,
And the incessant hail with stony clash
Ploughed up the waters, and the flagging wing
Of the roused cormorant in the lightning flash _445
Looked like the wreck of some wind-wandering
Fragment of inky thunder-smoke--this haven
Was as a gem to copy Heaven engraven,--
51.
On which that lady played her many pranks,
Circling the image of a shooting star, _450
Even as a tiger on Hydaspes' banks
Outspeeds the antelopes which speediest are,
In her light boat; and many quips and cranks
She played upon the water, till the car
Of the late moon, like a sick matron wan, _455
To journey from the misty east began.
52.
And then she called out of the hollow turrets
Of those high clouds, white, golden and vermilion,
The armies of her ministering spirits--
In mighty legions, million after million, _460
They came, each troop emblazoning its merits
On meteor flags; and many a proud pavilion
Of the intertexture of the atmosphere
They pitched upon the plain of the calm mere.
53.
They framed the imperial tent of their great Queen _465
Of woven exhalations, underlaid
With lambent lightning-fire, as may be seen
A dome of thin and open ivory inlaid
With crimson silk--cressets from the serene
Hung there, and on the water for her tread _470
A tapestry of fleece-like mist was strewn,
Dyed in the beams of the ascending moon.
54.
And on a throne o'erlaid with starlight, caught
Upon those wandering isles of aery dew,
Which highest shoals of mountain shipwreck not, _475
She sate, and heard all that had happened new
Between the earth and moon, since they had brought
The last intelligence--and now she grew
Pale as that moon, lost in the watery night--
And now she wept, and now she laughed outright. _480
55.
These were tame pleasures; she would often climb
The steepest ladder of the crudded rack
Up to some beaked cape of cloud sublime,
And like Arion on the dolphin's back
Ride singing through the shoreless air;--oft-time _485
Following the serpent lightning's winding track,
She ran upon the platforms of the wind,
And laughed to hear the fire-balls roar behind.
56.
And sometimes to those streams of upper air
Which whirl the earth in its diurnal round, _490
She would ascend, and win the spirits there
To let her join their chorus. Mortals found
That on those days the sky was calm and fair,
And mystic snatches of harmonious sound
Wandered upon the earth where'er she passed, _495
And happy thoughts of hope, too sweet to last.
57.
But her choice sport was, in the hours of sleep,
To glide adown old Nilus, where he threads
Egypt and Aethiopia, from the steep
Of utmost Axume, until he spreads, _500
Like a calm flock of silver-fleeced sheep,
His waters on the plain: and crested heads
Of cities and proud temples gleam amid,
And many a vapour-belted pyramid.
58.
By Moeris and the Mareotid lakes, _505
Strewn with faint blooms like bridal chamber floors,
Where naked boys bridling tame water-snakes,
Or charioteering ghastly alligators,
Had left on the sweet waters mighty wakes
Of those huge forms--within the brazen doors _510
Of the great Labyrinth slept both boy and beast,
Tired with the pomp of their Osirian feast.
59.
And where within the surface of the river
The shadows of the massy temples lie,
And never are erased--but tremble ever _515
Like things which every cloud can doom to die,
Through lotus-paven canals, and wheresoever
The works of man pierced that serenest sky
With tombs, and towers, and fanes, 'twas her delight
To wander in the shadow of the night. _520
60.
With motion like the spirit of that wind
Whose soft step deepens slumber, her light feet
Passed through the peopled haunts of humankind.
Scattering sweet visions from her presence sweet,
Through fane, and palace-court, and labyrinth mined _525
With many a dark and subterranean street
Under the Nile, through chambers high and deep
She passed, observing mortals in their sleep.
61.
A pleasure sweet doubtless it was to see
Mortals subdued in all the shapes of sleep. _530
Here lay two sister twins in infancy;
There, a lone youth who in his dreams did weep;
Within, two lovers linked innocently
In their loose locks which over both did creep
Like ivy from one stem;--and there lay calm _535
Old age with snow-bright hair and folded palm.
62.
But other troubled forms of sleep she saw,
Not to be mirrored in a holy song--
Distortions foul of supernatural awe,
And pale imaginings of visioned wrong; _540
And all the code of Custom's lawless law
Written upon the brows of old and young:
'This,' said the wizard maiden, 'is the strife
Which stirs the liquid surface of man's life.'
63.
And little did the sight disturb her soul.-- _545
We, the weak mariners of that wide lake
Where'er its shores extend or billows roll,
Our course unpiloted and starless make
O'er its wild surface to an unknown goal:--
But she in the calm depths her way could take, _550
Where in bright bowers immortal forms abide
Beneath the weltering of the restless tide.
64.
And she saw princes couched under the glow
Of sunlike gems; and round each temple-court
In dormitories ranged, row after row, _555
She saw the priests asleep--all of one sort--
For all were educated to be so.--
The peasants in their huts, and in the port
The sailors she saw cradled on the waves,
And the dead lulled within their dreamless graves. _560
65.
And all the forms in which those spirits lay
Were to her sight like the diaphanous
Veils, in which those sweet ladies oft array
Their delicate limbs, who would conceal from us
Only their scorn of all concealment: they _565
Move in the light of their own beauty thus.
But these and all now lay with sleep upon them,
And little thought a Witch was looking on them.
66.
She, all those human figures breathing there,
Beheld as living spirits--to her eyes _570
The naked beauty of the soul lay bare,
And often through a rude and worn disguise
She saw the inner form most bright and fair--
And then she had a charm of strange device,
Which, murmured on mute lips with tender tone, _575
Could make that spirit mingle with her own.
67.
Alas! Aurora, what wouldst thou have given
For such a charm when Tithon became gray?
Or how much, Venus, of thy silver heaven
Wouldst thou have yielded, ere Proserpina _580
Had half (oh! why not all?) the debt forgiven
Which dear Adonis had been doomed to pay,
To any witch who would have taught you it?
The Heliad doth not know its value yet.
68.
'Tis said in after times her spirit free _585
Knew what love was, and felt itself alone--
But holy Dian could not chaster be
Before she stooped to kiss Endymion,
Than now this lady--like a sexless bee
Tasting all blossoms, and confined to none, _590
Among those mortal forms, the wizard-maiden
Passed with an eye serene and heart unladen.
69.
To those she saw most beautiful, she gave
Strange panacea in a crystal bowl:--
They drank in their deep sleep of that sweet wave, _595
And lived thenceforward as if some control,
Mightier than life, were in them; and the grave
Of such, when death oppressed the weary soul,
Was as a green and overarching bower
Lit by the gems of many a starry flower. _600
70.
For on the night when they were buried, she
Restored the embalmers' ruining, and shook
The light out of the funeral lamps, to be
A mimic day within that deathy nook;
And she unwound the woven imagery _605
Of second childhood's swaddling bands, and took
The coffin, its last cradle, from its niche,
And threw it with contempt into a ditch.
71.
And there the body lay, age after age,
Mute, breathing, beating, warm, and undecaying, _610
Like one asleep in a green hermitage,
With gentle smiles about its eyelids playing,
And living in its dreams beyond the rage
Of death or life; while they were still arraying
In liveries ever new, the rapid, blind _615
And fleeting generations of mankind.
72.
And she would write strange dreams upon the brain
Of those who were less beautiful, and make
All harsh and crooked purposes more vain
Than in the desert is the serpent's wake _620
Which the sand covers--all his evil gain
The miser in such dreams would rise and shake
Into a beggar's lap;--the lying scribe
Would his own lies betray without a bribe.
73.
The priests would write an explanation full, _625
Translating hieroglyphics into Greek,
How the God Apis really was a bull,
And nothing more; and bid the herald stick
The same against the temple doors, and pull
The old cant down; they licensed all to speak _630
Whate'er they thought of hawks, and cats, and geese,
By pastoral letters to each diocese.
74.
The king would dress an ape up in his crown
And robes, and seat him on his glorious seat,
And on the right hand of the sunlike throne _635
Would place a gaudy mock-bird to repeat
The chatterings of the monkey.--Every one
Of the prone courtiers crawled to kiss the feet
Of their great Emperor, when the morning came,
And kissed--alas, how many kiss the same! _640
75.
The soldiers dreamed that they were blacksmiths, and
Walked out of quarters in somnambulism;
Round the red anvils you might see them stand
Like Cyclopses in Vulcan's sooty abysm,
Beating their swords to ploughshares;--in a band _645
The gaolers sent those of the liberal schism
Free through the streets of Memphis, much, I wis,
To the annoyance of king Amasis.
76.
And timid lovers who had been so coy,
They hardly knew whether they loved or not, _650
Would rise out of their rest, and take sweet joy,
To the fulfilment of their inmost thought;
And when next day the maiden and the boy
Met one another, both, like sinners caught,
Blushed at the thing which each believed was done _655
Only in fancy--till the tenth moon shone;
77.
And then the Witch would let them take no ill:
Of many thousand schemes which lovers find,
The Witch found one,--and so they took their fill
Of happiness in marriage warm and kind. _660
Friends who, by practice of some envious skill,
Were torn apart--a wide wound, mind from mind!--
She did unite again with visions clear
Of deep affection and of truth sincere.
78.
These were the pranks she played among the cities _665
Of mortal men, and what she did to Sprites
And Gods, entangling them in her sweet ditties
To do her will, and show their subtle sleights,
I will declare another time; for it is
A tale more fit for the weird winter nights _670
Than for these garish summer days, when we
Scarcely believe much more than we can see.
End of Project Gutenberg's The Witch of Atlas, by Percy Bysshe Shelley
Produced by John Bickers, and Dagny
LA GRANDE BRETECHE
(Sequel to "Another Study of Woman.")
By Honore De Balzac
Translated by Ellen Marriage and Clara Bell
LA GRANDE BRETECHE
"Ah! madame," replied the doctor, "I have some appalling stories in my
collection. But each one has its proper hour in a conversation--you know
the pretty jest recorded by Chamfort, and said to the Duc de Fronsac:
'Between your sally and the present moment lie ten bottles of
champagne.'"
"But it is two in the morning, and the story of Rosina has prepared us,"
said the mistress of the house.
"Tell us, Monsieur Bianchon!" was the cry on every side.
The obliging doctor bowed, and silence reigned.
"At about a hundred paces from Vendome, on the banks of the Loir," said
he, "stands an old brown house, crowned with very high roofs, and so
completely isolated that there is nothing near it, not even a fetid
tannery or a squalid tavern, such as are commonly seen outside small
towns. In front of this house is a garden down to the river, where the
box shrubs, formerly clipped close to edge the walks, now straggle
at their own will. A few willows, rooted in the stream, have grown
up quickly like an enclosing fence, and half hide the house. The
wild plants we call weeds have clothed the bank with their beautiful
luxuriance. The fruit-trees, neglected for these ten years past,
no longer bear a crop, and their suckers have formed a thicket. The
espaliers are like a copse. The paths, once graveled, are overgrown with
purslane; but, to be accurate, there is no trace of a path.
"Looking down from the hilltop, to which cling the ruins of the old
castle of the Dukes of Vendome, the only spot whence the eye can
see into this enclosure, we think that at a time, difficult now to
determine, this spot of earth must have been the joy of some country
gentleman devoted to roses and tulips, in a word, to horticulture, but
above all a lover of choice fruit. An arbor is visible, or rather
the wreck of an arbor, and under it a table still stands not entirely
destroyed by time. At the aspect of this garden that is no more, the
negative joys of the peaceful life of the provinces may be divined as we
divine the history of a worthy tradesman when we read the epitaph on his
tomb. To complete the mournful and tender impressions which seize the
soul, on one of the walls there is a sundial graced with this homely
Christian motto, '_Ultimam cogita_.'
"The roof of this house is dreadfully dilapidated; the outside shutters
are always closed; the balconies are hung with swallows' nests; the
doors are for ever shut. Straggling grasses have outlined the flagstones
of the steps with green; the ironwork is rusty. Moon and sun, winter,
summer, and snow have eaten into the wood, warped the boards, peeled
off the paint. The dreary silence is broken only by birds and cats,
polecats, rats, and mice, free to scamper round, and fight, and eat each
other. An invisible hand has written over it all: 'Mystery.'
"If, prompted by curiosity, you go to look at this house from the
street, you will see a large gate, with a round-arched top; the children
have made many holes in it. I learned later that this door had been
blocked for ten years. Through these irregular breaches you will see
that the side towards the courtyard is in perfect harmony with the side
towards the garden. The same ruin prevails. Tufts of weeds outline
the paving-stones; the walls are scored by enormous cracks, and the
blackened coping is laced with a thousand festoons of pellitory. The
stone steps are disjointed; the bell-cord is rotten; the gutter-spouts
broken. What fire from heaven could have fallen there? By what decree
has salt been sown on this dwelling? Has God been mocked here? Or was
France betrayed? These are the questions we ask ourselves. Reptiles
crawl over it, but give no reply. This empty and deserted house is a
vast enigma of which the answer is known to none.
"It was formerly a little domain, held in fief, and is known as La
Grande Breteche. During my stay at Vendome, where Despleins had left me
in charge of a rich patient, the sight of this strange dwelling became
one of my keenest pleasures. Was it not far better than a ruin? Certain
memories of indisputable authenticity attach themselves to a ruin; but
this house, still standing, though being slowly destroyed by an avenging
hand, contained a secret, an unrevealed thought. At the very least,
it testified to a caprice. More than once in the evening I boarded the
hedge, run wild, which surrounded the enclosure. I braved scratches, I
got into this ownerless garden, this plot which was no longer public or
private; I lingered there for hours gazing at the disorder. I would not,
as the price of the story to which this strange scene no doubt was due,
have asked a single question of any gossiping native. On that spot I
wove delightful romances, and abandoned myself to little debauches of
melancholy which enchanted me. If I had known the reason--perhaps quite
commonplace--of this neglect, I should have lost the unwritten poetry
which intoxicated me. To me this refuge represented the most various
phases of human life, shadowed by misfortune; sometimes the peace of the
graveyard without the dead, who speak in the language of epitaphs; one
day I saw in it the home of lepers; another, the house of the Atridae;
but, above all, I found there provincial life, with its contemplative
ideas, its hour-glass existence. I often wept there, I never laughed.
"More than once I felt involuntary terrors as I heard overhead the dull
hum of the wings of some hurrying wood-pigeon. The earth is dank; you
must be on the watch for lizards, vipers, and frogs, wandering about
with the wild freedom of nature; above all, you must have no fear
of cold, for in a few moments you feel an icy cloak settle on your
shoulders, like the Commendatore's hand on Don Giovanni's neck.
"One evening I felt a shudder; the wind had turned an old rusty
weathercock, and the creaking sounded like a cry from the house, at
the very moment when I was finishing a gloomy drama to account for
this monumental embodiment of woe. I returned to my inn, lost in gloomy
thoughts. When I had supped, the hostess came into my room with an air
of mystery, and said, 'Monsieur, here is Monsieur Regnault.'
"'Who is Monsieur Regnault?'
"'What, sir, do you not know Monsieur Regnault?--Well, that's odd,' said
she, leaving the room.
"On a sudden I saw a man appear, tall, slim, dressed in black, hat
in hand, who came in like a ram ready to butt his opponent, showing a
receding forehead, a small pointed head, and a colorless face of the hue
of a glass of dirty water. You would have taken him for an usher. The
stranger wore an old coat, much worn at the seams; but he had a diamond
in his shirt frill, and gold rings in his ears.
"'Monsieur,' said I, 'whom have I the honor of addressing?'--He took a
chair, placed himself in front of my fire, put his hat on my table,
and answered while he rubbed his hands: 'Dear me, it is very
cold.--Monsieur, I am Monsieur Regnault.'
"I was encouraging myself by saying to myself, '_Il bondo cani!_ Seek!'
"'I am,' he went on, 'notary at Vendome.'
"'I am delighted to hear it, monsieur,' I exclaimed. 'But I am not in a
position to make a will for reasons best known to myself.'
"'One moment!' said he, holding up his hand as though to gain silence.
'Allow me, monsieur, allow me! I am informed that you sometimes go to
walk in the garden of la Grande Breteche.'
"'Yes, monsieur.'
"'One moment!' said he, repeating his gesture. 'That constitutes a
misdemeanor. Monsieur, as executor under the will of the late Comtesse
de Merret, I come in her name to beg you to discontinue the practice.
One moment! I am not a Turk, and do not wish to make a crime of it. And
besides, you are free to be ignorant of the circumstances which
compel me to leave the finest mansion in Vendome to fall into ruin.
Nevertheless, monsieur, you must be a man of education, and you should
know that the laws forbid, under heavy penalties, any trespass on
enclosed property. A hedge is the same as a wall. But, the state in
which the place is left may be an excuse for your curiosity. For my
part, I should be quite content to make you free to come and go in the
house; but being bound to respect the will of the testatrix, I have
the honor, monsieur, to beg that you will go into the garden no more.
I myself, monsieur, since the will was read, have never set foot in the
house, which, as I had the honor of informing you, is part of the estate
of the late Madame de Merret. We have done nothing there but verify the
number of doors and windows to assess the taxes I have to pay annually
out of the funds left for that purpose by the late Madame de Merret. Ah!
my dear sir, her will made a great commotion in the town.'
"The good man paused to blow his nose. I respected his volubility,
perfectly understanding that the administration of Madame de Merret's
estate had been the most important event of his life, his reputation,
his glory, his Restoration. As I was forced to bid farewell to my
beautiful reveries and romances, I was not to reject learning the truth on
official authority.
"'Monsieur,' said I, 'would it be indiscreet if I were to ask you the
reasons for such eccentricity?'
"At these words an expression, which revealed all the pleasure which
men feel who are accustomed to ride a hobby, overspread the lawyer's
countenance. He pulled up the collar of his shirt with an air, took out
his snuffbox, opened it, and offered me a pinch; on my refusing, he took
a large one. He was happy! A man who has no hobby does not know all
the good to be got out of life. A hobby is the happy medium between a
passion and a monomania. At this moment I understood the whole bearing
of Sterne's charming passion, and had a perfect idea of the delight with
which my uncle Toby, encouraged by Trim, bestrode his hobby-horse.
"'Monsieur,' said Monsieur Regnault, 'I was head-clerk in Monsieur
Roguin's office, in Paris. A first-rate house, which you may have heard
mentioned? No! An unfortunate bankruptcy made it famous.--Not having
money enough to purchase a practice in Paris at the price to which they
were run up in 1816, I came here and bought my predecessor's business.
I had relations in Vendome; among others, a wealthy aunt, who allowed
me to marry her daughter.--Monsieur,' he went on after a little pause,
'three months after being licensed by the Keeper of the Seals, one
evening, as I was going to bed--it was before my marriage--I was sent
for by Madame la Comtesse de Merret, to her Chateau of Merret. Her maid,
a good girl, who is now a servant in this inn, was waiting at my door
with the Countess' own carriage. Ah! one moment! I ought to tell you
that Monsieur le Comte de Merret had gone to Paris to die two months
before I came here. He came to a miserable end, flinging himself into
every kind of dissipation. You understand?
"'On the day when he left, Madame la Comtesse had quitted la Grand
Breteche, having dismantled it. Some people even say that she had
burnt all the furniture, the hangings--in short, all the chattels and
furniture whatever used in furnishing the premises now let by the
said M.--(Dear, what am I saying? I beg your pardon, I thought I was
dictating a lease.)--In short, that she burnt everything in the meadow
at Merret. Have you been to Merret, monsieur?--No,' said he, answering
himself, 'Ah, it is a very fine place.'
"'For about three months previously,' he went on, with a jerk of his
head, 'the Count and Countess had lived in a very eccentric way; they
admitted no visitors; Madame lived on the ground-floor, and Monsieur on
the first floor. When the Countess was left alone, she was never seen
excepting at church. Subsequently, at home, at the chateau, she refused
to see the friends, whether gentlemen or ladies, who went to call on
her. She was already very much altered when she left la Grande Breteche
to go to Merret. That dear lady--I say dear lady, for it was she who
gave me this diamond, but indeed I saw her but once--that kind lady was
very ill; she had, no doubt, given up all hope, for she died without
choosing to send for a doctor; indeed, many of our ladies fancied she
was not quite right in her head. Well, sir, my curiosity was strangely
excited by hearing that Madame de Merret had need of my services. Nor
was I the only person who took an interest in the affair. That very
night, though it was already late, all the town knew that I was going to
Merret.
"'The waiting-woman replied but vaguely to the questions I asked her on
the way; nevertheless, she told me that her mistress had received the
Sacrament in the course of the day at the hands of the Cure of Merret,
and seemed unlikely to live through the night. It was about eleven when
I reached the chateau. I went up the great staircase. After crossing
some large, lofty, dark rooms, diabolically cold and damp, I reached the
state bedroom where the Countess lay. From the rumors that were current
concerning this lady (monsieur, I should never end if I were to repeat
all the tales that were told about her), I had imagined her a coquette.
Imagine, then, that I had great difficulty in seeing her in the great
bed where she was lying. To be sure, to light this enormous room, with
old-fashioned heavy cornices, and so thick with dust that merely to see
it was enough to make you sneeze, she had only an old Argand lamp. Ah!
but you have not been to Merret. Well, the bed is one of those old world
beds, with a high tester hung with flowered chintz. A small table stood
by the bed, on which I saw an "Imitation of Christ," which, by the
way, I bought for my wife, as well as the lamp. There were also a deep
armchair for her confidential maid, and two small chairs. There was no
fire. That was all the furniture, not enough to fill ten lines in an
inventory.
"'My dear sir, if you had seen, as I then saw, that vast room, papered
and hung with brown, you would have felt yourself transported into a
scene of a romance. It was icy, nay more, funereal,' and he lifted his
hand with a theatrical gesture and paused.
"'By dint of seeking, as I approached the bed, at last I saw Madame de
Merret, under the glimmer of the lamp, which fell on the pillows.
Her face was as yellow as wax, and as narrow as two folded hands. The
Countess had a lace cap showing her abundant hair, but as white as linen
thread. She was sitting up in bed, and seemed to keep upright with
great difficulty. Her large black eyes, dimmed by fever, no doubt,
and half-dead already, hardly moved under the bony arch of her
eyebrows.--There,' he added, pointing to his own brow. 'Her forehead was
clammy; her fleshless hands were like bones covered with soft skin;
the veins and muscles were perfectly visible. She must have been very
handsome; but at this moment I was startled into an indescribable
emotion at the sight. Never, said those who wrapped her in her shroud,
had any living creature been so emaciated and lived. In short, it was
awful to behold! Sickness so consumed that woman, that she was no more
than a phantom. Her lips, which were pale violet, seemed to me not to
move when she spoke to me.
"'Though my profession has familiarized me with such spectacles, by
calling me not infrequently to the bedside of the dying to record their
last wishes, I confess that families in tears and the agonies I have
seen were as nothing in comparison with this lonely and silent woman in
her vast chateau. I heard not the least sound, I did not perceive the
movement which the sufferer's breathing ought to have given to the
sheets that covered her, and I stood motionless, absorbed in looking at
her in a sort of stupor. In fancy I am there still. At last her large
eyes moved; she tried to raise her right hand, but it fell back on the
bed, and she uttered these words, which came like a breath, for her
voice was no longer a voice: "I have waited for you with the greatest
impatience." A bright flush rose to her cheeks. It was a great effort to
her to speak.
"'"Madame," I began. She signed to me to be silent. At that moment
the old housekeeper rose and said in my ear, "Do not speak; Madame la
Comtesse is not in a state to bear the slightest noise, and what you say
might agitate her."
"'I sat down. A few instants after, Madame de Merret collected all her
remaining strength to move her right hand, and slipped it, not without
infinite difficulty, under the bolster; she then paused a moment. With
a last effort she withdrew her hand; and when she brought out a sealed
paper, drops of perspiration rolled from her brow. "I place my will in
your hands--Oh! God! Oh!" and that was all. She clutched a crucifix that
lay on the bed, lifted it hastily to her lips, and died.
"'The expression of her eyes still makes me shudder as I think of it.
She must have suffered much! There was joy in her last glance, and it
remained stamped on her dead eyes.
"'I brought away the will, and when it was opened I found that Madame de
Merret had appointed me her executor. She left the whole of her property
to the hospital at Vendome excepting a few legacies. But these were her
instructions as relating to la Grande Breteche: She ordered me to leave
the place, for fifty years counting from the day of her death, in the
state in which it might be at the time of her death, forbidding any one,
whoever he might be, to enter the apartments, prohibiting any repairs
whatever, and even settling a salary to pay watchmen if it were needful
to secure the absolute fulfilment of her intentions. At the expiration
of that term, if the will of the testatrix has been duly carried out,
the house is to become the property of my heirs, for, as you know, a
notary cannot take a bequest. Otherwise la Grande Breteche reverts to
the heirs-at-law, but on condition of fulfilling certain conditions
set forth in a codicil to the will, which is not to be opened till
the expiration of the said term of fifty years. The will has not been
disputed, so----' And without finishing his sentence, the lanky notary
looked at me with an air of triumph; I made him quite happy by offering
him my congratulations.
"'Monsieur,' I said in conclusion, 'you have so vividly impressed
me that I fancy I see the dying woman whiter than her sheets; her
glittering eyes frighten me; I shall dream of her to-night.--But you
must have formed some idea as to the instructions contained in that
extraordinary will.'
"'Monsieur,' said he, with comical reticence, 'I never allow myself
to criticise the conduct of a person who honors me with the gift of a
diamond.'
"However, I soon loosened the tongue of the discreet notary of Vendome,
who communicated to me, not without long digressions, the opinions of
the deep politicians of both sexes whose judgments are law in Vendome.
But these opinions were so contradictory, so diffuse, that I was
near falling asleep in spite of the interest I felt in this authentic
history. The notary's ponderous voice and monotonous accent, accustomed
no doubt to listen to himself and to make himself listened to by his
clients or fellow-townsmen, were too much for my curiosity. Happily, he
soon went away.
"'Ah, ha, monsieur,' said he on the stairs, 'a good many persons would
be glad to live five-and-forty years longer; but--one moment!' and he
laid the first finger of his right hand to his nostril with a cunning
look, as much as to say, 'Mark my words!--To last as long as that--as
long as that,' said he, 'you must not be past sixty now.'
"I closed my door, having been roused from my apathy by this last
speech, which the notary thought very funny; then I sat down in my
armchair, with my feet on the fire-dogs. I had lost myself in a romance
_a la_ Radcliffe, constructed on the juridical base given me by Monsieur
Regnault, when the door, opened by a woman's cautious hand, turned on
the hinges. I saw my landlady come in, a buxom, florid dame, always
good-humored, who had missed her calling in life. She was a Fleming, who
ought to have seen the light in a picture by Teniers.
"'Well, monsieur,' said she, 'Monsieur Regnault has no doubt been giving
you his history of la Grande Breteche?'
"'Yes, Madame Lepas.'
"'And what did he tell you?'
"I repeated in a few words the creepy and sinister story of Madame de
Merret. At each sentence my hostess put her head forward, looking at
me with an innkeeper's keen scrutiny, a happy compromise between the
instinct of a police constable, the astuteness of a spy, and the cunning
of a dealer.
"'My good Madame Lepas,' said I as I ended, 'you seem to know more about
it. Heh? If not, why have you come up to me?'
"'On my word, as an honest woman----'
"'Do not swear; your eyes are big with a secret. You knew Monsieur de
Merret; what sort of man was he?'
"'Monsieur de Merret--well, you see he was a man you never could see
the top of, he was so tall! A very good gentleman, from Picardy, and who
had, as we say, his head close to his cap. He paid for everything down,
so as never to have difficulties with any one. He was hot-tempered, you
see! All our ladies liked him very much.'
"'Because he was hot-tempered?' I asked her.
"'Well, may be,' said she; 'and you may suppose, sir, that a man had to
have something to show for a figurehead before he could marry Madame de
Merret, who, without any reflection on others, was the handsomest and
richest heiress in our parts. She had about twenty thousand francs
a year. All the town was at the wedding; the bride was pretty and
sweet-looking, quite a gem of a woman. Oh, they were a handsome couple
in their day!'
"'And were they happy together?'
"'Hm, hm! so-so--so far as can be guessed, for, as you may suppose, we
of the common sort were not hail-fellow-well-met with them.--Madame de
Merret was a kind woman and very pleasant, who had no doubt sometimes to
put up with her husband's tantrums. But though he was rather haughty, we
were fond of him. After all, it was his place to behave so. When a man
is a born nobleman, you see----'
"'Still, there must have been some catastrophe for Monsieur and Madame
de Merret to part so violently?'
"'I did not say there was any catastrophe, sir. I know nothing about
it.'
"'Indeed. Well, now, I am sure you know everything.'
"'Well, sir, I will tell you the whole story.--When I saw Monsieur
Regnault go up to see you, it struck me that he would speak to you about
Madame de Merret as having to do with la Grande Breteche. That put it
into my head to ask your advice, sir, seeming to me that you are a
man of good judgment and incapable of playing a poor woman like me
false--for I never did any one a wrong, and yet I am tormented by my
conscience. Up to now I have never dared to say a word to the people of
these parts; they are all chatter-mags, with tongues like knives. And
never till now, sir, have I had any traveler here who stayed so long in
the inn as you have, and to whom I could tell the history of the fifteen
thousand francs----'
"'My dear Madame Lepas, if there is anything in your story of a nature
to compromise me,' I said, interrupting the flow of her words, 'I would
not hear it for all the world.'
"'You need have no fears,' said she; 'you will see.'
"Her eagerness made me suspect that I was not the only person to whom
my worthy landlady had communicated the secret of which I was to be the
sole possessor, but I listened.
"'Monsieur,' said she, 'when the Emperor sent the Spaniards here,
prisoners of war and others, I was required to lodge at the charge
of the Government a young Spaniard sent to Vendome on parole.
Notwithstanding his parole, he had to show himself every day to the
sub-prefect. He was a Spanish grandee--neither more nor less. He had
a name in _os_ and _dia_, something like Bagos de Feredia. I wrote his
name down in my books, and you may see it if you like. Ah! he was a
handsome young fellow for a Spaniard, who are all ugly they say. He was
not more than five feet two or three in height, but so well made; and he
had little hands that he kept so beautifully! Ah! you should have
seen them. He had as many brushes for his hands as a woman has for her
toilet. He had thick, black hair, a flame in his eye, a somewhat coppery
complexion, but which I admired all the same. He wore the finest linen
I have ever seen, though I have had princesses to lodge here, and, among
others, General Bertrand, the Duc and Duchesse d'Abrantes, Monsieur
Descazes, and the King of Spain. He did not eat much, but he had such
polite and amiable ways that it was impossible to owe him a grudge for
that. Oh! I was very fond of him, though he did not say four words to me
in a day, and it was impossible to have the least bit of talk with him;
if he was spoken to, he did not answer; it is a way, a mania they all
have, it would seem.
"'He read his breviary like a priest, and went to mass and all the
services quite regularly. And where did he post himself?--we found this
out later.--Within two yards of Madame de Merret's chapel. As he took
that place the very first time he entered the church, no one imagined
that there was any purpose in it. Besides, he never raised his nose
above his book, poor young man! And then, monsieur, of an evening he
went for a walk on the hill among the ruins of the old castle. It was
his only amusement, poor man; it reminded him of his native land. They
say that Spain is all hills!
"'One evening, a few days after he was sent here, he was out very late.
I was rather uneasy when he did not come in till just on the stroke of
midnight; but we all got used to his whims; he took the key of the door,
and we never sat up for him. He lived in a house belonging to us in the
Rue des Casernes. Well, then, one of our stable-boys told us one evening
that, going down to wash the horses in the river, he fancied he had seen
the Spanish Grandee swimming some little way off, just like a fish. When
he came in, I told him to be careful of the weeds, and he seemed put out
at having been seen in the water.
"'At last, monsieur, one day, or rather one morning, we did not find
him in his room; he had not come back. By hunting through his things, I
found a written paper in the drawer of his table, with fifty pieces of
Spanish gold of the kind they call doubloons, worth about five thousand
francs; and in a little sealed box ten thousand francs worth of
diamonds. The paper said that in case he should not return, he left us
this money and these diamonds in trust to found masses to thank God for
his escape and for his salvation.
"'At that time I still had my husband, who ran off in search of him.
And this is the queer part of the story: he brought back the Spaniard's
clothes, which he had found under a big stone on a sort of breakwater
along the river bank, nearly opposite la Grande Breteche. My husband
went so early that no one saw him. After reading the letter, he burnt
the clothes, and, in obedience to Count Feredia's wish, we announced
that he had escaped.
"'The sub-prefect set all the constabulary at his heels; but, pshaw! he
was never caught. Lepas believed that the Spaniard had drowned himself.
I, sir, have never thought so; I believe, on the contrary, that he had
something to do with the business about Madame de Merret, seeing that
Rosalie told me that the crucifix her mistress was so fond of that she
had it buried with her, was made of ebony and silver; now in the early
days of his stay here, Monsieur Feredia had one of ebony and silver
which I never saw later.--And now, monsieur, do not you say that I need
have no remorse about the Spaniard's fifteen thousand francs? Are they
not really and truly mine?'
"'Certainly.--But have you never tried to question Rosalie?' said I.
"'Oh, to be sure I have, sir. But what is to be done? That girl is like
a wall. She knows something, but it is impossible to make her talk.'
"After chatting with me for a few minutes, my hostess left me a prey
to vague and sinister thoughts, to romantic curiosity, and a religious
dread, not unlike the deep emotion which comes upon us when we go into a
dark church at night and discern a feeble light glimmering under a lofty
vault--a dim figure glides across--the sweep of a gown or of a priest's
cassock is audible--and we shiver! La Grande Breteche, with its rank
grasses, its shuttered windows, its rusty iron-work, its locked doors,
its deserted rooms, suddenly rose before me in fantastic vividness. I
tried to get into the mysterious dwelling to search out the heart of
this solemn story, this drama which had killed three persons.
"Rosalie became in my eyes the most interesting being in Vendome. As
I studied her, I detected signs of an inmost thought, in spite of the
blooming health that glowed in her dimpled face. There was in her soul
some element of ruth or of hope; her manner suggested a secret, like
the expression of devout souls who pray in excess, or of a girl who has
killed her child and for ever hears its last cry. Nevertheless, she was
simple and clumsy in her ways; her vacant smile had nothing criminal
in it, and you would have pronounced her innocent only from seeing the
large red and blue checked kerchief that covered her stalwart bust,
tucked into the tight-laced bodice of a lilac- and white-striped gown.
'No,' said I to myself, 'I will not quit Vendome without knowing the
whole history of la Grande Breteche. To achieve this end, I will make
love to Rosalie if it proves necessary.'
"'Rosalie!' said I one evening.
"'Your servant, sir?'
"'You are not married?' She started a little.
"'Oh! there is no lack of men if ever I take a fancy to be miserable!'
she replied, laughing. She got over her agitation at once; for every
woman, from the highest lady to the inn-servant inclusive, has a native
presence of mind.
"'Yes; you are fresh and good-looking enough never to lack lovers! But
tell me, Rosalie, why did you become an inn-servant on leaving Madame de
Merret? Did she not leave you some little annuity?'
"'Oh yes, sir. But my place here is the best in all the town of
Vendome.'
"This reply was such an one as judges and attorneys call evasive.
Rosalie, as it seemed to me, held in this romantic affair the place of
the middle square of the chess-board: she was at the very centre of the
interest and of the truth; she appeared to me to be tied into the knot
of it. It was not a case for ordinary love-making; this girl contained
the last chapter of a romance, and from that moment all my attentions
were devoted to Rosalie. By dint of studying the girl, I observed in
her, as in every woman whom we make our ruling thought, a variety of
good qualities; she was clean and neat; she was handsome, I need not
say; she soon was possessed of every charm that desire can lend to a
woman in whatever rank of life. A fortnight after the notary's visit,
one evening, or rather one morning, in the small hours, I said to
Rosalie:
"'Come, tell me all you know about Madame de Merret.'
"'Oh!' she said, 'I will tell you; but keep the secret carefully.'
"'All right, my child; I will keep all your secrets with a thief's
honor, which is the most loyal known.'
"'If it is all the same to you,' said she, 'I would rather it should be
with your own.'
"Thereupon she set her head-kerchief straight, and settled herself to
tell the tale; for there is no doubt a particular attitude of confidence
and security is necessary to the telling of a narrative. The best tales
are told at a certain hour--just as we are all here at table. No one
ever told a story well standing up, or fasting.
"If I were to reproduce exactly Rosalie's diffuse eloquence, a whole
volume would scarcely contain it. Now, as the event of which she gave me
a confused account stands exactly midway between the notary's gossip and
that of Madame Lepas, as precisely as the middle term of a rule-of-three
sum stands between the first and third, I have only to relate it in as
few words as may be. I shall therefore be brief.
"The room at la Grande Breteche in which Madame de Merret slept was on
the ground floor; a little cupboard in the wall, about four feet deep,
served her to hang her dresses in. Three months before the evening of
which I have to relate the events, Madame de Merret had been seriously
ailing, so much so that her husband had left her to herself, and had his
own bedroom on the first floor. By one of those accidents which it is
impossible to foresee, he came in that evening two hours later than
usual from the club, where he went to read the papers and talk politics
with the residents in the neighborhood. His wife supposed him to have
come in, to be in bed and asleep. But the invasion of France had been
the subject of a very animated discussion; the game of billiards had
waxed vehement; he had lost forty francs, an enormous sum at Vendome,
where everybody is thrifty, and where social habits are restrained
within the bounds of a simplicity worthy of all praise, and the
foundation perhaps of a form of true happiness which no Parisian would
care for.
"For some time past Monsieur de Merret had been satisfied to ask Rosalie
whether his wife was in bed; on the girl's replying always in the
affirmative, he at once went to his own room, with the good faith that
comes of habit and confidence. But this evening, on coming in, he took
it into his head to go to see Madame de Merret, to tell her of his
ill-luck, and perhaps to find consolation. During dinner he had observed
that his wife was very becomingly dressed; he reflected as he came
home from the club that his wife was certainly much better, that
convalescence had improved her beauty, discovering it, as husbands
discover everything, a little too late. Instead of calling Rosalie,
who was in the kitchen at the moment watching the cook and the coachman
playing a puzzling hand at cards, Monsieur de Merret made his way to his
wife's room by the light of his lantern, which he set down at the lowest
step of the stairs. His step, easy to recognize, rang under the vaulted
passage.
"At the instant when the gentleman turned the key to enter his wife's
room, he fancied he heard the door shut of the closet of which I have
spoken; but when he went in, Madame de Merret was alone, standing in
front of the fireplace. The unsuspecting husband fancied that Rosalie
was in the cupboard; nevertheless, a doubt, ringing in his ears like a
peal of bells, put him on his guard; he looked at his wife, and read in
her eyes an indescribably anxious and haunted expression.
"'You are very late,' said she.--Her voice, usually so clear and sweet,
struck him as being slightly husky.
"Monsieur de Merret made no reply, for at this moment Rosalie came in.
This was like a thunder-clap. He walked up and down the room, going from
one window to another at a regular pace, his arms folded.
"'Have you had bad news, or are you ill?' his wife asked him timidly,
while Rosalie helped her to undress. He made no reply.
"'You can go, Rosalie,' said Madame de Merret to her maid; 'I can put in
my curl-papers myself.'--She scented disaster at the mere aspect of her
husband's face, and wished to be alone with him. As soon as Rosalie
was gone, or supposed to be gone, for she lingered a few minutes in the
passage, Monsieur de Merret came and stood facing his wife, and said
coldly, 'Madame, there is some one in your cupboard!' She looked at her
husband calmly, and replied quite simply, 'No, monsieur.'
"This 'No' wrung Monsieur de Merret's heart; he did not believe it; and
yet his wife had never appeared purer or more saintly than she seemed
to be at this moment. He rose to go and open the closet door. Madame de
Merret took his hand, stopped him, looked at him sadly, and said in a
voice of strange emotion, 'Remember, if you should find no one there,
everything must be at an end between you and me.'
"The extraordinary dignity of his wife's attitude filled him with deep
esteem for her, and inspired him with one of those resolves which need
only a grander stage to become immortal.
"'No, Josephine,' he said, 'I will not open it. In either event we
should be parted for ever. Listen; I know all the purity of your soul, I
know you lead a saintly life, and would not commit a deadly sin to save
your life.'--At these words Madame de Merret looked at her husband with
a haggard stare.--'See, here is your crucifix,' he went on. 'Swear to
me before God that there is no one in there; I will believe you--I will
never open that door.'
"Madame de Merret took up the crucifix and said, 'I swear it.'
"'Louder,' said her husband; 'and repeat: "I swear before God that there
is nobody in that closet."' She repeated the words without flinching.
"'That will do,' said Monsieur de Merret coldly. After a moment's
silence: 'You have there a fine piece of work which I never saw before,'
said he, examining the crucifix of ebony and silver, very artistically
wrought.
"'I found it at Duvivier's; last year when that troop of Spanish
prisoners came through Vendome, he bought it of a Spanish monk.'
"'Indeed,' said Monsieur de Merret, hanging the crucifix on its nail;
and he rang the bell.
"He had to wait for Rosalie. Monsieur de Merret went forward quickly
to meet her, led her into the bay of the window that looked on to the
garden, and said to her in an undertone:
"'I know that Gorenflot wants to marry you, that poverty alone prevents
your setting up house, and that you told him you would not be his wife
till he found means to become a master mason.--Well, go and fetch him;
tell him to come here with his trowel and tools. Contrive to wake no one
in his house but himself. His reward will be beyond your wishes. Above
all, go out without saying a word--or else!' and he frowned.
"Rosalie was going, and he called her back. 'Here, take my latch-key,'
said he.
"'Jean!' Monsieur de Merret called in a voice of thunder down the
passage. Jean, who was both coachman and confidential servant, left his
cards and came.
"'Go to bed, all of you,' said his master, beckoning him to come close;
and the gentleman added in a whisper, 'When they are all asleep--mind,
_asleep_--you understand?--come down and tell me.'
"Monsieur de Merret, who had never lost sight of his wife while giving
his orders, quietly came back to her at the fireside, and began to tell
her the details of the game of billiards and the discussion at the club.
When Rosalie returned she found Monsieur and Madame de Merret conversing
amiably.
"Not long before this Monsieur de Merret had had new ceilings made to
all the reception-rooms on the ground floor. Plaster is very scarce at
Vendome; the price is enhanced by the cost of carriage; the gentleman
had therefore had a considerable quantity delivered to him, knowing
that he could always find purchasers for what might be left. It was this
circumstance which suggested the plan he carried out.
"'Gorenflot is here, sir,' said Rosalie in a whisper.
"'Tell him to come in,' said her master aloud.
"Madame de Merret turned paler when she saw the mason.
"'Gorenflot,' said her husband, 'go and fetch some bricks from the
coach-house; bring enough to wall up the door of this cupboard; you can
use the plaster that is left for cement.' Then, dragging Rosalie and the
workman close to him--'Listen, Gorenflot,' said he, in a low voice,
'you are to sleep here to-night; but to-morrow morning you shall have a
passport to take you abroad to a place I will tell you of. I will give
you six thousand francs for your journey. You must live in that town for
ten years; if you find you do not like it, you may settle in another,
but it must be in the same country. Go through Paris and wait there till
I join you. I will there give you an agreement for six thousand francs
more, to be paid to you on your return, provided you have carried out
the conditions of the bargain. For that price you are to keep perfect
silence as to what you have to do this night. To you, Rosalie, I will
secure ten thousand francs, which will not be paid to you till your
wedding day, and on condition of your marrying Gorenflot; but, to get
married, you must hold your tongue. If not, no wedding gift!'
"'Rosalie,' said Madame de Merret, 'come and brush my hair.'
"Her husband quietly walked up and down the room, keeping an eye on the
door, on the mason, and on his wife, but without any insulting display
of suspicion. Gorenflot could not help making some noise. Madame de
Merret seized a moment when he was unloading some bricks, and when her
husband was at the other end of the room to say to Rosalie: 'My dear
child, I will give you a thousand francs a year if only you will tell
Gorenflot to leave a crack at the bottom.' Then she added aloud quite
coolly: 'You had better help him.'
"Monsieur and Madame de Merret were silent all the time while Gorenflot
was walling up the door. This silence was intentional on the husband's
part; he did not wish to give his wife the opportunity of saying
anything with a double meaning. On Madame de Merret's side it was pride
or prudence. When the wall was half built up the cunning mason took
advantage of his master's back being turned to break one of the two
panes in the top of the door with a blow of his pick. By this Madame de
Merret understood that Rosalie had spoken to Gorenflot. They all three
then saw the face of a dark, gloomy-looking man, with black hair and
flaming eyes.
"Before her husband turned round again the poor woman had nodded to the
stranger, to whom the signal was meant to convey, 'Hope.'
"At four o'clock, as the day was dawning, for it was the month of
September, the work was done. The mason was placed in charge of Jean,
and Monsieur de Merret slept in his wife's room.
"Next morning when he got up he said with apparent carelessness, 'Oh,
by the way, I must go to the Maire for the passport.' He put on his hat,
took two or three steps towards the door, paused, and took the crucifix.
His wife was trembling with joy.
"'He will go to Duvivier's,' thought she.
"As soon as he had left, Madame de Merret rang for Rosalie, and then in
a terrible voice she cried: 'The pick! Bring the pick! and set to work.
I saw how Gorenflot did it yesterday; we shall have time to make a gap
and build it up again.'
"In an instant Rosalie had brought her mistress a sort of cleaver; she,
with a vehemence of which no words can give an idea, set to work to
demolish the wall. She had already got out a few bricks, when, turning
to deal a stronger blow than before, she saw behind her Monsieur de
Merret. She fainted away.
"'Lay madame on her bed,' said he coldly.
"Foreseeing what would certainly happen in his absence, he had laid
this trap for his wife; he had merely written to the Maire and sent for
Duvivier. The jeweler arrived just as the disorder in the room had been
repaired.
"'Duvivier,' asked Monsieur de Merret, 'did not you buy some crucifixes
of the Spaniards who passed through the town?'
"'No, monsieur.'
"'Very good; thank you,' said he, flashing a tiger's glare at his wife.
'Jean,' he added, turning to his confidential valet, 'you can serve my
meals here in Madame de Merret's room. She is ill, and I shall not leave
her till she recovers.'
"The cruel man remained in his wife's room for twenty days. During
the earlier time, when there was some little noise in the closet,
and Josephine wanted to intercede for the dying man, he said, without
allowing her to utter a word, 'You swore on the Cross that there was no
one there.'"
After this story all the ladies rose from table, and thus the spell
under which Bianchon had held them was broken. But there were some among
them who had almost shivered at the last words.
ADDENDUM
The following personage appears in other stories of the Human Comedy.
Bianchon, Horace
Father Goriot
The Atheist's Mass
Cesar Birotteau
The Commission in Lunacy
Lost Illusions
A Distinguished Provincial at Paris
A Bachelor's Establishment
The Secrets of a Princess
The Government Clerks
Pierrette
A Study of Woman
Scenes from a Courtesan's Life
Honorine
The Seamy Side of History
The Magic Skin
A Second Home
A Prince of Bohemia
Letters of Two Brides
The Muse of the Department
The Imaginary Mistress
The Middle Classes
Cousin Betty
The Country Parson
In addition, M. Bianchon narrated the following:
Another Study of Woman
End of the Project Gutenberg EBook of La Grande Breteche, by Honore de Balzac
Produced by John Bickers and Dagny
PIERRE GRASSOU
By Honore De Balzac
Translated by Katharine Prescott Wormeley
Dedication
To The Lieutenant-Colonel of Artillery, Periollas, As a Testimony of the
Affectionate Esteem of the Author,
De Balzac
PIERRE GRASSOU
Whenever you have gone to take a serious look at the exhibition of works
of sculpture and painting, such as it has been since the revolution
of 1830, have you not been seized by a sense of uneasiness, weariness,
sadness, at the sight of those long and over-crowded galleries? Since
1830, the true Salon no longer exists. The Louvre has again been taken
by assault,--this time by a populace of artists who have maintained
themselves in it.
In other days, when the Salon presented only the choicest works of art,
it conferred the highest honor on the creations there exhibited. Among
the two hundred selected paintings, the public could still choose: a
crown was awarded to the masterpiece by hands unseen. Eager, impassioned
discussions arose about some picture. The abuse showered on Delacroix,
on Ingres, contributed no less to their fame than the praises and
fanaticism of their adherents. To-day, neither the crowd nor the
criticism grows impassioned about the products of that bazaar. Forced to
make the selection for itself, which in former days the examining
jury made for it, the attention of the public is soon wearied and the
exhibition closes. Before the year 1817 the pictures admitted never went
beyond the first two columns of the long gallery of the old masters; but
in that year, to the great astonishment of the public, they filled the
whole space. Historical, high-art, genre paintings, easel pictures,
landscapes, flowers, animals, and water-colors,--these eight specialties
could surely not offer more than twenty pictures in one year worthy of
the eyes of the public, which, indeed, cannot give its attention to a
greater number of such works. The more the number of artists increases,
the more careful and exacting the jury of admission ought to be.
The true character of the Salon was lost as soon as it spread along
the galleries. The Salon should have remained within fixed limits of
inflexible proportions, where each distinct specialty could show its
masterpieces only. An experience of ten years has shown the excellence
of the former institution. Now, instead of a tournament, we have a mob;
instead of a noble exhibition, we have a tumultuous bazaar; instead of
a choice selection we have a chaotic mass. What is the result? A great
artist is swamped. Decamps' "Turkish Cafe," "Children at a Fountain,"
"Joseph," and "The Torture," would have redounded far more to his credit
if the four pictures had been exhibited in the great Salon with the
hundred good pictures of that year, than his twenty pictures could,
among three thousand others, jumbled together in six galleries.
By some strange contradiction, ever since the doors have been open to every
one there has been much talk of unknown and unrecognized genius. When,
twelve years earlier, Ingres' "Courtesan," and that of Sigalon, the
"Medusa" of Gericault, the "Massacre of Scio" by Delacroix, the "Baptism
of Henri IV." by Eugene Deveria, admitted by celebrated artists accused
of jealousy, showed the world, in spite of the denials of criticism,
that young and vigorous palettes existed, no such complaint was made.
Now, when the veriest dauber of canvas can send in his work, the whole
talk is of genius neglected! Where judgment no longer exists, there is
no longer anything judged. But whatever artists may be doing now, they
will come back in time to the examination and selection which presents
their works to the admiration of the crowd for whom they work. Without
selection by the Academy there will be no Salon, and without the Salon
art may perish.
Ever since the catalogue has grown into a book, many names have appeared
in it which still remain in their native obscurity, in spite of the ten
or a dozen pictures attached to them. Among these names perhaps the most
unknown to fame is that of an artist named Pierre Grassou, coming from
Fougeres, and called simply "Fougeres" among his brother-artists, who,
at the present moment holds a place, as the saying is, "in the sun," and
who suggested the rather bitter reflections by which this sketch of
his life is introduced,--reflections that are applicable to many other
individuals of the tribe of artists.
In 1832, Fougeres lived in the rue de Navarin, on the fourth floor of
one of those tall, narrow houses which resemble the obelisk of Luxor,
and possess an alley, a dark little stairway with dangerous turnings,
three windows only on each floor, and, within the building, a courtyard,
or, to speak more correctly, a square pit or well. Above the three or
four rooms occupied by Grassou of Fougeres was his studio, looking over
to Montmartre. This studio was painted in brick-color, for a background;
the floor was tinted brown and well frotted; each chair was furnished
with a bit of carpet bound round the edges; the sofa, simple enough, was
clean as that in the bedroom of some worthy bourgeoise. All these things
denoted the tidy ways of a small mind and the thrift of a poor man. A
bureau was there, in which to put away the studio implements, a table
for breakfast, a sideboard, a secretary; in short, all the articles
necessary to a painter, neatly arranged and very clean. The stove
participated in this Dutch cleanliness, which was all the more visible
because the pure and little changing light from the north flooded with
its cold clear beams the vast apartment. Fougeres, being merely a genre
painter, does not need the immense machinery and outfit which ruin
historical painters; he has never recognized within himself sufficient
faculty to attempt high-art, and he therefore clings to easel painting.
At the beginning of the month of December of that year, a season at
which the bourgeois of Paris conceive, periodically, the burlesque idea
of perpetuating their forms and figures already too bulky in themselves,
Pierre Grassou, who had risen early, prepared his palette, and lighted
his stove, was eating a roll steeped in milk, and waiting till the frost
on his windows had melted sufficiently to let the full light in. The
weather was fine and dry. At this moment the artist, who ate his bread
with that patient, resigned air that tells so much, heard and recognized
the step of a man who had upon his life the influence such men have
on the lives of nearly all artists,--the step of Elie Magus, a
picture-dealer, a usurer in canvas. The next moment Elie Magus entered
and found the painter in the act of beginning his work in the tidy
studio.
"How are you, old rascal?" said the painter.
Fougeres had the cross of the Legion of honor, and Elie Magus bought his
pictures at two and three hundred francs apiece, so he gave himself the
airs of a fine artist.
"Business is very bad," replied Elie. "You artists have such
pretensions! You talk of two hundred francs when you haven't put six
sous' worth of color on a canvas. However, you are a good fellow, I'll
say that. You are steady; and I've come to put a good bit of business in
your way."
"Timeo Danaos et dona ferentes," said Fougeres. "Do you know Latin?"
"No."
"Well, it means that the Greeks never proposed a good bit of business
to the Trojans without getting their fair share of it. In the olden time
they used to say, 'Take my horse.' Now we say, 'Take my bear.' Well,
what do you want, Ulysses-Lagingeole-Elie Magus?"
These words will give an idea of the mildness and wit with which
Fougeres employed what painters call studio fun.
"Well, I don't deny that you are to paint me two pictures for nothing."
"Oh! oh!"
"I'll leave you to do it, or not; I don't ask it. But you're an honest
man."
"Come, out with it!"
"Well, I'm prepared to bring you a father, mother, and only daughter."
"All for me?"
"Yes--they want their portraits taken. These bourgeois--they are crazy
about art--have never dared to enter a studio. The girl has a 'dot' of a
hundred thousand francs. You can paint all three,--perhaps they'll turn
out family portraits."
And with that the old Dutch log of wood who passed for a man and who was
called Elie Magus, interrupted himself to laugh an uncanny laugh which
frightened the painter. He fancied he heard Mephistopheles talking
marriage.
"Portraits bring five hundred francs apiece," went on Elie; "so you can
very well afford to paint me three pictures."
"True for you!" cried Fougeres, gleefully.
"And if you marry the girl, you won't forget me."
"Marry! I?" cried Pierre Grassou,--"I, who have a habit of sleeping
alone; and get up at cock-crow, and all my life arranged--"
"One hundred thousand francs," said Magus, "and a quiet girl, full of
golden tones, as you call 'em, like a Titian."
"What class of people are they?"
"Retired merchants; just now in love with art; have a country-house at
Ville d'Avray, and ten or twelve thousand francs a year."
"What business did they do?"
"Bottles."
"Now don't say that word; it makes me think of corks and sets my teeth
on edge."
"Am I to bring them?"
"Three portraits--I could put them in the Salon; I might go in for
portrait-painting. Well, yes!"
Old Elie descended the staircase to go in search of the Vervelle family.
To know to what extent this proposition would act upon the painter, and
what effect would be produced upon him by the Sieur and Dame Vervelle,
adorned by their only daughter, it is necessary to cast an eye on the
anterior life of Pierre Grassou of Fougeres.
When a pupil, Fougeres had studied drawing with Servin, who was
thought a great draughtsman in academic circles. After that he went to
Schinner's, to learn the secrets of the powerful and magnificent color
which distinguishes that master. Master and scholars were all discreet;
at any rate Pierre discovered none of their secrets. From there he went
to Sommervieux' atelier, to acquire that portion of the art of painting
which is called composition, but composition was shy and distant to him.
Then he tried to snatch from Decamps and Granet the mystery of their
interior effects. The two masters were not robbed. Finally Fougeres
ended his education with Duval-Lecamus. During these studies and
these different transformations Fougeres' habits and ways of life were
tranquil and moral to a degree that furnished matter of jesting to the
various ateliers where he sojourned; but everywhere he disarmed his
comrades by his modesty and by the patience and gentleness of a lamblike
nature. The masters, however, had no sympathy for the good lad; masters
prefer bright fellows, eccentric spirits, droll or fiery, or else gloomy
and deeply reflective, which argue future talent. Everything about
Pierre Grassou smacked of mediocrity. His nickname "Fougeres" (that
of the painter in the play of "The Eglantine") was the source of much
teasing; but, by force of circumstances, he accepted the name of the
town in which he had first seen light.
Grassou of Fougeres resembled his name. Plump and of medium height, he
had a dull complexion, brown eyes, black hair, a turned-up nose, rather
wide mouth, and long ears. His gentle, passive, and resigned air gave a
certain relief to these leading features of a physiognomy that was full
of health, but wanting in action. This young man, born to be a virtuous
bourgeois, having left his native place and come to Paris to be clerk
with a color-merchant (formerly of Mayenne and a distant connection of
the Orgemonts) made himself a painter simply by the fact of an obstinacy
which constitutes the Breton character. What he suffered, the manner in
which he lived during those years of study, God only knows. He suffered
as much as great men suffer when they are hounded by poverty and hunted
like wild beasts by the pack of commonplace minds and by troops of
vanities athirst for vengeance.
As soon as he thought himself able to fly on his own wings, Fougeres
took a studio in the upper part of the rue des Martyrs, where he began
to delve his way. He made his first appearance in 1819. The first
picture he presented to the jury of the Exhibition at the Louvre
represented a village wedding rather laboriously copied from Greuze's
picture. It was rejected. When Fougeres heard of the fatal decision,
he did not fall into one of those fits of epileptic self-love to which
strong natures give themselves up, and which sometimes end in challenges
sent to the director or the secretary of the Museum, or even by threats
of assassination. Fougeres quietly fetched his canvas, wrapped it in
a handkerchief, and brought it home, vowing in his heart that he would
still make himself a great painter. He placed his picture on the easel,
and went to one of his former masters, a man of immense talent,--to
Schinner, a kind and patient artist, whose triumph at that year's Salon
was complete. Fougeres asked him to come and criticise the rejected
work. The great painter left everything and went at once. When poor
Fougeres had placed the work before him Schinner, after a glance,
pressed Fougeres' hand.
"You are a fine fellow," he said; "you've a heart of gold, and I must
not deceive you. Listen; you are fulfilling all the promises you made in
the studios. When you find such things as that at the tip of your brush,
my good Fougeres, you had better leave colors with Brullon, and not take
the canvas of others. Go home early, put on your cotton night-cap, and
be in bed by nine o'clock. The next morning early go to some government
office, ask for a place, and give up art."
"My dear friend," said Fougeres, "my picture is already condemned; it is
not a verdict that I want of you, but the cause of that verdict."
"Well--you paint gray and sombre; you see nature being a crape veil;
your drawing is heavy, pasty; your composition is a medley of Greuze,
who only redeemed his defects by the qualities which you lack."
While detailing these faults of the picture Schinner saw on Fougeres'
face so deep an expression of sadness that he carried him off to dinner
and tried to console him. The next morning at seven o'clock Fougeres was
at his easel working over the rejected picture; he warmed the colors; he
made the corrections suggested by Schinner, he touched up his figures.
Then, disgusted with such patching, he carried the picture to Elie
Magus. Elie Magus, a sort of Dutch-Flemish-Belgian, had three reasons
for being what he became,--rich and avaricious. Coming last from
Bordeaux, he was just starting in Paris, selling old pictures and living
on the boulevard Bonne-Nouvelle. Fougeres, who relied on his palette
to go to the baker's, bravely ate bread and nuts, or bread and milk, or
bread and cherries, or bread and cheese, according to the seasons. Elie
Magus, to whom Pierre offered his first picture, eyed it for some time
and then gave him fifteen francs.
"With fifteen francs a year coming in, and a thousand francs for
expenses," said Fougeres, smiling, "a man will go fast and far."
Elie Magus made a gesture; he bit his thumbs, thinking that he might
have had that picture for five francs.
For several days Pierre walked down from the rue des Martyrs and
stationed himself at the corner of the boulevard opposite to Elie's
shop, whence his eye could rest upon his picture, which did not obtain
any notice from the eyes of the passers along the street. At the end of
a week the picture disappeared; Fougeres walked slowly up and approached
the dealer's shop in a lounging manner. The Jew was at his door.
"Well, I see you have sold my picture."
"No, here it is," said Magus; "I've framed it, to show it to some one
who fancies he knows about painting."
Fougeres had not the heart to return to the boulevard. He set about
another picture, and spent two months upon it,--eating mouse's meals and
working like a galley-slave.
One evening he went to the boulevard, his feet leading him fatefully to
the dealer's shop. His picture was not to be seen.
"I've sold your picture," said Elie Magus, seeing him.
"For how much?"
"I got back what I gave and a small interest. Make me some Flemish
interiors, a lesson of anatomy, landscapes, and such like, and I'll buy
them of you," said Elie.
Fougeres would fain have taken old Magus in his arms; he regarded him as
a father. He went home with joy in his heart; the great painter Schinner
was mistaken after all! In that immense city of Paris there were some
hearts that beat in unison with Pierre's; his talent was understood and
appreciated. The poor fellow of twenty-seven had the innocence of a lad
of sixteen. Another man, one of those distrustful, surly artists, would
have noticed the diabolical look on Elie's face and seen the twitching
of the hairs of his beard, the irony of his moustache, and the movement
of his shoulders which betrayed the satisfaction of Walter Scott's Jew
in swindling a Christian.
Fougeres marched along the boulevard in a state of joy which gave to his
honest face an expression of pride. He was like a schoolboy protecting
a woman. He met Joseph Bridau, one of his comrades, and one of those
eccentric geniuses destined to fame and sorrow. Joseph Bridau, who had,
to use his own expression, a few sous in his pocket, took Fougeres to
the Opera. But Fougeres didn't see the ballet, didn't hear the music; he
was imagining pictures, he was painting. He left Joseph in the middle
of the evening, and ran home to make sketches by lamp-light. He invented
thirty pictures, all reminiscence, and felt himself a man of genius. The
next day he bought colors, and canvases of various dimensions; he piled
up bread and cheese on his table, he filled a water-pot with water,
he laid in a provision of wood for his stove; then, to use a studio
expression, he dug at his pictures. He hired several models and Magus
lent him stuffs.
After two months' seclusion the Breton had finished four pictures. Again
he asked counsel of Schinner, this time adding Bridau to the invitation.
The two painters saw in three of these pictures a servile imitation
of Dutch landscapes and interiors by Metzu, in the fourth a copy of
Rembrandt's "Lesson of Anatomy."
"Still imitating!" said Schinner. "Ah! Fougeres can't manage to be
original."
"You ought to do something else than painting," said Bridau.
"What?" asked Fougeres.
"Fling yourself into literature."
Fougeres lowered his head like a sheep when it rains. Then he asked and
obtained certain useful advice, and retouched his pictures before taking
them to Elie Magus. Elie paid him twenty-five francs apiece. At that
price of course Fougeres earned nothing; neither did he lose, thanks to
his sober living. He made a few excursions to the boulevard to see what
became of his pictures, and there he underwent a singular hallucination.
His neat, clean paintings, hard as tin and shiny as porcelain, were
covered with a sort of mist; they looked like old daubs. Magus was out,
and Pierre could obtain no information on this phenomenon. He fancied
something was wrong with his eyes.
The painter went back to his studio and made more pictures. After seven
years of continued toil Fougeres managed to compose and execute quite
passable work. He did as well as any artist of the second class.
Elie bought and sold all the paintings of the poor Breton, who earned
laboriously about two thousand francs a year while he spent but twelve
hundred.
At the Exhibition of 1829, Leon de Lora, Schinner, and Bridau, who all
three occupied a great position and were, in fact, at the head of the
art movement, were filled with pity for the perseverance and the poverty
of their old friend; and they caused to be admitted into the grand salon
of the Exhibition, a picture by Fougeres. This picture, powerful in
interest but derived from Vigneron as to sentiment and from Dubufe's
first manner as to execution, represented a young man in prison, whose
hair was being cut around the nape of the neck. On one side was
a priest, on the other two women, one old, one young, in tears. A
sheriff's clerk was reading aloud a document. On a wretched table was a
meal, untouched. The light came in through the bars of a window near
the ceiling. It was a picture fit to make the bourgeois shudder, and
the bourgeois shuddered. Fougeres had simply been inspired by the
masterpiece of Gerard Douw; he had turned the group of the "Dropsical
Woman" toward the window, instead of presenting it full front. The
condemned man was substituted for the dying woman--same pallor, same
glance, same appeal to God. Instead of the Dutch doctor, he had painted
the cold, official figure of the sheriff's clerk attired in black; but
he had added an old woman to the young one of Gerard Douw. The cruelly
simple and good-humored face of the executioner completed and dominated
the group. This plagiarism, very cleverly disguised, was not discovered.
The catalogue contained the following:--
510. Grassou de Fougeres (Pierre), rue de Navarin, 2.
Death-toilet of a Chouan, condemned to execution in 1809.
Though wholly second-rate, the picture had immense success, for it
recalled the affair of the "chauffeurs," of Mortagne. A crowd collected
every day before the now fashionable canvas; even Charles X. paused to
look at it. "Madame," being told of the patient life of the poor Breton,
became enthusiastic over him. The Duc d'Orleans asked the price of
the picture. The clergy told Madame la Dauphine that the subject was
suggestive of good thoughts; and there was, in truth, a most satisfying
religious tone about it. Monseigneur the Dauphin admired the dust on
the stone-floor,--a huge blunder, by the way, for Fougeres had painted
greenish tones suggestive of mildew along the base of the walls.
"Madame" finally bought the picture for a thousand francs, and the
Dauphin ordered another like it. Charles X. gave the cross of the Legion
of honor to this son of a peasant who had fought for the royal cause
in 1799. (Joseph Bridau, the great painter, was not yet decorated.) The
minister of the Interior ordered two church pictures of Fougeres.
This Salon of 1829 was to Pierre Grassou his whole fortune, fame,
future, and life. Be original, invent, and you die by inches; copy,
imitate, and you'll live. After this discovery of a gold mine, Grassou
de Fougeres obtained his benefit of the fatal principle to which society
owes the wretched mediocrities to whom are intrusted in these days the
election of leaders in all social classes; who proceed, naturally, to
elect themselves and who wage a bitter war against all true talent. The
principle of election applied indiscriminately is false, and France will
some day abandon it.
Nevertheless the modesty, simplicity, and genuine surprise of the good
and gentle Fougeres silenced all envy and all recriminations. Besides,
he had on his side all of his clan who had succeeded, and all who
expected to succeed. Some persons, touched by the persistent energy of a
man whom nothing had discouraged, talked of Domenichino and said:--
"Perseverance in the arts should be rewarded. Grassou hasn't stolen his
successes; he has delved for ten years, the poor dear man!"
That exclamation of "poor dear man!" counted for half in the support
and the congratulations which the painter received. Pity sets up
mediocrities as envy pulls down great talents, and in equal numbers.
The newspapers, it is true, did not spare criticism, but the chevalier
Fougeres digested them as he had digested the counsel of his friends,
with angelic patience.
Possessing, by this time, fifteen thousand francs, laboriously earned,
he furnished an apartment and studio in the rue de Navarin, and painted
the picture ordered by Monseigneur the Dauphin, also the two church
pictures, and delivered them at the time agreed on, with a punctuality
that was very discomforting to the exchequer of the ministry, accustomed
to a different course of action. But--admire the good fortune of men who
are methodical--if Grassou, belated with his work, had been caught by
the revolution of July he would not have got his money.
By the time he was thirty-seven Fougeres had manufactured for Elie Magus
some two hundred pictures, all of them utterly unknown, by the help of
which he had attained to that satisfying manner, that point of execution
before which the true artist shrugs his shoulders and the bourgeoisie
worships. Fougeres was dear to friends for rectitude of ideas, for
steadiness of sentiment, absolute kindliness, and great loyalty; though
they had no esteem for his palette, they loved the man who held it.
"What a misfortune it is that Fougeres has the vice of painting!" said
his comrades.
But for all this, Grassou gave excellent counsel, like those
feuilletonists incapable of writing a book who know very well where a
book is wanting. There was this difference, however, between literary
critics and Fougeres; he was eminently sensitive to beauties; he felt
them, he acknowledged them, and his advice was instinct with a spirit
of justice that made the justness of his remarks acceptable. After
the revolution of July, Fougeres sent about ten pictures a year to the
Salon, of which the jury admitted four or five. He lived with the most
rigid economy, his household being managed solely by an old charwoman.
For all amusement he visited his friends, he went to see works of art,
he allowed himself a few little trips about France, and he planned to go
to Switzerland in search of inspiration. This detestable artist was an
excellent citizen; he mounted guard duly, went to reviews, and paid his
rent and provision-bills with bourgeois punctuality.
Having lived all his life in toil and poverty, he had never had the time
to love. Poor and a bachelor, until now he did not desire to complicate
his simple life. Incapable of devising any means of increasing his
little fortune, he carried, every three months, to his notary, Cardot,
his quarterly earnings and economies. When the notary had received
about three thousand francs he invested them in some first mortgage, the
interest of which he drew himself and added to the quarterly payments
made to him by Fougeres. The painter was awaiting the fortunate moment
when his property thus laid by would give him the imposing income of two
thousand francs, to allow himself the otium cum dignitate of the
artist and paint pictures; but oh! what pictures! true pictures! each a
finished picture! chouette, Koxnoff, chocnosoff! His future, his dreams
of happiness, the superlative of his hopes--do you know what it was?
To enter the Institute and obtain the grade of officer of the Legion
of honor; to sit down beside Schinner and Leon de Lora, to reach the
Academy before Bridau, to wear a rosette in his buttonhole! What a
dream! It is only commonplace men who think of everything.
Hearing the sound of several steps on the staircase, Fougeres rubbed up
his hair, buttoned his jacket of bottle-green velveteen, and was not a
little amazed to see, entering his doorway, a simpleton face vulgarly
called in studio slang a "melon." This fruit surmounted a pumpkin,
clothed in blue cloth adorned with a bunch of tintinnabulating baubles.
The melon puffed like a walrus; the pumpkin advanced on turnips,
improperly called legs. A true painter would have turned the little
bottle-vendor off at once, assuring him that he didn't paint vegetables.
This painter looked at his client without a smile, for Monsieur Vervelle
wore a three-thousand-franc diamond in the bosom of his shirt.
Fougeres glanced at Magus and said: "There's fat in it!" using a slang
term then much in vogue in the studios.
Hearing those words Monsieur Vervelle frowned. The worthy bourgeois drew
after him another complication of vegetables in the persons of his wife
and daughter. The wife had a fine veneer of mahogany on her face, and
in figure she resembled a cocoa-nut, surmounted by a head and tied in
around the waist. She pivoted on her legs, which were tap-rooted,
and her gown was yellow with black stripes. She proudly exhibited
unutterable mittens on a puffy pair of hands; the plumes of a
first-class funeral floated on an over-flowing bonnet; laces adorned
her shoulders, as round behind as they were before; consequently, the
spherical form of the cocoa-nut was perfect. Her feet, of a kind that
painters call abatis, rose above the varnished leather of the shoes in a
swelling that was some inches high. How the feet were ever got into the
shoes, no one knows.
Following these vegetable parents was a young asparagus, who presented
a tiny head with smoothly banded hair of the yellow-carroty tone that a
Roman adores, long, stringy arms, a fairly white skin with reddish spots
upon it, large innocent eyes, and white lashes, scarcely any brows, a
leghorn bonnet bound with white satin and adorned with two honest bows
of the same satin, hands virtuously red, and the feet of her mother. The
faces of these three beings wore, as they looked round the studio, an
air of happiness which bespoke in them a respectable enthusiasm for Art.
"So it is you, monsieur, who are going to take our likenesses?" said the
father, assuming a jaunty air.
"Yes, monsieur," replied Grassou.
"Vervelle, he has the cross!" whispered the wife to the husband while
the painter's back was turned.
"Should I be likely to have our portraits painted by an artist who
wasn't decorated?" returned the former bottle-dealer.
Elie Magus here bowed to the Vervelle family and went away. Grassou
accompanied him to the landing.
"There's no one but you who would fish up such whales."
"One hundred thousand francs of 'dot'!"
"Yes, but what a family!"
"Three hundred thousand francs of expectations, a house in the rue
Boucherat, and a country-house at Ville d'Avray!"
"Bottles and corks! bottles and corks!" said the painter; "they set my
teeth on edge."
"Safe from want for the rest of your days," said Elie Magus as he
departed.
That idea entered the head of Pierre Grassou as the daylight had burst
into his garret that morning.
While he posed the father of the young person, he thought the
bottle-dealer had a good countenance, and he admired the face full
of violent tones. The mother and daughter hovered about the easel,
marvelling at all his preparations; they evidently thought him a
demigod. This visible admiration pleased Fougeres. The golden calf threw
upon the family its fantastic reflections.
"You must earn lots of money; but of course you don't spend it as you
get it," said the mother.
"No, madame," replied the painter; "I don't spend it; I have not the
means to amuse myself. My notary invests my money; he knows what I have;
as soon as I have taken him the money I never think of it again."
"I've always been told," cried old Vervelle, "that artists were baskets
with holes in them."
"Who is your notary--if it is not indiscreet to ask?" said Madame
Vervelle.
"A good fellow, all round," replied Grassou. "His name is Cardot."
"Well, well! if that isn't a joke!" exclaimed Vervelle. "Cardot is our
notary too."
"Take care! don't move," said the painter.
"Do pray hold still, Antenor," said the wife. "If you move about you'll
make monsieur miss; you should just see him working, and then you'd
understand."
"Oh! why didn't you have me taught the arts?" said Mademoiselle Vervelle
to her parents.
"Virginie," said her mother, "a young person ought not to learn certain
things. When you are married--well, till then, keep quiet."
During this first sitting the Vervelle family became almost intimate
with the worthy artist. They were to come again two days later. As they
went away the father told Virginie to walk in front; but in spite of
this separation, she overheard the following words, which naturally
awakened her curiosity.
"Decorated--thirty-seven years old--an artist who gets orders--puts his
money with our notary. We'll consult Cardot. Hein! Madame de Fougeres!
not a bad name--doesn't look like a bad man either! One might prefer a
merchant; but before a merchant retires from business one can never know
what one's daughter may come to; whereas an economical artist--and then
you know we love Art--Well, we'll see!"
While the Vervelle family discussed Pierre Grassou, Pierre Grassou
discussed in his own mind the Vervelle family. He found it impossible to
stay peacefully in his studio, so he took a walk on the boulevard, and
looked at all the red-haired women who passed him. He made a series of
the oddest reasonings to himself: gold was the handsomest of metals; a
tawny yellow represented gold; the Romans were fond of red-haired women,
and he turned Roman, etc. After two years of marriage what man would
ever care about the color of his wife's hair? Beauty fades,--but
ugliness remains! Money is one-half of all happiness. That night when he
went to bed the painter had come to think Virginie Vervelle charming.
When the three Vervelles arrived on the day of the second sitting the
artist received them with smiles. The rascal had shaved and put on clean
linen; he had also arranged his hair in a pleasing manner, and chosen
a very becoming pair of trousers and red leather slippers with pointed
toes. The family replied with smiles as flattering as those of the
artist. Virginie became the color of her hair, lowered her eyes, and
turned aside her head to look at the sketches. Pierre Grassou thought
these little affectations charming; Virginie had such grace; happily she
didn't look like her father or her mother; but whom did she look like?
During this sitting there were little skirmishes between the family
and the painter, who had the audacity to call pere Vervelle witty. This
flattery brought the family on the double-quick to the heart of the
artist; he gave a drawing to the daughter, and a sketch to the mother.
"What! for nothing?" they said.
Pierre Grassou could not help smiling.
"You shouldn't give away your pictures in that way; they are money,"
said old Vervelle.
At the third sitting pere Vervelle mentioned a fine gallery of pictures
which he had in his country-house at Ville d'Avray--Rubens, Gerard Douw,
Mieris, Terburg, Rembrandt, Titian, Paul Potter, etc.
"Monsieur Vervelle has been very extravagant," said Madame Vervelle,
ostentatiously. "He has over one hundred thousand francs' worth of
pictures."
"I love Art," said the former bottle-dealer.
When Madame Vervelle's portrait was begun that of her husband was nearly
finished, and the enthusiasm of the family knew no bounds. The notary
had spoken in the highest praise of the painter. Pierre Grassou was, he
said, one of the most honest fellows on earth; he had laid by thirty-six
thousand francs; his days of poverty were over; he now saved about ten
thousand francs a year and capitalized the interest; in short, he was
incapable of making a woman unhappy. This last remark had enormous
weight in the scales. Vervelle's friends now heard of nothing but the
celebrated painter Fougeres.
The day on which Fougeres began the portrait of Mademoiselle Virginie,
he was virtually son-in-law to the Vervelle family. The three Vervelles
bloomed out in this studio, which they were now accustomed to consider
as one of their residences; there was to them an inexplicable attraction
in this clean, neat, pretty, and artistic abode. Abyssus abyssum, the
commonplace attracts the commonplace. Toward the end of the sitting the
stairway shook, the door was violently thrust open by Joseph Bridau; he
came like a whirlwind, his hair flying. He showed his grand haggard face
as he looked about him, casting everywhere the lightning of his glance;
then he walked round the whole studio, and returned abruptly to Grassou,
pulling his coat together over the gastric region, and endeavouring, but
in vain, to button it, the button mould having escaped from its capsule
of cloth.
"Wood is dear," he said to Grassou.
"Ah!"
"The British are after me" (slang term for creditors) "Gracious! do you
paint such things as that?"
"Hold your tongue!"
"Ah! to be sure, yes."
The Vervelle family, extremely shocked by this extraordinary apparition,
passed from its ordinary red to a cherry-red, two shades deeper.
"Brings in, hey?" continued Joseph. "Any shot in your locker?"
"How much do you want?"
"Five hundred. I've got one of those bull-dog dealers after me, and if
the fellow once gets his teeth in he won't let go while there's a bit of
me left. What a crew!"
"I'll write you a line for my notary."
"Have you got a notary?"
"Yes."
"That explains to me why you still make cheeks with pink tones like a
perfumer's sign."
Grassou could not help coloring, for Virginie was sitting.
"Take Nature as you find her," said the great painter, going on with his
lecture. "Mademoiselle is red-haired. Well, is that a sin? All things
are magnificent in painting. Put some vermillion on your palette, and
warm up those cheeks; touch in those little brown spots; come, butter it
well in. Do you pretend to have more sense than Nature?"
"Look here," said Fougeres, "take my place while I go and write that
note."
Vervelle rolled to the table and whispered in Grassou's ear:--
"Won't that country lout spoilt it?"
"If he would only paint the portrait of your Virginie it would be worth
a thousand times more than mine," replied Fougeres, vehemently.
Hearing that reply the bourgeois beat a quiet retreat to his wife, who
was stupefied by the invasion of this ferocious animal, and very uneasy
at his co-operation in her daughter's portrait.
"Here, follow these indications," said Bridau, returning the palette,
and taking the note. "I won't thank you. I can go back now to d'Arthez'
chateau, where I am doing a dining-room, and Leon de Lora the tops of
the doors--masterpieces! Come and see us."
And off he went without taking leave, having had enough of looking at
Virginie.
"Who is that man?" asked Madame Vervelle.
"A great artist," answered Grassou.
There was silence for a moment.
"Are you quite sure," said Virginie, "that he has done no harm to my
portrait? He frightened me."
"He has only done it good," replied Grassou.
"Well, if he is a great artist, I prefer a great artist like you," said
Madame Vervelle.
The ways of genius had ruffled up these orderly bourgeois.
The phase of autumn so pleasantly named "Saint Martin's summer" was
just beginning. With the timidity of a neophyte in presence of a man of
genius, Vervelle risked giving Fougeres an invitation to come out to
his country-house on the following Sunday. He knew, he said, how little
attraction a plain bourgeois family could offer to an artist.
"You artists," he continued, "want emotions, great scenes, and witty
talk; but you'll find good wines, and I rely on my collection of
pictures to compensate an artist like you for the bore of dining with
mere merchants."
This form of idolatry, which stroked his innocent self-love, was
charming to our poor Pierre Grassou, so little accustomed to such
compliments. The honest artist, that atrocious mediocrity, that heart
of gold, that loyal soul, that stupid draughtsman, that worthy fellow,
decorated by royalty itself with the Legion of honor, put himself under
arms to go out to Ville d'Avray and enjoy the last fine days of the
year. The painter went modestly by public conveyance, and he could not
but admire the beautiful villa of the bottle-dealer, standing in a park
of five acres at the summit of Ville d'Avray, commanding a noble view
of the landscape. Marry Virginie, and have that beautiful villa some day
for his own!
He was received by the Vervelles with an enthusiasm, a joy, a
kindliness, a frank bourgeois absurdity which confounded him. It was
indeed a day of triumph. The prospective son-in-law was marched about
the grounds on the nankeen-colored paths, all raked as they should be
for the steps of so great a man. The trees themselves looked brushed and
combed, and the lawns had just been mown. The pure country air wafted
to the nostrils a most enticing smell of cooking. All things about the
mansion seemed to say:
"We have a great artist among us."
Little old Vervelle himself rolled like an apple through his park, the
daughter meandered like an eel, the mother followed with dignified step.
These three beings never for one moment let go of Pierre Grassou during
seven hours. After dinner, the length of which equalled its
magnificence, Monsieur and Madame Vervelle reached the moment of their
grand theatrical effect,--the opening of the picture gallery illuminated
by lamps, the reflections of which were managed with the utmost care.
Three neighbours, also retired merchants, an old uncle (from whom were
expectations), an elderly Demoiselle Vervelle, and a number of other
guests invited to be present at this ovation to a great artist followed
Grassou into the picture gallery, all curious to hear his opinion of the
famous collection of pere Vervelle, who was fond of oppressing them with
the fabulous value of his paintings. The bottle-merchant seemed to have
the idea of competing with King Louis-Philippe and the galleries of
Versailles.
The pictures, magnificently framed, each bore labels on which was read
in black letters on a gold ground:
Rubens
Dance of fauns and nymphs
Rembrandt
Interior of a dissecting room. The physician van Tromp
instructing his pupils.
In all, there were one hundred and fifty pictures, varnished and dusted.
Some were covered with green baize curtains which were not undrawn in
presence of young ladies.
Pierre Grassou stood with arms pendent, gaping mouth, and no word upon
his lips as he recognized half his own pictures in these works of art.
He was Rubens, he was Rembrandt, Mieris, Metzu, Paul Potter, Gerard
Douw! He was twenty great masters all by himself.
"What is the matter? You've turned pale!"
"Daughter, a glass of water! quick!" cried Madame Vervelle. The painter
took pere Vervelle by the button of his coat and led him to a corner on
pretence of looking at a Murillo. Spanish pictures were then the rage.
"You bought your pictures from Elie Magus?"
"Yes, all originals."
"Between ourselves, tell me what he made you pay for those I shall point
out to you."
Together they walked round the gallery. The guests were amazed at the
gravity with which the artist proceeded, in company with the host, to
examine each picture.
"Three thousand francs," said Vervelle in a whisper, as they reached the
last, "but I tell everybody forty thousand."
"Forty thousand for a Titian!" said the artist, aloud. "Why, it is
nothing at all!"
"Didn't I tell you," said Vervelle, "that I had three hundred thousand
francs' worth of pictures?"
"I painted those pictures," said Pierre Grassou in Vervelle's ear, "and
I sold them one by one to Elie Magus for less than ten thousand francs
the whole lot."
"Prove it to me," said the bottle-dealer, "and I double my daughter's
'dot,' for if it is so, you are Rubens, Rembrandt, Titian, Gerard Douw!"
"And Magus is a famous picture-dealer!" said the painter, who now saw
the meaning of the misty and aged look imparted to his pictures in
Elie's shop, and the utility of the subjects the picture-dealer had
required of him.
Far from losing the esteem of his admiring bottle-merchant, Monsieur
de Fougeres (for so the family persisted in calling Pierre Grassou)
advanced so much that when the portraits were finished he presented them
gratuitously to his father-in-law, his mother-in-law and his wife.
At the present day, Pierre Grassou, who never misses exhibiting at the
Salon, passes in bourgeois regions for a fine portrait-painter. He earns
some twenty thousand francs a year and spoils a thousand francs' worth
of canvas. His wife has six thousand francs a year in dowry, and he
lives with his father-in-law. The Vervelles and the Grassous, who agree
delightfully, keep a carriage, and are the happiest people on earth.
Pierre Grassou never emerges from the bourgeois circle, in which he
is considered one of the greatest artists of the period. Not a family
portrait is painted between the barrier du Trone and the rue du Temple
that is not done by this great painter; none of them costs less than
five hundred francs. The great reason which the bourgeois families have
for employing him is this:--
"Say what you will of him, he lays by twenty thousand francs a year with
his notary."
As Grassou took a creditable part on the occasion of the riots of May
12th he was appointed an officer of the Legion of honor. He is a major
in the National Guard. The Museum of Versailles felt it incumbent upon
itself to order a battle-piece of so excellent a citizen, who thereupon walked
about Paris to meet his old comrades and have the happiness of saying to
them:--
"The King has given me an order for the Museum of Versailles."
Madame de Fougeres adores her husband, to whom she has presented two
children. This painter, a good father and a good husband, is unable to
eradicate from his heart a fatal thought, namely, that artists laugh at
his work; that his name is a term of contempt in the studios; and that
the feuilletons take no notice of his pictures. But he still works on;
he aims for the Academy, where, undoubtedly, he will enter. And--oh!
vengeance which dilates his heart!--he buys the pictures of celebrated
artists who are pinched for means, and he substitutes these true works
of art that are not his own for the wretched daubs in the collection at
Ville d'Avray.
There are many mediocrities more aggressive and more mischievous than
that of Pierre Grassou, who is, moreover, anonymously benevolent and
truly obliging.
ADDENDUM
The following personages appear in other stories of the Human Comedy.
Bridau, Joseph
The Purse
A Bachelor's Establishment
A Distinguished Provincial at Paris
A Start in Life
Modeste Mignon
Another Study of Woman
Letters of Two Brides
Cousin Betty
The Member for Arcis
Cardot (Parisian notary)
The Muse of the Department
A Man of Business
Jealousies of a Country Town
The Middle Classes
Cousin Pons
Grassou, Pierre
A Bachelor's Establishment
Cousin Betty
The Middle Classes
Cousin Pons
Lora, Leon de
The Unconscious Humorists
A Bachelor's Establishment
A Start in Life
Honorine
Cousin Betty
Beatrix
Magus, Elie
The Vendetta
A Marriage Settlement
A Bachelor's Establishment
Cousin Pons
Schinner, Hippolyte
The Purse
A Bachelor's Establishment
A Start in Life
Albert Savarus
The Government Clerks
Modeste Mignon
The Imaginary Mistress
The Unconscious Humorists
End of the Project Gutenberg EBook of Pierre Grassou, by Honore de Balzac
This etext was prepared by Sue Asscher <[email protected]>
CRITO
by Plato
Translated by Benjamin Jowett
INTRODUCTION.
The Crito seems intended to exhibit the character of Socrates in one light
only, not as the philosopher, fulfilling a divine mission and trusting in
the will of heaven, but simply as the good citizen, who having been
unjustly condemned is willing to give up his life in obedience to the laws
of the state...
The days of Socrates are drawing to a close; the fatal ship has been seen
off Sunium, as he is informed by his aged friend and contemporary Crito,
who visits him before the dawn has broken; he himself has been warned in a
dream that on the third day he must depart. Time is precious, and Crito
has come early in order to gain his consent to a plan of escape. This can
be easily accomplished by his friends, who will incur no danger in making
the attempt to save him, but will be disgraced for ever if they allow him
to perish. He should think of his duty to his children, and not play into
the hands of his enemies. Money is already provided by Crito as well as by
Simmias and others, and he will have no difficulty in finding friends in
Thessaly and other places.
Socrates is afraid that Crito is but pressing upon him the opinions of the
many: whereas, all his life long he has followed the dictates of reason
only and the opinion of the one wise or skilled man. There was a time when
Crito himself had allowed the propriety of this. And although some one
will say 'the many can kill us,' that makes no difference; but a good life,
in other words, a just and honourable life, is alone to be valued. All
considerations of loss of reputation or injury to his children should be
dismissed: the only question is whether he would be right in attempting to
escape. Crito, who is a disinterested person not having the fear of death
before his eyes, shall answer this for him. Before he was condemned they
had often held discussions, in which they agreed that no man should either
do evil, or return evil for evil, or betray the right. Are these
principles to be altered because the circumstances of Socrates are altered?
Crito admits that they remain the same. Then is his escape consistent with
the maintenance of them? To this Crito is unable or unwilling to reply.
Socrates proceeds:--Suppose the Laws of Athens to come and remonstrate with
him: they will ask 'Why does he seek to overturn them?' and if he replies,
'they have injured him,' will not the Laws answer, 'Yes, but was that the
agreement? Has he any objection to make to them which would justify him in
overturning them? Was he not brought into the world and educated by their
help, and are they not his parents? He might have left Athens and gone
where he pleased, but he has lived there for seventy years more constantly
than any other citizen.' Thus he has clearly shown that he acknowledged
the agreement, which he cannot now break without dishonour to himself and
danger to his friends. Even in the course of the trial he might have
proposed exile as the penalty, but then he declared that he preferred death
to exile. And whither will he direct his footsteps? In any well-ordered
state the Laws will consider him as an enemy. Possibly in a land of
misrule like Thessaly he may be welcomed at first, and the unseemly
narrative of his escape will be regarded by the inhabitants as an amusing
tale. But if he offends them he will have to learn another sort of lesson.
Will he continue to give lectures in virtue? That would hardly be decent.
And how will his children be the gainers if he takes them into Thessaly,
and deprives them of Athenian citizenship? Or if he leaves them behind,
does he expect that they will be better taken care of by his friends
because he is in Thessaly? Will not true friends care for them equally
whether he is alive or dead?
Finally, they exhort him to think of justice first, and of life and
children afterwards. He may now depart in peace and innocence, a sufferer
and not a doer of evil. But if he breaks agreements, and returns evil for
evil, they will be angry with him while he lives; and their brethren the
Laws of the world below will receive him as an enemy. Such is the mystic
voice which is always murmuring in his ears.
That Socrates was not a good citizen was a charge made against him during
his lifetime, which has been often repeated in later ages. The crimes of
Alcibiades, Critias, and Charmides, who had been his pupils, were still
recent in the memory of the now restored democracy. The fact that he had
been neutral in the death-struggle of Athens was not likely to conciliate
popular good-will. Plato, writing probably in the next generation,
undertakes the defence of his friend and master in this particular, not to
the Athenians of his day, but to posterity and the world at large.
Whether such an incident ever really occurred as the visit of Crito and the
proposal of escape is uncertain: Plato could easily have invented far more
than that (Phaedr.); and in the selection of Crito, the aged friend, as the
fittest person to make the proposal to Socrates, we seem to recognize the
hand of the artist. Whether any one who has been subjected by the laws of
his country to an unjust judgment is right in attempting to escape, is a
thesis about which casuists might disagree. Shelley (Prose Works) is of
opinion that Socrates 'did well to die,' but not for the 'sophistical'
reasons which Plato has put into his mouth. And there would be no
difficulty in arguing that Socrates should have lived and preferred to a
glorious death the good which he might still be able to perform. 'A
rhetorician would have had much to say upon that point.' It may be
observed however that Plato never intended to answer the question of
casuistry, but only to exhibit the ideal of patient virtue which refuses to
do the least evil in order to avoid the greatest, and to show his master
maintaining in death the opinions which he had professed in his life. Not
'the world,' but the 'one wise man,' is still the paradox of Socrates in
his last hours. He must be guided by reason, although her conclusions may
be fatal to him. The remarkable sentiment that the wicked can do neither
good nor evil is true, if taken in the sense, which he means, of moral
evil; in his own words, 'they cannot make a man wise or foolish.'
This little dialogue is a perfect piece of dialectic, in which granting the
'common principle,' there is no escaping from the conclusion. It is
anticipated at the beginning by the dream of Socrates and the parody of
Homer. The personification of the Laws, and of their brethren the Laws in
the world below, is one of the noblest and boldest figures of speech which
occur in Plato.
CRITO
by
Plato
Translated by Benjamin Jowett
PERSONS OF THE DIALOGUE: Socrates, Crito.
SCENE: The Prison of Socrates.
SOCRATES: Why have you come at this hour, Crito? it must be quite early.
CRITO: Yes, certainly.
SOCRATES: What is the exact time?
CRITO: The dawn is breaking.
SOCRATES: I wonder that the keeper of the prison would let you in.
CRITO: He knows me because I often come, Socrates; moreover, I have done
him a kindness.
SOCRATES: And are you only just arrived?
CRITO: No, I came some time ago.
SOCRATES: Then why did you sit and say nothing, instead of at once
awakening me?
CRITO: I should not have liked myself, Socrates, to be in such great
trouble and unrest as you are--indeed I should not: I have been watching
with amazement your peaceful slumbers; and for that reason I did not awake
you, because I wished to minimize the pain. I have always thought you to
be of a happy disposition; but never did I see anything like the easy,
tranquil manner in which you bear this calamity.
SOCRATES: Why, Crito, when a man has reached my age he ought not to be
repining at the approach of death.
CRITO: And yet other old men find themselves in similar misfortunes, and
age does not prevent them from repining.
SOCRATES: That is true. But you have not told me why you come at this
early hour.
CRITO: I come to bring you a message which is sad and painful; not, as I
believe, to yourself, but to all of us who are your friends, and saddest of
all to me.
SOCRATES: What? Has the ship come from Delos, on the arrival of which I
am to die?
CRITO: No, the ship has not actually arrived, but she will probably be
here to-day, as persons who have come from Sunium tell me that they have
left her there; and therefore to-morrow, Socrates, will be the last day of
your life.
SOCRATES: Very well, Crito; if such is the will of God, I am willing; but
my belief is that there will be a delay of a day.
CRITO: Why do you think so?
SOCRATES: I will tell you. I am to die on the day after the arrival of
the ship?
CRITO: Yes; that is what the authorities say.
SOCRATES: But I do not think that the ship will be here until to-morrow;
this I infer from a vision which I had last night, or rather only just now,
when you fortunately allowed me to sleep.
CRITO: And what was the nature of the vision?
SOCRATES: There appeared to me the likeness of a woman, fair and comely,
clothed in bright raiment, who called to me and said: O Socrates,
'The third day hence to fertile Phthia shalt thou go.' (Homer, Il.)
CRITO: What a singular dream, Socrates!
SOCRATES: There can be no doubt about the meaning, Crito, I think.
CRITO: Yes; the meaning is only too clear. But, oh! my beloved Socrates,
let me entreat you once more to take my advice and escape. For if you die
I shall not only lose a friend who can never be replaced, but there is
another evil: people who do not know you and me will believe that I might
have saved you if I had been willing to give money, but that I did not
care. Now, can there be a worse disgrace than this--that I should be
thought to value money more than the life of a friend? For the many will
not be persuaded that I wanted you to escape, and that you refused.
SOCRATES: But why, my dear Crito, should we care about the opinion of the
many? Good men, and they are the only persons who are worth considering,
will think of these things truly as they occurred.
CRITO: But you see, Socrates, that the opinion of the many must be
regarded, for what is now happening shows that they can do the greatest
evil to any one who has lost their good opinion.
SOCRATES: I only wish it were so, Crito; and that the many could do the
greatest evil; for then they would also be able to do the greatest good--
and what a fine thing this would be! But in reality they can do neither;
for they cannot make a man either wise or foolish; and whatever they do is
the result of chance.
CRITO: Well, I will not dispute with you; but please to tell me, Socrates,
whether you are not acting out of regard to me and your other friends: are
you not afraid that if you escape from prison we may get into trouble with
the informers for having stolen you away, and lose either the whole or a
great part of our property; or that even a worse evil may happen to us?
Now, if you fear on our account, be at ease; for in order to save you, we
ought surely to run this, or even a greater risk; be persuaded, then, and
do as I say.
SOCRATES: Yes, Crito, that is one fear which you mention, but by no means
the only one.
CRITO: Fear not--there are persons who are willing to get you out of
prison at no great cost; and as for the informers they are far from being
exorbitant in their demands--a little money will satisfy them. My means,
which are certainly ample, are at your service, and if you have a scruple
about spending all mine, here are strangers who will give you the use of
theirs; and one of them, Simmias the Theban, has brought a large sum of
money for this very purpose; and Cebes and many others are prepared to
spend their money in helping you to escape. I say, therefore, do not
hesitate on our account, and do not say, as you did in the court (compare
Apol.), that you will have a difficulty in knowing what to do with yourself
anywhere else. For men will love you in other places to which you may go,
and not in Athens only; there are friends of mine in Thessaly, if you like
to go to them, who will value and protect you, and no Thessalian will give
you any trouble. Nor can I think that you are at all justified, Socrates,
in betraying your own life when you might be saved; in acting thus you are
playing into the hands of your enemies, who are hurrying on your
destruction. And further I should say that you are deserting your own
children; for you might bring them up and educate them; instead of which
you go away and leave them, and they will have to take their chance; and if
they do not meet with the usual fate of orphans, there will be small thanks
to you. No man should bring children into the world who is unwilling to
persevere to the end in their nurture and education. But you appear to be
choosing the easier part, not the better and manlier, which would have been
more becoming in one who professes to care for virtue in all his actions,
like yourself. And indeed, I am ashamed not only of you, but of us who are
your friends, when I reflect that the whole business will be attributed
entirely to our want of courage. The trial need never have come on, or
might have been managed differently; and this last act, or crowning folly,
will seem to have occurred through our negligence and cowardice, who might
have saved you, if we had been good for anything; and you might have saved
yourself, for there was no difficulty at all. See now, Socrates, how sad
and discreditable are the consequences, both to us and you. Make up your
mind then, or rather have your mind already made up, for the time of
deliberation is over, and there is only one thing to be done, which must be
done this very night, and if we delay at all will be no longer practicable
or possible; I beseech you therefore, Socrates, be persuaded by me, and do
as I say.
SOCRATES: Dear Crito, your zeal is invaluable, if a right one; but if
wrong, the greater the zeal the greater the danger; and therefore we ought
to consider whether I shall or shall not do as you say. For I am and
always have been one of those natures who must be guided by reason,
whatever the reason may be which upon reflection appears to me to be the
best; and now that this chance has befallen me, I cannot repudiate my own
words: the principles which I have hitherto honoured and revered I still
honour, and unless we can at once find other and better principles, I am
certain not to agree with you; no, not even if the power of the multitude
could inflict many more imprisonments, confiscations, deaths, frightening
us like children with hobgoblin terrors (compare Apol.). What will be the
fairest way of considering the question? Shall I return to your old
argument about the opinions of men?--we were saying that some of them are
to be regarded, and others not. Now were we right in maintaining this
before I was condemned? And has the argument which was once good now
proved to be talk for the sake of talking--mere childish nonsense? That is
what I want to consider with your help, Crito:--whether, under my present
circumstances, the argument appears to be in any way different or not; and
is to be allowed by me or disallowed. That argument, which, as I believe,
is maintained by many persons of authority, was to the effect, as I was
saying, that the opinions of some men are to be regarded, and of other men
not to be regarded. Now you, Crito, are not going to die to-morrow--at
least, there is no human probability of this, and therefore you are
disinterested and not liable to be deceived by the circumstances in which
you are placed. Tell me then, whether I am right in saying that some
opinions, and the opinions of some men only, are to be valued, and that
other opinions, and the opinions of other men, are not to be valued. I ask
you whether I was right in maintaining this?
CRITO: Certainly.
SOCRATES: The good are to be regarded, and not the bad?
CRITO: Yes.
SOCRATES: And the opinions of the wise are good, and the opinions of the
unwise are evil?
CRITO: Certainly.
SOCRATES: And what was said about another matter? Is the pupil who
devotes himself to the practice of gymnastics supposed to attend to the
praise and blame and opinion of every man, or of one man only--his
physician or trainer, whoever he may be?
CRITO: Of one man only.
SOCRATES: And he ought to fear the censure and welcome the praise of that
one only, and not of the many?
CRITO: Clearly so.
SOCRATES: And he ought to act and train, and eat and drink in the way
which seems good to his single master who has understanding, rather than
according to the opinion of all other men put together?
CRITO: True.
SOCRATES: And if he disobeys and disregards the opinion and approval of
the one, and regards the opinion of the many who have no understanding,
will he not suffer evil?
CRITO: Certainly he will.
SOCRATES: And what will the evil be, whither tending and what affecting,
in the disobedient person?
CRITO: Clearly, affecting the body; that is what is destroyed by the evil.
SOCRATES: Very good; and is not this true, Crito, of other things which we
need not separately enumerate? In questions of just and unjust, fair and
foul, good and evil, which are the subjects of our present consultation,
ought we to follow the opinion of the many and to fear them; or the opinion
of the one man who has understanding? ought we not to fear and reverence
him more than all the rest of the world: and if we desert him shall we not
destroy and injure that principle in us which may be assumed to be improved
by justice and deteriorated by injustice;--there is such a principle?
CRITO: Certainly there is, Socrates.
SOCRATES: Take a parallel instance:--if, acting under the advice of those
who have no understanding, we destroy that which is improved by health and
is deteriorated by disease, would life be worth having? And that which has
been destroyed is--the body?
CRITO: Yes.
SOCRATES: Could we live, having an evil and corrupted body?
CRITO: Certainly not.
SOCRATES: And will life be worth having, if that higher part of man be
destroyed, which is improved by justice and depraved by injustice? Do we
suppose that principle, whatever it may be in man, which has to do with
justice and injustice, to be inferior to the body?
CRITO: Certainly not.
SOCRATES: More honourable than the body?
CRITO: Far more.
SOCRATES: Then, my friend, we must not regard what the many say of us:
but what he, the one man who has understanding of just and unjust, will
say, and what the truth will say. And therefore you begin in error when
you advise that we should regard the opinion of the many about just and
unjust, good and evil, honourable and dishonourable.--'Well,' some one will
say, 'but the many can kill us.'
CRITO: Yes, Socrates; that will clearly be the answer.
SOCRATES: And it is true; but still I find with surprise that the old
argument is unshaken as ever. And I should like to know whether I may say
the same of another proposition--that not life, but a good life, is to be
chiefly valued?
CRITO: Yes, that also remains unshaken.
SOCRATES: And a good life is equivalent to a just and honourable one--that
holds also?
CRITO: Yes, it does.
SOCRATES: From these premisses I proceed to argue the question whether I
ought or ought not to try and escape without the consent of the Athenians:
and if I am clearly right in escaping, then I will make the attempt; but if
not, I will abstain. The other considerations which you mention, of money
and loss of character and the duty of educating one's children, are, I
fear, only the doctrines of the multitude, who would be as ready to restore
people to life, if they were able, as they are to put them to death--and
with as little reason. But now, since the argument has thus far prevailed,
the only question which remains to be considered is, whether we shall do
rightly either in escaping or in suffering others to aid in our escape and
paying them in money and thanks, or whether in reality we shall not do
rightly; and if the latter, then death or any other calamity which may
ensue on my remaining here must not be allowed to enter into the
calculation.
CRITO: I think that you are right, Socrates; how then shall we proceed?
SOCRATES: Let us consider the matter together, and do you either refute me
if you can, and I will be convinced; or else cease, my dear friend, from
repeating to me that I ought to escape against the wishes of the Athenians:
for I highly value your attempts to persuade me to do so, but I may not be
persuaded against my own better judgment. And now please to consider my
first position, and try how you can best answer me.
CRITO: I will.
SOCRATES: Are we to say that we are never intentionally to do wrong, or
that in one way we ought and in another way we ought not to do wrong, or is
doing wrong always evil and dishonourable, as I was just now saying, and as
has been already acknowledged by us? Are all our former admissions which
were made within a few days to be thrown away? And have we, at our age,
been earnestly discoursing with one another all our life long only to
discover that we are no better than children? Or, in spite of the opinion
of the many, and in spite of consequences whether better or worse, shall we
insist on the truth of what was then said, that injustice is always an evil
and dishonour to him who acts unjustly? Shall we say so or not?
CRITO: Yes.
SOCRATES: Then we must do no wrong?
CRITO: Certainly not.
SOCRATES: Nor when injured injure in return, as the many imagine; for we
must injure no one at all? (E.g. compare Rep.)
CRITO: Clearly not.
SOCRATES: Again, Crito, may we do evil?
CRITO: Surely not, Socrates.
SOCRATES: And what of doing evil in return for evil, which is the morality
of the many--is that just or not?
CRITO: Not just.
SOCRATES: For doing evil to another is the same as injuring him?
CRITO: Very true.
SOCRATES: Then we ought not to retaliate or render evil for evil to any
one, whatever evil we may have suffered from him. But I would have you
consider, Crito, whether you really mean what you are saying. For this
opinion has never been held, and never will be held, by any considerable
number of persons; and those who are agreed and those who are not agreed
upon this point have no common ground, and can only despise one another
when they see how widely they differ. Tell me, then, whether you agree
with and assent to my first principle, that neither injury nor retaliation
nor warding off evil by evil is ever right. And shall that be the premiss
of our argument? Or do you decline and dissent from this? For so I have
ever thought, and continue to think; but, if you are of another opinion,
let me hear what you have to say. If, however, you remain of the same mind
as formerly, I will proceed to the next step.
CRITO: You may proceed, for I have not changed my mind.
SOCRATES: Then I will go on to the next point, which may be put in the
form of a question:--Ought a man to do what he admits to be right, or ought
he to betray the right?
CRITO: He ought to do what he thinks right.
SOCRATES: But if this is true, what is the application? In leaving the
prison against the will of the Athenians, do I wrong any? or rather do I
not wrong those whom I ought least to wrong? Do I not desert the
principles which were acknowledged by us to be just--what do you say?
CRITO: I cannot tell, Socrates, for I do not know.
SOCRATES: Then consider the matter in this way:--Imagine that I am about
to play truant (you may call the proceeding by any name which you like),
and the laws and the government come and interrogate me: 'Tell us,
Socrates,' they say; 'what are you about? are you not going by an act of
yours to overturn us--the laws, and the whole state, as far as in you lies?
Do you imagine that a state can subsist and not be overthrown, in which the
decisions of law have no power, but are set aside and trampled upon by
individuals?' What will be our answer, Crito, to these and the like words?
Any one, and especially a rhetorician, will have a good deal to say on
behalf of the law which requires a sentence to be carried out. He will
argue that this law should not be set aside; and shall we reply, 'Yes; but
the state has injured us and given an unjust sentence.' Suppose I say
that?
CRITO: Very good, Socrates.
SOCRATES: 'And was that our agreement with you?' the law would answer; 'or
were you to abide by the sentence of the state?' And if I were to express
my astonishment at their words, the law would probably add: 'Answer,
Socrates, instead of opening your eyes--you are in the habit of asking and
answering questions. Tell us,--What complaint have you to make against us
which justifies you in attempting to destroy us and the state? In the
first place did we not bring you into existence? Your father married your
mother by our aid and begat you. Say whether you have any objection to
urge against those of us who regulate marriage?' None, I should reply.
'Or against those of us who after birth regulate the nurture and education
of children, in which you also were trained? Were not the laws, which have
the charge of education, right in commanding your father to train you in
music and gymnastic?' Right, I should reply. 'Well then, since you were
brought into the world and nurtured and educated by us, can you deny in the
first place that you are our child and slave, as your fathers were before
you? And if this is true you are not on equal terms with us; nor can you
think that you have a right to do to us what we are doing to you. Would
you have any right to strike or revile or do any other evil to your father
or your master, if you had one, because you have been struck or reviled by
him, or received some other evil at his hands?--you would not say this?
And because we think right to destroy you, do you think that you have any
right to destroy us in return, and your country as far as in you lies?
Will you, O professor of true virtue, pretend that you are justified in
this? Has a philosopher like you failed to discover that our country is
more to be valued and higher and holier far than mother or father or any
ancestor, and more to be regarded in the eyes of the gods and of men of
understanding? also to be soothed, and gently and reverently entreated when
angry, even more than a father, and either to be persuaded, or if not
persuaded, to be obeyed? And when we are punished by her, whether with
imprisonment or stripes, the punishment is to be endured in silence; and if
she lead us to wounds or death in battle, thither we follow as is right;
neither may any one yield or retreat or leave his rank, but whether in
battle or in a court of law, or in any other place, he must do what his
city and his country order him; or he must change their view of what is
just: and if he may do no violence to his father or mother, much less may
he do violence to his country.' What answer shall we make to this, Crito?
Do the laws speak truly, or do they not?
CRITO: I think that they do.
SOCRATES: Then the laws will say: 'Consider, Socrates, if we are speaking
truly that in your present attempt you are going to do us an injury. For,
having brought you into the world, and nurtured and educated you, and given
you and every other citizen a share in every good which we had to give, we
further proclaim to any Athenian by the liberty which we allow him, that if
he does not like us when he has become of age and has seen the ways of the
city, and made our acquaintance, he may go where he pleases and take his
goods with him. None of us laws will forbid him or interfere with him.
Any one who does not like us and the city, and who wants to emigrate to a
colony or to any other city, may go where he likes, retaining his property.
But he who has experience of the manner in which we order justice and
administer the state, and still remains, has entered into an implied
contract that he will do as we command him. And he who disobeys us is, as
we maintain, thrice wrong: first, because in disobeying us he is
disobeying his parents; secondly, because we are the authors of his
education; thirdly, because he has made an agreement with us that he will
duly obey our commands; and he neither obeys them nor convinces us that our
commands are unjust; and we do not rudely impose them, but give him the
alternative of obeying or convincing us;--that is what we offer, and he
does neither.
'These are the sort of accusations to which, as we were saying, you,
Socrates, will be exposed if you accomplish your intentions; you, above all
other Athenians.' Suppose now I ask, why I rather than anybody else? they
will justly retort upon me that I above all other men have acknowledged the
agreement. 'There is clear proof,' they will say, 'Socrates, that we and
the city were not displeasing to you. Of all Athenians you have been the
most constant resident in the city, which, as you never leave, you may be
supposed to love (compare Phaedr.). For you never went out of the city
either to see the games, except once when you went to the Isthmus, or to
any other place unless when you were on military service; nor did you
travel as other men do. Nor had you any curiosity to know other states or
their laws: your affections did not go beyond us and our state; we were
your especial favourites, and you acquiesced in our government of you; and
here in this city you begat your children, which is a proof of your
satisfaction. Moreover, you might in the course of the trial, if you had
liked, have fixed the penalty at banishment; the state which refuses to let
you go now would have let you go then. But you pretended that you
preferred death to exile (compare Apol.), and that you were not unwilling
to die. And now you have forgotten these fine sentiments, and pay no
respect to us the laws, of whom you are the destroyer; and are doing what
only a miserable slave would do, running away and turning your back upon
the compacts and agreements which you made as a citizen. And first of all
answer this very question: Are we right in saying that you agreed to be
governed according to us in deed, and not in word only? Is that true or
not?' How shall we answer, Crito? Must we not assent?
CRITO: We cannot help it, Socrates.
SOCRATES: Then will they not say: 'You, Socrates, are breaking the
covenants and agreements which you made with us at your leisure, not in any
haste or under any compulsion or deception, but after you have had seventy
years to think of them, during which time you were at liberty to leave the
city, if we were not to your mind, or if our covenants appeared to you to
be unfair. You had your choice, and might have gone either to Lacedaemon
or Crete, both which states are often praised by you for their good
government, or to some other Hellenic or foreign state. Whereas you, above
all other Athenians, seemed to be so fond of the state, or, in other words,
of us her laws (and who would care about a state which has no laws?), that
you never stirred out of her; the halt, the blind, the maimed, were not
more stationary in her than you were. And now you run away and forsake
your agreements. Not so, Socrates, if you will take our advice; do not
make yourself ridiculous by escaping out of the city.
'For just consider, if you transgress and err in this sort of way, what
good will you do either to yourself or to your friends? That your friends
will be driven into exile and deprived of citizenship, or will lose their
property, is tolerably certain; and you yourself, if you fly to one of the
neighbouring cities, as, for example, Thebes or Megara, both of which are
well governed, will come to them as an enemy, Socrates, and their
government will be against you, and all patriotic citizens will cast an
evil eye upon you as a subverter of the laws, and you will confirm in the
minds of the judges the justice of their own condemnation of you. For he
who is a corrupter of the laws is more than likely to be a corrupter of the
young and foolish portion of mankind. Will you then flee from well-ordered
cities and virtuous men? and is existence worth having on these terms? Or
will you go to them without shame, and talk to them, Socrates? And what
will you say to them? What you say here about virtue and justice and
institutions and laws being the best things among men? Would that be
decent of you? Surely not. But if you go away from well-governed states
to Crito's friends in Thessaly, where there is great disorder and licence,
they will be charmed to hear the tale of your escape from prison, set off
with ludicrous particulars of the manner in which you were wrapped in a
goatskin or some other disguise, and metamorphosed as the manner is of
runaways; but will there be no one to remind you that in your old age you
were not ashamed to violate the most sacred laws from a miserable desire of
a little more life? Perhaps not, if you keep them in a good temper; but if
they are out of temper you will hear many degrading things; you will live,
but how?--as the flatterer of all men, and the servant of all men; and
doing what?--eating and drinking in Thessaly, having gone abroad in order
that you may get a dinner. And where will be your fine sentiments about
justice and virtue? Say that you wish to live for the sake of your
children--you want to bring them up and educate them--will you take them
into Thessaly and deprive them of Athenian citizenship? Is this the
benefit which you will confer upon them? Or are you under the impression
that they will be better cared for and educated here if you are still
alive, although absent from them; for your friends will take care of them?
Do you fancy that if you are an inhabitant of Thessaly they will take care
of them, and if you are an inhabitant of the other world that they will not
take care of them? Nay; but if they who call themselves friends are good
for anything, they will--to be sure they will.
'Listen, then, Socrates, to us who have brought you up. Think not of life
and children first, and of justice afterwards, but of justice first, that
you may be justified before the princes of the world below. For neither
will you nor any that belong to you be happier or holier or juster in this
life, or happier in another, if you do as Crito bids. Now you depart in
innocence, a sufferer and not a doer of evil; a victim, not of the laws,
but of men. But if you go forth, returning evil for evil, and injury for
injury, breaking the covenants and agreements which you have made with us,
and wronging those whom you ought least of all to wrong, that is to say,
yourself, your friends, your country, and us, we shall be angry with you
while you live, and our brethren, the laws in the world below, will receive
you as an enemy; for they will know that you have done your best to destroy
us. Listen, then, to us and not to Crito.'
This, dear Crito, is the voice which I seem to hear murmuring in my ears,
like the sound of the flute in the ears of the mystic; that voice, I say,
is humming in my ears, and prevents me from hearing any other. And I know
that anything more which you may say will be vain. Yet speak, if you have
anything to say.
CRITO: I have nothing to say, Socrates.
SOCRATES: Leave me then, Crito, to fulfil the will of God, and to follow
whither he leads.
Produced by John Bickers, and Dagny
LA GRANDE BRETECHE
(Sequel to "Another Study of Woman.")
By Honore De Balzac
Translated by Ellen Marriage and Clara Bell
LA GRANDE BRETECHE
"Ah! madame," replied the doctor, "I have some appalling stories in my
collection. But each one has its proper hour in a conversation--you know
the pretty jest recorded by Chamfort, and said to the Duc de Fronsac:
'Between your sally and the present moment lie ten bottles of
champagne.'"
"But it is two in the morning, and the story of Rosina has prepared us,"
said the mistress of the house.
"Tell us, Monsieur Bianchon!" was the cry on every side.
The obliging doctor bowed, and silence reigned.
"At about a hundred paces from Vendome, on the banks of the Loir," said
he, "stands an old brown house, crowned with very high roofs, and so
completely isolated that there is nothing near it, not even a fetid
tannery or a squalid tavern, such as are commonly seen outside small
towns. In front of this house is a garden down to the river, where the
box shrubs, formerly clipped close to edge the walks, now straggle
at their own will. A few willows, rooted in the stream, have grown
up quickly like an enclosing fence, and half hide the house. The
wild plants we call weeds have clothed the bank with their beautiful
luxuriance. The fruit-trees, neglected for these ten years past,
no longer bear a crop, and their suckers have formed a thicket. The
espaliers are like a copse. The paths, once graveled, are overgrown with
purslane; but, to be accurate, there is no trace of a path.
"Looking down from the hilltop, to which cling the ruins of the old
castle of the Dukes of Vendome, the only spot whence the eye can
see into this enclosure, we think that at a time, difficult now to
determine, this spot of earth must have been the joy of some country
gentleman devoted to roses and tulips, in a word, to horticulture, but
above all a lover of choice fruit. An arbor is visible, or rather
the wreck of an arbor, and under it a table still stands not entirely
destroyed by time. At the aspect of this garden that is no more, the
negative joys of the peaceful life of the provinces may be divined as we
divine the history of a worthy tradesman when we read the epitaph on his
tomb. To complete the mournful and tender impressions which seize the
soul, on one of the walls there is a sundial graced with this homely
Christian motto, '_Ultimam cogita_.'
"The roof of this house is dreadfully dilapidated; the outside shutters
are always closed; the balconies are hung with swallows' nests; the
doors are for ever shut. Straggling grasses have outlined the flagstones
of the steps with green; the ironwork is rusty. Moon and sun, winter,
summer, and snow have eaten into the wood, warped the boards, peeled
off the paint. The dreary silence is broken only by birds and cats,
polecats, rats, and mice, free to scamper round, and fight, and eat each
other. An invisible hand has written over it all: 'Mystery.'
"If, prompted by curiosity, you go to look at this house from the
street, you will see a large gate, with a round-arched top; the children
have made many holes in it. I learned later that this door had been
blocked for ten years. Through these irregular breaches you will see
that the side towards the courtyard is in perfect harmony with the side
towards the garden. The same ruin prevails. Tufts of weeds outline
the paving-stones; the walls are scored by enormous cracks, and the
blackened coping is laced with a thousand festoons of pellitory. The
stone steps are disjointed; the bell-cord is rotten; the gutter-spouts
broken. What fire from heaven could have fallen there? By what decree
has salt been sown on this dwelling? Has God been mocked here? Or was
France betrayed? These are the questions we ask ourselves. Reptiles
crawl over it, but give no reply. This empty and deserted house is a
vast enigma of which the answer is known to none.
"It was formerly a little domain, held in fief, and is known as La
Grande Breteche. During my stay at Vendome, where Despleins had left me
in charge of a rich patient, the sight of this strange dwelling became
one of my keenest pleasures. Was it not far better than a ruin? Certain
memories of indisputable authenticity attach themselves to a ruin; but
this house, still standing, though being slowly destroyed by an avenging
hand, contained a secret, an unrevealed thought. At the very least,
it testified to a caprice. More than once in the evening I boarded the
hedge, run wild, which surrounded the enclosure. I braved scratches, I
got into this ownerless garden, this plot which was no longer public or
private; I lingered there for hours gazing at the disorder. I would not,
as the price of the story to which this strange scene no doubt was due,
have asked a single question of any gossiping native. On that spot I
wove delightful romances, and abandoned myself to little debauches of
melancholy which enchanted me. If I had known the reason--perhaps quite
commonplace--of this neglect, I should have lost the unwritten poetry
which intoxicated me. To me this refuge represented the most various
phases of human life, shadowed by misfortune; sometimes the peace of the
graveyard without the dead, who speak in the language of epitaphs; one
day I saw in it the home of lepers; another, the house of the Atridae;
but, above all, I found there provincial life, with its contemplative
ideas, its hour-glass existence. I often wept there, I never laughed.
"More than once I felt involuntary terrors as I heard overhead the dull
hum of the wings of some hurrying wood-pigeon. The earth is dank; you
must be on the watch for lizards, vipers, and frogs, wandering about
with the wild freedom of nature; above all, you must have no fear
of cold, for in a few moments you feel an icy cloak settle on your
shoulders, like the Commendatore's hand on Don Giovanni's neck.
"One evening I felt a shudder; the wind had turned an old rusty
weathercock, and the creaking sounded like a cry from the house, at
the very moment when I was finishing a gloomy drama to account for
this monumental embodiment of woe. I returned to my inn, lost in gloomy
thoughts. When I had supped, the hostess came into my room with an air
of mystery, and said, 'Monsieur, here is Monsieur Regnault.'
"'Who is Monsieur Regnault?'
"'What, sir, do you not know Monsieur Regnault?--Well, that's odd,' said
she, leaving the room.
"On a sudden I saw a man appear, tall, slim, dressed in black, hat
in hand, who came in like a ram ready to butt his opponent, showing a
receding forehead, a small pointed head, and a colorless face of the hue
of a glass of dirty water. You would have taken him for an usher. The
stranger wore an old coat, much worn at the seams; but he had a diamond
in his shirt frill, and gold rings in his ears.
"'Monsieur,' said I, 'whom have I the honor of addressing?'--He took a
chair, placed himself in front of my fire, put his hat on my table,
and answered while he rubbed his hands: 'Dear me, it is very
cold.--Monsieur, I am Monsieur Regnault.'
"I was encouraging myself by saying to myself, '_Il bondo cani!_ Seek!'
"'I am,' he went on, 'notary at Vendome.'
"'I am delighted to hear it, monsieur,' I exclaimed. 'But I am not in a
position to make a will for reasons best known to myself.'
"'One moment!' said he, holding up his hand as though to gain silence.
'Allow me, monsieur, allow me! I am informed that you sometimes go to
walk in the garden of la Grande Breteche.'
"'Yes, monsieur.'
"'One moment!' said he, repeating his gesture. 'That constitutes a
misdemeanor. Monsieur, as executor under the will of the late Comtesse
de Merret, I come in her name to beg you to discontinue the practice.
One moment! I am not a Turk, and do not wish to make a crime of it. And
besides, you are free to be ignorant of the circumstances which
compel me to leave the finest mansion in Vendome to fall into ruin.
Nevertheless, monsieur, you must be a man of education, and you should
know that the laws forbid, under heavy penalties, any trespass on
enclosed property. A hedge is the same as a wall. But, the state in
which the place is left may be an excuse for your curiosity. For my
part, I should be quite content to make you free to come and go in the
house; but being bound to respect the will of the testatrix, I have
the honor, monsieur, to beg that you will go into the garden no more.
I myself, monsieur, since the will was read, have never set foot in the
house, which, as I had the honor of informing you, is part of the estate
of the late Madame de Merret. We have done nothing there but verify the
number of doors and windows to assess the taxes I have to pay annually
out of the funds left for that purpose by the late Madame de Merret. Ah!
my dear sir, her will made a great commotion in the town.'
"The good man paused to blow his nose. I respected his volubility,
perfectly understanding that the administration of Madame de Merret's
estate had been the most important event of his life, his reputation,
his glory, his Restoration. As I was forced to bid farewell to my
beautiful reveries and romances, I was resigned to learning the truth on
official authority.
"'Monsieur,' said I, 'would it be indiscreet if I were to ask you the
reasons for such eccentricity?'
"At these words an expression, which revealed all the pleasure which
men feel who are accustomed to ride a hobby, overspread the lawyer's
countenance. He pulled up the collar of his shirt with an air, took out
his snuffbox, opened it, and offered me a pinch; on my refusing, he took
a large one. He was happy! A man who has no hobby does not know all
the good to be got out of life. A hobby is the happy medium between a
passion and a monomania. At this moment I understood the whole bearing
of Sterne's charming passion, and had a perfect idea of the delight with
which my uncle Toby, encouraged by Trim, bestrode his hobby-horse.
"'Monsieur,' said Monsieur Regnault, 'I was head-clerk in Monsieur
Roguin's office, in Paris. A first-rate house, which you may have heard
mentioned? No! An unfortunate bankruptcy made it famous.--Not having
money enough to purchase a practice in Paris at the price to which they
were run up in 1816, I came here and bought my predecessor's business.
I had relations in Vendome; among others, a wealthy aunt, who allowed
me to marry her daughter.--Monsieur,' he went on after a little pause,
'three months after being licensed by the Keeper of the Seals, one
evening, as I was going to bed--it was before my marriage--I was sent
for by Madame la Comtesse de Merret, to her Chateau of Merret. Her maid,
a good girl, who is now a servant in this inn, was waiting at my door
with the Countess' own carriage. Ah! one moment! I ought to tell you
that Monsieur le Comte de Merret had gone to Paris to die two months
before I came here. He came to a miserable end, flinging himself into
every kind of dissipation. You understand?
"'On the day when he left, Madame la Comtesse had quitted la Grand
Breteche, having dismantled it. Some people even say that she had
burnt all the furniture, the hangings--in short, all the chattels and
furniture whatever used in furnishing the premises now let by the
said M.--(Dear, what am I saying? I beg your pardon, I thought I was
dictating a lease.)--In short, that she burnt everything in the meadow
at Merret. Have you been to Merret, monsieur?--No,' said he, answering
himself, 'Ah, it is a very fine place.'
"'For about three months previously,' he went on, with a jerk of his
head, 'the Count and Countess had lived in a very eccentric way; they
admitted no visitors; Madame lived on the ground-floor, and Monsieur on
the first floor. When the Countess was left alone, she was never seen
excepting at church. Subsequently, at home, at the chateau, she refused
to see the friends, whether gentlemen or ladies, who went to call on
her. She was already very much altered when she left la Grande Breteche
to go to Merret. That dear lady--I say dear lady, for it was she who
gave me this diamond, but indeed I saw her but once--that kind lady was
very ill; she had, no doubt, given up all hope, for she died without
choosing to send for a doctor; indeed, many of our ladies fancied she
was not quite right in her head. Well, sir, my curiosity was strangely
excited by hearing that Madame de Merret had need of my services. Nor
was I the only person who took an interest in the affair. That very
night, though it was already late, all the town knew that I was going to
Merret.
"'The waiting-woman replied but vaguely to the questions I asked her on
the way; nevertheless, she told me that her mistress had received the
Sacrament in the course of the day at the hands of the Cure of Merret,
and seemed unlikely to live through the night. It was about eleven when
I reached the chateau. I went up the great staircase. After crossing
some large, lofty, dark rooms, diabolically cold and damp, I reached the
state bedroom where the Countess lay. From the rumors that were current
concerning this lady (monsieur, I should never end if I were to repeat
all the tales that were told about her), I had imagined her a coquette.
Imagine, then, that I had great difficulty in seeing her in the great
bed where she was lying. To be sure, to light this enormous room, with
old-fashioned heavy cornices, and so thick with dust that merely to see
it was enough to make you sneeze, she had only an old Argand lamp. Ah!
but you have not been to Merret. Well, the bed is one of those old world
beds, with a high tester hung with flowered chintz. A small table stood
by the bed, on which I saw an "Imitation of Christ," which, by the
way, I bought for my wife, as well as the lamp. There were also a deep
armchair for her confidential maid, and two small chairs. There was no
fire. That was all the furniture, not enough to fill ten lines in an
inventory.
"'My dear sir, if you had seen, as I then saw, that vast room, papered
and hung with brown, you would have felt yourself transported into a
scene of a romance. It was icy, nay more, funereal,' and he lifted his
hand with a theatrical gesture and paused.
"'By dint of seeking, as I approached the bed, at last I saw Madame de
Merret, under the glimmer of the lamp, which fell on the pillows.
Her face was as yellow as wax, and as narrow as two folded hands. The
Countess had a lace cap showing her abundant hair, but as white as linen
thread. She was sitting up in bed, and seemed to keep upright with
great difficulty. Her large black eyes, dimmed by fever, no doubt,
and half-dead already, hardly moved under the bony arch of her
eyebrows.--There,' he added, pointing to his own brow. 'Her forehead was
clammy; her fleshless hands were like bones covered with soft skin;
the veins and muscles were perfectly visible. She must have been very
handsome; but at this moment I was startled into an indescribable
emotion at the sight. Never, said those who wrapped her in her shroud,
had any living creature been so emaciated and lived. In short, it was
awful to behold! Sickness so consumed that woman, that she was no more
than a phantom. Her lips, which were pale violet, seemed to me not to
move when she spoke to me.
"'Though my profession has familiarized me with such spectacles, by
calling me not infrequently to the bedside of the dying to record their
last wishes, I confess that families in tears and the agonies I have
seen were as nothing in comparison with this lonely and silent woman in
her vast chateau. I heard not the least sound, I did not perceive the
movement which the sufferer's breathing ought to have given to the
sheets that covered her, and I stood motionless, absorbed in looking at
her in a sort of stupor. In fancy I am there still. At last her large
eyes moved; she tried to raise her right hand, but it fell back on the
bed, and she uttered these words, which came like a breath, for her
voice was no longer a voice: "I have waited for you with the greatest
impatience." A bright flush rose to her cheeks. It was a great effort to
her to speak.
"'"Madame," I began. She signed to me to be silent. At that moment
the old housekeeper rose and said in my ear, "Do not speak; Madame la
Comtesse is not in a state to bear the slightest noise, and what you say
might agitate her."
"'I sat down. A few instants after, Madame de Merret collected all her
remaining strength to move her right hand, and slipped it, not without
infinite difficulty, under the bolster; she then paused a moment. With
a last effort she withdrew her hand; and when she brought out a sealed
paper, drops of perspiration rolled from her brow. "I place my will in
your hands--Oh! God! Oh!" and that was all. She clutched a crucifix that
lay on the bed, lifted it hastily to her lips, and died.
"'The expression of her eyes still makes me shudder as I think of it.
She must have suffered much! There was joy in her last glance, and it
remained stamped on her dead eyes.
"'I brought away the will, and when it was opened I found that Madame de
Merret had appointed me her executor. She left the whole of her property
to the hospital at Vendome excepting a few legacies. But these were her
instructions as relating to la Grande Breteche: She ordered me to leave
the place, for fifty years counting from the day of her death, in the
state in which it might be at the time of her death, forbidding any one,
whoever he might be, to enter the apartments, prohibiting any repairs
whatever, and even settling a salary to pay watchmen if it were needful
to secure the absolute fulfilment of her intentions. At the expiration
of that term, if the will of the testatrix has been duly carried out,
the house is to become the property of my heirs, for, as you know, a
notary cannot take a bequest. Otherwise la Grande Breteche reverts to
the heirs-at-law, but on condition of fulfilling certain conditions
set forth in a codicil to the will, which is not to be opened till
the expiration of the said term of fifty years. The will has not been
disputed, so----' And without finishing his sentence, the lanky notary
looked at me with an air of triumph; I made him quite happy by offering
him my congratulations.
"'Monsieur,' I said in conclusion, 'you have so vividly impressed
me that I fancy I see the dying woman whiter than her sheets; her
glittering eyes frighten me; I shall dream of her to-night.--But you
must have formed some idea as to the instructions contained in that
extraordinary will.'
"'Monsieur,' said he, with comical reticence, 'I never allow myself
to criticise the conduct of a person who honors me with the gift of a
diamond.'
"However, I soon loosened the tongue of the discreet notary of Vendome,
who communicated to me, not without long digressions, the opinions of
the deep politicians of both sexes whose judgments are law in Vendome.
But these opinions were so contradictory, so diffuse, that I was
near falling asleep in spite of the interest I felt in this authentic
history. The notary's ponderous voice and monotonous accent, accustomed
no doubt to listen to himself and to make himself listened to by his
clients or fellow-townsmen, were too much for my curiosity. Happily, he
soon went away.
"'Ah, ha, monsieur,' said he on the stairs, 'a good many persons would
be glad to live five-and-forty years longer; but--one moment!' and he
laid the first finger of his right hand to his nostril with a cunning
look, as much as to say, 'Mark my words!--To last as long as that--as
long as that,' said he, 'you must not be past sixty now.'
"I closed my door, having been roused from my apathy by this last
speech, which the notary thought very funny; then I sat down in my
armchair, with my feet on the fire-dogs. I had lost myself in a romance
_a la_ Radcliffe, constructed on the juridical base given me by Monsieur
Regnault, when the door, opened by a woman's cautious hand, turned on
the hinges. I saw my landlady come in, a buxom, florid dame, always
good-humored, who had missed her calling in life. She was a Fleming, who
ought to have seen the light in a picture by Teniers.
"'Well, monsieur,' said she, 'Monsieur Regnault has no doubt been giving
you his history of la Grande Breteche?'
"'Yes, Madame Lepas.'
"'And what did he tell you?'
"I repeated in a few words the creepy and sinister story of Madame de
Merret. At each sentence my hostess put her head forward, looking at
me with an innkeeper's keen scrutiny, a happy compromise between the
instinct of a police constable, the astuteness of a spy, and the cunning
of a dealer.
"'My good Madame Lepas,' said I as I ended, 'you seem to know more about
it. Heh? If not, why have you come up to me?'
"'On my word, as an honest woman----'
"'Do not swear; your eyes are big with a secret. You knew Monsieur de
Merret; what sort of man was he?'
"'Monsieur de Merret--well, you see he was a man you never could see
the top of, he was so tall! A very good gentleman, from Picardy, and who
had, as we say, his head close to his cap. He paid for everything down,
so as never to have difficulties with any one. He was hot-tempered, you
see! All our ladies liked him very much.'
"'Because he was hot-tempered?' I asked her.
"'Well, may be,' said she; 'and you may suppose, sir, that a man had to
have something to show for a figurehead before he could marry Madame de
Merret, who, without any reflection on others, was the handsomest and
richest heiress in our parts. She had about twenty thousand francs
a year. All the town was at the wedding; the bride was pretty and
sweet-looking, quite a gem of a woman. Oh, they were a handsome couple
in their day!'
"'And were they happy together?'
"'Hm, hm! so-so--so far as can be guessed, for, as you may suppose, we
of the common sort were not hail-fellow-well-met with them.--Madame de
Merret was a kind woman and very pleasant, who had no doubt sometimes to
put up with her husband's tantrums. But though he was rather haughty, we
were fond of him. After all, it was his place to behave so. When a man
is a born nobleman, you see----'
"'Still, there must have been some catastrophe for Monsieur and Madame
de Merret to part so violently?'
"'I did not say there was any catastrophe, sir. I know nothing about
it.'
"'Indeed. Well, now, I am sure you know everything.'
"'Well, sir, I will tell you the whole story.--When I saw Monsieur
Regnault go up to see you, it struck me that he would speak to you about
Madame de Merret as having to do with la Grande Breteche. That put it
into my head to ask your advice, sir, seeming to me that you are a
man of good judgment and incapable of playing a poor woman like me
false--for I never did any one a wrong, and yet I am tormented by my
conscience. Up to now I have never dared to say a word to the people of
these parts; they are all chatter-mags, with tongues like knives. And
never till now, sir, have I had any traveler here who stayed so long in
the inn as you have, and to whom I could tell the history of the fifteen
thousand francs----'
"'My dear Madame Lepas, if there is anything in your story of a nature
to compromise me,' I said, interrupting the flow of her words, 'I would
not hear it for all the world.'
"'You need have no fears,' said she; 'you will see.'
"Her eagerness made me suspect that I was not the only person to whom
my worthy landlady had communicated the secret of which I was to be the
sole possessor, but I listened.
"'Monsieur,' said she, 'when the Emperor sent the Spaniards here,
prisoners of war and others, I was required to lodge at the charge
of the Government a young Spaniard sent to Vendome on parole.
Notwithstanding his parole, he had to show himself every day to the
sub-prefect. He was a Spanish grandee--neither more nor less. He had
a name in _os_ and _dia_, something like Bagos de Feredia. I wrote his
name down in my books, and you may see it if you like. Ah! he was a
handsome young fellow for a Spaniard, who are all ugly they say. He was
not more than five feet two or three in height, but so well made; and he
had little hands that he kept so beautifully! Ah! you should have
seen them. He had as many brushes for his hands as a woman has for her
toilet. He had thick, black hair, a flame in his eye, a somewhat coppery
complexion, but which I admired all the same. He wore the finest linen
I have ever seen, though I have had princesses to lodge here, and, among
others, General Bertrand, the Duc and Duchesse d'Abrantes, Monsieur
Descazes, and the King of Spain. He did not eat much, but he had such
polite and amiable ways that it was impossible to owe him a grudge for
that. Oh! I was very fond of him, though he did not say four words to me
in a day, and it was impossible to have the least bit of talk with him;
if he was spoken to, he did not answer; it is a way, a mania they all
have, it would seem.
"'He read his breviary like a priest, and went to mass and all the
services quite regularly. And where did he post himself?--we found this
out later.--Within two yards of Madame de Merret's chapel. As he took
that place the very first time he entered the church, no one imagined
that there was any purpose in it. Besides, he never raised his nose
above his book, poor young man! And then, monsieur, of an evening he
went for a walk on the hill among the ruins of the old castle. It was
his only amusement, poor man; it reminded him of his native land. They
say that Spain is all hills!
"'One evening, a few days after he was sent here, he was out very late.
I was rather uneasy when he did not come in till just on the stroke of
midnight; but we all got used to his whims; he took the key of the door,
and we never sat up for him. He lived in a house belonging to us in the
Rue des Casernes. Well, then, one of our stable-boys told us one evening
that, going down to wash the horses in the river, he fancied he had seen
the Spanish Grandee swimming some little way off, just like a fish. When
he came in, I told him to be careful of the weeds, and he seemed put out
at having been seen in the water.
"'At last, monsieur, one day, or rather one morning, we did not find
him in his room; he had not come back. By hunting through his things, I
found a written paper in the drawer of his table, with fifty pieces of
Spanish gold of the kind they call doubloons, worth about five thousand
francs; and in a little sealed box ten thousand francs worth of
diamonds. The paper said that in case he should not return, he left us
this money and these diamonds in trust to found masses to thank God for
his escape and for his salvation.
"'At that time I still had my husband, who ran off in search of him.
And this is the queer part of the story: he brought back the Spaniard's
clothes, which he had found under a big stone on a sort of breakwater
along the river bank, nearly opposite la Grande Breteche. My husband
went so early that no one saw him. After reading the letter, he burnt
the clothes, and, in obedience to Count Feredia's wish, we announced
that he had escaped.
"'The sub-prefect set all the constabulary at his heels; but, pshaw! he
was never caught. Lepas believed that the Spaniard had drowned himself.
I, sir, have never thought so; I believe, on the contrary, that he had
something to do with the business about Madame de Merret, seeing that
Rosalie told me that the crucifix her mistress was so fond of that she
had it buried with her, was made of ebony and silver; now in the early
days of his stay here, Monsieur Feredia had one of ebony and silver
which I never saw later.--And now, monsieur, do not you say that I need
have no remorse about the Spaniard's fifteen thousand francs? Are they
not really and truly mine?'
"'Certainly.--But have you never tried to question Rosalie?' said I.
"'Oh, to be sure I have, sir. But what is to be done? That girl is like
a wall. She knows something, but it is impossible to make her talk.'
"After chatting with me for a few minutes, my hostess left me a prey
to vague and sinister thoughts, to romantic curiosity, and a religious
dread, not unlike the deep emotion which comes upon us when we go into a
dark church at night and discern a feeble light glimmering under a lofty
vault--a dim figure glides across--the sweep of a gown or of a priest's
cassock is audible--and we shiver! La Grande Breteche, with its rank
grasses, its shuttered windows, its rusty iron-work, its locked doors,
its deserted rooms, suddenly rose before me in fantastic vividness. I
tried to get into the mysterious dwelling to search out the heart of
this solemn story, this drama which had killed three persons.
"Rosalie became in my eyes the most interesting being in Vendome. As
I studied her, I detected signs of an inmost thought, in spite of the
blooming health that glowed in her dimpled face. There was in her soul
some element of ruth or of hope; her manner suggested a secret, like
the expression of devout souls who pray in excess, or of a girl who has
killed her child and for ever hears its last cry. Nevertheless, she was
simple and clumsy in her ways; her vacant smile had nothing criminal
in it, and you would have pronounced her innocent only from seeing the
large red and blue checked kerchief that covered her stalwart bust,
tucked into the tight-laced bodice of a lilac- and white-striped gown.
'No,' said I to myself, 'I will not quit Vendome without knowing the
whole history of la Grande Breteche. To achieve this end, I will make
love to Rosalie if it proves necessary.'
"'Rosalie!' said I one evening.
"'Your servant, sir?'
"'You are not married?' She started a little.
"'Oh! there is no lack of men if ever I take a fancy to be miserable!'
she replied, laughing. She got over her agitation at once; for every
woman, from the highest lady to the inn-servant inclusive, has a native
presence of mind.
"'Yes; you are fresh and good-looking enough never to lack lovers! But
tell me, Rosalie, why did you become an inn-servant on leaving Madame de
Merret? Did she not leave you some little annuity?'
"'Oh yes, sir. But my place here is the best in all the town of
Vendome.'
"This reply was such an one as judges and attorneys call evasive.
Rosalie, as it seemed to me, held in this romantic affair the place of
the middle square of the chess-board: she was at the very centre of the
interest and of the truth; she appeared to me to be tied into the knot
of it. It was not a case for ordinary love-making; this girl contained
the last chapter of a romance, and from that moment all my attentions
were devoted to Rosalie. By dint of studying the girl, I observed in
her, as in every woman whom we make our ruling thought, a variety of
good qualities; she was clean and neat; she was handsome, I need not
say; she soon was possessed of every charm that desire can lend to a
woman in whatever rank of life. A fortnight after the notary's visit,
one evening, or rather one morning, in the small hours, I said to
Rosalie:
"'Come, tell me all you know about Madame de Merret.'
"'Oh!' she said, 'I will tell you; but keep the secret carefully.'
"'All right, my child; I will keep all your secrets with a thief's
honor, which is the most loyal known.'
"'If it is all the same to you,' said she, 'I would rather it should be
with your own.'
"Thereupon she set her head-kerchief straight, and settled herself to
tell the tale; for there is no doubt a particular attitude of confidence
and security is necessary to the telling of a narrative. The best tales
are told at a certain hour--just as we are all here at table. No one
ever told a story well standing up, or fasting.
"If I were to reproduce exactly Rosalie's diffuse eloquence, a whole
volume would scarcely contain it. Now, as the event of which she gave me
a confused account stands exactly midway between the notary's gossip and
that of Madame Lepas, as precisely as the middle term of a rule-of-three
sum stands between the first and third, I have only to relate it in as
few words as may be. I shall therefore be brief.
"The room at la Grande Breteche in which Madame de Merret slept was on
the ground floor; a little cupboard in the wall, about four feet deep,
served her to hang her dresses in. Three months before the evening of
which I have to relate the events, Madame de Merret had been seriously
ailing, so much so that her husband had left her to herself, and had his
own bedroom on the first floor. By one of those accidents which it is
impossible to foresee, he came in that evening two hours later than
usual from the club, where he went to read the papers and talk politics
with the residents in the neighborhood. His wife supposed him to have
come in, to be in bed and asleep. But the invasion of France had been
the subject of a very animated discussion; the game of billiards had
waxed vehement; he had lost forty francs, an enormous sum at Vendome,
where everybody is thrifty, and where social habits are restrained
within the bounds of a simplicity worthy of all praise, and the
foundation perhaps of a form of true happiness which no Parisian would
care for.
"For some time past Monsieur de Merret had been satisfied to ask Rosalie
whether his wife was in bed; on the girl's replying always in the
affirmative, he at once went to his own room, with the good faith that
comes of habit and confidence. But this evening, on coming in, he took
it into his head to go to see Madame de Merret, to tell her of his
ill-luck, and perhaps to find consolation. During dinner he had observed
that his wife was very becomingly dressed; he reflected as he came
home from the club that his wife was certainly much better, that
convalescence had improved her beauty, discovering it, as husbands
discover everything, a little too late. Instead of calling Rosalie,
who was in the kitchen at the moment watching the cook and the coachman
playing a puzzling hand at cards, Monsieur de Merret made his way to his
wife's room by the light of his lantern, which he set down at the lowest
step of the stairs. His step, easy to recognize, rang under the vaulted
passage.
"At the instant when the gentleman turned the key to enter his wife's
room, he fancied he heard the door shut of the closet of which I have
spoken; but when he went in, Madame de Merret was alone, standing in
front of the fireplace. The unsuspecting husband fancied that Rosalie
was in the cupboard; nevertheless, a doubt, ringing in his ears like a
peal of bells, put him on his guard; he looked at his wife, and read in
her eyes an indescribably anxious and haunted expression.
"'You are very late,' said she.--Her voice, usually so clear and sweet,
struck him as being slightly husky.
"Monsieur de Merret made no reply, for at this moment Rosalie came in.
This was like a thunder-clap. He walked up and down the room, going from
one window to another at a regular pace, his arms folded.
"'Have you had bad news, or are you ill?' his wife asked him timidly,
while Rosalie helped her to undress. He made no reply.
"'You can go, Rosalie,' said Madame de Merret to her maid; 'I can put in
my curl-papers myself.'--She scented disaster at the mere aspect of her
husband's face, and wished to be alone with him. As soon as Rosalie
was gone, or supposed to be gone, for she lingered a few minutes in the
passage, Monsieur de Merret came and stood facing his wife, and said
coldly, 'Madame, there is some one in your cupboard!' She looked at her
husband calmly, and replied quite simply, 'No, monsieur.'
"This 'No' wrung Monsieur de Merret's heart; he did not believe it; and
yet his wife had never appeared purer or more saintly than she seemed
to be at this moment. He rose to go and open the closet door. Madame de
Merret took his hand, stopped him, looked at him sadly, and said in a
voice of strange emotion, 'Remember, if you should find no one there,
everything must be at an end between you and me.'
"The extraordinary dignity of his wife's attitude filled him with deep
esteem for her, and inspired him with one of those resolves which need
only a grander stage to become immortal.
"'No, Josephine,' he said, 'I will not open it. In either event we
should be parted for ever. Listen; I know all the purity of your soul, I
know you lead a saintly life, and would not commit a deadly sin to save
your life.'--At these words Madame de Merret looked at her husband with
a haggard stare.--'See, here is your crucifix,' he went on. 'Swear to
me before God that there is no one in there; I will believe you--I will
never open that door.'
"Madame de Merret took up the crucifix and said, 'I swear it.'
"'Louder,' said her husband; 'and repeat: "I swear before God that there
is nobody in that closet."' She repeated the words without flinching.
"'That will do,' said Monsieur de Merret coldly. After a moment's
silence: 'You have there a fine piece of work which I never saw before,'
said he, examining the crucifix of ebony and silver, very artistically
wrought.
"'I found it at Duvivier's; last year when that troop of Spanish
prisoners came through Vendome, he bought it of a Spanish monk.'
"'Indeed,' said Monsieur de Merret, hanging the crucifix on its nail;
and he rang the bell.
"He had to wait for Rosalie. Monsieur de Merret went forward quickly
to meet her, led her into the bay of the window that looked on to the
garden, and said to her in an undertone:
"'I know that Gorenflot wants to marry you, that poverty alone prevents
your setting up house, and that you told him you would not be his wife
till he found means to become a master mason.--Well, go and fetch him;
tell him to come here with his trowel and tools. Contrive to wake no one
in his house but himself. His reward will be beyond your wishes. Above
all, go out without saying a word--or else!' and he frowned.
"Rosalie was going, and he called her back. 'Here, take my latch-key,'
said he.
"'Jean!' Monsieur de Merret called in a voice of thunder down the
passage. Jean, who was both coachman and confidential servant, left his
cards and came.
"'Go to bed, all of you,' said his master, beckoning him to come close;
and the gentleman added in a whisper, 'When they are all asleep--mind,
_asleep_--you understand?--come down and tell me.'
"Monsieur de Merret, who had never lost sight of his wife while giving
his orders, quietly came back to her at the fireside, and began to tell
her the details of the game of billiards and the discussion at the club.
When Rosalie returned she found Monsieur and Madame de Merret conversing
amiably.
"Not long before this Monsieur de Merret had had new ceilings made to
all the reception-rooms on the ground floor. Plaster is very scarce at
Vendome; the price is enhanced by the cost of carriage; the gentleman
had therefore had a considerable quantity delivered to him, knowing
that he could always find purchasers for what might be left. It was this
circumstance which suggested the plan he carried out.
"'Gorenflot is here, sir,' said Rosalie in a whisper.
"'Tell him to come in,' said her master aloud.
"Madame de Merret turned paler when she saw the mason.
"'Gorenflot,' said her husband, 'go and fetch some bricks from the
coach-house; bring enough to wall up the door of this cupboard; you can
use the plaster that is left for cement.' Then, dragging Rosalie and the
workman close to him--'Listen, Gorenflot,' said he, in a low voice,
'you are to sleep here to-night; but to-morrow morning you shall have a
passport to take you abroad to a place I will tell you of. I will give
you six thousand francs for your journey. You must live in that town for
ten years; if you find you do not like it, you may settle in another,
but it must be in the same country. Go through Paris and wait there till
I join you. I will there give you an agreement for six thousand francs
more, to be paid to you on your return, provided you have carried out
the conditions of the bargain. For that price you are to keep perfect
silence as to what you have to do this night. To you, Rosalie, I will
secure ten thousand francs, which will not be paid to you till your
wedding day, and on condition of your marrying Gorenflot; but, to get
married, you must hold your tongue. If not, no wedding gift!'
"'Rosalie,' said Madame de Merret, 'come and brush my hair.'
"Her husband quietly walked up and down the room, keeping an eye on the
door, on the mason, and on his wife, but without any insulting display
of suspicion. Gorenflot could not help making some noise. Madame de
Merret seized a moment when he was unloading some bricks, and when her
husband was at the other end of the room to say to Rosalie: 'My dear
child, I will give you a thousand francs a year if only you will tell
Gorenflot to leave a crack at the bottom.' Then she added aloud quite
coolly: 'You had better help him.'
"Monsieur and Madame de Merret were silent all the time while Gorenflot
was walling up the door. This silence was intentional on the husband's
part; he did not wish to give his wife the opportunity of saying
anything with a double meaning. On Madame de Merret's side it was pride
or prudence. When the wall was half built up the cunning mason took
advantage of his master's back being turned to break one of the two
panes in the top of the door with a blow of his pick. By this Madame de
Merret understood that Rosalie had spoken to Gorenflot. They all three
then saw the face of a dark, gloomy-looking man, with black hair and
flaming eyes.
"Before her husband turned round again the poor woman had nodded to the
stranger, to whom the signal was meant to convey, 'Hope.'
"At four o'clock, as the day was dawning, for it was the month of
September, the work was done. The mason was placed in charge of Jean,
and Monsieur de Merret slept in his wife's room.
"Next morning when he got up he said with apparent carelessness, 'Oh,
by the way, I must go to the Maire for the passport.' He put on his hat,
took two or three steps towards the door, paused, and took the crucifix.
His wife was trembling with joy.
"'He will go to Duvivier's,' thought she.
"As soon as he had left, Madame de Merret rang for Rosalie, and then in
a terrible voice she cried: 'The pick! Bring the pick! and set to work.
I saw how Gorenflot did it yesterday; we shall have time to make a gap
and build it up again.'
"In an instant Rosalie had brought her mistress a sort of cleaver; she,
with a vehemence of which no words can give an idea, set to work to
demolish the wall. She had already got out a few bricks, when, turning
to deal a stronger blow than before, she saw behind her Monsieur de
Merret. She fainted away.
"'Lay madame on her bed,' said he coldly.
"Foreseeing what would certainly happen in his absence, he had laid
this trap for his wife; he had merely written to the Maire and sent for
Duvivier. The jeweler arrived just as the disorder in the room had been
repaired.
"'Duvivier,' asked Monsieur de Merret, 'did not you buy some crucifixes
of the Spaniards who passed through the town?'
"'No, monsieur.'
"'Very good; thank you,' said he, flashing a tiger's glare at his wife.
'Jean,' he added, turning to his confidential valet, 'you can serve my
meals here in Madame de Merret's room. She is ill, and I shall not leave
her till she recovers.'
"The cruel man remained in his wife's room for twenty days. During
the earlier time, when there was some little noise in the closet,
and Josephine wanted to intercede for the dying man, he said, without
allowing her to utter a word, 'You swore on the Cross that there was no
one there.'"
After this story all the ladies rose from table, and thus the spell
under which Bianchon had held them was broken. But there were some among
them who had almost shivered at the last words.
ADDENDUM
The following personage appears in other stories of the Human Comedy.
Bianchon, Horace
Father Goriot
The Atheist's Mass
Cesar Birotteau
The Commission in Lunacy
Lost Illusions
A Distinguished Provincial at Paris
A Bachelor's Establishment
The Secrets of a Princess
The Government Clerks
Pierrette
A Study of Woman
Scenes from a Courtesan's Life
Honorine
The Seamy Side of History
The Magic Skin
A Second Home
A Prince of Bohemia
Letters of Two Brides
The Muse of the Department
The Imaginary Mistress
The Middle Classes
Cousin Betty
The Country Parson
In addition, M. Bianchon narrated the following:
Another Study of Woman
End of the Project Gutenberg EBook of La Grande Breteche, by Honore de Balzac
This etext was prepared by Sue Asscher <[email protected]>
CRITO
by Plato
Translated by Benjamin Jowett
INTRODUCTION.
The Crito seems intended to exhibit the character of Socrates in one light
only, not as the philosopher, fulfilling a divine mission and trusting in
the will of heaven, but simply as the good citizen, who having been
unjustly condemned is willing to give up his life in obedience to the laws
of the state...
The days of Socrates are drawing to a close; the fatal ship has been seen
off Sunium, as he is informed by his aged friend and contemporary Crito,
who visits him before the dawn has broken; he himself has been warned in a
dream that on the third day he must depart. Time is precious, and Crito
has come early in order to gain his consent to a plan of escape. This can
be easily accomplished by his friends, who will incur no danger in making
the attempt to save him, but will be disgraced for ever if they allow him
to perish. He should think of his duty to his children, and not play into
the hands of his enemies. Money is already provided by Crito as well as by
Simmias and others, and he will have no difficulty in finding friends in
Thessaly and other places.
Socrates is afraid that Crito is but pressing upon him the opinions of the
many: whereas, all his life long he has followed the dictates of reason
only and the opinion of the one wise or skilled man. There was a time when
Crito himself had allowed the propriety of this. And although some one
will say 'the many can kill us,' that makes no difference; but a good life,
in other words, a just and honourable life, is alone to be valued. All
considerations of loss of reputation or injury to his children should be
dismissed: the only question is whether he would be right in attempting to
escape. Crito, who is a disinterested person not having the fear of death
before his eyes, shall answer this for him. Before he was condemned they
had often held discussions, in which they agreed that no man should either
do evil, or return evil for evil, or betray the right. Are these
principles to be altered because the circumstances of Socrates are altered?
Crito admits that they remain the same. Then is his escape consistent with
the maintenance of them? To this Crito is unable or unwilling to reply.
Socrates proceeds:--Suppose the Laws of Athens to come and remonstrate with
him: they will ask 'Why does he seek to overturn them?' and if he replies,
'they have injured him,' will not the Laws answer, 'Yes, but was that the
agreement? Has he any objection to make to them which would justify him in
overturning them? Was he not brought into the world and educated by their
help, and are they not his parents? He might have left Athens and gone
where he pleased, but he has lived there for seventy years more constantly
than any other citizen.' Thus he has clearly shown that he acknowledged
the agreement, which he cannot now break without dishonour to himself and
danger to his friends. Even in the course of the trial he might have
proposed exile as the penalty, but then he declared that he preferred death
to exile. And whither will he direct his footsteps? In any well-ordered
state the Laws will consider him as an enemy. Possibly in a land of
misrule like Thessaly he may be welcomed at first, and the unseemly
narrative of his escape will be regarded by the inhabitants as an amusing
tale. But if he offends them he will have to learn another sort of lesson.
Will he continue to give lectures in virtue? That would hardly be decent.
And how will his children be the gainers if he takes them into Thessaly,
and deprives them of Athenian citizenship? Or if he leaves them behind,
does he expect that they will be better taken care of by his friends
because he is in Thessaly? Will not true friends care for them equally
whether he is alive or dead?
Finally, they exhort him to think of justice first, and of life and
children afterwards. He may now depart in peace and innocence, a sufferer
and not a doer of evil. But if he breaks agreements, and returns evil for
evil, they will be angry with him while he lives; and their brethren the
Laws of the world below will receive him as an enemy. Such is the mystic
voice which is always murmuring in his ears.
That Socrates was not a good citizen was a charge made against him during
his lifetime, which has been often repeated in later ages. The crimes of
Alcibiades, Critias, and Charmides, who had been his pupils, were still
recent in the memory of the now restored democracy. The fact that he had
been neutral in the death-struggle of Athens was not likely to conciliate
popular good-will. Plato, writing probably in the next generation,
undertakes the defence of his friend and master in this particular, not to
the Athenians of his day, but to posterity and the world at large.
Whether such an incident ever really occurred as the visit of Crito and the
proposal of escape is uncertain: Plato could easily have invented far more
than that (Phaedr.); and in the selection of Crito, the aged friend, as the
fittest person to make the proposal to Socrates, we seem to recognize the
hand of the artist. Whether any one who has been subjected by the laws of
his country to an unjust judgment is right in attempting to escape, is a
thesis about which casuists might disagree. Shelley (Prose Works) is of
opinion that Socrates 'did well to die,' but not for the 'sophistical'
reasons which Plato has put into his mouth. And there would be no
difficulty in arguing that Socrates should have lived and preferred to a
glorious death the good which he might still be able to perform. 'A
rhetorician would have had much to say upon that point.' It may be
observed however that Plato never intended to answer the question of
casuistry, but only to exhibit the ideal of patient virtue which refuses to
do the least evil in order to avoid the greatest, and to show his master
maintaining in death the opinions which he had professed in his life. Not
'the world,' but the 'one wise man,' is still the paradox of Socrates in
his last hours. He must be guided by reason, although her conclusions may
be fatal to him. The remarkable sentiment that the wicked can do neither
good nor evil is true, if taken in the sense, which he means, of moral
evil; in his own words, 'they cannot make a man wise or foolish.'
This little dialogue is a perfect piece of dialectic, in which granting the
'common principle,' there is no escaping from the conclusion. It is
anticipated at the beginning by the dream of Socrates and the parody of
Homer. The personification of the Laws, and of their brethren the Laws in
the world below, is one of the noblest and boldest figures of speech which
occur in Plato.
CRITO
by
Plato
Translated by Benjamin Jowett
PERSONS OF THE DIALOGUE: Socrates, Crito.
SCENE: The Prison of Socrates.
SOCRATES: Why have you come at this hour, Crito? it must be quite early.
CRITO: Yes, certainly.
SOCRATES: What is the exact time?
CRITO: The dawn is breaking.
SOCRATES: I wonder that the keeper of the prison would let you in.
CRITO: He knows me because I often come, Socrates; moreover, I have done
him a kindness.
SOCRATES: And are you only just arrived?
CRITO: No, I came some time ago.
SOCRATES: Then why did you sit and say nothing, instead of at once
awakening me?
CRITO: I should not have liked myself, Socrates, to be in such great
trouble and unrest as you are--indeed I should not: I have been watching
with amazement your peaceful slumbers; and for that reason I did not awake
you, because I wished to minimize the pain. I have always thought you to
be of a happy disposition; but never did I see anything like the easy,
tranquil manner in which you bear this calamity.
SOCRATES: Why, Crito, when a man has reached my age he ought not to be
repining at the approach of death.
CRITO: And yet other old men find themselves in similar misfortunes, and
age does not prevent them from repining.
SOCRATES: That is true. But you have not told me why you come at this
early hour.
CRITO: I come to bring you a message which is sad and painful; not, as I
believe, to yourself, but to all of us who are your friends, and saddest of
all to me.
SOCRATES: What? Has the ship come from Delos, on the arrival of which I
am to die?
CRITO: No, the ship has not actually arrived, but she will probably be
here to-day, as persons who have come from Sunium tell me that they have
left her there; and therefore to-morrow, Socrates, will be the last day of
your life.
SOCRATES: Very well, Crito; if such is the will of God, I am willing; but
my belief is that there will be a delay of a day.
CRITO: Why do you think so?
SOCRATES: I will tell you. I am to die on the day after the arrival of
the ship?
CRITO: Yes; that is what the authorities say.
SOCRATES: But I do not think that the ship will be here until to-morrow;
this I infer from a vision which I had last night, or rather only just now,
when you fortunately allowed me to sleep.
CRITO: And what was the nature of the vision?
SOCRATES: There appeared to me the likeness of a woman, fair and comely,
clothed in bright raiment, who called to me and said: O Socrates,
'The third day hence to fertile Phthia shalt thou go.' (Homer, Il.)
CRITO: What a singular dream, Socrates!
SOCRATES: There can be no doubt about the meaning, Crito, I think.
CRITO: Yes; the meaning is only too clear. But, oh! my beloved Socrates,
let me entreat you once more to take my advice and escape. For if you die
I shall not only lose a friend who can never be replaced, but there is
another evil: people who do not know you and me will believe that I might
have saved you if I had been willing to give money, but that I did not
care. Now, can there be a worse disgrace than this--that I should be
thought to value money more than the life of a friend? For the many will
not be persuaded that I wanted you to escape, and that you refused.
SOCRATES: But why, my dear Crito, should we care about the opinion of the
many? Good men, and they are the only persons who are worth considering,
will think of these things truly as they occurred.
CRITO: But you see, Socrates, that the opinion of the many must be
regarded, for what is now happening shows that they can do the greatest
evil to any one who has lost their good opinion.
SOCRATES: I only wish it were so, Crito; and that the many could do the
greatest evil; for then they would also be able to do the greatest good--
and what a fine thing this would be! But in reality they can do neither;
for they cannot make a man either wise or foolish; and whatever they do is
the result of chance.
CRITO: Well, I will not dispute with you; but please to tell me, Socrates,
whether you are not acting out of regard to me and your other friends: are
you not afraid that if you escape from prison we may get into trouble with
the informers for having stolen you away, and lose either the whole or a
great part of our property; or that even a worse evil may happen to us?
Now, if you fear on our account, be at ease; for in order to save you, we
ought surely to run this, or even a greater risk; be persuaded, then, and
do as I say.
SOCRATES: Yes, Crito, that is one fear which you mention, but by no means
the only one.
CRITO: Fear not--there are persons who are willing to get you out of
prison at no great cost; and as for the informers they are far from being
exorbitant in their demands--a little money will satisfy them. My means,
which are certainly ample, are at your service, and if you have a scruple
about spending all mine, here are strangers who will give you the use of
theirs; and one of them, Simmias the Theban, has brought a large sum of
money for this very purpose; and Cebes and many others are prepared to
spend their money in helping you to escape. I say, therefore, do not
hesitate on our account, and do not say, as you did in the court (compare
Apol.), that you will have a difficulty in knowing what to do with yourself
anywhere else. For men will love you in other places to which you may go,
and not in Athens only; there are friends of mine in Thessaly, if you like
to go to them, who will value and protect you, and no Thessalian will give
you any trouble. Nor can I think that you are at all justified, Socrates,
in betraying your own life when you might be saved; in acting thus you are
playing into the hands of your enemies, who are hurrying on your
destruction. And further I should say that you are deserting your own
children; for you might bring them up and educate them; instead of which
you go away and leave them, and they will have to take their chance; and if
they do not meet with the usual fate of orphans, there will be small thanks
to you. No man should bring children into the world who is unwilling to
persevere to the end in their nurture and education. But you appear to be
choosing the easier part, not the better and manlier, which would have been
more becoming in one who professes to care for virtue in all his actions,
like yourself. And indeed, I am ashamed not only of you, but of us who are
your friends, when I reflect that the whole business will be attributed
entirely to our want of courage. The trial need never have come on, or
might have been managed differently; and this last act, or crowning folly,
will seem to have occurred through our negligence and cowardice, who might
have saved you, if we had been good for anything; and you might have saved
yourself, for there was no difficulty at all. See now, Socrates, how sad
and discreditable are the consequences, both to us and you. Make up your
mind then, or rather have your mind already made up, for the time of
deliberation is over, and there is only one thing to be done, which must be
done this very night, and if we delay at all will be no longer practicable
or possible; I beseech you therefore, Socrates, be persuaded by me, and do
as I say.
SOCRATES: Dear Crito, your zeal is invaluable, if a right one; but if
wrong, the greater the zeal the greater the danger; and therefore we ought
to consider whether I shall or shall not do as you say. For I am and
always have been one of those natures who must be guided by reason,
whatever the reason may be which upon reflection appears to me to be the
best; and now that this chance has befallen me, I cannot repudiate my own
words: the principles which I have hitherto honoured and revered I still
honour, and unless we can at once find other and better principles, I am
certain not to agree with you; no, not even if the power of the multitude
could inflict many more imprisonments, confiscations, deaths, frightening
us like children with hobgoblin terrors (compare Apol.). What will be the
fairest way of considering the question? Shall I return to your old
argument about the opinions of men?--we were saying that some of them are
to be regarded, and others not. Now were we right in maintaining this
before I was condemned? And has the argument which was once good now
proved to be talk for the sake of talking--mere childish nonsense? That is
what I want to consider with your help, Crito:--whether, under my present
circumstances, the argument appears to be in any way different or not; and
is to be allowed by me or disallowed. That argument, which, as I believe,
is maintained by many persons of authority, was to the effect, as I was
saying, that the opinions of some men are to be regarded, and of other men
not to be regarded. Now you, Crito, are not going to die to-morrow--at
least, there is no human probability of this, and therefore you are
disinterested and not liable to be deceived by the circumstances in which
you are placed. Tell me then, whether I am right in saying that some
opinions, and the opinions of some men only, are to be valued, and that
other opinions, and the opinions of other men, are not to be valued. I ask
you whether I was right in maintaining this?
CRITO: Certainly.
SOCRATES: The good are to be regarded, and not the bad?
CRITO: Yes.
SOCRATES: And the opinions of the wise are good, and the opinions of the
unwise are evil?
CRITO: Certainly.
SOCRATES: And what was said about another matter? Is the pupil who
devotes himself to the practice of gymnastics supposed to attend to the
praise and blame and opinion of every man, or of one man only--his
physician or trainer, whoever he may be?
CRITO: Of one man only.
SOCRATES: And he ought to fear the censure and welcome the praise of that
one only, and not of the many?
CRITO: Clearly so.
SOCRATES: And he ought to act and train, and eat and drink in the way
which seems good to his single master who has understanding, rather than
according to the opinion of all other men put together?
CRITO: True.
SOCRATES: And if he disobeys and disregards the opinion and approval of
the one, and regards the opinion of the many who have no understanding,
will he not suffer evil?
CRITO: Certainly he will.
SOCRATES: And what will the evil be, whither tending and what affecting,
in the disobedient person?
CRITO: Clearly, affecting the body; that is what is destroyed by the evil.
SOCRATES: Very good; and is not this true, Crito, of other things which we
need not separately enumerate? In questions of just and unjust, fair and
foul, good and evil, which are the subjects of our present consultation,
ought we to follow the opinion of the many and to fear them; or the opinion
of the one man who has understanding? ought we not to fear and reverence
him more than all the rest of the world: and if we desert him shall we not
destroy and injure that principle in us which may be assumed to be improved
by justice and deteriorated by injustice;--there is such a principle?
CRITO: Certainly there is, Socrates.
SOCRATES: Take a parallel instance:--if, acting under the advice of those
who have no understanding, we destroy that which is improved by health and
is deteriorated by disease, would life be worth having? And that which has
been destroyed is--the body?
CRITO: Yes.
SOCRATES: Could we live, having an evil and corrupted body?
CRITO: Certainly not.
SOCRATES: And will life be worth having, if that higher part of man be
destroyed, which is improved by justice and depraved by injustice? Do we
suppose that principle, whatever it may be in man, which has to do with
justice and injustice, to be inferior to the body?
CRITO: Certainly not.
SOCRATES: More honourable than the body?
CRITO: Far more.
SOCRATES: Then, my friend, we must not regard what the many say of us:
but what he, the one man who has understanding of just and unjust, will
say, and what the truth will say. And therefore you begin in error when
you advise that we should regard the opinion of the many about just and
unjust, good and evil, honorable and dishonorable.--'Well,' some one will
say, 'but the many can kill us.'
CRITO: Yes, Socrates; that will clearly be the answer.
SOCRATES: And it is true; but still I find with surprise that the old
argument is unshaken as ever. And I should like to know whether I may say
the same of another proposition--that not life, but a good life, is to be
chiefly valued?
CRITO: Yes, that also remains unshaken.
SOCRATES: And a good life is equivalent to a just and honorable one--that
holds also?
CRITO: Yes, it does.
SOCRATES: From these premisses I proceed to argue the question whether I
ought or ought not to try and escape without the consent of the Athenians:
and if I am clearly right in escaping, then I will make the attempt; but if
not, I will abstain. The other considerations which you mention, of money
and loss of character and the duty of educating one's children, are, I
fear, only the doctrines of the multitude, who would be as ready to restore
people to life, if they were able, as they are to put them to death--and
with as little reason. But now, since the argument has thus far prevailed,
the only question which remains to be considered is, whether we shall do
rightly either in escaping or in suffering others to aid in our escape and
paying them in money and thanks, or whether in reality we shall not do
rightly; and if the latter, then death or any other calamity which may
ensue on my remaining here must not be allowed to enter into the
calculation.
CRITO: I think that you are right, Socrates; how then shall we proceed?
SOCRATES: Let us consider the matter together, and do you either refute me
if you can, and I will be convinced; or else cease, my dear friend, from
repeating to me that I ought to escape against the wishes of the Athenians:
for I highly value your attempts to persuade me to do so, but I may not be
persuaded against my own better judgment. And now please to consider my
first position, and try how you can best answer me.
CRITO: I will.
SOCRATES: Are we to say that we are never intentionally to do wrong, or
that in one way we ought and in another way we ought not to do wrong, or is
doing wrong always evil and dishonorable, as I was just now saying, and as
has been already acknowledged by us? Are all our former admissions which
were made within a few days to be thrown away? And have we, at our age,
been earnestly discoursing with one another all our life long only to
discover that we are no better than children? Or, in spite of the opinion
of the many, and in spite of consequences whether better or worse, shall we
insist on the truth of what was then said, that injustice is always an evil
and dishonour to him who acts unjustly? Shall we say so or not?
CRITO: Yes.
SOCRATES: Then we must do no wrong?
CRITO: Certainly not.
SOCRATES: Nor when injured injure in return, as the many imagine; for we
must injure no one at all? (E.g. compare Rep.)
CRITO: Clearly not.
SOCRATES: Again, Crito, may we do evil?
CRITO: Surely not, Socrates.
SOCRATES: And what of doing evil in return for evil, which is the morality
of the many--is that just or not?
CRITO: Not just.
SOCRATES: For doing evil to another is the same as injuring him?
CRITO: Very true.
SOCRATES: Then we ought not to retaliate or render evil for evil to any
one, whatever evil we may have suffered from him. But I would have you
consider, Crito, whether you really mean what you are saying. For this
opinion has never been held, and never will be held, by any considerable
number of persons; and those who are agreed and those who are not agreed
upon this point have no common ground, and can only despise one another
when they see how widely they differ. Tell me, then, whether you agree
with and assent to my first principle, that neither injury nor retaliation
nor warding off evil by evil is ever right. And shall that be the premiss
of our argument? Or do you decline and dissent from this? For so I have
ever thought, and continue to think; but, if you are of another opinion,
let me hear what you have to say. If, however, you remain of the same mind
as formerly, I will proceed to the next step.
CRITO: You may proceed, for I have not changed my mind.
SOCRATES: Then I will go on to the next point, which may be put in the
form of a question:--Ought a man to do what he admits to be right, or ought
he to betray the right?
CRITO: He ought to do what he thinks right.
SOCRATES: But if this is true, what is the application? In leaving the
prison against the will of the Athenians, do I wrong any? or rather do I
not wrong those whom I ought least to wrong? Do I not desert the
principles which were acknowledged by us to be just--what do you say?
CRITO: I cannot tell, Socrates, for I do not know.
SOCRATES: Then consider the matter in this way:--Imagine that I am about
to play truant (you may call the proceeding by any name which you like),
and the laws and the government come and interrogate me: 'Tell us,
Socrates,' they say; 'what are you about? are you not going by an act of
yours to overturn us--the laws, and the whole state, as far as in you lies?
Do you imagine that a state can subsist and not be overthrown, in which the
decisions of law have no power, but are set aside and trampled upon by
individuals?' What will be our answer, Crito, to these and the like words?
Any one, and especially a rhetorician, will have a good deal to say on
behalf of the law which requires a sentence to be carried out. He will
argue that this law should not be set aside; and shall we reply, 'Yes; but
the state has injured us and given an unjust sentence.' Suppose I say
that?
CRITO: Very good, Socrates.
SOCRATES: 'And was that our agreement with you?' the law would answer; 'or
were you to abide by the sentence of the state?' And if I were to express
my astonishment at their words, the law would probably add: 'Answer,
Socrates, instead of opening your eyes--you are in the habit of asking and
answering questions. Tell us,--What complaint have you to make against us
which justifies you in attempting to destroy us and the state? In the
first place did we not bring you into existence? Your father married your
mother by our aid and begat you. Say whether you have any objection to
urge against those of us who regulate marriage?' None, I should reply.
'Or against those of us who after birth regulate the nurture and education
of children, in which you also were trained? Were not the laws, which have
the charge of education, right in commanding your father to train you in
music and gymnastic?' Right, I should reply. 'Well then, since you were
brought into the world and nurtured and educated by us, can you deny in the
first place that you are our child and slave, as your fathers were before
you? And if this is true you are not on equal terms with us; nor can you
think that you have a right to do to us what we are doing to you. Would
you have any right to strike or revile or do any other evil to your father
or your master, if you had one, because you have been struck or reviled by
him, or received some other evil at his hands?--you would not say this?
And because we think right to destroy you, do you think that you have any
right to destroy us in return, and your country as far as in you lies?
Will you, O professor of true virtue, pretend that you are justified in
this? Has a philosopher like you failed to discover that our country is
more to be valued and higher and holier far than mother or father or any
ancestor, and more to be regarded in the eyes of the gods and of men of
understanding? also to be soothed, and gently and reverently entreated when
angry, even more than a father, and either to be persuaded, or if not
persuaded, to be obeyed? And when we are punished by her, whether with
imprisonment or stripes, the punishment is to be endured in silence; and if
she lead us to wounds or death in battle, thither we follow as is right;
neither may any one yield or retreat or leave his rank, but whether in
battle or in a court of law, or in any other place, he must do what his
city and his country order him; or he must change their view of what is
just: and if he may do no violence to his father or mother, much less may
he do violence to his country.' What answer shall we make to this, Crito?
Do the laws speak truly, or do they not?
CRITO: I think that they do.
SOCRATES: Then the laws will say: 'Consider, Socrates, if we are speaking
truly that in your present attempt you are going to do us an injury. For,
having brought you into the world, and nurtured and educated you, and given
you and every other citizen a share in every good which we had to give, we
further proclaim to any Athenian by the liberty which we allow him, that if
he does not like us when he has become of age and has seen the ways of the
city, and made our acquaintance, he may go where he pleases and take his
goods with him. None of us laws will forbid him or interfere with him.
Any one who does not like us and the city, and who wants to emigrate to a
colony or to any other city, may go where he likes, retaining his property.
But he who has experience of the manner in which we order justice and
administer the state, and still remains, has entered into an implied
contract that he will do as we command him. And he who disobeys us is, as
we maintain, thrice wrong: first, because in disobeying us he is
disobeying his parents; secondly, because we are the authors of his
education; thirdly, because he has made an agreement with us that he will
duly obey our commands; and he neither obeys them nor convinces us that our
commands are unjust; and we do not rudely impose them, but give him the
alternative of obeying or convincing us;--that is what we offer, and he
does neither.
'These are the sort of accusations to which, as we were saying, you,
Socrates, will be exposed if you accomplish your intentions; you, above all
other Athenians.' Suppose now I ask, why I rather than anybody else? they
will justly retort upon me that I above all other men have acknowledged the
agreement. 'There is clear proof,' they will say, 'Socrates, that we and
the city were not displeasing to you. Of all Athenians you have been the
most constant resident in the city, which, as you never leave, you may be
supposed to love (compare Phaedr.). For you never went out of the city
either to see the games, except once when you went to the Isthmus, or to
any other place unless when you were on military service; nor did you
travel as other men do. Nor had you any curiosity to know other states or
their laws: your affections did not go beyond us and our state; we were
your especial favourites, and you acquiesced in our government of you; and
here in this city you begat your children, which is a proof of your
satisfaction. Moreover, you might in the course of the trial, if you had
liked, have fixed the penalty at banishment; the state which refuses to let
you go now would have let you go then. But you pretended that you
preferred death to exile (compare Apol.), and that you were not unwilling
to die. And now you have forgotten these fine sentiments, and pay no
respect to us the laws, of whom you are the destroyer; and are doing what
only a miserable slave would do, running away and turning your back upon
the compacts and agreements which you made as a citizen. And first of all
answer this very question: Are we right in saying that you agreed to be
governed according to us in deed, and not in word only? Is that true or
not?' How shall we answer, Crito? Must we not assent?
CRITO: We cannot help it, Socrates.
SOCRATES: Then will they not say: 'You, Socrates, are breaking the
covenants and agreements which you made with us at your leisure, not in any
haste or under any compulsion or deception, but after you have had seventy
years to think of them, during which time you were at liberty to leave the
city, if we were not to your mind, or if our covenants appeared to you to
be unfair. You had your choice, and might have gone either to Lacedaemon
or Crete, both which states are often praised by you for their good
government, or to some other Hellenic or foreign state. Whereas you, above
all other Athenians, seemed to be so fond of the state, or, in other words,
of us her laws (and who would care about a state which has no laws?), that
you never stirred out of her; the halt, the blind, the maimed, were not
more stationary in her than you were. And now you run away and forsake
your agreements. Not so, Socrates, if you will take our advice; do not
make yourself ridiculous by escaping out of the city.
'For just consider, if you transgress and err in this sort of way, what
good will you do either to yourself or to your friends? That your friends
will be driven into exile and deprived of citizenship, or will lose their
property, is tolerably certain; and you yourself, if you fly to one of the
neighbouring cities, as, for example, Thebes or Megara, both of which are
well governed, will come to them as an enemy, Socrates, and their
government will be against you, and all patriotic citizens will cast an
evil eye upon you as a subverter of the laws, and you will confirm in the
minds of the judges the justice of their own condemnation of you. For he
who is a corrupter of the laws is more than likely to be a corrupter of the
young and foolish portion of mankind. Will you then flee from well-ordered
cities and virtuous men? and is existence worth having on these terms? Or
will you go to them without shame, and talk to them, Socrates? And what
will you say to them? What you say here about virtue and justice and
institutions and laws being the best things among men? Would that be
decent of you? Surely not. But if you go away from well-governed states
to Crito's friends in Thessaly, where there is great disorder and licence,
they will be charmed to hear the tale of your escape from prison, set off
with ludicrous particulars of the manner in which you were wrapped in a
goatskin or some other disguise, and metamorphosed as the manner is of
runaways; but will there be no one to remind you that in your old age you
were not ashamed to violate the most sacred laws from a miserable desire of
a little more life? Perhaps not, if you keep them in a good temper; but if
they are out of temper you will hear many degrading things; you will live,
but how?--as the flatterer of all men, and the servant of all men; and
doing what?--eating and drinking in Thessaly, having gone abroad in order
that you may get a dinner. And where will be your fine sentiments about
justice and virtue? Say that you wish to live for the sake of your
children--you want to bring them up and educate them--will you take them
into Thessaly and deprive them of Athenian citizenship? Is this the
benefit which you will confer upon them? Or are you under the impression
that they will be better cared for and educated here if you are still
alive, although absent from them; for your friends will take care of them?
Do you fancy that if you are an inhabitant of Thessaly they will take care
of them, and if you are an inhabitant of the other world that they will not
take care of them? Nay; but if they who call themselves friends are good
for anything, they will--to be sure they will.
'Listen, then, Socrates, to us who have brought you up. Think not of life
and children first, and of justice afterwards, but of justice first, that
you may be justified before the princes of the world below. For neither
will you nor any that belong to you be happier or holier or juster in this
life, or happier in another, if you do as Crito bids. Now you depart in
innocence, a sufferer and not a doer of evil; a victim, not of the laws,
but of men. But if you go forth, returning evil for evil, and injury for
injury, breaking the covenants and agreements which you have made with us,
and wronging those whom you ought least of all to wrong, that is to say,
yourself, your friends, your country, and us, we shall be angry with you
while you live, and our brethren, the laws in the world below, will receive
you as an enemy; for they will know that you have done your best to destroy
us. Listen, then, to us and not to Crito.'
This, dear Crito, is the voice which I seem to hear murmuring in my ears,
like the sound of the flute in the ears of the mystic; that voice, I say,
is humming in my ears, and prevents me from hearing any other. And I know
that anything more which you may say will be vain. Yet speak, if you have
anything to say.
CRITO: I have nothing to say, Socrates.
SOCRATES: Leave me then, Crito, to fulfil the will of God, and to follow
whither he leads.
unjust, good and evil, honorable and dishonorable.--'Well,' some one will
say, 'but the many can kill us.'
CRITO: Yes, Socrates; that will clearly be the answer.
SOCRATES: And it is true; but still I find with surprise that the old
argument is unshaken as ever. And I should like to know whether I may say
the same of another proposition--that not life, but a good life, is to be
chiefly valued?
CRITO: Yes, that also remains unshaken.
SOCRATES: And a good life is equivalent to a just and honorable one--that
holds also?
CRITO: Yes, it does.
SOCRATES: From these premisses I proceed to argue the question whether I
ought or ought not to try and escape without the consent of the Athenians:
and if I am clearly right in escaping, then I will make the attempt; but if
not, I will abstain. The other considerations which you mention, of money
and loss of character and the duty of educating one's children, are, I
fear, only the doctrines of the multitude, who would be as ready to restore
people to life, if they were able, as they are to put them to death--and
with as little reason. But now, since the argument has thus far prevailed,
the only question which remains to be considered is, whether we shall do
rightly either in escaping or in suffering others to aid in our escape and
paying them in money and thanks, or whether in reality we shall not do
rightly; and if the latter, then death or any other calamity which may
ensue on my remaining here must not be allowed to enter into the
calculation.
CRITO: I think that you are right, Socrates; how then shall we proceed?
SOCRATES: Let us consider the matter together, and do you either refute me
if you can, and I will be convinced; or else cease, my dear friend, from
repeating to me that I ought to escape against the wishes of the Athenians:
for I highly value your attempts to persuade me to do so, but I may not be
persuaded against my own better judgment. And now please to consider my
first position, and try how you can best answer me.
CRITO: I will.
SOCRATES: Are we to say that we are never intentionally to do wrong, or
that in one way we ought and in another way we ought not to do wrong, or is
doing wrong always evil and dishonorable, as I was just now saying, and as
has been already acknowledged by us? Are all our former admissions which
were made within a few days to be thrown away? And have we, at our age,
been earnestly discoursing with one another all our life long only to
discover that we are no better than children? Or, in spite of the opinion
of the many, and in spite of consequences whether better or worse, shall we
insist on the truth of what was then said, that injustice is always an evil
and dishonour to him who acts unjustly? Shall we say so or not?
CRITO: Yes.
SOCRATES: Then we must do no wrong?
CRITO: Certainly not.
SOCRATES: Nor when injured injure in return, as the many imagine; for we
must injure no one at all? (E.g. compare Rep.)
CRITO: Clearly not.
SOCRATES: Again, Crito, may we do evil?
CRITO: Surely not, Socrates.
SOCRATES: And what of doing evil in return for evil, which is the morality
of the many--is that just or not?
CRITO: Not just.
SOCRATES: For doing evil to another is the same as injuring him?
CRITO: Very true.
SOCRATES: Then we ought not to retaliate or render evil for evil to any
one, whatever evil we may have suffered from him. But I would have you
consider, Crito, whether you really mean what you are saying. For this
opinion has never been held, and never will be held, by any considerable
number of persons; and those who are agreed and those who are not agreed
upon this point have no common ground, and can only despise one another
when they see how widely they differ. Tell me, then, whether you agree
with and assent to my first principle, that neither injury nor retaliation
nor warding off evil by evil is ever right. And shall that be the premiss
of our argument? Or do you decline and dissent from this? For so I have
ever thought, and continue to think; but, if you are of another opinion,
let me hear what you have to say. If, however, you remain of the same mind
as formerly, I will proceed to the next step.
CRITO: You may proceed, for I have not changed my mind.
SOCRATES: Then I will go on to the next point, which may be put in the
form of a question:--Ought a man to do what he admits to be right, or ought
he to betray the right?
CRITO: He ought to do what he thinks right.
SOCRATES: But if this is true, what is the application? In leaving the
prison against the will of the Athenians, do I wrong any? or rather do I
not wrong those whom I ought least to wrong? Do I not desert the
principles which were acknowledged by us to be just--what do you say?
CRITO: I cannot tell, Socrates, for I do not know.
SOCRATES: Then consider the matter in this way:--Imagine that I am about
to play truant (you may call the proceeding by any name which you like),
and the laws and the government come and interrogate me: 'Tell us,
Socrates,' they say; 'what are you about? are you not going by an act of
yours to overturn us--the laws, and the whole state, as far as in you lies?
Do you imagine that a state can subsist and not be overthrown, in which the
decisions of law have no power, but are set aside and trampled upon by
individuals?' What will be our answer, Crito, to these and the like words?
Any one, and especially a rhetorician, will have a good deal to say on
behalf of the law which requires a sentence to be carried out. He will
argue that this law should not be set aside; and shall we reply, 'Yes; but
the state has injured us and given an unjust sentence.' Suppose I say
that?
CRITO: Very good, Socrates.
SOCRATES: 'And was that our agreement with you?' the law would answer; 'or
were you to abide by the sentence of the state?' And if I were to express
my astonishment at their words, the law would probably add: 'Answer,
Socrates, instead of opening your eyes--you are in the habit of asking and
answering questions. Tell us,--What complaint have you to make against us
which justifies you in attempting to destroy us and the state? In the
first place did we not bring you into existence? Your father married your
mother by our aid and begat you. Say whether you have any objection to
urge against those of us who regulate marriage?' None, I should reply.
'Or against those of us who after birth regulate the nurture and education
of children, in which you also were trained? Were not the laws, which have
the charge of education, right in commanding your father to train you in
music and gymnastic?' Right, I should reply. 'Well then, since you were
brought into the world and nurtured and educated by us, can you deny in the
first place that you are our child and slave, as your fathers were before
you? And if this is true you are not on equal terms with us; nor can you
think that you have a right to do to us what we are doing to you. Would
you have any right to strike or revile or do any other evil to your father
or your master, if you had one, because you have been struck or reviled by
him, or received some other evil at his hands?--you would not say this?
And because we think right to destroy you, do you think that you have any
right to destroy us in return, and your country as far as in you lies?
Will you, O professor of true virtue, pretend that you are justified in
this? Has a philosopher like you failed to discover that our country is
more to be valued and higher and holier far than mother or father or any
ancestor, and more to be regarded in the eyes of the gods and of men of
understanding? also to be soothed, and gently and reverently entreated when
angry, even more than a father, and either to be persuaded, or if not
persuaded, to be obeyed? And when we are punished by her, whether with
imprisonment or stripes, the punishment is to be endured in silence; and if
she lead us to wounds or death in battle, thither we follow as is right;
neither may any one yield or retreat or leave his rank, but whether in
battle or in a court of law, or in any other place, he must do what his
city and his country order him; or he must change their view of what is
just: and if he may do no violence to his father or mother, much less may
he do violence to his country.' What answer shall we make to this, Crito?
Do the laws speak truly, or do they not?
CRITO: I think that they do.
SOCRATES: Then the laws will say: 'Consider, Socrates, if we are speaking
truly that in your present attempt you are going to do us an injury. For,
having brought you into the world, and nurtured and educated you, and given
you and every other citizen a share in every good which we had to give, we
further proclaim to any Athenian by the liberty which we allow him, that if
he does not like us when he has become of age and has seen the ways of the
city, and made our acquaintance, he may go where he pleases and take his
goods with him. None of us laws will forbid him or interfere with him.
Any one who does not like us and the city, and who wants to emigrate to a
colony or to any other city, may go where he likes, retaining his property.
But he who has experience of the manner in which we order justice and
administer the state, and still remains, has entered into an implied
contract that he will do as we command him. And he who disobeys us is, as
we maintain, thrice wrong: first, because in disobeying us he is
disobeying his parents; secondly, because we are the authors of his
education; thirdly, because he has made an agreement with us that he will
duly obey our commands; and he neither obeys them nor convinces us that our
commands are unjust; and we do not rudely impose them, but give him the
alternative of obeying or convincing us;--that is what we offer, and he
does neither.
'These are the sort of accusations to which, as we were saying, you,
Socrates, will be exposed if you accomplish your intentions; you, above all
other Athenians.' Suppose now I ask, why I rather than anybody else? they
will justly retort upon me that I above all other men have acknowledged the
agreement. 'There is clear proof,' they will say, 'Socrates, that we and
the city were not displeasing to you. Of all Athenians you have been the
most constant resident in the city, which, as you never leave, you may be
supposed to love (compare Phaedr.). For you never went out of the city
either to see the games, except once when you went to the Isthmus, or to
any other place unless when you were on military service; nor did you
travel as other men do. Nor had you any curiosity to know other states or
their laws: your affections did not go beyond us and our state; we were
your especial favourites, and you acquiesced in our government of you; and
here in this city you begat your children, which is a proof of your
satisfaction. Moreover, you might in the course of the trial, if you had
liked, have fixed the penalty at banishment; the state which refuses to let
you go now would have let you go then. But you pretended that you
preferred death to exile (compare Apol.), and that you were not unwilling
to die. And now you have forgotten these fine sentiments, and pay no
respect to us the laws, of whom you are the destroyer; and are doing what
only a miserable slave would do, running away and turning your back upon
the compacts and agreements which you made as a citizen. And first of all
answer this very question: Are we right in saying that you agreed to be
governed according to us in deed, and not in word only? Is that true or
not?' How shall we answer, Crito? Must we not assent?
CRITO: We cannot help it, Socrates.
SOCRATES: Then will they not say: 'You, Socrates, are breaking the
covenants and agreements which you made with us at your leisure, not in any
haste or under any compulsion or deception, but after you have had seventy
years to think of them, during which time you were at liberty to leave the
city, if we were not to your mind, or if our covenants appeared to you to
be unfair. You had your choice, and might have gone either to Lacedaemon
or Crete, both which states are often praised by you for their good
government, or to some other Hellenic or foreign state. Whereas you, above
all other Athenians, seemed to be so fond of the state, or, in other words,
of us her laws (and who would care about a state which has no laws?), that
you never stirred out of her; the halt, the blind, the maimed, were not
more stationary in her than you were. And now you run away and forsake
your agreements. Not so, Socrates, if you will take our advice; do not
make yourself ridiculous by escaping out of the city.
'For just consider, if you transgress and err in this sort of way, what
good will you do either to yourself or to your friends? That your friends
will be driven into exile and deprived of citizenship, or will lose their
property, is tolerably certain; and you yourself, if you fly to one of the
neighbouring cities, as, for example, Thebes or Megara, both of which are
well governed, will come to them as an enemy, Socrates, and their
government will be against you, and all patriotic citizens will cast an
evil eye upon you as a subverter of the laws, and you will confirm in the
minds of the judges the justice of their own condemnation of you. For he
who is a corrupter of the laws is more than likely to be a corrupter of the
young and foolish portion of mankind. Will you then flee from well-ordered
cities and virtuous men? and is existence worth having on these terms? Or
will you go to them without shame, and talk to them, Socrates? And what
will you say to them? What you say here about virtue and justice and
institutions and laws being the best things among men? Would that be
decent of you? Surely not. But if you go away from well-governed states
to Crito's friends in Thessaly, where there is great disorder and licence,
they will be charmed to hear the tale of your escape from prison, set off
with ludicrous particulars of the manner in which you were wrapped in a
goatskin or some other disguise, and metamorphosed as the manner is of
runaways; but will there be no one to remind you that in your old age you
were not ashamed to violate the most sacred laws from a miserable desire of
a little more life? Perhaps not, if you keep them in a good temper; but if
they are out of temper you will hear many degrading things; you will live,
but how?--as the flatterer of all men, and the servant of all men; and
doing what?--eating and drinking in Thessaly, having gone abroad in order
that you may get a dinner. And where will be your fine sentiments about
justice and virtue? Say that you wish to live for the sake of your
children--you want to bring them up and educate them--will you take them
into Thessaly and deprive them of Athenian citizenship? Is this the
benefit which you will confer upon them? Or are you under the impression
that they will be better cared for and educated here if you are still
alive, although absent from them; for your friends will take care of them?
Do you fancy that if you are an inhabitant of Thessaly they will take care
of them, and if you are an inhabitant of the other world that they will not
take care of them? Nay; but if they who call themselves friends are good
for anything, they will--to be sure they will.
'Listen, then, Socrates, to us who have brought you up. Think not of life
and children first, and of justice afterwards, but of justice first, that
you may be justified before the princes of the world below. For neither
will you nor any that belong to you be happier or holier or juster in this
life, or happier in another, if you do as Crito bids. Now you depart in
innocence, a sufferer and not a doer of evil; a victim, not of the laws,
but of men. But if you go forth, returning evil for evil, and injury for
injury, breaking the covenants and agreements which you have made with us,
and wronging those whom you ought least of all to wrong, that is to say,
yourself, your friends, your country, and us, we shall be angry with you
while you live, and our brethren, the laws in the world below, will receive
you as an enemy; for they will know that you have done your best to destroy
us. Listen, then, to us and not to Crito.'
This, dear Crito, is the voice which I seem to hear murmuring in my ears,
like the sound of the flute in the ears of the mystic; that voice, I say,
is humming in my ears, and prevents me from hearing any other. And I know
that anything more which you may say will be vain. Yet speak, if you have
anything to say.
CRITO: I have nothing to say, Socrates.
SOCRATES: Leave me then, Crito, to fulfil the will of God, and to follow
whither he leads.
LA GRANDE BRETECHE
(Sequel to "Another Study of Woman.")
By Honore De Balzac
Translated by Ellen Marriage and Clara Bell
LA GRANDE BRETECHE
"Ah! madame," replied the doctor, "I have some appalling stories in my
collection. But each one has its proper hour in a conversation--you know
the pretty jest recorded by Chamfort, and said to the Duc de Fronsac:
'Between your sally and the present moment lie ten bottles of
champagne.'"
"But it is two in the morning, and the story of Rosina has prepared us,"
said the mistress of the house.
"Tell us, Monsieur Bianchon!" was the cry on every side.
The obliging doctor bowed, and silence reigned.
"At about a hundred paces from Vendome, on the banks of the Loir," said
he, "stands an old brown house, crowned with very high roofs, and so
completely isolated that there is nothing near it, not even a fetid
tannery or a squalid tavern, such as are commonly seen outside small
towns. In front of this house is a garden down to the river, where the
box shrubs, formerly clipped close to edge the walks, now straggle
at their own will. A few willows, rooted in the stream, have grown
up quickly like an enclosing fence, and half hide the house. The
wild plants we call weeds have clothed the bank with their beautiful
luxuriance. The fruit-trees, neglected for these ten years past,
no longer bear a crop, and their suckers have formed a thicket. The
espaliers are like a copse. The paths, once graveled, are overgrown with
purslane; but, to be accurate, there is no trace of a path.
"Looking down from the hilltop, to which cling the ruins of the old
castle of the Dukes of Vendome, the only spot whence the eye can
see into this enclosure, we think that at a time, difficult now to
determine, this spot of earth must have been the joy of some country
gentleman devoted to roses and tulips, in a word, to horticulture, but
above all a lover of choice fruit. An arbor is visible, or rather
the wreck of an arbor, and under it a table still stands not entirely
destroyed by time. At the aspect of this garden that is no more, the
negative joys of the peaceful life of the provinces may be divined as we
divine the history of a worthy tradesman when we read the epitaph on his
tomb. To complete the mournful and tender impressions which seize the
soul, on one of the walls there is a sundial graced with this homely
Christian motto, '_Ultimam cogita_.'
"The roof of this house is dreadfully dilapidated; the outside shutters
are always closed; the balconies are hung with swallows' nests; the
doors are for ever shut. Straggling grasses have outlined the flagstones
of the steps with green; the ironwork is rusty. Moon and sun, winter,
summer, and snow have eaten into the wood, warped the boards, peeled
off the paint. The dreary silence is broken only by birds and cats,
polecats, rats, and mice, free to scamper round, and fight, and eat each
other. An invisible hand has written over it all: 'Mystery.'
"If, prompted by curiosity, you go to look at this house from the
street, you will see a large gate, with a round-arched top; the children
have made many holes in it. I learned later that this door had been
blocked for ten years. Through these irregular breaches you will see
that the side towards the courtyard is in perfect harmony with the side
towards the garden. The same ruin prevails. Tufts of weeds outline
the paving-stones; the walls are scored by enormous cracks, and the
blackened coping is laced with a thousand festoons of pellitory. The
stone steps are disjointed; the bell-cord is rotten; the gutter-spouts
broken. What fire from heaven could have fallen there? By what decree
has salt been sown on this dwelling? Has God been mocked here? Or was
France betrayed? These are the questions we ask ourselves. Reptiles
crawl over it, but give no reply. This empty and deserted house is a
vast enigma of which the answer is known to none.
"It was formerly a little domain, held in fief, and is known as La
Grande Breteche. During my stay at Vendome, where Despleins had left me
in charge of a rich patient, the sight of this strange dwelling became
one of my keenest pleasures. Was it not far better than a ruin? Certain
memories of indisputable authenticity attach themselves to a ruin; but
this house, still standing, though being slowly destroyed by an avenging
hand, contained a secret, an unrevealed thought. At the very least,
it testified to a caprice. More than once in the evening I boarded the
hedge, run wild, which surrounded the enclosure. I braved scratches, I
got into this ownerless garden, this plot which was no longer public or
private; I lingered there for hours gazing at the disorder. I would not,
as the price of the story to which this strange scene no doubt was due,
have asked a single question of any gossiping native. On that spot I
wove delightful romances, and abandoned myself to little debauches of
melancholy which enchanted me. If I had known the reason--perhaps quite
commonplace--of this neglect, I should have lost the unwritten poetry
which intoxicated me. To me this refuge represented the most various
phases of human life, shadowed by misfortune; sometimes the peace of the
graveyard without the dead, who speak in the language of epitaphs; one
day I saw in it the home of lepers; another, the house of the Atridae;
but, above all, I found there provincial life, with its contemplative
ideas, its hour-glass existence. I often wept there, I never laughed.
"More than once I felt involuntary terrors as I heard overhead the dull
hum of the wings of some hurrying wood-pigeon. The earth is dank; you
must be on the watch for lizards, vipers, and frogs, wandering about
with the wild freedom of nature; above all, you must have no fear
of cold, for in a few moments you feel an icy cloak settle on your
shoulders, like the Commendatore's hand on Don Giovanni's neck.
"One evening I felt a shudder; the wind had turned an old rusty
weathercock, and the creaking sounded like a cry from the house, at
the very moment when I was finishing a gloomy drama to account for
this monumental embodiment of woe. I returned to my inn, lost in gloomy
thoughts. When I had supped, the hostess came into my room with an air
of mystery, and said, 'Monsieur, here is Monsieur Regnault.'
"'Who is Monsieur Regnault?'
"'What, sir, do you not know Monsieur Regnault?--Well, that's odd,' said
she, leaving the room.
"On a sudden I saw a man appear, tall, slim, dressed in black, hat
in hand, who came in like a ram ready to butt his opponent, showing a
receding forehead, a small pointed head, and a colorless face of the hue
of a glass of dirty water. You would have taken him for an usher. The
stranger wore an old coat, much worn at the seams; but he had a diamond
in his shirt frill, and gold rings in his ears.
"'Monsieur,' said I, 'whom have I the honor of addressing?'--He took a
chair, placed himself in front of my fire, put his hat on my table,
and answered while he rubbed his hands: 'Dear me, it is very
cold.--Monsieur, I am Monsieur Regnault.'
"I was encouraging myself by saying to myself, '_Il bondo cani!_ Seek!'
"'I am,' he went on, 'notary at Vendome.'
"'I am delighted to hear it, monsieur,' I exclaimed. 'But I am not in a
position to make a will for reasons best known to myself.'
"'One moment!' said he, holding up his hand as though to gain silence.
'Allow me, monsieur, allow me! I am informed that you sometimes go to
walk in the garden of la Grande Breteche.'
"'Yes, monsieur.'
"'One moment!' said he, repeating his gesture. 'That constitutes a
misdemeanor. Monsieur, as executor under the will of the late Comtesse
de Merret, I come in her name to beg you to discontinue the practice.
One moment! I am not a Turk, and do not wish to make a crime of it. And
besides, you are free to be ignorant of the circumstances which
compel me to leave the finest mansion in Vendome to fall into ruin.
Nevertheless, monsieur, you must be a man of education, and you should
know that the laws forbid, under heavy penalties, any trespass on
enclosed property. A hedge is the same as a wall. But, the state in
which the place is left may be an excuse for your curiosity. For my
part, I should be quite content to make you free to come and go in the
house; but being bound to respect the will of the testatrix, I have
the honor, monsieur, to beg that you will go into the garden no more.
I myself, monsieur, since the will was read, have never set foot in the
house, which, as I had the honor of informing you, is part of the estate
of the late Madame de Merret. We have done nothing there but verify the
number of doors and windows to assess the taxes I have to pay annually
out of the funds left for that purpose by the late Madame de Merret. Ah!
my dear sir, her will made a great commotion in the town.'
"The good man paused to blow his nose. I respected his volubility,
perfectly understanding that the administration of Madame de Merret's
estate had been the most important event of his life, his reputation,
his glory, his Restoration. As I was forced to bid farewell to my
beautiful reveries and romances, I was now to learn the truth on
official authority.
"'Monsieur,' said I, 'would it be indiscreet if I were to ask you the
reasons for such eccentricity?'
"At these words an expression, which revealed all the pleasure which
men feel who are accustomed to ride a hobby, overspread the lawyer's
countenance. He pulled up the collar of his shirt with an air, took out
his snuffbox, opened it, and offered me a pinch; on my refusing, he took
a large one. He was happy! A man who has no hobby does not know all
the good to be got out of life. A hobby is the happy medium between a
passion and a monomania. At this moment I understood the whole bearing
of Sterne's charming passion, and had a perfect idea of the delight with
which my uncle Toby, encouraged by Trim, bestrode his hobby-horse.
"'Monsieur,' said Monsieur Regnault, 'I was head-clerk in Monsieur
Roguin's office, in Paris. A first-rate house, which you may have heard
mentioned? No! An unfortunate bankruptcy made it famous.--Not having
money enough to purchase a practice in Paris at the price to which they
were run up in 1816, I came here and bought my predecessor's business.
I had relations in Vendome; among others, a wealthy aunt, who allowed
me to marry her daughter.--Monsieur,' he went on after a little pause,
'three months after being licensed by the Keeper of the Seals, one
evening, as I was going to bed--it was before my marriage--I was sent
for by Madame la Comtesse de Merret, to her Chateau of Merret. Her maid,
a good girl, who is now a servant in this inn, was waiting at my door
with the Countess' own carriage. Ah! one moment! I ought to tell you
that Monsieur le Comte de Merret had gone to Paris to die two months
before I came here. He came to a miserable end, flinging himself into
every kind of dissipation. You understand?
"'On the day when he left, Madame la Comtesse had quitted la Grande
Breteche, having dismantled it. Some people even say that she had
burnt all the furniture, the hangings--in short, all the chattels and
furniture whatever used in furnishing the premises now let by the
said M.--(Dear, what am I saying? I beg your pardon, I thought I was
dictating a lease.)--In short, that she burnt everything in the meadow
at Merret. Have you been to Merret, monsieur?--No,' said he, answering
himself, 'Ah, it is a very fine place.'
"'For about three months previously,' he went on, with a jerk of his
head, 'the Count and Countess had lived in a very eccentric way; they
admitted no visitors; Madame lived on the ground-floor, and Monsieur on
the first floor. When the Countess was left alone, she was never seen
excepting at church. Subsequently, at home, at the chateau, she refused
to see the friends, whether gentlemen or ladies, who went to call on
her. She was already very much altered when she left la Grande Breteche
to go to Merret. That dear lady--I say dear lady, for it was she who
gave me this diamond, but indeed I saw her but once--that kind lady was
very ill; she had, no doubt, given up all hope, for she died without
choosing to send for a doctor; indeed, many of our ladies fancied she
was not quite right in her head. Well, sir, my curiosity was strangely
excited by hearing that Madame de Merret had need of my services. Nor
was I the only person who took an interest in the affair. That very
night, though it was already late, all the town knew that I was going to
Merret.
"'The waiting-woman replied but vaguely to the questions I asked her on
the way; nevertheless, she told me that her mistress had received the
Sacrament in the course of the day at the hands of the Cure of Merret,
and seemed unlikely to live through the night. It was about eleven when
I reached the chateau. I went up the great staircase. After crossing
some large, lofty, dark rooms, diabolically cold and damp, I reached the
state bedroom where the Countess lay. From the rumors that were current
concerning this lady (monsieur, I should never end if I were to repeat
all the tales that were told about her), I had imagined her a coquette.
Imagine, then, that I had great difficulty in seeing her in the great
bed where she was lying. To be sure, to light this enormous room, with
old-fashioned heavy cornices, and so thick with dust that merely to see
it was enough to make you sneeze, she had only an old Argand lamp. Ah!
but you have not been to Merret. Well, the bed is one of those old world
beds, with a high tester hung with flowered chintz. A small table stood
by the bed, on which I saw an "Imitation of Christ," which, by the
way, I bought for my wife, as well as the lamp. There were also a deep
armchair for her confidential maid, and two small chairs. There was no
fire. That was all the furniture, not enough to fill ten lines in an
inventory.
"'My dear sir, if you had seen, as I then saw, that vast room, papered
and hung with brown, you would have felt yourself transported into a
scene of a romance. It was icy, nay more, funereal,' and he lifted his
hand with a theatrical gesture and paused.
"'By dint of seeking, as I approached the bed, at last I saw Madame de
Merret, under the glimmer of the lamp, which fell on the pillows.
Her face was as yellow as wax, and as narrow as two folded hands. The
Countess had a lace cap showing her abundant hair, but as white as linen
thread. She was sitting up in bed, and seemed to keep upright with
great difficulty. Her large black eyes, dimmed by fever, no doubt,
and half-dead already, hardly moved under the bony arch of her
eyebrows.--There,' he added, pointing to his own brow. 'Her forehead was
clammy; her fleshless hands were like bones covered with soft skin;
the veins and muscles were perfectly visible. She must have been very
handsome; but at this moment I was startled into an indescribable
emotion at the sight. Never, said those who wrapped her in her shroud,
had any living creature been so emaciated and lived. In short, it was
awful to behold! Sickness so consumed that woman, that she was no more
than a phantom. Her lips, which were pale violet, seemed to me not to
move when she spoke to me.
"'Though my profession has familiarized me with such spectacles, by
calling me not infrequently to the bedside of the dying to record their
last wishes, I confess that families in tears and the agonies I have
seen were as nothing in comparison with this lonely and silent woman in
her vast chateau. I heard not the least sound, I did not perceive the
movement which the sufferer's breathing ought to have given to the
sheets that covered her, and I stood motionless, absorbed in looking at
her in a sort of stupor. In fancy I am there still. At last her large
eyes moved; she tried to raise her right hand, but it fell back on the
bed, and she uttered these words, which came like a breath, for her
voice was no longer a voice: "I have waited for you with the greatest
impatience." A bright flush rose to her cheeks. It was a great effort to
her to speak.
"'"Madame," I began. She signed to me to be silent. At that moment
the old housekeeper rose and said in my ear, "Do not speak; Madame la
Comtesse is not in a state to bear the slightest noise, and what you say
might agitate her."
"'I sat down. A few instants after, Madame de Merret collected all her
remaining strength to move her right hand, and slipped it, not without
infinite difficulty, under the bolster; she then paused a moment. With
a last effort she withdrew her hand; and when she brought out a sealed
paper, drops of perspiration rolled from her brow. "I place my will in
your hands--Oh! God! Oh!" and that was all. She clutched a crucifix that
lay on the bed, lifted it hastily to her lips, and died.
"'The expression of her eyes still makes me shudder as I think of it.
She must have suffered much! There was joy in her last glance, and it
remained stamped on her dead eyes.
"'I brought away the will, and when it was opened I found that Madame de
Merret had appointed me her executor. She left the whole of her property
to the hospital at Vendome excepting a few legacies. But these were her
instructions as relating to la Grande Breteche: She ordered me to leave
the place, for fifty years counting from the day of her death, in the
state in which it might be at the time of her death, forbidding any one,
whoever he might be, to enter the apartments, prohibiting any repairs
whatever, and even settling a salary to pay watchmen if it were needful
to secure the absolute fulfilment of her intentions. At the expiration
of that term, if the will of the testatrix has been duly carried out,
the house is to become the property of my heirs, for, as you know, a
notary cannot take a bequest. Otherwise la Grande Breteche reverts to
the heirs-at-law, but on condition of fulfilling certain conditions
set forth in a codicil to the will, which is not to be opened till
the expiration of the said term of fifty years. The will has not been
disputed, so----' And without finishing his sentence, the lanky notary
looked at me with an air of triumph; I made him quite happy by offering
him my congratulations.
"'Monsieur,' I said in conclusion, 'you have so vividly impressed
me that I fancy I see the dying woman whiter than her sheets; her
glittering eyes frighten me; I shall dream of her to-night.--But you
must have formed some idea as to the instructions contained in that
extraordinary will.'
"'Monsieur,' said he, with comical reticence, 'I never allow myself
to criticise the conduct of a person who honors me with the gift of a
diamond.'
"However, I soon loosened the tongue of the discreet notary of Vendome,
who communicated to me, not without long digressions, the opinions of
the deep politicians of both sexes whose judgments are law in Vendome.
But these opinions were so contradictory, so diffuse, that I was
near falling asleep in spite of the interest I felt in this authentic
history. The notary's ponderous voice and monotonous accent, accustomed
no doubt to listen to himself and to make himself listened to by his
clients or fellow-townsmen, were too much for my curiosity. Happily, he
soon went away.
"'Ah, ha, monsieur,' said he on the stairs, 'a good many persons would
be glad to live five-and-forty years longer; but--one moment!' and he
laid the first finger of his right hand to his nostril with a cunning
look, as much as to say, 'Mark my words!--To last as long as that--as
long as that,' said he, 'you must not be past sixty now.'
"I closed my door, having been roused from my apathy by this last
speech, which the notary thought very funny; then I sat down in my
armchair, with my feet on the fire-dogs. I had lost myself in a romance
_a la_ Radcliffe, constructed on the juridical base given me by Monsieur
Regnault, when the door, opened by a woman's cautious hand, turned on
the hinges. I saw my landlady come in, a buxom, florid dame, always
good-humored, who had missed her calling in life. She was a Fleming, who
ought to have seen the light in a picture by Teniers.
"'Well, monsieur,' said she, 'Monsieur Regnault has no doubt been giving
you his history of la Grande Breteche?'
"'Yes, Madame Lepas.'
"'And what did he tell you?'
"I repeated in a few words the creepy and sinister story of Madame de
Merret. At each sentence my hostess put her head forward, looking at
me with an innkeeper's keen scrutiny, a happy compromise between the
instinct of a police constable, the astuteness of a spy, and the cunning
of a dealer.
"'My good Madame Lepas,' said I as I ended, 'you seem to know more about
it. Heh? If not, why have you come up to me?'
"'On my word, as an honest woman----'
"'Do not swear; your eyes are big with a secret. You knew Monsieur de
Merret; what sort of man was he?'
"'Monsieur de Merret--well, you see he was a man you never could see
the top of, he was so tall! A very good gentleman, from Picardy, and who
had, as we say, his head close to his cap. He paid for everything down,
so as never to have difficulties with any one. He was hot-tempered, you
see! All our ladies liked him very much.'
"'Because he was hot-tempered?' I asked her.
"'Well, may be,' said she; 'and you may suppose, sir, that a man had to
have something to show for a figurehead before he could marry Madame de
Merret, who, without any reflection on others, was the handsomest and
richest heiress in our parts. She had about twenty thousand francs
a year. All the town was at the wedding; the bride was pretty and
sweet-looking, quite a gem of a woman. Oh, they were a handsome couple
in their day!'
"'And were they happy together?'
"'Hm, hm! so-so--so far as can be guessed, for, as you may suppose, we
of the common sort were not hail-fellow-well-met with them.--Madame de
Merret was a kind woman and very pleasant, who had no doubt sometimes to
put up with her husband's tantrums. But though he was rather haughty, we
were fond of him. After all, it was his place to behave so. When a man
is a born nobleman, you see----'
"'Still, there must have been some catastrophe for Monsieur and Madame
de Merret to part so violently?'
"'I did not say there was any catastrophe, sir. I know nothing about
it.'
"'Indeed. Well, now, I am sure you know everything.'
"'Well, sir, I will tell you the whole story.--When I saw Monsieur
Regnault go up to see you, it struck me that he would speak to you about
Madame de Merret as having to do with la Grande Breteche. That put it
into my head to ask your advice, sir, seeming to me that you are a
man of good judgment and incapable of playing a poor woman like me
false--for I never did any one a wrong, and yet I am tormented by my
conscience. Up to now I have never dared to say a word to the people of
these parts; they are all chatter-mags, with tongues like knives. And
never till now, sir, have I had any traveler here who stayed so long in
the inn as you have, and to whom I could tell the history of the fifteen
thousand francs----'
"'My dear Madame Lepas, if there is anything in your story of a nature
to compromise me,' I said, interrupting the flow of her words, 'I would
not hear it for all the world.'
"'You need have no fears,' said she; 'you will see.'
"Her eagerness made me suspect that I was not the only person to whom
my worthy landlady had communicated the secret of which I was to be the
sole possessor, but I listened.
"'Monsieur,' said she, 'when the Emperor sent the Spaniards here,
prisoners of war and others, I was required to lodge at the charge
of the Government a young Spaniard sent to Vendome on parole.
Notwithstanding his parole, he had to show himself every day to the
sub-prefect. He was a Spanish grandee--neither more nor less. He had
a name in _os_ and _dia_, something like Bagos de Feredia. I wrote his
name down in my books, and you may see it if you like. Ah! he was a
handsome young fellow for a Spaniard, who are all ugly they say. He was
not more than five feet two or three in height, but so well made; and he
had little hands that he kept so beautifully! Ah! you should have
seen them. He had as many brushes for his hands as a woman has for her
toilet. He had thick, black hair, a flame in his eye, a somewhat coppery
complexion, but which I admired all the same. He wore the finest linen
I have ever seen, though I have had princesses to lodge here, and, among
others, General Bertrand, the Duc and Duchesse d'Abrantes, Monsieur
Descazes, and the King of Spain. He did not eat much, but he had such
polite and amiable ways that it was impossible to owe him a grudge for
that. Oh! I was very fond of him, though he did not say four words to me
in a day, and it was impossible to have the least bit of talk with him;
if he was spoken to, he did not answer; it is a way, a mania they all
have, it would seem.
"'He read his breviary like a priest, and went to mass and all the
services quite regularly. And where did he post himself?--we found this
out later.--Within two yards of Madame de Merret's chapel. As he took
that place the very first time he entered the church, no one imagined
that there was any purpose in it. Besides, he never raised his nose
above his book, poor young man! And then, monsieur, of an evening he
went for a walk on the hill among the ruins of the old castle. It was
his only amusement, poor man; it reminded him of his native land. They
say that Spain is all hills!
"'One evening, a few days after he was sent here, he was out very late.
I was rather uneasy when he did not come in till just on the stroke of
midnight; but we all got used to his whims; he took the key of the door,
and we never sat up for him. He lived in a house belonging to us in the
Rue des Casernes. Well, then, one of our stable-boys told us one evening
that, going down to wash the horses in the river, he fancied he had seen
the Spanish Grandee swimming some little way off, just like a fish. When
he came in, I told him to be careful of the weeds, and he seemed put out
at having been seen in the water.
"'At last, monsieur, one day, or rather one morning, we did not find
him in his room; he had not come back. By hunting through his things, I
found a written paper in the drawer of his table, with fifty pieces of
Spanish gold of the kind they call doubloons, worth about five thousand
francs; and in a little sealed box ten thousand francs worth of
diamonds. The paper said that in case he should not return, he left us
this money and these diamonds in trust to found masses to thank God for
his escape and for his salvation.
"'At that time I still had my husband, who ran off in search of him.
And this is the queer part of the story: he brought back the Spaniard's
clothes, which he had found under a big stone on a sort of breakwater
along the river bank, nearly opposite la Grande Breteche. My husband
went so early that no one saw him. After reading the letter, he burnt
the clothes, and, in obedience to Count Feredia's wish, we announced
that he had escaped.
"'The sub-prefect set all the constabulary at his heels; but, pshaw! he
was never caught. Lepas believed that the Spaniard had drowned himself.
I, sir, have never thought so; I believe, on the contrary, that he had
something to do with the business about Madame de Merret, seeing that
Rosalie told me that the crucifix her mistress was so fond of that she
had it buried with her, was made of ebony and silver; now in the early
days of his stay here, Monsieur Feredia had one of ebony and silver
which I never saw later.--And now, monsieur, do not you say that I need
have no remorse about the Spaniard's fifteen thousand francs? Are they
not really and truly mine?'
"'Certainly.--But have you never tried to question Rosalie?' said I.
"'Oh, to be sure I have, sir. But what is to be done? That girl is like
a wall. She knows something, but it is impossible to make her talk.'
"After chatting with me for a few minutes, my hostess left me a prey
to vague and sinister thoughts, to romantic curiosity, and a religious
dread, not unlike the deep emotion which comes upon us when we go into a
dark church at night and discern a feeble light glimmering under a lofty
vault--a dim figure glides across--the sweep of a gown or of a priest's
cassock is audible--and we shiver! La Grande Breteche, with its rank
grasses, its shuttered windows, its rusty iron-work, its locked doors,
its deserted rooms, suddenly rose before me in fantastic vividness. I
tried to get into the mysterious dwelling to search out the heart of
this solemn story, this drama which had killed three persons.
"Rosalie became in my eyes the most interesting being in Vendome. As
I studied her, I detected signs of an inmost thought, in spite of the
blooming health that glowed in her dimpled face. There was in her soul
some element of ruth or of hope; her manner suggested a secret, like
the expression of devout souls who pray in excess, or of a girl who has
killed her child and for ever hears its last cry. Nevertheless, she was
simple and clumsy in her ways; her vacant smile had nothing criminal
in it, and you would have pronounced her innocent only from seeing the
large red and blue checked kerchief that covered her stalwart bust,
tucked into the tight-laced bodice of a lilac- and white-striped gown.
'No,' said I to myself, 'I will not quit Vendome without knowing the
whole history of la Grande Breteche. To achieve this end, I will make
love to Rosalie if it proves necessary.'
"'Rosalie!' said I one evening.
"'Your servant, sir?'
"'You are not married?' She started a little.
"'Oh! there is no lack of men if ever I take a fancy to be miserable!'
she replied, laughing. She got over her agitation at once; for every
woman, from the highest lady to the inn-servant inclusive, has a native
presence of mind.
"'Yes; you are fresh and good-looking enough never to lack lovers! But
tell me, Rosalie, why did you become an inn-servant on leaving Madame de
Merret? Did she not leave you some little annuity?'
"'Oh yes, sir. But my place here is the best in all the town of
Vendome.'
"This reply was such an one as judges and attorneys call evasive.
Rosalie, as it seemed to me, held in this romantic affair the place of
the middle square of the chess-board: she was at the very centre of the
interest and of the truth; she appeared to me to be tied into the knot
of it. It was not a case for ordinary love-making; this girl contained
the last chapter of a romance, and from that moment all my attentions
were devoted to Rosalie. By dint of studying the girl, I observed in
her, as in every woman whom we make our ruling thought, a variety of
good qualities; she was clean and neat; she was handsome, I need not
say; she soon was possessed of every charm that desire can lend to a
woman in whatever rank of life. A fortnight after the notary's visit,
one evening, or rather one morning, in the small hours, I said to
Rosalie:
"'Come, tell me all you know about Madame de Merret.'
"'Oh!' she said, 'I will tell you; but keep the secret carefully.'
"'All right, my child; I will keep all your secrets with a thief's
honor, which is the most loyal known.'
"'If it is all the same to you,' said she, 'I would rather it should be
with your own.'
"Thereupon she set her head-kerchief straight, and settled herself to
tell the tale; for there is no doubt a particular attitude of confidence
and security is necessary to the telling of a narrative. The best tales
are told at a certain hour--just as we are all here at table. No one
ever told a story well standing up, or fasting.
"If I were to reproduce exactly Rosalie's diffuse eloquence, a whole
volume would scarcely contain it. Now, as the event of which she gave me
a confused account stands exactly midway between the notary's gossip and
that of Madame Lepas, as precisely as the middle term of a rule-of-three
sum stands between the first and third, I have only to relate it in as
few words as may be. I shall therefore be brief.
"The room at la Grande Breteche in which Madame de Merret slept was on
the ground floor; a little cupboard in the wall, about four feet deep,
served her to hang her dresses in. Three months before the evening of
which I have to relate the events, Madame de Merret had been seriously
ailing, so much so that her husband had left her to herself, and had his
own bedroom on the first floor. By one of those accidents which it is
impossible to foresee, he came in that evening two hours later than
usual from the club, where he went to read the papers and talk politics
with the residents in the neighborhood. His wife supposed him to have
come in, to be in bed and asleep. But the invasion of France had been
the subject of a very animated discussion; the game of billiards had
waxed vehement; he had lost forty francs, an enormous sum at Vendome,
where everybody is thrifty, and where social habits are restrained
within the bounds of a simplicity worthy of all praise, and the
foundation perhaps of a form of true happiness which no Parisian would
care for.
"For some time past Monsieur de Merret had been satisfied to ask Rosalie
whether his wife was in bed; on the girl's replying always in the
affirmative, he at once went to his own room, with the good faith that
comes of habit and confidence. But this evening, on coming in, he took
it into his head to go to see Madame de Merret, to tell her of his
ill-luck, and perhaps to find consolation. During dinner he had observed
that his wife was very becomingly dressed; he reflected as he came
home from the club that his wife was certainly much better, that
convalescence had improved her beauty, discovering it, as husbands
discover everything, a little too late. Instead of calling Rosalie,
who was in the kitchen at the moment watching the cook and the coachman
playing a puzzling hand at cards, Monsieur de Merret made his way to his
wife's room by the light of his lantern, which he set down at the lowest
step of the stairs. His step, easy to recognize, rang under the vaulted
passage.
"At the instant when the gentleman turned the key to enter his wife's
room, he fancied he heard the door shut of the closet of which I have
spoken; but when he went in, Madame de Merret was alone, standing in
front of the fireplace. The unsuspecting husband fancied that Rosalie
was in the cupboard; nevertheless, a doubt, ringing in his ears like a
peal of bells, put him on his guard; he looked at his wife, and read in
her eyes an indescribably anxious and haunted expression.
"'You are very late,' said she.--Her voice, usually so clear and sweet,
struck him as being slightly husky.
"Monsieur de Merret made no reply, for at this moment Rosalie came in.
This was like a thunder-clap. He walked up and down the room, going from
one window to another at a regular pace, his arms folded.
"'Have you had bad news, or are you ill?' his wife asked him timidly,
while Rosalie helped her to undress. He made no reply.
"'You can go, Rosalie,' said Madame de Merret to her maid; 'I can put in
my curl-papers myself.'--She scented disaster at the mere aspect of her
husband's face, and wished to be alone with him. As soon as Rosalie
was gone, or supposed to be gone, for she lingered a few minutes in the
passage, Monsieur de Merret came and stood facing his wife, and said
coldly, 'Madame, there is some one in your cupboard!' She looked at her
husband calmly, and replied quite simply, 'No, monsieur.'
"This 'No' wrung Monsieur de Merret's heart; he did not believe it; and
yet his wife had never appeared purer or more saintly than she seemed
to be at this moment. He rose to go and open the closet door. Madame de
Merret took his hand, stopped him, looked at him sadly, and said in a
voice of strange emotion, 'Remember, if you should find no one there,
everything must be at an end between you and me.'
"The extraordinary dignity of his wife's attitude filled him with deep
esteem for her, and inspired him with one of those resolves which need
only a grander stage to become immortal.
"'No, Josephine,' he said, 'I will not open it. In either event we
should be parted for ever. Listen; I know all the purity of your soul, I
know you lead a saintly life, and would not commit a deadly sin to save
your life.'--At these words Madame de Merret looked at her husband with
a haggard stare.--'See, here is your crucifix,' he went on. 'Swear to
me before God that there is no one in there; I will believe you--I will
never open that door.'
"Madame de Merret took up the crucifix and said, 'I swear it.'
"'Louder,' said her husband; 'and repeat: "I swear before God that there
is nobody in that closet."' She repeated the words without flinching.
"'That will do,' said Monsieur de Merret coldly. After a moment's
silence: 'You have there a fine piece of work which I never saw before,'
said he, examining the crucifix of ebony and silver, very artistically
wrought.
"'I found it at Duvivier's; last year when that troop of Spanish
prisoners came through Vendome, he bought it of a Spanish monk.'
"'Indeed,' said Monsieur de Merret, hanging the crucifix on its nail;
and he rang the bell.
"He had to wait for Rosalie. Monsieur de Merret went forward quickly
to meet her, led her into the bay of the window that looked on to the
garden, and said to her in an undertone:
"'I know that Gorenflot wants to marry you, that poverty alone prevents
your setting up house, and that you told him you would not be his wife
till he found means to become a master mason.--Well, go and fetch him;
tell him to come here with his trowel and tools. Contrive to wake no one
in his house but himself. His reward will be beyond your wishes. Above
all, go out without saying a word--or else!' and he frowned.
"Rosalie was going, and he called her back. 'Here, take my latch-key,'
said he.
"'Jean!' Monsieur de Merret called in a voice of thunder down the
passage. Jean, who was both coachman and confidential servant, left his
cards and came.
"'Go to bed, all of you,' said his master, beckoning him to come close;
and the gentleman added in a whisper, 'When they are all asleep--mind,
_asleep_--you understand?--come down and tell me.'
"Monsieur de Merret, who had never lost sight of his wife while giving
his orders, quietly came back to her at the fireside, and began to tell
her the details of the game of billiards and the discussion at the club.
When Rosalie returned she found Monsieur and Madame de Merret conversing
amiably.
"Not long before this Monsieur de Merret had had new ceilings made to
all the reception-rooms on the ground floor. Plaster is very scarce at
Vendome; the price is enhanced by the cost of carriage; the gentleman
had therefore had a considerable quantity delivered to him, knowing
that he could always find purchasers for what might be left. It was this
circumstance which suggested the plan he carried out.
"'Gorenflot is here, sir,' said Rosalie in a whisper.
"'Tell him to come in,' said her master aloud.
"Madame de Merret turned paler when she saw the mason.
"'Gorenflot,' said her husband, 'go and fetch some bricks from the
coach-house; bring enough to wall up the door of this cupboard; you can
use the plaster that is left for cement.' Then, dragging Rosalie and the
workman close to him--'Listen, Gorenflot,' said he, in a low voice,
'you are to sleep here to-night; but to-morrow morning you shall have a
passport to take you abroad to a place I will tell you of. I will give
you six thousand francs for your journey. You must live in that town for
ten years; if you find you do not like it, you may settle in another,
but it must be in the same country. Go through Paris and wait there till
I join you. I will there give you an agreement for six thousand francs
more, to be paid to you on your return, provided you have carried out
the conditions of the bargain. For that price you are to keep perfect
silence as to what you have to do this night. To you, Rosalie, I will
secure ten thousand francs, which will not be paid to you till your
wedding day, and on condition of your marrying Gorenflot; but, to get
married, you must hold your tongue. If not, no wedding gift!'
"'Rosalie,' said Madame de Merret, 'come and brush my hair.'
"Her husband quietly walked up and down the room, keeping an eye on the
door, on the mason, and on his wife, but without any insulting display
of suspicion. Gorenflot could not help making some noise. Madame de
Merret seized a moment when he was unloading some bricks, and when her
husband was at the other end of the room to say to Rosalie: 'My dear
child, I will give you a thousand francs a year if only you will tell
Gorenflot to leave a crack at the bottom.' Then she added aloud quite
coolly: 'You had better help him.'
"Monsieur and Madame de Merret were silent all the time while Gorenflot
was walling up the door. This silence was intentional on the husband's
part; he did not wish to give his wife the opportunity of saying
anything with a double meaning. On Madame de Merret's side it was pride
or prudence. When the wall was half built up the cunning mason took
advantage of his master's back being turned to break one of the two
panes in the top of the door with a blow of his pick. By this Madame de
Merret understood that Rosalie had spoken to Gorenflot. They all three
then saw the face of a dark, gloomy-looking man, with black hair and
flaming eyes.
"Before her husband turned round again the poor woman had nodded to the
stranger, to whom the signal was meant to convey, 'Hope.'
"At four o'clock, as the day was dawning, for it was the month of
September, the work was done. The mason was placed in charge of Jean,
and Monsieur de Merret slept in his wife's room.
"Next morning when he got up he said with apparent carelessness, 'Oh,
by the way, I must go to the Maire for the passport.' He put on his hat,
took two or three steps towards the door, paused, and took the crucifix.
His wife was trembling with joy.
"'He will go to Duvivier's,' thought she.
"As soon as he had left, Madame de Merret rang for Rosalie, and then in
a terrible voice she cried: 'The pick! Bring the pick! and set to work.
I saw how Gorenflot did it yesterday; we shall have time to make a gap
and build it up again.'
"In an instant Rosalie had brought her mistress a sort of cleaver; she,
with a vehemence of which no words can give an idea, set to work to
demolish the wall. She had already got out a few bricks, when, turning
to deal a stronger blow than before, she saw behind her Monsieur de
Merret. She fainted away.
"'Lay madame on her bed,' said he coldly.
"Foreseeing what would certainly happen in his absence, he had laid
this trap for his wife; he had merely written to the Maire and sent for
Duvivier. The jeweler arrived just as the disorder in the room had been
repaired.
"'Duvivier,' asked Monsieur de Merret, 'did not you buy some crucifixes
of the Spaniards who passed through the town?'
"'No, monsieur.'
"'Very good; thank you,' said he, flashing a tiger's glare at his wife.
'Jean,' he added, turning to his confidential valet, 'you can serve my
meals here in Madame de Merret's room. She is ill, and I shall not leave
her till she recovers.'
"The cruel man remained in his wife's room for twenty days. During
the earlier time, when there was some little noise in the closet,
and Josephine wanted to intercede for the dying man, he said, without
allowing her to utter a word, 'You swore on the Cross that there was no
one there.'"
After this story all the ladies rose from table, and thus the spell
under which Bianchon had held them was broken. But there were some among
them who had almost shivered at the last words.
ADDENDUM
The following personage appears in other stories of the Human Comedy.
Bianchon, Horace
Father Goriot
The Atheist's Mass
Cesar Birotteau
The Commission in Lunacy
Lost Illusions
A Distinguished Provincial at Paris
A Bachelor's Establishment
The Secrets of a Princess
The Government Clerks
Pierrette
A Study of Woman
Scenes from a Courtesan's Life
Honorine
The Seamy Side of History
The Magic Skin
A Second Home
A Prince of Bohemia
Letters of Two Brides
The Muse of the Department
The Imaginary Mistress
The Middle Classes
Cousin Betty
The Country Parson
In addition, M. Bianchon narrated the following:
Another Study of Woman
End of the Project Gutenberg EBook of La Grande Breteche, by Honore de Balzac
Produced by John Bickers and Dagny
PIERRE GRASSOU
By Honore De Balzac
Translated by Katharine Prescott Wormeley
Dedication
To The Lieutenant-Colonel of Artillery, Periollas, As a Testimony of the
Affectionate Esteem of the Author,
De Balzac
PIERRE GRASSOU
Whenever you have gone to take a serious look at the exhibition of works
of sculpture and painting, such as it has been since the revolution
of 1830, have you not been seized by a sense of uneasiness, weariness,
sadness, at the sight of those long and over-crowded galleries? Since
1830, the true Salon no longer exists. The Louvre has again been taken
by assault,--this time by a populace of artists who have maintained
themselves in it.
In other days, when the Salon presented only the choicest works of art,
it conferred the highest honor on the creations there exhibited. Among
the two hundred selected paintings, the public could still choose: a
crown was awarded to the masterpiece by hands unseen. Eager, impassioned
discussions arose about some picture. The abuse showered on Delacroix,
on Ingres, contributed no less to their fame than the praises and
fanaticism of their adherents. To-day, neither the crowd nor the
criticism grows impassioned about the products of that bazaar. Forced to
make the selection for itself, which in former days the examining
jury made for it, the attention of the public is soon wearied and the
exhibition closes. Before the year 1817 the pictures admitted never went
beyond the first two columns of the long gallery of the old masters; but
in that year, to the great astonishment of the public, they filled the
whole space. Historical, high-art, genre paintings, easel pictures,
landscapes, flowers, animals, and water-colors,--these eight specialties
could surely not offer more than twenty pictures in one year worthy of
the eyes of the public, which, indeed, cannot give its attention to a
greater number of such works. The more the number of artists increases,
the more careful and exacting the jury of admission ought to be.
The true character of the Salon was lost as soon as it spread along
the galleries. The Salon should have remained within fixed limits of
inflexible proportions, where each distinct specialty could show its
masterpieces only. An experience of ten years has shown the excellence
of the former institution. Now, instead of a tournament, we have a mob;
instead of a noble exhibition, we have a tumultuous bazaar; instead of
a choice selection we have a chaotic mass. What is the result? A great
artist is swamped. Decamps' "Turkish Cafe," "Children at a Fountain,"
"Joseph," and "The Torture," would have redounded far more to his credit
if the four pictures had been exhibited in the great Salon with the
hundred good pictures of that year, than his twenty pictures could,
among three thousand others, jumbled together in six galleries.
By some strange contradiction, ever since the doors were opened to every
one there has been much talk of unknown and unrecognized genius. When,
twelve years earlier, Ingres' "Courtesan," and that of Sigalon, the
"Medusa" of Gericault, the "Massacre of Scio" by Delacroix, the "Baptism
of Henri IV." by Eugene Deveria, admitted by celebrated artists accused
of jealousy, showed the world, in spite of the denials of criticism,
that young and vigorous palettes existed, no such complaint was made.
Now, when the veriest dauber of canvas can send in his work, the whole
talk is of genius neglected! Where judgment no longer exists, there is
no longer anything judged. But whatever artists may be doing now, they
will come back in time to the examination and selection which presents
their works to the admiration of the crowd for whom they work. Without
selection by the Academy there will be no Salon, and without the Salon
art may perish.
Ever since the catalogue has grown into a book, many names have appeared
in it which still remain in their native obscurity, in spite of the ten
or a dozen pictures attached to them. Among these names perhaps the most
unknown to fame is that of an artist named Pierre Grassou, coming from
Fougeres, and called simply "Fougeres" among his brother-artists, who,
at the present moment holds a place, as the saying is, "in the sun," and
who suggested the rather bitter reflections by which this sketch of
his life is introduced,--reflections that are applicable to many other
individuals of the tribe of artists.
In 1832, Fougeres lived in the rue de Navarin, on the fourth floor of
one of those tall, narrow houses which resemble the obelisk of Luxor,
and possess an alley, a dark little stairway with dangerous turnings,
three windows only on each floor, and, within the building, a courtyard,
or, to speak more correctly, a square pit or well. Above the three or
four rooms occupied by Grassou of Fougeres was his studio, looking over
to Montmartre. This studio was painted in brick-color, for a background;
the floor was tinted brown and well frotted; each chair was furnished
with a bit of carpet bound round the edges; the sofa, simple enough, was
clean as that in the bedroom of some worthy bourgeoise. All these things
denoted the tidy ways of a small mind and the thrift of a poor man. A
bureau was there, in which to put away the studio implements, a table
for breakfast, a sideboard, a secretary; in short, all the articles
necessary to a painter, neatly arranged and very clean. The stove
participated in this Dutch cleanliness, which was all the more visible
because the pure and little changing light from the north flooded with
its cold clear beams the vast apartment. Fougeres, being merely a genre
painter, does not need the immense machinery and outfit which ruin
historical painters; he has never recognized within himself sufficient
faculty to attempt high-art, and he therefore clings to easel painting.
At the beginning of the month of December of that year, a season at
which the bourgeois of Paris conceive, periodically, the burlesque idea
of perpetuating their forms and figures already too bulky in themselves,
Pierre Grassou, who had risen early, prepared his palette, and lighted
his stove, was eating a roll steeped in milk, and waiting till the frost
on his windows had melted sufficiently to let the full light in. The
weather was fine and dry. At this moment the artist, who ate his bread
with that patient, resigned air that tells so much, heard and recognized
the step of a man who had upon his life the influence such men have
on the lives of nearly all artists,--the step of Elie Magus, a
picture-dealer, a usurer in canvas. The next moment Elie Magus entered
and found the painter in the act of beginning his work in the tidy
studio.
"How are you, old rascal?" said the painter.
Fougeres had the cross of the Legion of honor, and Elie Magus bought his
pictures at two and three hundred francs apiece, so he gave himself the
airs of a fine artist.
"Business is very bad," replied Elie. "You artists have such
pretensions! You talk of two hundred francs when you haven't put six
sous' worth of color on a canvas. However, you are a good fellow, I'll
say that. You are steady; and I've come to put a good bit of business in
your way."
"Timeo Danaos et dona ferentes," said Fougeres. "Do you know Latin?"
"No."
"Well, it means that the Greeks never proposed a good bit of business
to the Trojans without getting their fair share of it. In the olden time
they used to say, 'Take my horse.' Now we say, 'Take my bear.' Well,
what do you want, Ulysses-Lagingeole-Elie Magus?"
These words will give an idea of the mildness and wit with which
Fougeres employed what painters call studio fun.
"Well, I don't deny that you are to paint me two pictures for nothing."
"Oh! oh!"
"I'll leave you to do it, or not; I don't ask it. But you're an honest
man."
"Come, out with it!"
"Well, I'm prepared to bring you a father, mother, and only daughter."
"All for me?"
"Yes--they want their portraits taken. These bourgeois--they are crazy
about art--have never dared to enter a studio. The girl has a 'dot' of a
hundred thousand francs. You can paint all three,--perhaps they'll turn
out family portraits."
And with that the old Dutch log of wood who passed for a man and who was
called Elie Magus, interrupted himself to laugh an uncanny laugh which
frightened the painter. He fancied he heard Mephistopheles talking
marriage.
"Portraits bring five hundred francs apiece," went on Elie; "so you can
very well afford to paint me three pictures."
"True for you!" cried Fougeres, gleefully.
"And if you marry the girl, you won't forget me."
"Marry! I?" cried Pierre Grassou,--"I, who have a habit of sleeping
alone; and get up at cock-crow, and all my life arranged--"
"One hundred thousand francs," said Magus, "and a quiet girl, full of
golden tones, as you call 'em, like a Titian."
"What class of people are they?"
"Retired merchants; just now in love with art; have a country-house at
Ville d'Avray, and ten or twelve thousand francs a year."
"What business did they do?"
"Bottles."
"Now don't say that word; it makes me think of corks and sets my teeth
on edge."
"Am I to bring them?"
"Three portraits--I could put them in the Salon; I might go in for
portrait-painting. Well, yes!"
Old Elie descended the staircase to go in search of the Vervelle family.
To know to what extent this proposition would act upon the painter, and
what effect would be produced upon him by the Sieur and Dame Vervelle,
adorned by their only daughter, it is necessary to cast an eye on the
anterior life of Pierre Grassou of Fougeres.
When a pupil, Fougeres had studied drawing with Servin, who was
thought a great draughtsman in academic circles. After that he went to
Schinner's, to learn the secrets of the powerful and magnificent color
which distinguishes that master. Master and scholars were all discreet;
at any rate Pierre discovered none of their secrets. From there he went
to Sommervieux' atelier, to acquire that portion of the art of painting
which is called composition, but composition was shy and distant to him.
Then he tried to snatch from Decamps and Granet the mystery of their
interior effects. The two masters were not robbed. Finally Fougeres
ended his education with Duval-Lecamus. During these studies and
these different transformations Fougeres' habits and ways of life were
tranquil and moral to a degree that furnished matter of jesting to the
various ateliers where he sojourned; but everywhere he disarmed his
comrades by his modesty and by the patience and gentleness of a lamblike
nature. The masters, however, had no sympathy for the good lad; masters
prefer bright fellows, eccentric spirits, droll or fiery, or else gloomy
and deeply reflective, which argue future talent. Everything about
Pierre Grassou smacked of mediocrity. His nickname "Fougeres" (that
of the painter in the play of "The Eglantine") was the source of much
teasing; but, by force of circumstances, he accepted the name of the
town in which he had first seen light.
Grassou of Fougeres resembled his name. Plump and of medium height, he
had a dull complexion, brown eyes, black hair, a turned-up nose, rather
wide mouth, and long ears. His gentle, passive, and resigned air gave a
certain relief to these leading features of a physiognomy that was full
of health, but wanting in action. This young man, born to be a virtuous
bourgeois, having left his native place and come to Paris to be clerk
with a color-merchant (formerly of Mayenne and a distant connection of
the Orgemonts) made himself a painter simply by the fact of an obstinacy
which constitutes the Breton character. What he suffered, the manner in
which he lived during those years of study, God only knows. He suffered
as much as great men suffer when they are hounded by poverty and hunted
like wild beasts by the pack of commonplace minds and by troops of
vanities athirst for vengeance.
As soon as he thought himself able to fly on his own wings, Fougeres
took a studio in the upper part of the rue des Martyrs, where he began
to delve his way. He made his first appearance in 1819. The first
picture he presented to the jury of the Exhibition at the Louvre
represented a village wedding rather laboriously copied from Greuze's
picture. It was rejected. When Fougeres heard of the fatal decision,
he did not fall into one of those fits of epileptic self-love to which
strong natures give themselves up, and which sometimes end in challenges
sent to the director or the secretary of the Museum, or even by threats
of assassination. Fougeres quietly fetched his canvas, wrapped it in
a handkerchief, and brought it home, vowing in his heart that he would
still make himself a great painter. He placed his picture on the easel,
and went to one of his former masters, a man of immense talent,--to
Schinner, a kind and patient artist, whose triumph at that year's Salon
was complete. Fougeres asked him to come and criticise the rejected
work. The great painter left everything and went at once. When poor
Fougeres had placed the work before him Schinner, after a glance,
pressed Fougeres' hand.
"You are a fine fellow," he said; "you've a heart of gold, and I must
not deceive you. Listen; you are fulfilling all the promises you made in
the studios. When you find such things as that at the tip of your brush,
my good Fougeres, you had better leave colors with Brullon, and not take
the canvas of others. Go home early, put on your cotton night-cap, and
be in bed by nine o'clock. The next morning early go to some government
office, ask for a place, and give up art."
"My dear friend," said Fougeres, "my picture is already condemned; it is
not a verdict that I want of you, but the cause of that verdict."
"Well--you paint gray and sombre; you see nature being a crape veil;
your drawing is heavy, pasty; your composition is a medley of Greuze,
who only redeemed his defects by the qualities which you lack."
While detailing these faults of the picture Schinner saw on Fougeres'
face so deep an expression of sadness that he carried him off to dinner
and tried to console him. The next morning at seven o'clock Fougeres was
at his easel working over the rejected picture; he warmed the colors; he
made the corrections suggested by Schinner, he touched up his figures.
Then, disgusted with such patching, he carried the picture to Elie
Magus. Elie Magus, a sort of Dutch-Flemish-Belgian, had three reasons
for being what he became,--rich and avaricious. Coming last from
Bordeaux, he was just starting in Paris, selling old pictures and living
on the boulevard Bonne-Nouvelle. Fougeres, who relied on his palette
to go to the baker's, bravely ate bread and nuts, or bread and milk, or
bread and cherries, or bread and cheese, according to the seasons. Elie
Magus, to whom Pierre offered his first picture, eyed it for some time
and then gave him fifteen francs.
"With fifteen francs a year coming in, and a thousand francs for
expenses," said Fougeres, smiling, "a man will go fast and far."
Elie Magus made a gesture; he bit his thumbs, thinking that he might
have had that picture for five francs.
For several days Pierre walked down from the rue des Martyrs and
stationed himself at the corner of the boulevard opposite to Elie's
shop, whence his eye could rest upon his picture, which did not obtain
any notice from the eyes of the passers along the street. At the end of
a week the picture disappeared; Fougeres walked slowly up and approached
the dealer's shop in a lounging manner. The Jew was at his door.
"Well, I see you have sold my picture."
"No, here it is," said Magus; "I've framed it, to show it to some one
who fancies he knows about painting."
Fougeres had not the heart to return to the boulevard. He set about
another picture, and spent two months upon it,--eating mouse's meals and
working like a galley-slave.
One evening he went to the boulevard, his feet leading him fatefully to
the dealer's shop. His picture was not to be seen.
"I've sold your picture," said Elie Magus, seeing him.
"For how much?"
"I got back what I gave and a small interest. Make me some Flemish
interiors, a lesson of anatomy, landscapes, and such like, and I'll buy
them of you," said Elie.
Fougeres would fain have taken old Magus in his arms; he regarded him as
a father. He went home with joy in his heart; the great painter Schinner
was mistaken after all! In that immense city of Paris there were some
hearts that beat in unison with Pierre's; his talent was understood and
appreciated. The poor fellow of twenty-seven had the innocence of a lad
of sixteen. Another man, one of those distrustful, surly artists, would
have noticed the diabolical look on Elie's face and seen the twitching
of the hairs of his beard, the irony of his moustache, and the movement
of his shoulders which betrayed the satisfaction of Walter Scott's Jew
in swindling a Christian.
Fougeres marched along the boulevard in a state of joy which gave to his
honest face an expression of pride. He was like a schoolboy protecting
a woman. He met Joseph Bridau, one of his comrades, and one of those
eccentric geniuses destined to fame and sorrow. Joseph Bridau, who had,
to use his own expression, a few sous in his pocket, took Fougeres to
the Opera. But Fougeres didn't see the ballet, didn't hear the music; he
was imagining pictures, he was painting. He left Joseph in the middle
of the evening, and ran home to make sketches by lamp-light. He invented
thirty pictures, all reminiscence, and felt himself a man of genius. The
next day he bought colors, and canvases of various dimensions; he piled
up bread and cheese on his table, he filled a water-pot with water,
he laid in a provision of wood for his stove; then, to use a studio
expression, he dug at his pictures. He hired several models and Magus
lent him stuffs.
After two months' seclusion the Breton had finished four pictures. Again
he asked counsel of Schinner, this time adding Bridau to the invitation.
The two painters saw in three of these pictures a servile imitation
of Dutch landscapes and interiors by Metzu, in the fourth a copy of
Rembrandt's "Lesson of Anatomy."
"Still imitating!" said Schinner. "Ah! Fougeres can't manage to be
original."
"You ought to do something else than painting," said Bridau.
"What?" asked Fougeres.
"Fling yourself into literature."
Fougeres lowered his head like a sheep when it rains. Then he asked and
obtained certain useful advice, and retouched his pictures before taking
them to Elie Magus. Elie paid him twenty-five francs apiece. At that
price of course Fougeres earned nothing; neither did he lose, thanks to
his sober living. He made a few excursions to the boulevard to see what
became of his pictures, and there he underwent a singular hallucination.
His neat, clean paintings, hard as tin and shiny as porcelain, were
covered with a sort of mist; they looked like old daubs. Magus was out,
and Pierre could obtain no information on this phenomenon. He fancied
something was wrong with his eyes.
The painter went back to his studio and made more pictures. After seven
years of continued toil Fougeres managed to compose and execute quite
passable work. He did as well as any artist of the second class.
Elie bought and sold all the paintings of the poor Breton, who earned
laboriously about two thousand francs a year while he spent but twelve
hundred.
At the Exhibition of 1829, Leon de Lora, Schinner, and Bridau, who all
three occupied a great position and were, in fact, at the head of the
art movement, were filled with pity for the perseverance and the poverty
of their old friend; and they caused to be admitted into the grand salon
of the Exhibition, a picture by Fougeres. This picture, powerful in
interest but derived from Vigneron as to sentiment and from Dubufe's
first manner as to execution, represented a young man in prison, whose
hair was being cut around the nape of the neck. On one side was
a priest, on the other two women, one old, one young, in tears. A
sheriff's clerk was reading aloud a document. On a wretched table was a
meal, untouched. The light came in through the bars of a window near
the ceiling. It was a picture fit to make the bourgeois shudder, and
the bourgeois shuddered. Fougeres had simply been inspired by the
masterpiece of Gerard Douw; he had turned the group of the "Dropsical
Woman" toward the window, instead of presenting it full front. The
condemned man was substituted for the dying woman--same pallor, same
glance, same appeal to God. Instead of the Dutch doctor, he had painted
the cold, official figure of the sheriff's clerk attired in black; but
he had added an old woman to the young one of Gerard Douw. The cruelly
simple and good-humored face of the executioner completed and dominated
the group. This plagiarism, very cleverly disguised, was not discovered.
The catalogue contained the following:--
510. Grassou de Fougeres (Pierre), rue de Navarin, 2.
Death-toilet of a Chouan, condemned to execution in 1809.
Though wholly second-rate, the picture had immense success, for it
recalled the affair of the "chauffeurs," of Mortagne. A crowd collected
every day before the now fashionable canvas; even Charles X. paused to
look at it. "Madame," being told of the patient life of the poor Breton,
became enthusiastic over him. The Duc d'Orleans asked the price of
the picture. The clergy told Madame la Dauphine that the subject was
suggestive of good thoughts; and there was, in truth, a most satisfying
religious tone about it. Monseigneur the Dauphin admired the dust on
the stone-floor,--a huge blunder, by the way, for Fougeres had painted
greenish tones suggestive of mildew along the base of the walls.
"Madame" finally bought the picture for a thousand francs, and the
Dauphin ordered another like it. Charles X. gave the cross of the Legion
of honor to this son of a peasant who had fought for the royal cause
in 1799. (Joseph Bridau, the great painter, was not yet decorated.) The
minister of the Interior ordered two church pictures of Fougeres.
This Salon of 1829 was to Pierre Grassou his whole fortune, fame,
future, and life. Be original, invent, and you die by inches; copy,
imitate, and you'll live. After this discovery of a gold mine, Grassou
de Fougeres obtained the benefit of the fatal principle to which society
owes the wretched mediocrities to whom are intrusted in these days the
election of leaders in all social classes; who proceed, naturally, to
elect themselves and who wage a bitter war against all true talent. The
principle of election applied indiscriminately is false, and France will
some day abandon it.
Nevertheless the modesty, simplicity, and genuine surprise of the good
and gentle Fougeres silenced all envy and all recriminations. Besides,
he had on his side all of his clan who had succeeded, and all who
expected to succeed. Some persons, touched by the persistent energy of a
man whom nothing had discouraged, talked of Domenichino and said:--
"Perseverance in the arts should be rewarded. Grassou hasn't stolen his
successes; he has delved for ten years, the poor dear man!"
That exclamation of "poor dear man!" counted for half in the support
and the congratulations which the painter received. Pity sets up
mediocrities as envy pulls down great talents, and in equal numbers.
The newspapers, it is true, did not spare criticism, but the chevalier
Fougeres digested them as he had digested the counsel of his friends,
with angelic patience.
Possessing, by this time, fifteen thousand francs, laboriously earned,
he furnished an apartment and studio in the rue de Navarin, and painted
the picture ordered by Monseigneur the Dauphin, also the two church
pictures, and delivered them at the time agreed on, with a punctuality
that was very discomforting to the exchequer of the ministry, accustomed
to a different course of action. But--admire the good fortune of men who
are methodical--if Grassou, belated with his work, had been caught by
the revolution of July he would not have got his money.
By the time he was thirty-seven Fougeres had manufactured for Elie Magus
some two hundred pictures, all of them utterly unknown, by the help of
which he had attained to that satisfying manner, that point of execution
before which the true artist shrugs his shoulders and the bourgeoisie
worships. Fougeres was dear to friends for rectitude of ideas, for
steadiness of sentiment, absolute kindliness, and great loyalty; though
they had no esteem for his palette, they loved the man who held it.
"What a misfortune it is that Fougeres has the vice of painting!" said
his comrades.
But for all this, Grassou gave excellent counsel, like those
feuilletonists incapable of writing a book who know very well where a
book is wanting. There was this difference, however, between literary
critics and Fougeres; he was eminently sensitive to beauties; he felt
them, he acknowledged them, and his advice was instinct with a spirit
of justice that made the justness of his remarks acceptable. After
the revolution of July, Fougeres sent about ten pictures a year to the
Salon, of which the jury admitted four or five. He lived with the most
rigid economy, his household being managed solely by an old charwoman.
For all amusement he visited his friends, he went to see works of art,
he allowed himself a few little trips about France, and he planned to go
to Switzerland in search of inspiration. This detestable artist was an
excellent citizen; he mounted guard duly, went to reviews, and paid his
rent and provision-bills with bourgeois punctuality.
Having lived all his life in toil and poverty, he had never had the time
to love. Poor and a bachelor, until now he did not desire to complicate
his simple life. Incapable of devising any means of increasing his
little fortune, he carried, every three months, to his notary, Cardot,
his quarterly earnings and economies. When the notary had received
about three thousand francs he invested them in some first mortgage, the
interest of which he drew himself and added to the quarterly payments
made to him by Fougeres. The painter was awaiting the fortunate moment
when his property thus laid by would give him the imposing income of two
thousand francs, to allow himself the otium cum dignitate of the
artist and paint pictures; but oh! what pictures! true pictures! each a
finished picture! chouette, Koxnoff, chocnosoff! His future, his dreams
of happiness, the superlative of his hopes--do you know what it was?
To enter the Institute and obtain the grade of officer of the Legion
of honor; to sit down beside Schinner and Leon de Lora, to reach the
Academy before Bridau, to wear a rosette in his buttonhole! What a
dream! It is only commonplace men who think of everything.
Hearing the sound of several steps on the staircase, Fougeres rubbed up
his hair, buttoned his jacket of bottle-green velveteen, and was not a
little amazed to see, entering his doorway, a simpleton face vulgarly
called in studio slang a "melon." This fruit surmounted a pumpkin,
clothed in blue cloth adorned with a bunch of tintinnabulating baubles.
The melon puffed like a walrus; the pumpkin advanced on turnips,
improperly called legs. A true painter would have turned the little
bottle-vendor off at once, assuring him that he didn't paint vegetables.
This painter looked at his client without a smile, for Monsieur Vervelle
wore a three-thousand-franc diamond in the bosom of his shirt.
Fougeres glanced at Magus and said: "There's fat in it!" using a slang
term then much in vogue in the studios.
Hearing those words Monsieur Vervelle frowned. The worthy bourgeois drew
after him another complication of vegetables in the persons of his wife
and daughter. The wife had a fine veneer of mahogany on her face, and
in figure she resembled a cocoa-nut, surmounted by a head and tied in
around the waist. She pivoted on her legs, which were tap-rooted,
and her gown was yellow with black stripes. She proudly exhibited
unutterable mittens on a puffy pair of hands; the plumes of a
first-class funeral floated on an over-flowing bonnet; laces adorned
her shoulders, as round behind as they were before; consequently, the
spherical form of the cocoa-nut was perfect. Her feet, of a kind that
painters call abatis, rose above the varnished leather of the shoes in a
swelling that was some inches high. How the feet were ever got into the
shoes, no one knows.
Following these vegetable parents was a young asparagus, who presented
a tiny head with smoothly banded hair of the yellow-carroty tone that a
Roman adores, long, stringy arms, a fairly white skin with reddish spots
upon it, large innocent eyes, and white lashes, scarcely any brows, a
leghorn bonnet bound with white satin and adorned with two honest bows
of the same satin, hands virtuously red, and the feet of her mother. The
faces of these three beings wore, as they looked round the studio, an
air of happiness which bespoke in them a respectable enthusiasm for Art.
"So it is you, monsieur, who are going to take our likenesses?" said the
father, assuming a jaunty air.
"Yes, monsieur," replied Grassou.
"Vervelle, he has the cross!" whispered the wife to the husband while
the painter's back was turned.
"Should I be likely to have our portraits painted by an artist who
wasn't decorated?" returned the former bottle-dealer.
Elie Magus here bowed to the Vervelle family and went away. Grassou
accompanied him to the landing.
"There's no one but you who would fish up such whales."
"One hundred thousand francs of 'dot'!"
"Yes, but what a family!"
"Three hundred thousand francs of expectations, a house in the rue
Boucherat, and a country-house at Ville d'Avray!"
"Bottles and corks! bottles and corks!" said the painter; "they set my
teeth on edge."
"Safe from want for the rest of your days," said Elie Magus as he
departed.
That idea entered the head of Pierre Grassou as the daylight had burst
into his garret that morning.
While he posed the father of the young person, he thought the
bottle-dealer had a good countenance, and he admired the face full
of violent tones. The mother and daughter hovered about the easel,
marvelling at all his preparations; they evidently thought him a
demigod. This visible admiration pleased Fougeres. The golden calf threw
upon the family its fantastic reflections.
"You must earn lots of money; but of course you don't spend it as you
get it," said the mother.
"No, madame," replied the painter; "I don't spend it; I have not the
means to amuse myself. My notary invests my money; he knows what I have;
as soon as I have taken him the money I never think of it again."
"I've always been told," cried old Vervelle, "that artists were baskets
with holes in them."
"Who is your notary--if it is not indiscreet to ask?" said Madame
Vervelle.
"A good fellow, all round," replied Grassou. "His name is Cardot."
"Well, well! if that isn't a joke!" exclaimed Vervelle. "Cardot is our
notary too."
"Take care! don't move," said the painter.
"Do pray hold still, Antenor," said the wife. "If you move about you'll
make monsieur miss; you should just see him working, and then you'd
understand."
"Oh! why didn't you have me taught the arts?" said Mademoiselle Vervelle
to her parents.
"Virginie," said her mother, "a young person ought not to learn certain
things. When you are married--well, till then, keep quiet."
During this first sitting the Vervelle family became almost intimate
with the worthy artist. They were to come again two days later. As they
went away the father told Virginie to walk in front; but in spite of
this separation, she overheard the following words, which naturally
awakened her curiosity.
"Decorated--thirty-seven years old--an artist who gets orders--puts his
money with our notary. We'll consult Cardot. Hein! Madame de Fougeres!
not a bad name--doesn't look like a bad man either! One might prefer a
merchant; but before a merchant retires from business one can never know
what one's daughter may come to; whereas an economical artist--and then
you know we love Art--Well, we'll see!"
While the Vervelle family discussed Pierre Grassou, Pierre Grassou
discussed in his own mind the Vervelle family. He found it impossible to
stay peacefully in his studio, so he took a walk on the boulevard, and
looked at all the red-haired women who passed him. He made a series of
the oddest reasonings to himself: gold was the handsomest of metals; a
tawny yellow represented gold; the Romans were fond of red-haired women,
and he turned Roman, etc. After two years of marriage what man would
ever care about the color of his wife's hair? Beauty fades,--but
ugliness remains! Money is one-half of all happiness. That night when he
went to bed the painter had come to think Virginie Vervelle charming.
When the three Vervelles arrived on the day of the second sitting the
artist received them with smiles. The rascal had shaved and put on clean
linen; he had also arranged his hair in a pleasing manner, and chosen
a very becoming pair of trousers and red leather slippers with pointed
toes. The family replied with smiles as flattering as those of the
artist. Virginie became the color of her hair, lowered her eyes, and
turned aside her head to look at the sketches. Pierre Grassou thought
these little affectations charming, Virginie had such grace; happily she
didn't look like her father or her mother; but whom did she look like?
During this sitting there were little skirmishes between the family
and the painter, who had the audacity to call pere Vervelle witty. This
flattery brought the family on the double-quick to the heart of the
artist; he gave a drawing to the daughter, and a sketch to the mother.
"What! for nothing?" they said.
Pierre Grassou could not help smiling.
"You shouldn't give away your pictures in that way; they are money,"
said old Vervelle.
At the third sitting pere Vervelle mentioned a fine gallery of pictures
which he had in his country-house at Ville d'Avray--Rubens, Gerard Douw,
Mieris, Terburg, Rembrandt, Titian, Paul Potter, etc.
"Monsieur Vervelle has been very extravagant," said Madame Vervelle,
ostentatiously. "He has over one hundred thousand francs' worth of
pictures."
"I love Art," said the former bottle-dealer.
When Madame Vervelle's portrait was begun that of her husband was nearly
finished, and the enthusiasm of the family knew no bounds. The notary
had spoken in the highest praise of the painter. Pierre Grassou was, he
said, one of the most honest fellows on earth; he had laid by thirty-six
thousand francs; his days of poverty were over; he now saved about ten
thousand francs a year and capitalized the interest; in short, he was
incapable of making a woman unhappy. This last remark had enormous
weight in the scales. Vervelle's friends now heard of nothing but the
celebrated painter Fougeres.
The day on which Fougeres began the portrait of Mademoiselle Virginie,
he was virtually son-in-law to the Vervelle family. The three Vervelles
bloomed out in this studio, which they were now accustomed to consider
as one of their residences; there was to them an inexplicable attraction
in this clean, neat, pretty, and artistic abode. Abyssus abyssum, the
commonplace attracts the commonplace. Toward the end of the sitting the
stairway shook, the door was violently thrust open by Joseph Bridau; he
came like a whirlwind, his hair flying. He showed his grand haggard face
as he looked about him, casting everywhere the lightning of his glance;
then he walked round the whole studio, and returned abruptly to Grassou,
pulling his coat together over the gastric region, and endeavouring, but
in vain, to button it, the button mould having escaped from its capsule
of cloth.
"Wood is dear," he said to Grassou.
"Ah!"
"The British are after me" (slang term for creditors) "Gracious! do you
paint such things as that?"
"Hold your tongue!"
"Ah! to be sure, yes."
The Vervelle family, extremely shocked by this extraordinary apparition,
passed from its ordinary red to a cherry-red, two shades deeper.
"Brings in, hey?" continued Joseph. "Any shot in your locker?"
"How much do you want?"
"Five hundred. I've got one of those bull-dog dealers after me, and if
the fellow once gets his teeth in he won't let go while there's a bit of
me left. What a crew!"
"I'll write you a line for my notary."
"Have you got a notary?"
"Yes."
"That explains to me why you still make cheeks with pink tones like a
perfumer's sign."
Grassou could not help coloring, for Virginie was sitting.
"Take Nature as you find her," said the great painter, going on with his
lecture. "Mademoiselle is red-haired. Well, is that a sin? All things
are magnificent in painting. Put some vermillion on your palette, and
warm up those cheeks; touch in those little brown spots; come, butter it
well in. Do you pretend to have more sense than Nature?"
"Look here," said Fougeres, "take my place while I go and write that
note."
Vervelle rolled to the table and whispered in Grassou's ear:--
"Won't that country lout spoilt it?"
"If he would only paint the portrait of your Virginie it would be worth
a thousand times more than mine," replied Fougeres, vehemently.
Hearing that reply the bourgeois beat a quiet retreat to his wife, who
was stupefied by the invasion of this ferocious animal, and very uneasy
at his co-operation in her daughter's portrait.
"Here, follow these indications," said Bridau, returning the palette,
and taking the note. "I won't thank you. I can go back now to d'Arthez'
chateau, where I am doing a dining-room, and Leon de Lora the tops of
the doors--masterpieces! Come and see us."
And off he went without taking leave, having had enough of looking at
Virginie.
"Who is that man?" asked Madame Vervelle.
"A great artist," answered Grassou.
There was silence for a moment.
"Are you quite sure," said Virginie, "that he has done no harm to my
portrait? He frightened me."
"He has only done it good," replied Grassou.
"Well, if he is a great artist, I prefer a great artist like you," said
Madame Vervelle.
The ways of genius had ruffled up these orderly bourgeois.
The phase of autumn so pleasantly named "Saint Martin's summer" was
just beginning. With the timidity of a neophyte in presence of a man of
genius, Vervelle risked giving Fougeres an invitation to come out to
his country-house on the following Sunday. He knew, he said, how little
attraction a plain bourgeois family could offer to an artist.
"You artists," he continued, "want emotions, great scenes, and witty
talk; but you'll find good wines, and I rely on my collection of
pictures to compensate an artist like you for the bore of dining with
mere merchants."
This form of idolatry, which stroked his innocent self-love, was
charming to our poor Pierre Grassou, so little accustomed to such
compliments. The honest artist, that atrocious mediocrity, that heart
of gold, that loyal soul, that stupid draughtsman, that worthy fellow,
decorated by royalty itself with the Legion of honor, put himself under
arms to go out to Ville d'Avray and enjoy the last fine days of the
year. The painter went modestly by public conveyance, and he could not
but admire the beautiful villa of the bottle-dealer, standing in a park
of five acres at the summit of Ville d'Avray, commanding a noble view
of the landscape. Marry Virginie, and have that beautiful villa some day
for his own!
He was received by the Vervelles with an enthusiasm, a joy, a
kindliness, a frank bourgeois absurdity which confounded him. It was
indeed a day of triumph. The prospective son-in-law was marched about
the grounds on the nankeen-colored paths, all raked as they should be
for the steps of so great a man. The trees themselves looked brushed and
combed, and the lawns had just been mown. The pure country air wafted
to the nostrils a most enticing smell of cooking. All things about the
mansion seemed to say:
"We have a great artist among us."
Little old Vervelle himself rolled like an apple through his park, the
daughter meandered like an eel, the mother followed with dignified step.
These three beings never let go for one moment of Pierre Grassou
for seven hours. After dinner, the length of which equalled its
magnificence, Monsieur and Madame Vervelle reached the moment of their
grand theatrical effect,--the opening of the picture gallery illuminated
by lamps, the reflections of which were managed with the utmost care.
Three neighbours, also retired merchants, an old uncle (from whom were
expectations), an elderly Demoiselle Vervelle, and a number of other
guests invited to be present at this ovation to a great artist followed
Grassou into the picture gallery, all curious to hear his opinion of the
famous collection of pere Vervelle, who was fond of oppressing them with
the fabulous value of his paintings. The bottle-merchant seemed to have
the idea of competing with King Louis-Philippe and the galleries of
Versailles.
The pictures, magnificently framed, each bore labels on which was read
in black letters on a gold ground:
Rubens
Dance of fauns and nymphs
Rembrandt
Interior of a dissecting room. The physician van Tromp
instructing his pupils.
In all, there were one hundred and fifty pictures, varnished and dusted.
Some were covered with green baize curtains which were not undrawn in
presence of young ladies.
Pierre Grassou stood with arms pendent, gaping mouth, and no word upon
his lips as he recognized half his own pictures in these works of art.
He was Rubens, he was Rembrandt, Mieris, Metzu, Paul Potter, Gerard
Douw! He was twenty great masters all by himself.
"What is the matter? You've turned pale!"
"Daughter, a glass of water! quick!" cried Madame Vervelle. The painter
took pere Vervelle by the button of his coat and led him to a corner on
pretence of looking at a Murillo. Spanish pictures were then the rage.
"You bought your pictures from Elie Magus?"
"Yes, all originals."
"Between ourselves, tell me what he made you pay for those I shall point
out to you."
Together they walked round the gallery. The guests were amazed at the
gravity with which the artist proceeded, in company with the host, to
examine each picture.
"Three thousand francs," said Vervelle in a whisper, as they reached the
last, "but I tell everybody forty thousand."
"Forty thousand for a Titian!" said the artist, aloud. "Why, it is
nothing at all!"
"Didn't I tell you," said Vervelle, "that I had three hundred thousand
francs' worth of pictures?"
"I painted those pictures," said Pierre Grassou in Vervelle's ear, "and
I sold them one by one to Elie Magus for less than ten thousand francs
the whole lot."
"Prove it to me," said the bottle-dealer, "and I double my daughter's
'dot,' for if it is so, you are Rubens, Rembrandt, Titian, Gerard Douw!"
"And Magus is a famous picture-dealer!" said the painter, who now saw
the meaning of the misty and aged look imparted to his pictures in
Elie's shop, and the utility of the subjects the picture-dealer had
required of him.
Far from losing the esteem of his admiring bottle-merchant, Monsieur
de Fougeres (for so the family persisted in calling Pierre Grassou)
advanced so much that when the portraits were finished he presented them
gratuitously to his father-in-law, his mother-in-law and his wife.
At the present day, Pierre Grassou, who never misses exhibiting at the
Salon, passes in bourgeois regions for a fine portrait-painter. He earns
some twenty thousand francs a year and spoils a thousand francs' worth
of canvas. His wife has six thousand francs a year in dowry, and he
lives with his father-in-law. The Vervelles and the Grassous, who agree
delightfully, keep a carriage, and are the happiest people on earth.
Pierre Grassou never emerges from the bourgeois circle, in which he
is considered one of the greatest artists of the period. Not a family
portrait is painted between the barrier du Trone and the rue du Temple
that is not done by this great painter; none of them costs less than
five hundred francs. The great reason which the bourgeois families have
for employing him is this:--
"Say what you will of him, he lays by twenty thousand francs a year with
his notary."
As Grassou took a creditable part on the occasion of the riots of May
12th he was appointed an officer of the Legion of honor. He is a major
in the National Guard. The Museum of Versailles felt it incumbent to
order a battle-piece of so excellent a citizen, who thereupon walked
about Paris to meet his old comrades and have the happiness of saying to
them:--
"The King has given me an order for the Museum of Versailles."
Madame de Fougeres adores her husband, to whom she has presented two
children. This painter, a good father and a good husband, is unable to
eradicate from his heart a fatal thought, namely, that artists laugh at
his work; that his name is a term of contempt in the studios; and that
the feuilletons take no notice of his pictures. But he still works on;
he aims for the Academy, where, undoubtedly, he will enter. And--oh!
vengeance which dilates his heart!--he buys the pictures of celebrated
artists who are pinched for means, and he substitutes these true works
of art that are not his own for the wretched daubs in the collection at
Ville d'Avray.
There are many mediocrities more aggressive and more mischievous than
that of Pierre Grassou, who is, moreover, anonymously benevolent and
truly obliging.
ADDENDUM
The following personages appear in other stories of the Human Comedy.
Bridau, Joseph
The Purse
A Bachelor's Establishment
A Distinguished Provincial at Paris
A Start in Life
Modeste Mignon
Another Study of Woman
Letters of Two Brides
Cousin Betty
The Member for Arcis
Cardot (Parisian notary)
The Muse of the Department
A Man of Business
Jealousies of a Country Town
The Middle Classes
Cousin Pons
Grassou, Pierre
A Bachelor's Establishment
Cousin Betty
The Middle Classes
Cousin Pons
Lora, Leon de
The Unconscious Humorists
A Bachelor's Establishment
A Start in Life
Honorine
Cousin Betty
Beatrix
Magus, Elie
The Vendetta
A Marriage Settlement
A Bachelor's Establishment
Cousin Pons
Schinner, Hippolyte
The Purse
A Bachelor's Establishment
A Start in Life
Albert Savarus
The Government Clerks
Modeste Mignon
The Imaginary Mistress
The Unconscious Humorists
End of the Project Gutenberg EBook of Pierre Grassou, by Honore de Balzac
with that patient, resigned air that tells so much, heard and recognized
the step of a man who had upon his life the influence such men have
on the lives of nearly all artists,--the step of Elie Magus, a
picture-dealer, a usurer in canvas. The next moment Elie Magus entered
and found the painter in the act of beginning his work in the tidy
studio.
"How are you, old rascal?" said the painter.
Fougeres had the cross of the Legion of honor, and Elie Magus bought his
pictures at two and three hundred francs apiece, so he gave himself the
airs of a fine artist.
"Business is very bad," replied Elie. "You artists have such
pretensions! You talk of two hundred francs when you haven't put six
sous' worth of color on a canvas. However, you are a good fellow, I'll
say that. You are steady; and I've come to put a good bit of business in
your way."
"Timeo Danaos et dona ferentes," said Fougeres. "Do you know Latin?"
"No."
"Well, it means that the Greeks never proposed a good bit of business
to the Trojans without getting their fair share of it. In the olden time
they used to say, 'Take my horse.' Now we say, 'Take my bear.' Well,
what do you want, Ulysses-Lagingeole-Elie Magus?"
These words will give an idea of the mildness and wit with which
Fougeres employed what painters call studio fun.
"Well, I don't deny that you are to paint me two pictures for nothing."
"Oh! oh!"
"I'll leave you to do it, or not; I don't ask it. But you're an honest
man."
"Come, out with it!"
"Well, I'm prepared to bring you a father, mother, and only daughter."
"All for me?"
"Yes--they want their portraits taken. These bourgeois--they are crazy
about art--have never dared to enter a studio. The girl has a 'dot' of a
hundred thousand francs. You can paint all three,--perhaps they'll turn
out family portraits."
And with that the old Dutch log of wood who passed for a man and who was
called Elie Magus, interrupted himself to laugh an uncanny laugh which
frightened the painter. He fancied he heard Mephistopheles talking
marriage.
"Portraits bring five hundred francs apiece," went on Elie; "so you can
very well afford to paint me three pictures."
"True for you!" cried Fougeres, gleefully.
"And if you marry the girl, you won't forget me."
"Marry! I?" cried Pierre Grassou,--"I, who have a habit of sleeping
alone; and get up at cock-crow, and all my life arranged--"
"One hundred thousand francs," said Magus, "and a quiet girl, full of
golden tones, as you call 'em, like a Titian."
"What class of people are they?"
"Retired merchants; just now in love with art; have a country-house at
Ville d'Avray, and ten or twelve thousand francs a year."
"What business did they do?"
"Bottles."
"Now don't say that word; it makes me think of corks and sets my teeth
on edge."
"Am I to bring them?"
"Three portraits--I could put them in the Salon; I might go in for
portrait-painting. Well, yes!"
Old Elie descended the staircase to go in search of the Vervelle family.
To know to what extent this proposition would act upon the painter, and
what effect would be produced upon him by the Sieur and Dame Vervelle,
adorned by their only daughter, it is necessary to cast an eye on the
anterior life of Pierre Grassou of Fougeres.
When a pupil, Fougeres had studied drawing with Servin, who was
thought a great draughtsman in academic circles. After that he went to
Schinner's, to learn the secrets of the powerful and magnificent color
which distinguishes that master. Master and scholars were all discreet;
at any rate Pierre discovered none of their secrets. From there he went
to Sommervieux' atelier, to acquire that portion of the art of painting
which is called composition, but composition was shy and distant to him.
Then he tried to snatch from Decamps and Granet the mystery of their
interior effects. The two masters were not robbed. Finally Fougeres
ended his education with Duval-Lecamus. During these studies and
these different transformations Fougeres' habits and ways of life were
tranquil and moral to a degree that furnished matter of jesting to the
various ateliers where he sojourned; but everywhere he disarmed his
comrades by his modesty and by the patience and gentleness of a lamblike
nature. The masters, however, had no sympathy for the good lad; masters
prefer bright fellows, eccentric spirits, droll or fiery, or else gloomy
and deeply reflective, which argue future talent. Everything about
Pierre Grassou smacked of mediocrity. His nickname "Fougeres" (that
of the painter in the play of "The Eglantine") was the source of much
teasing; but, by force of circumstances, he accepted the name of the
town in which he had first seen light.
Grassou of Fougeres resembled his name. Plump and of medium height, he
had a dull complexion, brown eyes, black hair, a turned-up nose, rather
wide mouth, and long ears. His gentle, passive, and resigned air gave a
certain relief to these leading features of a physiognomy that was full
of health, but wanting in action. This young man, born to be a virtuous
bourgeois, having left his native place and come to Paris to be clerk
with a color-merchant (formerly of Mayenne and a distant connection of
the Orgemonts) made himself a painter simply by the fact of an obstinacy
which constitutes the Breton character. What he suffered, the manner in
which he lived during those years of study, God only knows. He suffered
as much as great men suffer when they are hounded by poverty and hunted
like wild beasts by the pack of commonplace minds and by troops of
vanities athirst for vengeance.
As soon as he thought himself able to fly on his own wings, Fougeres
took a studio in the upper part of the rue des Martyrs, where he began
to delve his way. He made his first appearance in 1819. The first
picture he presented to the jury of the Exhibition at the Louvre
represented a village wedding rather laboriously copied from Greuze's
picture. It was rejected. When Fougeres heard of the fatal decision,
he did not fall into one of those fits of epileptic self-love to which
strong natures give themselves up, and which sometimes end in challenges
sent to the director or the secretary of the Museum, or even by threats
of assassination. Fougeres quietly fetched his canvas, wrapped it in
a handkerchief, and brought it home, vowing in his heart that he would
still make himself a great painter. He placed his picture on the easel,
and went to one of his former masters, a man of immense talent,--to
Schinner, a kind and patient artist, whose triumph at that year's Salon
was complete. Fougeres asked him to come and criticise the rejected
work. The great painter left everything and went at once. When poor
Fougeres had placed the work before him, Schinner, after a glance,
pressed Fougeres' hand.
"You are a fine fellow," he said; "you've a heart of gold, and I must
not deceive you. Listen; you are fulfilling all the promises you made in
the studios. When you find such things as that at the tip of your brush,
my good Fougeres, you had better leave colors with Brullon, and not take
the canvas of others. Go home early, put on your cotton night-cap, and
be in bed by nine o'clock. The next morning early go to some government
office, ask for a place, and give up art."
"My dear friend," said Fougeres, "my picture is already condemned; it is
not a verdict that I want of you, but the cause of that verdict."
"Well--you paint gray and sombre; you see nature being a crape veil;
your drawing is heavy, pasty; your composition is a medley of Greuze,
who only redeemed his defects by the qualities which you lack."
While detailing these faults of the picture Schinner saw on Fougeres'
face so deep an expression of sadness that he carried him off to dinner
and tried to console him. The next morning at seven o'clock Fougeres was
at his easel working over the rejected picture; he warmed the colors; he
made the corrections suggested by Schinner, he touched up his figures.
Then, disgusted with such patching, he carried the picture to Elie
Magus. Elie Magus, a sort of Dutch-Flemish-Belgian, had three reasons
for being what he became,--rich and avaricious. Coming last from
Bordeaux, he was just starting in Paris, selling old pictures and living
on the boulevard Bonne-Nouvelle. Fougeres, who relied on his palette
to go to the baker's, bravely ate bread and nuts, or bread and milk, or
bread and cherries, or bread and cheese, according to the seasons. Elie
Magus, to whom Pierre offered his first picture, eyed it for some time
and then gave him fifteen francs.
"With fifteen francs a year coming in, and a thousand francs for
expenses," said Fougeres, smiling, "a man will go fast and far."
Elie Magus made a gesture; he bit his thumbs, thinking that he might
have had that picture for five francs.
For several days Pierre walked down from the rue des Martyrs and
stationed himself at the corner of the boulevard opposite to Elie's
shop, whence his eye could rest upon his picture, which did not obtain
any notice from the eyes of the passers along the street. At the end of
a week the picture disappeared; Fougeres walked slowly up and approached
the dealer's shop in a lounging manner. The Jew was at his door.
"Well, I see you have sold my picture."
"No, here it is," said Magus; "I've framed it, to show it to some one
who fancies he knows about painting."
Fougeres had not the heart to return to the boulevard. He set about
another picture, and spent two months upon it,--eating mouse's meals and
working like a galley-slave.
One evening he went to the boulevard, his feet leading him fatefully to
the dealer's shop. His picture was not to be seen.
"I've sold your picture," said Elie Magus, seeing him.
"For how much?"
"I got back what I gave and a small interest. Make me some Flemish
interiors, a lesson of anatomy, landscapes, and such like, and I'll buy
them of you," said Elie.
Fougeres would fain have taken old Magus in his arms; he regarded him as
a father. He went home with joy in his heart; the great painter Schinner
was mistaken after all! In that immense city of Paris there were some
hearts that beat in unison with Pierre's; his talent was understood and
appreciated. The poor fellow of twenty-seven had the innocence of a lad
of sixteen. Another man, one of those distrustful, surly artists, would
have noticed the diabolical look on Elie's face and seen the twitching
of the hairs of his beard, the irony of his moustache, and the movement
of his shoulders which betrayed the satisfaction of Walter Scott's Jew
in swindling a Christian.
Fougeres marched along the boulevard in a state of joy which gave to his
honest face an expression of pride. He was like a schoolboy protecting
a woman. He met Joseph Bridau, one of his comrades, and one of those
eccentric geniuses destined to fame and sorrow. Joseph Bridau, who had,
to use his own expression, a few sous in his pocket, took Fougeres to
the Opera. But Fougeres didn't see the ballet, didn't hear the music; he
was imagining pictures, he was painting. He left Joseph in the middle
of the evening, and ran home to make sketches by lamp-light. He invented
thirty pictures, all reminiscence, and felt himself a man of genius. The
next day he bought colors, and canvases of various dimensions; he piled
up bread and cheese on his table, he filled a water-pot with water,
he laid in a provision of wood for his stove; then, to use a studio
expression, he dug at his pictures. He hired several models and Magus
lent him stuffs.
After two months' seclusion the Breton had finished four pictures. Again
he asked counsel of Schinner, this time adding Bridau to the invitation.
The two painters saw in three of these pictures a servile imitation
of Dutch landscapes and interiors by Metzu, in the fourth a copy of
Rembrandt's "Lesson of Anatomy."
"Still imitating!" said Schinner. "Ah! Fougeres can't manage to be
original."
"You ought to do something else than painting," said Bridau.
"What?" asked Fougeres.
"Fling yourself into literature."
Fougeres lowered his head like a sheep when it rains. Then he asked and
obtained certain useful advice, and retouched his pictures before taking
them to Elie Magus. Elie paid him twenty-five francs apiece. At that
price of course Fougeres earned nothing; neither did he lose, thanks to
his sober living. He made a few excursions to the boulevard to see what
became of his pictures, and there he underwent a singular hallucination.
His neat, clean paintings, hard as tin and shiny as porcelain, were
covered with a sort of mist; they looked like old daubs. Magus was out,
and Pierre could obtain no information on this phenomenon. He fancied
something was wrong with his eyes.
The painter went back to his studio and made more pictures. After seven
years of continued toil Fougeres managed to compose and execute quite
passable work. He did as well as any artist of the second class.
Elie bought and sold all the paintings of the poor Breton, who earned
laboriously about two thousand francs a year while he spent but twelve
hundred.
At the Exhibition of 1829, Leon de Lora, Schinner, and Bridau, who all
three occupied a great position and were, in fact, at the head of the
art movement, were filled with pity for the perseverance and the poverty
of their old friend; and they caused to be admitted into the grand salon
of the Exhibition, a picture by Fougeres. This picture, powerful in
interest but derived from Vigneron as to sentiment and from Dubufe's
first manner as to execution, represented a young man in prison, whose
hair was being cut around the nape of the neck. On one side was
a priest, on the other two women, one old, one young, in tears. A
sheriff's clerk was reading aloud a document. On a wretched table was a
meal, untouched. The light came in through the bars of a window near
the ceiling. It was a picture fit to make the bourgeois shudder, and
the bourgeois shuddered. Fougeres had simply been inspired by the
masterpiece of Gerard Douw; he had turned the group of the "Dropsical
Woman" toward the window, instead of presenting it full front. The
condemned man was substituted for the dying woman--same pallor, same
glance, same appeal to God. Instead of the Dutch doctor, he had painted
the cold, official figure of the sheriff's clerk attired in black; but
he had added an old woman to the young one of Gerard Douw. The cruelly
simple and good-humored face of the executioner completed and dominated
the group. This plagiarism, very cleverly disguised, was not discovered.
The catalogue contained the following:--
510. Grassou de Fougeres (Pierre), rue de Navarin, 2.
Death-toilet of a Chouan, condemned to execution in 1809.
Though wholly second-rate, the picture had immense success, for it
recalled the affair of the "chauffeurs," of Mortagne. A crowd collected
every day before the now fashionable canvas; even Charles X. paused to
look at it. "Madame," being told of the patient life of the poor Breton,
became enthusiastic over him. The Duc d'Orleans asked the price of
the picture. The clergy told Madame la Dauphine that the subject was
suggestive of good thoughts; and there was, in truth, a most satisfying
religious tone about it. Monseigneur the Dauphin admired the dust on
the stone-floor,--a huge blunder, by the way, for Fougeres had painted
greenish tones suggestive of mildew along the base of the walls.
"Madame" finally bought the picture for a thousand francs, and the
Dauphin ordered another like it. Charles X. gave the cross of the Legion
of honor to this son of a peasant who had fought for the royal cause
in 1799. (Joseph Bridau, the great painter, was not yet decorated.) The
minister of the Interior ordered two church pictures of Fougeres.
This Salon of 1829 was to Pierre Grassou his whole fortune, fame,
future, and life. Be original, invent, and you die by inches; copy,
imitate, and you'll live. After this discovery of a gold mine, Grassou
de Fougeres obtained his benefit of the fatal principle to which society
owes the wretched mediocrities to whom are intrusted in these days the
election of leaders in all social classes; who proceed, naturally, to
elect themselves and who wage a bitter war against all true talent. The
principle of election applied indiscriminately is false, and France will
some day abandon it.
Nevertheless the modesty, simplicity, and genuine surprise of the good
and gentle Fougeres silenced all envy and all recriminations. Besides,
he had on his side all of his clan who had succeeded, and all who
expected to succeed. Some persons, touched by the persistent energy of a
man whom nothing had discouraged, talked of Domenichino and said:--
"Perseverance in the arts should be rewarded. Grassou hasn't stolen his
successes; he has delved for ten years, the poor dear man!"
That exclamation of "poor dear man!" counted for half in the support
and the congratulations which the painter received. Pity sets up
mediocrities as envy pulls down great talents, and in equal numbers.
The newspapers, it is true, did not spare criticism, but the chevalier
Fougeres digested them as he had digested the counsel of his friends,
with angelic patience.
Possessing, by this time, fifteen thousand francs, laboriously earned,
he furnished an apartment and studio in the rue de Navarin, and painted
the picture ordered by Monseigneur the Dauphin, also the two church
pictures, and delivered them at the time agreed on, with a punctuality
that was very discomforting to the exchequer of the ministry, accustomed
to a different course of action. But--admire the good fortune of men who
are methodical--if Grassou, belated with his work, had been caught by
the revolution of July he would not have got his money.
By the time he was thirty-seven Fougeres had manufactured for Elie Magus
some two hundred pictures, all of them utterly unknown, by the help of
which he had attained to that satisfying manner, that point of execution
before which the true artist shrugs his shoulders and the bourgeoisie
worships. Fougeres was dear to friends for rectitude of ideas, for
steadiness of sentiment, absolute kindliness, and great loyalty; though
they had no esteem for his palette, they loved the man who held it.
"What a misfortune it is that Fougeres has the vice of painting!" said
his comrades.
But for all this, Grassou gave excellent counsel, like those
feuilletonists incapable of writing a book who know very well where a
book is wanting. There was this difference, however, between literary
critics and Fougeres; he was eminently sensitive to beauties; he felt
them, he acknowledged them, and his advice was instinct with a spirit
of justice that made the justness of his remarks acceptable. After
the revolution of July, Fougeres sent about ten pictures a year to the
Salon, of which the jury admitted four or five. He lived with the most
rigid economy, his household being managed solely by an old charwoman.
For all amusement he visited his friends, he went to see works of art,
he allowed himself a few little trips about France, and he planned to go
to Switzerland in search of inspiration. This detestable artist was an
excellent citizen; he mounted guard duly, went to reviews, and paid his
rent and provision-bills with bourgeois punctuality.
Having lived all his life in toil and poverty, he had never had the time
to love. Poor and a bachelor, until now he did not desire to complicate
his simple life. Incapable of devising any means of increasing his
little fortune, he carried, every three months, to his notary, Cardot,
his quarterly earnings and economies. When the notary had received
about three thousand francs he invested them in some first mortgage, the
interest of which he drew himself and added to the quarterly payments
made to him by Fougeres. The painter was awaiting the fortunate moment
when his property thus laid by would give him the imposing income of two
thousand francs, to allow himself the otium cum dignitate of the
artist and paint pictures; but oh! what pictures! true pictures! each a
finished picture! chouette, Koxnoff, chocnosoff! His future, his dreams
of happiness, the superlative of his hopes--do you know what it was?
To enter the Institute and obtain the grade of officer of the Legion
of honor; to sit down beside Schinner and Leon de Lora, to reach the
Academy before Bridau, to wear a rosette in his buttonhole! What a
dream! It is only commonplace men who think of everything.
Hearing the sound of several steps on the staircase, Fougeres rubbed up
his hair, buttoned his jacket of bottle-green velveteen, and was not a
little amazed to see, entering his doorway, a simpleton face vulgarly
called in studio slang a "melon." This fruit surmounted a pumpkin,
clothed in blue cloth adorned with a bunch of tintinnabulating baubles.
The melon puffed like a walrus; the pumpkin advanced on turnips,
improperly called legs. A true painter would have turned the little
bottle-vendor off at once, assuring him that he didn't paint vegetables.
This painter looked at his client without a smile, for Monsieur Vervelle
wore a three-thousand-franc diamond in the bosom of his shirt.
Fougeres glanced at Magus and said: "There's fat in it!" using a slang
term then much in vogue in the studios.
Hearing those words Monsieur Vervelle frowned. The worthy bourgeois drew
after him another complication of vegetables in the persons of his wife
and daughter. The wife had a fine veneer of mahogany on her face, and
in figure she resembled a cocoa-nut, surmounted by a head and tied in
around the waist. She pivoted on her legs, which were tap-rooted,
and her gown was yellow with black stripes. She proudly exhibited
unutterable mittens on a puffy pair of hands; the plumes of a
first-class funeral floated on an over-flowing bonnet; laces adorned
her shoulders, as round behind as they were before; consequently, the
spherical form of the cocoa-nut was perfect. Her feet, of a kind that
painters call abatis, rose above the varnished leather of the shoes in a
swelling that was some inches high. How the feet were ever got into the
shoes, no one knows.
Following these vegetable parents was a young asparagus, who presented
a tiny head with smoothly banded hair of the yellow-carroty tone that a
Roman adores, long, stringy arms, a fairly white skin with reddish spots
upon it, large innocent eyes, and white lashes, scarcely any brows, a
leghorn bonnet bound with white satin and adorned with two honest bows
of the same satin, hands virtuously red, and the feet of her mother. The
faces of these three beings wore, as they looked round the studio, an
air of happiness which bespoke in them a respectable enthusiasm for Art.
"So it is you, monsieur, who are going to take our likenesses?" said the
father, assuming a jaunty air.
"Yes, monsieur," replied Grassou.
"Vervelle, he has the cross!" whispered the wife to the husband while
the painter's back was turned.
"Should I be likely to have our portraits painted by an artist who
wasn't decorated?" returned the former bottle-dealer.
Elie Magus here bowed to the Vervelle family and went away. Grassou
accompanied him to the landing.
"There's no one but you who would fish up such whales."
"One hundred thousand francs of 'dot'!"
"Yes, but what a family!"
"Three hundred thousand francs of expectations, a house in the rue
Boucherat, and a country-house at Ville d'Avray!"
"Bottles and corks! bottles and corks!" said the painter; "they set my
teeth on edge."
"Safe from want for the rest of your days," said Elie Magus as he
departed.
That idea entered the head of Pierre Grassou as the daylight had burst
into his garret that morning.
While he posed the father of the young person, he thought the
bottle-dealer had a good countenance, and he admired the face full
of violent tones. The mother and daughter hovered about the easel,
marvelling at all his preparations; they evidently thought him a
demigod. This visible admiration pleased Fougeres. The golden calf threw
upon the family its fantastic reflections.
"You must earn lots of money; but of course you don't spend it as you
get it," said the mother.
"No, madame," replied the painter; "I don't spend it; I have not the
means to amuse myself. My notary invests my money; he knows what I have;
as soon as I have taken him the money I never think of it again."
"I've always been told," cried old Vervelle, "that artists were baskets
with holes in them."
"Who is your notary--if it is not indiscreet to ask?" said Madame
Vervelle.
"A good fellow, all round," replied Grassou. "His name is Cardot."
"Well, well! if that isn't a joke!" exclaimed Vervelle. "Cardot is our
notary too."
"Take care! don't move," said the painter.
"Do pray hold still, Antenor," said the wife. "If you move about you'll
make monsieur miss; you should just see him working, and then you'd
understand."
"Oh! why didn't you have me taught the arts?" said Mademoiselle Vervelle
to her parents.
"Virginie," said her mother, "a young person ought not to learn certain
things. When you are married--well, till then, keep quiet."
During this first sitting the Vervelle family became almost intimate
with the worthy artist. They were to come again two days later. As they
went away the father told Virginie to walk in front; but in spite of
this separation, she overheard the following words, which naturally
awakened her curiosity.
"Decorated--thirty-seven years old--an artist who gets orders--puts his
money with our notary. We'll consult Cardot. Hein! Madame de Fougeres!
not a bad name--doesn't look like a bad man either! One might prefer a
merchant; but before a merchant retires from business one can never know
what one's daughter may come to; whereas an economical artist--and then
you know we love Art--Well, we'll see!"
While the Vervelle family discussed Pierre Grassou, Pierre Grassou
discussed in his own mind the Vervelle family. He found it impossible to
stay peacefully in his studio, so he took a walk on the boulevard, and
looked at all the red-haired women who passed him. He made a series of
the oddest reasonings to himself: gold was the handsomest of metals; a
tawny yellow represented gold; the Romans were fond of red-haired women,
and he turned Roman, etc. After two years of marriage what man would
ever care about the color of his wife's hair? Beauty fades,--but
ugliness remains! Money is one-half of all happiness. That night when he
went to bed the painter had come to think Virginie Vervelle charming.
When the three Vervelles arrived on the day of the second sitting the
artist received them with smiles. The rascal had shaved and put on clean
linen; he had also arranged his hair in a pleasing manner, and chosen
a very becoming pair of trousers and red leather slippers with pointed
toes. The family replied with smiles as flattering as those of the
artist. Virginie became the color of her hair, lowered her eyes, and
turned aside her head to look at the sketches. Pierre Grassou thought
these little affectations charming, Virginie had such grace; happily she
didn't look like her father or her mother; but whom did she look like?
During this sitting there were little skirmishes between the family
and the painter, who had the audacity to call pere Vervelle witty. This
flattery brought the family on the double-quick to the heart of the
artist; he gave a drawing to the daughter, and a sketch to the mother.
"What! for nothing?" they said.
Pierre Grassou could not help smiling.
"You shouldn't give away your pictures in that way; they are money,"
said old Vervelle.
At the third sitting pere Vervelle mentioned a fine gallery of pictures
which he had in his country-house at Ville d'Avray--Rubens, Gerard Douw,
Mieris, Terburg, Rembrandt, Titian, Paul Potter, etc.
"Monsieur Vervelle has been very extravagant," said Madame Vervelle,
ostentatiously. "He has over one hundred thousand francs' worth of
pictures."
"I love Art," said the former bottle-dealer.
When Madame Vervelle's portrait was begun that of her husband was nearly
finished, and the enthusiasm of the family knew no bounds. The notary
had spoken in the highest praise of the painter. Pierre Grassou was, he
said, one of the most honest fellows on earth; he had laid by thirty-six
thousand francs; his days of poverty were over; he now saved about ten
thousand francs a year and capitalized the interest; in short, he was
incapable of making a woman unhappy. This last remark had enormous
weight in the scales. Vervelle's friends now heard of nothing but the
celebrated painter Fougeres.
The day on which Fougeres began the portrait of Mademoiselle Virginie,
he was virtually son-in-law to the Vervelle family. The three Vervelles
bloomed out in this studio, which they were now accustomed to consider
as one of their residences; there was to them an inexplicable attraction
in this clean, neat, pretty, and artistic abode. Abyssus abyssum, the
commonplace attracts the commonplace. Toward the end of the sitting the
stairway shook, the door was violently thrust open by Joseph Bridau; he
came like a whirlwind, his hair flying. He showed his grand haggard face
as he looked about him, casting everywhere the lightning of his glance;
then he walked round the whole studio, and returned abruptly to Grassou,
pulling his coat together over the gastric region, and endeavouring, but
in vain, to button it, the button mould having escaped from its capsule
of cloth.
"Wood is dear," he said to Grassou.
"Ah!"
"The British are after me" (slang term for creditors) "Gracious! do you
paint such things as that?"
"Hold your tongue!"
"Ah! to be sure, yes."
The Vervelle family, extremely shocked by this extraordinary apparition,
passed from its ordinary red to a cherry-red, two shades deeper.
"Brings in, hey?" continued Joseph. "Any shot in your locker?"
"How much do you want?"
"Five hundred. I've got one of those bull-dog dealers after me, and if
the fellow once gets his teeth in he won't let go while there's a bit of
me left. What a crew!"
"I'll write you a line for my notary."
"Have you got a notary?"
"Yes."
"That explains to me why you still make cheeks with pink tones like a
perfumer's sign."
Grassou could not help coloring, for Virginie was sitting.
"Take Nature as you find her," said the great painter, going on with his
lecture. "Mademoiselle is red-haired. Well, is that a sin? All things
are magnificent in painting. Put some vermillion on your palette, and
warm up those cheeks; touch in those little brown spots; come, butter it
well in. Do you pretend to have more sense than Nature?"
"Look here," said Fougeres, "take my place while I go and write that
note."
Vervelle rolled to the table and whispered in Grassou's ear:--
"Won't that country lout spoilt it?"
"If he would only paint the portrait of your Virginie it would be worth
a thousand times more than mine," replied Fougeres, vehemently.
Hearing that reply the bourgeois beat a quiet retreat to his wife, who
was stupefied by the invasion of this ferocious animal, and very uneasy
at his co-operation in her daughter's portrait.
"Here, follow these indications," said Bridau, returning the palette,
and taking the note. "I won't thank you. I can go back now to d'Arthez'
chateau, where I am doing a dining-room, and Leon de Lora the tops of
the doors--masterpieces! Come and see us."
And off he went without taking leave, having had enough of looking at
Virginie.
"Who is that man?" asked Madame Vervelle.
"A great artist," answered Grassou.
There was silence for a moment.
"Are you quite sure," said Virginie, "that he has done no harm to my
portrait? He frightened me."
"He has only done it good," replied Grassou.
"Well, if he is a great artist, I prefer a great artist like you," said
Madame Vervelle.
The ways of genius had ruffled up these orderly bourgeois.
The phase of autumn so pleasantly named "Saint Martin's summer" was
just beginning. With the timidity of a neophyte in presence of a man of
genius, Vervelle risked giving Fougeres an invitation to come out to
his country-house on the following Sunday. He knew, he said, how little
attraction a plain bourgeois family could offer to an artist.
"You artists," he continued, "want emotions, great scenes, and witty
talk; but you'll find good wines, and I rely on my collection of
pictures to compensate an artist like you for the bore of dining with
mere merchants."
This form of idolatry, which stroked his innocent self-love, was
charming to our poor Pierre Grassou, so little accustomed to such
compliments. The honest artist, that atrocious mediocrity, that heart
of gold, that loyal soul, that stupid draughtsman, that worthy fellow,
decorated by royalty itself with the Legion of honor, put himself under
arms to go out to Ville d'Avray and enjoy the last fine days of the
year. The painter went modestly by public conveyance, and he could not
but admire the beautiful villa of the bottle-dealer, standing in a park
of five acres at the summit of Ville d'Avray, commanding a noble view
of the landscape. Marry Virginie, and have that beautiful villa some day
for his own!
He was received by the Vervelles with an enthusiasm, a joy, a
kindliness, a frank bourgeois absurdity which confounded him. It was
indeed a day of triumph. The prospective son-in-law was marched about
the grounds on the nankeen-colored paths, all raked as they should be
for the steps of so great a man. The trees themselves looked brushed and
combed, and the lawns had just been mown. The pure country air wafted
to the nostrils a most enticing smell of cooking. All things about the
mansion seemed to say:
"We have a great artist among us."
Little old Vervelle himself rolled like an apple through his park, the
daughter meandered like an eel, the mother followed with dignified step.
These three beings never let go for one moment of Pierre Grassou
for seven hours. After dinner, the length of which equalled its
magnificence, Monsieur and Madame Vervelle reached the moment of their
grand theatrical effect,--the opening of the picture gallery illuminated
by lamps, the reflections of which were managed with the utmost care.
Three neighbours, also retired merchants, an old uncle (from whom were
expectations), an elderly Demoiselle Vervelle, and a number of other
guests invited to be present at this ovation to a great artist followed
Grassou into the picture gallery, all curious to hear his opinion of the
famous collection of pere Vervelle, who was fond of oppressing them with
the fabulous value of his paintings. The bottle-merchant seemed to have
the idea of competing with King Louis-Philippe and the galleries of
Versailles.
The pictures, magnificently framed, each bore labels on which was read
in black letters on a gold ground:
Rubens
Dance of fauns and nymphs
Rembrandt
Interior of a dissecting room. The physician van Tromp
instructing his pupils.
In all, there were one hundred and fifty pictures, varnished and dusted.
Some were covered with green baize curtains which were not undrawn in
presence of young ladies.
Pierre Grassou stood with arms pendent, gaping mouth, and no word upon
his lips as he recognized half his own pictures in these works of art.
He was Rubens, he was Rembrandt, Mieris, Metzu, Paul Potter, Gerard
Douw! He was twenty great masters all by himself.
"What is the matter? You've turned pale!"
"Daughter, a glass of water! quick!" cried Madame Vervelle. The painter
took pere Vervelle by the button of his coat and led him to a corner on
pretence of looking at a Murillo. Spanish pictures were then the rage.
"You bought your pictures from Elie Magus?"
"Yes, all originals."
"Between ourselves, tell me what he made you pay for those I shall point
out to you."
Together they walked round the gallery. The guests were amazed at the
gravity with which the artist proceeded, in company with the host, to
examine each picture.
"Three thousand francs," said Vervelle in a whisper, as they reached the
last, "but I tell everybody forty thousand."
"Forty thousand for a Titian!" said the artist, aloud. "Why, it is
nothing at all!"
"Didn't I tell you," said Vervelle, "that I had three hundred thousand
francs' worth of pictures?"
"I painted those pictures," said Pierre Grassou in Vervelle's ear, "and
I sold them one by one to Elie Magus for less than ten thousand francs
the whole lot."
"Prove it to me," said the bottle-dealer, "and I double my daughter's
'dot,' for if it is so, you are Rubens, Rembrandt, Titian, Gerard Douw!"
"And Magus is a famous picture-dealer!" said the painter, who now saw
the meaning of the misty and aged look imparted to his pictures in
Elie's shop, and the utility of the subjects the picture-dealer had
required of him.
Far from losing the esteem of his admiring bottle-merchant, Monsieur
de Fougeres (for so the family persisted in calling Pierre Grassou)
advanced so much that when the portraits were finished he presented them
gratuitously to his father-in-law, his mother-in-law and his wife.
At the present day, Pierre Grassou, who never misses exhibiting at the
Salon, passes in bourgeois regions for a fine portrait-painter. He earns
some twenty thousand francs a year and spoils a thousand francs' worth
of canvas. His wife has six thousand francs a year in dowry, and he
lives with his father-in-law. The Vervelles and the Grassous, who agree
delightfully, keep a carriage, and are the happiest people on earth.
Pierre Grassou never emerges from the bourgeois circle, in which he
is considered one of the greatest artists of the period. Not a family
portrait is painted between the barrier du Trone and the rue du Temple
that is not done by this great painter; none of them costs less than
five hundred francs. The great reason which the bourgeois families have
for employing him is this:--
"Say what you will of him, he lays by twenty thousand francs a year with
his notary."
As Grassou took a creditable part on the occasion of the riots of May
12th he was appointed an officer of the Legion of honor. He is a major
in the National Guard. The Museum of Versailles felt it incumbent to
order a battle-piece of so excellent a citizen, who thereupon walked
about Paris to meet his old comrades and have the happiness of saying to
them:--
"The King has given me an order for the Museum of Versailles."
Madame de Fougeres adores her husband, to whom she has presented two
children. This painter, a good father and a good husband, is unable to
eradicate from his heart a fatal thought, namely, that artists laugh at
his work; that his name is a term of contempt in the studios; and that
the feuilletons take no notice of his pictures. But he still works on;
he aims for the Academy, where, undoubtedly, he will enter. And--oh!
vengeance which dilates his heart!--he buys the pictures of celebrated
artists who are pinched for means, and he substitutes these true works
of art that are not his own for the wretched daubs in the collection at
Ville d'Avray.
There are many mediocrities more aggressive and more mischievous than
that of Pierre Grassou, who is, moreover, anonymously benevolent and
truly obliging.
ADDENDUM
The following personages appear in other stories of the Human Comedy.
Bridau, Joseph
The Purse
A Bachelor's Establishment
A Distinguished Provincial at Paris
A Start in Life
Modeste Mignon
Another Study of Woman
Letters of Two Brides
Cousin Betty
The Member for Arcis
Cardot (Parisian notary)
The Muse of the Department
A Man of Business
Jealousies of a Country Town
The Middle Classes
Cousin Pons
Grassou, Pierre
A Bachelor's Establishment
Cousin Betty
The Middle Classes
Cousin Pons
Lora, Leon de
The Unconscious Humorists
A Bachelor's Establishment
A Start in Life
Honorine
Cousin Betty
Beatrix
Magus, Elie
The Vendetta
A Marriage Settlement
A Bachelor's Establishment
Cousin Pons
Schinner, Hippolyte
The Purse
A Bachelor's Establishment
A Start in Life
Albert Savarus
The Government Clerks
Modeste Mignon
The Imaginary Mistress
The Unconscious Humorists
End of the Project Gutenberg EBook of Pierre Grassou, by Honore de Balzac
Produced by John Bickers, and Dagny
LA GRANDE BRETECHE
(Sequel to "Another Study of Woman.")
By Honore De Balzac
Translated by Ellen Marriage and Clara Bell
LA GRANDE BRETECHE
"Ah! madame," replied the doctor, "I have some appalling stories in my
collection. But each one has its proper hour in a conversation--you know
the pretty jest recorded by Chamfort, and said to the Duc de Fronsac:
'Between your sally and the present moment lie ten bottles of
champagne.'"
"But it is two in the morning, and the story of Rosina has prepared us,"
said the mistress of the house.
"Tell us, Monsieur Bianchon!" was the cry on every side.
The obliging doctor bowed, and silence reigned.
"At about a hundred paces from Vendome, on the banks of the Loir," said
he, "stands an old brown house, crowned with very high roofs, and so
completely isolated that there is nothing near it, not even a fetid
tannery or a squalid tavern, such as are commonly seen outside small
towns. In front of this house is a garden down to the river, where the
box shrubs, formerly clipped close to edge the walks, now straggle
at their own will. A few willows, rooted in the stream, have grown
up quickly like an enclosing fence, and half hide the house. The
wild plants we call weeds have clothed the bank with their beautiful
luxuriance. The fruit-trees, neglected for these ten years past,
no longer bear a crop, and their suckers have formed a thicket. The
espaliers are like a copse. The paths, once graveled, are overgrown with
purslane; but, to be accurate, there is no trace of a path.
"Looking down from the hilltop, to which cling the ruins of the old
castle of the Dukes of Vendome, the only spot whence the eye can
see into this enclosure, we think that at a time, difficult now to
determine, this spot of earth must have been the joy of some country
gentleman devoted to roses and tulips, in a word, to horticulture, but
above all a lover of choice fruit. An arbor is visible, or rather
the wreck of an arbor, and under it a table still stands not entirely
destroyed by time. At the aspect of this garden that is no more, the
negative joys of the peaceful life of the provinces may be divined as we
divine the history of a worthy tradesman when we read the epitaph on his
tomb. To complete the mournful and tender impressions which seize the
soul, on one of the walls there is a sundial graced with this homely
Christian motto, '_Ultimam cogita_.'
"The roof of this house is dreadfully dilapidated; the outside shutters
are always closed; the balconies are hung with swallows' nests; the
doors are for ever shut. Straggling grasses have outlined the flagstones
of the steps with green; the ironwork is rusty. Moon and sun, winter,
summer, and snow have eaten into the wood, warped the boards, peeled
off the paint. The dreary silence is broken only by birds and cats,
polecats, rats, and mice, free to scamper round, and fight, and eat each
other. An invisible hand has written over it all: 'Mystery.'
"If, prompted by curiosity, you go to look at this house from the
street, you will see a large gate, with a round-arched top; the children
have made many holes in it. I learned later that this door had been
blocked for ten years. Through these irregular breaches you will see
that the side towards the courtyard is in perfect harmony with the side
towards the garden. The same ruin prevails. Tufts of weeds outline
the paving-stones; the walls are scored by enormous cracks, and the
blackened coping is laced with a thousand festoons of pellitory. The
stone steps are disjointed; the bell-cord is rotten; the gutter-spouts
broken. What fire from heaven could have fallen there? By what decree
has salt been sown on this dwelling? Has God been mocked here? Or was
France betrayed? These are the questions we ask ourselves. Reptiles
crawl over it, but give no reply. This empty and deserted house is a
vast enigma of which the answer is known to none.
"It was formerly a little domain, held in fief, and is known as La
Grande Breteche. During my stay at Vendome, where Despleins had left me
in charge of a rich patient, the sight of this strange dwelling became
one of my keenest pleasures. Was it not far better than a ruin? Certain
memories of indisputable authenticity attach themselves to a ruin; but
this house, still standing, though being slowly destroyed by an avenging
hand, contained a secret, an unrevealed thought. At the very least,
it testified to a caprice. More than once in the evening I boarded the
hedge, run wild, which surrounded the enclosure. I braved scratches, I
got into this ownerless garden, this plot which was no longer public or
private; I lingered there for hours gazing at the disorder. I would not,
as the price of the story to which this strange scene no doubt was due,
have asked a single question of any gossiping native. On that spot I
wove delightful romances, and abandoned myself to little debauches of
melancholy which enchanted me. If I had known the reason--perhaps quite
commonplace--of this neglect, I should have lost the unwritten poetry
which intoxicated me. To me this refuge represented the most various
phases of human life, shadowed by misfortune; sometimes the peace of the
graveyard without the dead, who speak in the language of epitaphs; one
day I saw in it the home of lepers; another, the house of the Atridae;
but, above all, I found there provincial life, with its contemplative
ideas, its hour-glass existence. I often wept there, I never laughed.
"More than once I felt involuntary terrors as I heard overhead the dull
hum of the wings of some hurrying wood-pigeon. The earth is dank; you
must be on the watch for lizards, vipers, and frogs, wandering about
with the wild freedom of nature; above all, you must have no fear
of cold, for in a few moments you feel an icy cloak settle on your
shoulders, like the Commendatore's hand on Don Giovanni's neck.
"One evening I felt a shudder; the wind had turned an old rusty
weathercock, and the creaking sounded like a cry from the house, at
the very moment when I was finishing a gloomy drama to account for
this monumental embodiment of woe. I returned to my inn, lost in gloomy
thoughts. When I had supped, the hostess came into my room with an air
of mystery, and said, 'Monsieur, here is Monsieur Regnault.'
"'Who is Monsieur Regnault?'
"'What, sir, do you not know Monsieur Regnault?--Well, that's odd,' said
she, leaving the room.
"On a sudden I saw a man appear, tall, slim, dressed in black, hat
in hand, who came in like a ram ready to butt his opponent, showing a
receding forehead, a small pointed head, and a colorless face of the hue
of a glass of dirty water. You would have taken him for an usher. The
stranger wore an old coat, much worn at the seams; but he had a diamond
in his shirt frill, and gold rings in his ears.
"'Monsieur,' said I, 'whom have I the honor of addressing?'--He took a
chair, placed himself in front of my fire, put his hat on my table,
and answered while he rubbed his hands: 'Dear me, it is very
cold.--Monsieur, I am Monsieur Regnault.'
"I was encouraging myself by saying to myself, '_Il bondo cani!_ Seek!'
"'I am,' he went on, 'notary at Vendome.'
"'I am delighted to hear it, monsieur,' I exclaimed. 'But I am not in a
position to make a will for reasons best known to myself.'
"'One moment!' said he, holding up his hand as though to gain silence.
'Allow me, monsieur, allow me! I am informed that you sometimes go to
walk in the garden of la Grande Breteche.'
"'Yes, monsieur.'
"'One moment!' said he, repeating his gesture. 'That constitutes a
misdemeanor. Monsieur, as executor under the will of the late Comtesse
de Merret, I come in her name to beg you to discontinue the practice.
One moment! I am not a Turk, and do not wish to make a crime of it. And
besides, you are free to be ignorant of the circumstances which
compel me to leave the finest mansion in Vendome to fall into ruin.
Nevertheless, monsieur, you must be a man of education, and you should
know that the laws forbid, under heavy penalties, any trespass on
enclosed property. A hedge is the same as a wall. But, the state in
which the place is left may be an excuse for your curiosity. For my
part, I should be quite content to make you free to come and go in the
house; but being bound to respect the will of the testatrix, I have
the honor, monsieur, to beg that you will go into the garden no more.
I myself, monsieur, since the will was read, have never set foot in the
house, which, as I had the honor of informing you, is part of the estate
of the late Madame de Merret. We have done nothing there but verify the
number of doors and windows to assess the taxes I have to pay annually
out of the funds left for that purpose by the late Madame de Merret. Ah!
my dear sir, her will made a great commotion in the town.'
"The good man paused to blow his nose. I respected his volubility,
perfectly understanding that the administration of Madame de Merret's
estate had been the most important event of his life, his reputation,
his glory, his Restoration. As I was forced to bid farewell to my
beautiful reveries and romances, I was not to reject learning the truth on
official authority.
"'Monsieur,' said I, 'would it be indiscreet if I were to ask you the
reasons for such eccentricity?'
"At these words an expression, which revealed all the pleasure which
men feel who are accustomed to ride a hobby, overspread the lawyer's
countenance. He pulled up the collar of his shirt with an air, took out
his snuffbox, opened it, and offered me a pinch; on my refusing, he took
a large one. He was happy! A man who has no hobby does not know all
the good to be got out of life. A hobby is the happy medium between a
passion and a monomania. At this moment I understood the whole bearing
of Sterne's charming passion, and had a perfect idea of the delight with
which my uncle Toby, encouraged by Trim, bestrode his hobby-horse.
"'Monsieur,' said Monsieur Regnault, 'I was head-clerk in Monsieur
Roguin's office, in Paris. A first-rate house, which you may have heard
mentioned? No! An unfortunate bankruptcy made it famous.--Not having
money enough to purchase a practice in Paris at the price to which they
were run up in 1816, I came here and bought my predecessor's business.
I had relations in Vendome; among others, a wealthy aunt, who allowed
me to marry her daughter.--Monsieur,' he went on after a little pause,
'three months after being licensed by the Keeper of the Seals, one
evening, as I was going to bed--it was before my marriage--I was sent
for by Madame la Comtesse de Merret, to her Chateau of Merret. Her maid,
a good girl, who is now a servant in this inn, was waiting at my door
with the Countess' own carriage. Ah! one moment! I ought to tell you
that Monsieur le Comte de Merret had gone to Paris to die two months
before I came here. He came to a miserable end, flinging himself into
every kind of dissipation. You understand?
"'On the day when he left, Madame la Comtesse had quitted la Grand
Breteche, having dismantled it. Some people even say that she had
burnt all the furniture, the hangings--in short, all the chattels and
furniture whatever used in furnishing the premises now let by the
said M.--(Dear, what am I saying? I beg your pardon, I thought I was
dictating a lease.)--In short, that she burnt everything in the meadow
at Merret. Have you been to Merret, monsieur?--No,' said he, answering
himself, 'Ah, it is a very fine place.'
"'For about three months previously,' he went on, with a jerk of his
head, 'the Count and Countess had lived in a very eccentric way; they
admitted no visitors; Madame lived on the ground-floor, and Monsieur on
the first floor. When the Countess was left alone, she was never seen
excepting at church. Subsequently, at home, at the chateau, she refused
to see the friends, whether gentlemen or ladies, who went to call on
her. She was already very much altered when she left la Grande Breteche
to go to Merret. That dear lady--I say dear lady, for it was she who
gave me this diamond, but indeed I saw her but once--that kind lady was
very ill; she had, no doubt, given up all hope, for she died without
choosing to send for a doctor; indeed, many of our ladies fancied she
was not quite right in her head. Well, sir, my curiosity was strangely
excited by hearing that Madame de Merret had need of my services. Nor
was I the only person who took an interest in the affair. That very
night, though it was already late, all the town knew that I was going to
Merret.
"'The waiting-woman replied but vaguely to the questions I asked her on
the way; nevertheless, she told me that her mistress had received the
Sacrament in the course of the day at the hands of the Cure of Merret,
and seemed unlikely to live through the night. It was about eleven when
I reached the chateau. I went up the great staircase. After crossing
some large, lofty, dark rooms, diabolically cold and damp, I reached the
state bedroom where the Countess lay. From the rumors that were current
concerning this lady (monsieur, I should never end if I were to repeat
all the tales that were told about her), I had imagined her a coquette.
Imagine, then, that I had great difficulty in seeing her in the great
bed where she was lying. To be sure, to light this enormous room, with
old-fashioned heavy cornices, and so thick with dust that merely to see
it was enough to make you sneeze, she had only an old Argand lamp. Ah!
but you have not been to Merret. Well, the bed is one of those old world
beds, with a high tester hung with flowered chintz. A small table stood
by the bed, on which I saw an "Imitation of Christ," which, by the
way, I bought for my wife, as well as the lamp. There were also a deep
armchair for her confidential maid, and two small chairs. There was no
fire. That was all the furniture, not enough to fill ten lines in an
inventory.
"'My dear sir, if you had seen, as I then saw, that vast room, papered
and hung with brown, you would have felt yourself transported into a
scene of a romance. It was icy, nay more, funereal,' and he lifted his
hand with a theatrical gesture and paused.
"'By dint of seeking, as I approached the bed, at last I saw Madame de
Merret, under the glimmer of the lamp, which fell on the pillows.
Her face was as yellow as wax, and as narrow as two folded hands. The
Countess had a lace cap showing her abundant hair, but as white as linen
thread. She was sitting up in bed, and seemed to keep upright with
great difficulty. Her large black eyes, dimmed by fever, no doubt,
and half-dead already, hardly moved under the bony arch of her
eyebrows.--There,' he added, pointing to his own brow. 'Her forehead was
clammy; her fleshless hands were like bones covered with soft skin;
the veins and muscles were perfectly visible. She must have been very
handsome; but at this moment I was startled into an indescribable
emotion at the sight. Never, said those who wrapped her in her shroud,
had any living creature been so emaciated and lived. In short, it was
awful to behold! Sickness so consumed that woman, that she was no more
than a phantom. Her lips, which were pale violet, seemed to me not to
move when she spoke to me.
"'Though my profession has familiarized me with such spectacles, by
calling me not infrequently to the bedside of the dying to record their
last wishes, I confess that families in tears and the agonies I have
seen were as nothing in comparison with this lonely and silent woman in
her vast chateau. I heard not the least sound, I did not perceive the
movement which the sufferer's breathing ought to have given to the
sheets that covered her, and I stood motionless, absorbed in looking at
her in a sort of stupor. In fancy I am there still. At last her large
eyes moved; she tried to raise her right hand, but it fell back on the
bed, and she uttered these words, which came like a breath, for her
voice was no longer a voice: "I have waited for you with the greatest
impatience." A bright flush rose to her cheeks. It was a great effort to
her to speak.
"'"Madame," I began. She signed to me to be silent. At that moment
the old housekeeper rose and said in my ear, "Do not speak; Madame la
Comtesse is not in a state to bear the slightest noise, and what you say
might agitate her."
"'I sat down. A few instants after, Madame de Merret collected all her
remaining strength to move her right hand, and slipped it, not without
infinite difficulty, under the bolster; she then paused a moment. With
a last effort she withdrew her hand; and when she brought out a sealed
paper, drops of perspiration rolled from her brow. "I place my will in
your hands--Oh! God! Oh!" and that was all. She clutched a crucifix that
lay on the bed, lifted it hastily to her lips, and died.
"'The expression of her eyes still makes me shudder as I think of it.
She must have suffered much! There was joy in her last glance, and it
remained stamped on her dead eyes.
"'I brought away the will, and when it was opened I found that Madame de
Merret had appointed me her executor. She left the whole of her property
to the hospital at Vendome excepting a few legacies. But these were her
instructions as relating to la Grande Breteche: She ordered me to leave
the place, for fifty years counting from the day of her death, in the
state in which it might be at the time of her death, forbidding any one,
whoever he might be, to enter the apartments, prohibiting any repairs
whatever, and even settling a salary to pay watchmen if it were needful
to secure the absolute fulfilment of her intentions. At the expiration
of that term, if the will of the testatrix has been duly carried out,
the house is to become the property of my heirs, for, as you know, a
notary cannot take a bequest. Otherwise la Grande Breteche reverts to
the heirs-at-law, but on condition of fulfilling certain conditions
set forth in a codicil to the will, which is not to be opened till
the expiration of the said term of fifty years. The will has not been
disputed, so----' And without finishing his sentence, the lanky notary
looked at me with an air of triumph; I made him quite happy by offering
him my congratulations.
"'Monsieur,' I said in conclusion, 'you have so vividly impressed
me that I fancy I see the dying woman whiter than her sheets; her
glittering eyes frighten me; I shall dream of her to-night.--But you
must have formed some idea as to the instructions contained in that
extraordinary will.'
"'Monsieur,' said he, with comical reticence, 'I never allow myself
to criticise the conduct of a person who honors me with the gift of a
diamond.'
"However, I soon loosened the tongue of the discreet notary of Vendome,
who communicated to me, not without long digressions, the opinions of
the deep politicians of both sexes whose judgments are law in Vendome.
But these opinions were so contradictory, so diffuse, that I was
near falling asleep in spite of the interest I felt in this authentic
history. The notary's ponderous voice and monotonous accent, accustomed
no doubt to listen to himself and to make himself listened to by his
clients or fellow-townsmen, were too much for my curiosity. Happily, he
soon went away.
"'Ah, ha, monsieur,' said he on the stairs, 'a good many persons would
be glad to live five-and-forty years longer; but--one moment!' and he
laid the first finger of his right hand to his nostril with a cunning
look, as much as to say, 'Mark my words!--To last as long as that--as
long as that,' said he, 'you must not be past sixty now.'
"I closed my door, having been roused from my apathy by this last
speech, which the notary thought very funny; then I sat down in my
armchair, with my feet on the fire-dogs. I had lost myself in a romance
_a la_ Radcliffe, constructed on the juridical base given me by Monsieur
Regnault, when the door, opened by a woman's cautious hand, turned on
the hinges. I saw my landlady come in, a buxom, florid dame, always
good-humored, who had missed her calling in life. She was a Fleming, who
ought to have seen the light in a picture by Teniers.
"'Well, monsieur,' said she, 'Monsieur Regnault has no doubt been giving
you his history of la Grande Breteche?'
"'Yes, Madame Lepas.'
"'And what did he tell you?'
"I repeated in a few words the creepy and sinister story of Madame de
Merret. At each sentence my hostess put her head forward, looking at
me with an innkeeper's keen scrutiny, a happy compromise between the
instinct of a police constable, the astuteness of a spy, and the cunning
of a dealer.
"'My good Madame Lepas,' said I as I ended, 'you seem to know more about
it. Heh? If not, why have you come up to me?'
"'On my word, as an honest woman----'
"'Do not swear; your eyes are big with a secret. You knew Monsieur de
Merret; what sort of man was he?'
"'Monsieur de Merret--well, you see he was a man you never could see
the top of, he was so tall! A very good gentleman, from Picardy, and who
had, as we say, his head close to his cap. He paid for everything down,
so as never to have difficulties with any one. He was hot-tempered, you
see! All our ladies liked him very much.'
"'Because he was hot-tempered?' I asked her.
"'Well, may be,' said she; 'and you may suppose, sir, that a man had to
have something to show for a figurehead before he could marry Madame de
Merret, who, without any reflection on others, was the handsomest and
richest heiress in our parts. She had about twenty thousand francs
a year. All the town was at the wedding; the bride was pretty and
sweet-looking, quite a gem of a woman. Oh, they were a handsome couple
in their day!'
"'And were they happy together?'
"'Hm, hm! so-so--so far as can be guessed, for, as you may suppose, we
of the common sort were not hail-fellow-well-met with them.--Madame de
Merret was a kind woman and very pleasant, who had no doubt sometimes to
put up with her husband's tantrums. But though he was rather haughty, we
were fond of him. After all, it was his place to behave so. When a man
is a born nobleman, you see----'
"'Still, there must have been some catastrophe for Monsieur and Madame
de Merret to part so violently?'
"'I did not say there was any catastrophe, sir. I know nothing about
it.'
"'Indeed. Well, now, I am sure you know everything.'
"'Well, sir, I will tell you the whole story.--When I saw Monsieur
Regnault go up to see you, it struck me that he would speak to you about
Madame de Merret as having to do with la Grande Breteche. That put it
into my head to ask your advice, sir, seeming to me that you are a
man of good judgment and incapable of playing a poor woman like me
false--for I never did any one a wrong, and yet I am tormented by my
conscience. Up to now I have never dared to say a word to the people of
these parts; they are all chatter-mags, with tongues like knives. And
never till now, sir, have I had any traveler here who stayed so long in
the inn as you have, and to whom I could tell the history of the fifteen
thousand francs----'
"'My dear Madame Lepas, if there is anything in your story of a nature
to compromise me,' I said, interrupting the flow of her words, 'I would
not hear it for all the world.'
"'You need have no fears,' said she; 'you will see.'
"Her eagerness made me suspect that I was not the only person to whom
my worthy landlady had communicated the secret of which I was to be the
sole possessor, but I listened.
"'Monsieur,' said she, 'when the Emperor sent the Spaniards here,
prisoners of war and others, I was required to lodge at the charge
of the Government a young Spaniard sent to Vendome on parole.
Notwithstanding his parole, he had to show himself every day to the
sub-prefect. He was a Spanish grandee--neither more nor less. He had
a name in _os_ and _dia_, something like Bagos de Feredia. I wrote his
name down in my books, and you may see it if you like. Ah! he was a
handsome young fellow for a Spaniard, who are all ugly they say. He was
not more than five feet two or three in height, but so well made; and he
had little hands that he kept so beautifully! Ah! you should have
seen them. He had as many brushes for his hands as a woman has for her
toilet. He had thick, black hair, a flame in his eye, a somewhat coppery
complexion, but which I admired all the same. He wore the finest linen
I have ever seen, though I have had princesses to lodge here, and, among
others, General Bertrand, the Duc and Duchesse d'Abrantes, Monsieur
Descazes, and the King of Spain. He did not eat much, but he had such
polite and amiable ways that it was impossible to owe him a grudge for
that. Oh! I was very fond of him, though he did not say four words to me
in a day, and it was impossible to have the least bit of talk with him;
if he was spoken to, he did not answer; it is a way, a mania they all
have, it would seem.
"'He read his breviary like a priest, and went to mass and all the
services quite regularly. And where did he post himself?--we found this
out later.--Within two yards of Madame de Merret's chapel. As he took
that place the very first time he entered the church, no one imagined
that there was any purpose in it. Besides, he never raised his nose
above his book, poor young man! And then, monsieur, of an evening he
went for a walk on the hill among the ruins of the old castle. It was
his only amusement, poor man; it reminded him of his native land. They
say that Spain is all hills!
"'One evening, a few days after he was sent here, he was out very late.
I was rather uneasy when he did not come in till just on the stroke of
midnight; but we all got used to his whims; he took the key of the door,
and we never sat up for him. He lived in a house belonging to us in the
Rue des Casernes. Well, then, one of our stable-boys told us one evening
that, going down to wash the horses in the river, he fancied he had seen
the Spanish Grandee swimming some little way off, just like a fish. When
he came in, I told him to be careful of the weeds, and he seemed put out
at having been seen in the water.
"'At last, monsieur, one day, or rather one morning, we did not find
him in his room; he had not come back. By hunting through his things, I
found a written paper in the drawer of his table, with fifty pieces of
Spanish gold of the kind they call doubloons, worth about five thousand
francs; and in a little sealed box ten thousand francs worth of
diamonds. The paper said that in case he should not return, he left us
this money and these diamonds in trust to found masses to thank God for
his escape and for his salvation.
"'At that time I still had my husband, who ran off in search of him.
And this is the queer part of the story: he brought back the Spaniard's
clothes, which he had found under a big stone on a sort of breakwater
along the river bank, nearly opposite la Grande Breteche. My husband
went so early that no one saw him. After reading the letter, he burnt
the clothes, and, in obedience to Count Feredia's wish, we announced
that he had escaped.
"'The sub-prefect set all the constabulary at his heels; but, pshaw! he
was never caught. Lepas believed that the Spaniard had drowned himself.
I, sir, have never thought so; I believe, on the contrary, that he had
something to do with the business about Madame de Merret, seeing that
Rosalie told me that the crucifix her mistress was so fond of that she
had it buried with her, was made of ebony and silver; now in the early
days of his stay here, Monsieur Feredia had one of ebony and silver
which I never saw later.--And now, monsieur, do not you say that I need
have no remorse about the Spaniard's fifteen thousand francs? Are they
not really and truly mine?'
"'Certainly.--But have you never tried to question Rosalie?' said I.
"'Oh, to be sure I have, sir. But what is to be done? That girl is like
a wall. She knows something, but it is impossible to make her talk.'
"After chatting with me for a few minutes, my hostess left me a prey
to vague and sinister thoughts, to romantic curiosity, and a religious
dread, not unlike the deep emotion which comes upon us when we go into a
dark church at night and discern a feeble light glimmering under a lofty
vault--a dim figure glides across--the sweep of a gown or of a priest's
cassock is audible--and we shiver! La Grande Breteche, with its rank
grasses, its shuttered windows, its rusty iron-work, its locked doors,
its deserted rooms, suddenly rose before me in fantastic vividness. I
tried to get into the mysterious dwelling to search out the heart of
this solemn story, this drama which had killed three persons.
"Rosalie became in my eyes the most interesting being in Vendome. As
I studied her, I detected signs of an inmost thought, in spite of the
blooming health that glowed in her dimpled face. There was in her soul
some element of ruth or of hope; her manner suggested a secret, like
the expression of devout souls who pray in excess, or of a girl who has
killed her child and for ever hears its last cry. Nevertheless, she was
simple and clumsy in her ways; her vacant smile had nothing criminal
in it, and you would have pronounced her innocent only from seeing the
large red and blue checked kerchief that covered her stalwart bust,
tucked into the tight-laced bodice of a lilac- and white-striped gown.
'No,' said I to myself, 'I will not quit Vendome without knowing the
whole history of la Grande Breteche. To achieve this end, I will make
love to Rosalie if it proves necessary.'
"'Rosalie!' said I one evening.
"'Your servant, sir?'
"'You are not married?' She started a little.
"'Oh! there is no lack of men if ever I take a fancy to be miserable!'
she replied, laughing. She got over her agitation at once; for every
woman, from the highest lady to the inn-servant inclusive, has a native
presence of mind.
"'Yes; you are fresh and good-looking enough never to lack lovers! But
tell me, Rosalie, why did you become an inn-servant on leaving Madame de
Merret? Did she not leave you some little annuity?'
"'Oh yes, sir. But my place here is the best in all the town of
Vendome.'
"This reply was such an one as judges and attorneys call evasive.
Rosalie, as it seemed to me, held in this romantic affair the place of
the middle square of the chess-board: she was at the very centre of the
interest and of the truth; she appeared to me to be tied into the knot
of it. It was not a case for ordinary love-making; this girl contained
the last chapter of a romance, and from that moment all my attentions
were devoted to Rosalie. By dint of studying the girl, I observed in
her, as in every woman whom we make our ruling thought, a variety of
good qualities; she was clean and neat; she was handsome, I need not
say; she soon was possessed of every charm that desire can lend to a
woman in whatever rank of life. A fortnight after the notary's visit,
one evening, or rather one morning, in the small hours, I said to
Rosalie:
"'Come, tell me all you know about Madame de Merret.'
"'Oh!' she said, 'I will tell you; but keep the secret carefully.'
"'All right, my child; I will keep all your secrets with a thief's
honor, which is the most loyal known.'
"'If it is all the same to you,' said she, 'I would rather it should be
with your own.'
"Thereupon she set her head-kerchief straight, and settled herself to
tell the tale; for there is no doubt a particular attitude of confidence
and security is necessary to the telling of a narrative. The best tales
are told at a certain hour--just as we are all here at table. No one
ever told a story well standing up, or fasting.
"If I were to reproduce exactly Rosalie's diffuse eloquence, a whole
volume would scarcely contain it. Now, as the event of which she gave me
a confused account stands exactly midway between the notary's gossip and
that of Madame Lepas, as precisely as the middle term of a rule-of-three
sum stands between the first and third, I have only to relate it in as
few words as may be. I shall therefore be brief.
"The room at la Grande Breteche in which Madame de Merret slept was on
the ground floor; a little cupboard in the wall, about four feet deep,
served her to hang her dresses in. Three months before the evening of
which I have to relate the events, Madame de Merret had been seriously
ailing, so much so that her husband had left her to herself, and had his
own bedroom on the first floor. By one of those accidents which it is
impossible to foresee, he came in that evening two hours later than
usual from the club, where he went to read the papers and talk politics
with the residents in the neighborhood. His wife supposed him to have
come in, to be in bed and asleep. But the invasion of France had been
the subject of a very animated discussion; the game of billiards had
waxed vehement; he had lost forty francs, an enormous sum at Vendome,
where everybody is thrifty, and where social habits are restrained
within the bounds of a simplicity worthy of all praise, and the
foundation perhaps of a form of true happiness which no Parisian would
care for.
"For some time past Monsieur de Merret had been satisfied to ask Rosalie
whether his wife was in bed; on the girl's replying always in the
affirmative, he at once went to his own room, with the good faith that
comes of habit and confidence. But this evening, on coming in, he took
it into his head to go to see Madame de Merret, to tell her of his
ill-luck, and perhaps to find consolation. During dinner he had observed
that his wife was very becomingly dressed; he reflected as he came
home from the club that his wife was certainly much better, that
convalescence had improved her beauty, discovering it, as husbands
discover everything, a little too late. Instead of calling Rosalie,
who was in the kitchen at the moment watching the cook and the coachman
playing a puzzling hand at cards, Monsieur de Merret made his way to his
wife's room by the light of his lantern, which he set down at the lowest
step of the stairs. His step, easy to recognize, rang under the vaulted
passage.
"At the instant when the gentleman turned the key to enter his wife's
room, he fancied he heard the door shut of the closet of which I have
spoken; but when he went in, Madame de Merret was alone, standing in
front of the fireplace. The unsuspecting husband fancied that Rosalie
was in the cupboard; nevertheless, a doubt, ringing in his ears like a
peal of bells, put him on his guard; he looked at his wife, and read in
her eyes an indescribably anxious and haunted expression.
"'You are very late,' said she.--Her voice, usually so clear and sweet,
struck him as being slightly husky.
"Monsieur de Merret made no reply, for at this moment Rosalie came in.
This was like a thunder-clap. He walked up and down the room, going from
one window to another at a regular pace, his arms folded.
"'Have you had bad news, or are you ill?' his wife asked him timidly,
while Rosalie helped her to undress. He made no reply.
"'You can go, Rosalie,' said Madame de Merret to her maid; 'I can put in
my curl-papers myself.'--She scented disaster at the mere aspect of her
husband's face, and wished to be alone with him. As soon as Rosalie
was gone, or supposed to be gone, for she lingered a few minutes in the
passage, Monsieur de Merret came and stood facing his wife, and said
coldly, 'Madame, there is some one in your cupboard!' She looked at her
husband calmly, and replied quite simply, 'No, monsieur.'
"This 'No' wrung Monsieur de Merret's heart; he did not believe it; and
yet his wife had never appeared purer or more saintly than she seemed
to be at this moment. He rose to go and open the closet door. Madame de
Merret took his hand, stopped him, looked at him sadly, and said in a
voice of strange emotion, 'Remember, if you should find no one there,
everything must be at an end between you and me.'
"The extraordinary dignity of his wife's attitude filled him with deep
esteem for her, and inspired him with one of those resolves which need
only a grander stage to become immortal.
"'No, Josephine,' he said, 'I will not open it. In either event we
should be parted for ever. Listen; I know all the purity of your soul, I
know you lead a saintly life, and would not commit a deadly sin to save
your life.'--At these words Madame de Merret looked at her husband with
a haggard stare.--'See, here is your crucifix,' he went on. 'Swear to
me before God that there is no one in there; I will believe you--I will
never open that door.'
"Madame de Merret took up the crucifix and said, 'I swear it.'
"'Louder,' said her husband; 'and repeat: "I swear before God that there
is nobody in that closet."' She repeated the words without flinching.
"'That will do,' said Monsieur de Merret coldly. After a moment's
silence: 'You have there a fine piece of work which I never saw before,'
said he, examining the crucifix of ebony and silver, very artistically
wrought.
"'I found it at Duvivier's; last year when that troop of Spanish
prisoners came through Vendome, he bought it of a Spanish monk.'
"'Indeed,' said Monsieur de Merret, hanging the crucifix on its nail;
and he rang the bell.
"He had to wait for Rosalie. Monsieur de Merret went forward quickly
to meet her, led her into the bay of the window that looked on to the
garden, and said to her in an undertone:
"'I know that Gorenflot wants to marry you, that poverty alone prevents
your setting up house, and that you told him you would not be his wife
till he found means to become a master mason.--Well, go and fetch him;
tell him to come here with his trowel and tools. Contrive to wake no one
in his house but himself. His reward will be beyond your wishes. Above
all, go out without saying a word--or else!' and he frowned.
"Rosalie was going, and he called her back. 'Here, take my latch-key,'
said he.
"'Jean!' Monsieur de Merret called in a voice of thunder down the
passage. Jean, who was both coachman and confidential servant, left his
cards and came.
"'Go to bed, all of you,' said his master, beckoning him to come close;
and the gentleman added in a whisper, 'When they are all asleep--mind,
_asleep_--you understand?--come down and tell me.'
"Monsieur de Merret, who had never lost sight of his wife while giving
his orders, quietly came back to her at the fireside, and began to tell
her the details of the game of billiards and the discussion at the club.
When Rosalie returned she found Monsieur and Madame de Merret conversing
amiably.
"Not long before this Monsieur de Merret had had new ceilings made to
all the reception-rooms on the ground floor. Plaster is very scarce at
Vendome; the price is enhanced by the cost of carriage; the gentleman
had therefore had a considerable quantity delivered to him, knowing
that he could always find purchasers for what might be left. It was this
circumstance which suggested the plan he carried out.
"'Gorenflot is here, sir,' said Rosalie in a whisper.
"'Tell him to come in,' said her master aloud.
"Madame de Merret turned paler when she saw the mason.
"'Gorenflot,' said her husband, 'go and fetch some bricks from the
coach-house; bring enough to wall up the door of this cupboard; you can
use the plaster that is left for cement.' Then, dragging Rosalie and the
workman close to him--'Listen, Gorenflot,' said he, in a low voice,
'you are to sleep here to-night; but to-morrow morning you shall have a
passport to take you abroad to a place I will tell you of. I will give
you six thousand francs for your journey. You must live in that town for
ten years; if you find you do not like it, you may settle in another,
but it must be in the same country. Go through Paris and wait there till
I join you. I will there give you an agreement for six thousand francs
more, to be paid to you on your return, provided you have carried out
the conditions of the bargain. For that price you are to keep perfect
silence as to what you have to do this night. To you, Rosalie, I will
secure ten thousand francs, which will not be paid to you till your
wedding day, and on condition of your marrying Gorenflot; but, to get
married, you must hold your tongue. If not, no wedding gift!'
"'Rosalie,' said Madame de Merret, 'come and brush my hair.'
"Her husband quietly walked up and down the room, keeping an eye on the
door, on the mason, and on his wife, but without any insulting display
of suspicion. Gorenflot could not help making some noise. Madame de
Merret seized a moment when he was unloading some bricks, and when her
husband was at the other end of the room to say to Rosalie: 'My dear
child, I will give you a thousand francs a year if only you will tell
Gorenflot to leave a crack at the bottom.' Then she added aloud quite
coolly: 'You had better help him.'
"Monsieur and Madame de Merret were silent all the time while Gorenflot
was walling up the door. This silence was intentional on the husband's
part; he did not wish to give his wife the opportunity of saying
anything with a double meaning. On Madame de Merret's side it was pride
or prudence. When the wall was half built up the cunning mason took
advantage of his master's back being turned to break one of the two
panes in the top of the door with a blow of his pick. By this Madame de
Merret understood that Rosalie had spoken to Gorenflot. They all three
then saw the face of a dark, gloomy-looking man, with black hair and
flaming eyes.
"Before her husband turned round again the poor woman had nodded to the
stranger, to whom the signal was meant to convey, 'Hope.'
"At four o'clock, as the day was dawning, for it was the month of
September, the work was done. The mason was placed in charge of Jean,
and Monsieur de Merret slept in his wife's room.
"Next morning when he got up he said with apparent carelessness, 'Oh,
by the way, I must go to the Maire for the passport.' He put on his hat,
took two or three steps towards the door, paused, and took the crucifix.
His wife was trembling with joy.
"'He will go to Duvivier's,' thought she.
"As soon as he had left, Madame de Merret rang for Rosalie, and then in
a terrible voice she cried: 'The pick! Bring the pick! and set to work.
I saw how Gorenflot did it yesterday; we shall have time to make a gap
and build it up again.'
"In an instant Rosalie had brought her mistress a sort of cleaver; she,
with a vehemence of which no words can give an idea, set to work to
demolish the wall. She had already got out a few bricks, when, turning
to deal a stronger blow than before, she saw behind her Monsieur de
Merret. She fainted away.
"'Lay madame on her bed,' said he coldly.
"Foreseeing what would certainly happen in his absence, he had laid
this trap for his wife; he had merely written to the Maire and sent for
Duvivier. The jeweler arrived just as the disorder in the room had been
repaired.
"'Duvivier,' asked Monsieur de Merret, 'did not you buy some crucifixes
of the Spaniards who passed through the town?'
"'No, monsieur.'
"'Very good; thank you,' said he, flashing a tiger's glare at his wife.
'Jean,' he added, turning to his confidential valet, 'you can serve my
meals here in Madame de Merret's room. She is ill, and I shall not leave
her till she recovers.'
"The cruel man remained in his wife's room for twenty days. During
the earlier time, when there was some little noise in the closet,
and Josephine wanted to intercede for the dying man, he said, without
allowing her to utter a word, 'You swore on the Cross that there was no
one there.'"
After this story all the ladies rose from table, and thus the spell
under which Bianchon had held them was broken. But there were some among
them who had almost shivered at the last words.
ADDENDUM
The following personage appears in other stories of the Human Comedy.
Bianchon, Horace
Father Goriot
The Atheist's Mass
Cesar Birotteau
The Commission in Lunacy
Lost Illusions
A Distinguished Provincial at Paris
A Bachelor's Establishment
The Secrets of a Princess
The Government Clerks
Pierrette
A Study of Woman
Scenes from a Courtesan's Life
Honorine
The Seamy Side of History
The Magic Skin
A Second Home
A Prince of Bohemia
Letters of Two Brides
The Muse of the Department
The Imaginary Mistress
The Middle Classes
Cousin Betty
The Country Parson
In addition, M. Bianchon narrated the following:
Another Study of Woman
End of the Project Gutenberg EBook of La Grande Breteche, by Honore de Balzac
Produced by John Bickers and Dagny
PIERRE GRASSOU
By Honore De Balzac
Translated by Katharine Prescott Wormeley
Dedication
To The Lieutenant-Colonel of Artillery, Periollas, As a Testimony of the
Affectionate Esteem of the Author,
De Balzac
PIERRE GRASSOU
Whenever you have gone to take a serious look at the exhibition of works
of sculpture and painting, such as it has been since the revolution
of 1830, have you not been seized by a sense of uneasiness, weariness,
sadness, at the sight of those long and over-crowded galleries? Since
1830, the true Salon no longer exists. The Louvre has again been taken
by assault,--this time by a populace of artists who have maintained
themselves in it.
In other days, when the Salon presented only the choicest works of art,
it conferred the highest honor on the creations there exhibited. Among
the two hundred selected paintings, the public could still choose: a
crown was awarded to the masterpiece by hands unseen. Eager, impassioned
discussions arose about some picture. The abuse showered on Delacroix,
on Ingres, contributed no less to their fame than the praises and
fanaticism of their adherents. To-day, neither the crowd nor the
criticism grows impassioned about the products of that bazaar. Forced to
make the selection for itself, which in former days the examining
jury made for it, the attention of the public is soon wearied and the
exhibition closes. Before the year 1817 the pictures admitted never went
beyond the first two columns of the long gallery of the old masters; but
in that year, to the great astonishment of the public, they filled the
whole space. Historical, high-art, genre paintings, easel pictures,
landscapes, flowers, animals, and water-colors,--these eight specialties
could surely not offer more than twenty pictures in one year worthy of
the eyes of the public, which, indeed, cannot give its attention to a
greater number of such works. The more the number of artists increases,
the more careful and exacting the jury of admission ought to be.
The true character of the Salon was lost as soon as it spread along
the galleries. The Salon should have remained within fixed limits of
inflexible proportions, where each distinct specialty could show its
masterpieces only. An experience of ten years has shown the excellence
of the former institution. Now, instead of a tournament, we have a mob;
instead of a noble exhibition, we have a tumultuous bazaar; instead of
a choice selection we have a chaotic mass. What is the result? A great
artist is swamped. Decamps' "Turkish Cafe," "Children at a Fountain,"
"Joseph," and "The Torture," would have redounded far more to his credit
if the four pictures had been exhibited in the great Salon with the
hundred good pictures of that year, than his twenty pictures could,
among three thousand others, jumbled together in six galleries.
By some strange contradiction, ever since the doors are open to every
one there has been much talk of unknown and unrecognized genius. When,
twelve years earlier, Ingres' "Courtesan," and that of Sigalon, the
"Medusa" of Gericault, the "Massacre of Scio" by Delacroix, the "Baptism
of Henri IV." by Eugene Deveria, admitted by celebrated artists accused
of jealousy, showed the world, in spite of the denials of criticism,
that young and vigorous palettes existed, no such complaint was made.
Now, when the veriest dauber of canvas can send in his work, the whole
talk is of genius neglected! Where judgment no longer exists, there is
no longer anything judged. But whatever artists may be doing now, they
will come back in time to the examination and selection which presents
their works to the admiration of the crowd for whom they work. Without
selection by the Academy there will be no Salon, and without the Salon
art may perish.
Ever since the catalogue has grown into a book, many names have appeared
in it which still remain in their native obscurity, in spite of the ten
or a dozen pictures attached to them. Among these names perhaps the most
unknown to fame is that of an artist named Pierre Grassou, coming from
Fougeres, and called simply "Fougeres" among his brother-artists, who,
at the present moment holds a place, as the saying is, "in the sun," and
who suggested the rather bitter reflections by which this sketch of
his life is introduced,--reflections that are applicable to many other
individuals of the tribe of artists.
In 1832, Fougeres lived in the rue de Navarin, on the fourth floor of
one of those tall, narrow houses which resemble the obelisk of Luxor,
and possess an alley, a dark little stairway with dangerous turnings,
three windows only on each floor, and, within the building, a courtyard,
or, to speak more correctly, a square pit or well. Above the three or
four rooms occupied by Grassou of Fougeres was his studio, looking over
to Montmartre. This studio was painted in brick-color, for a background;
the floor was tinted brown and well frotted; each chair was furnished
with a bit of carpet bound round the edges; the sofa, simple enough, was
clean as that in the bedroom of some worthy bourgeoise. All these things
denoted the tidy ways of a small mind and the thrift of a poor man. A
bureau was there, in which to put away the studio implements, a table
for breakfast, a sideboard, a secretary; in short, all the articles
necessary to a painter, neatly arranged and very clean. The stove
participated in this Dutch cleanliness, which was all the more visible
because the pure and little changing light from the north flooded with
its cold clear beams the vast apartment. Fougeres, being merely a genre
painter, does not need the immense machinery and outfit which ruin
historical painters; he has never recognized within himself sufficient
faculty to attempt high-art, and he therefore clings to easel painting.
At the beginning of the month of December of that year, a season at
which the bourgeois of Paris conceive, periodically, the burlesque idea
of perpetuating their forms and figures already too bulky in themselves,
Pierre Grassou, who had risen early, prepared his palette, and lighted
his stove, was eating a roll steeped in milk, and waiting till the frost
on his windows had melted sufficiently to let the full light in. The
weather was fine and dry. At this moment the artist, who ate his bread
with that patient, resigned air that tells so much, heard and recognized
the step of a man who had upon his life the influence such men have
on the lives of nearly all artists,--the step of Elie Magus, a
picture-dealer, a usurer in canvas. The next moment Elie Magus entered
and found the painter in the act of beginning his work in the tidy
studio.
"How are you, old rascal?" said the painter.
Fougeres had the cross of the Legion of honor, and Elie Magus bought his
pictures at two and three hundred francs apiece, so he gave himself the
airs of a fine artist.
"Business is very bad," replied Elie. "You artists have such
pretensions! You talk of two hundred francs when you haven't put six
sous' worth of color on a canvas. However, you are a good fellow, I'll
say that. You are steady; and I've come to put a good bit of business in
your way."
"Timeo Danaos et dona ferentes," said Fougeres. "Do you know Latin?"
"No."
"Well, it means that the Greeks never proposed a good bit of business
to the Trojans without getting their fair share of it. In the olden time
they used to say, 'Take my horse.' Now we say, 'Take my bear.' Well,
what do you want, Ulysses-Lagingeole-Elie Magus?"
These words will give an idea of the mildness and wit with which
Fougeres employed what painters call studio fun.
"Well, I don't deny that you are to paint me two pictures for nothing."
"Oh! oh!"
"I'll leave you to do it, or not; I don't ask it. But you're an honest
man."
"Come, out with it!"
"Well, I'm prepared to bring you a father, mother, and only daughter."
"All for me?"
"Yes--they want their portraits taken. These bourgeois--they are crazy
about art--have never dared to enter a studio. The girl has a 'dot' of a
hundred thousand francs. You can paint all three,--perhaps they'll turn
out family portraits."
And with that the old Dutch log of wood who passed for a man and who was
called Elie Magus, interrupted himself to laugh an uncanny laugh which
frightened the painter. He fancied he heard Mephistopheles talking
marriage.
"Portraits bring five hundred francs apiece," went on Elie; "so you can
very well afford to paint me three pictures."
"True for you!" cried Fougeres, gleefully.
"And if you marry the girl, you won't forget me."
"Marry! I?" cried Pierre Grassou,--"I, who have a habit of sleeping
alone; and get up at cock-crow, and all my life arranged--"
"One hundred thousand francs," said Magus, "and a quiet girl, full of
golden tones, as you call 'em, like a Titian."
"What class of people are they?"
"Retired merchants; just now in love with art; have a country-house at
Ville d'Avray, and ten or twelve thousand francs a year."
"What business did they do?"
"Bottles."
"Now don't say that word; it makes me think of corks and sets my teeth
on edge."
"Am I to bring them?"
"Three portraits--I could put them in the Salon; I might go in for
portrait-painting. Well, yes!"
Old Elie descended the staircase to go in search of the Vervelle family.
To know to what extent this proposition would act upon the painter, and
what effect would be produced upon him by the Sieur and Dame Vervelle,
adorned by their only daughter, it is necessary to cast an eye on the
anterior life of Pierre Grassou of Fougeres.
When a pupil, Fougeres had studied drawing with Servin, who was
thought a great draughtsman in academic circles. After that he went to
Schinner's, to learn the secrets of the powerful and magnificent color
which distinguishes that master. Master and scholars were all discreet;
at any rate Pierre discovered none of their secrets. From there he went
to Sommervieux' atelier, to acquire that portion of the art of painting
which is called composition, but composition was shy and distant to him.
Then he tried to snatch from Decamps and Granet the mystery of their
interior effects. The two masters were not robbed. Finally Fougeres
ended his education with Duval-Lecamus. During these studies and
these different transformations Fougeres' habits and ways of life were
tranquil and moral to a degree that furnished matter of jesting to the
various ateliers where he sojourned; but everywhere he disarmed his
comrades by his modesty and by the patience and gentleness of a lamblike
nature. The masters, however, had no sympathy for the good lad; masters
prefer bright fellows, eccentric spirits, droll or fiery, or else gloomy
and deeply reflective, which argue future talent. Everything about
Pierre Grassou smacked of mediocrity. His nickname "Fougeres" (that
of the painter in the play of "The Eglantine") was the source of much
teasing; but, by force of circumstances, he accepted the name of the
town in which he had first seen light.
Grassou of Fougeres resembled his name. Plump and of medium height, he
had a dull complexion, brown eyes, black hair, a turned-up nose, rather
wide mouth, and long ears. His gentle, passive, and resigned air gave a
certain relief to these leading features of a physiognomy that was full
of health, but wanting in action. This young man, born to be a virtuous
bourgeois, having left his native place and come to Paris to be clerk
with a color-merchant (formerly of Mayenne and a distant connection of
the Orgemonts) made himself a painter simply by the fact of an obstinacy
which constitutes the Breton character. What he suffered, the manner in
which he lived during those years of study, God only knows. He suffered
as much as great men suffer when they are hounded by poverty and hunted
like wild beasts by the pack of commonplace minds and by troops of
vanities athirst for vengeance.
As soon as he thought himself able to fly on his own wings, Fougeres
took a studio in the upper part of the rue des Martyrs, where he began
to delve his way. He made his first appearance in 1819. The first
picture he presented to the jury of the Exhibition at the Louvre
represented a village wedding rather laboriously copied from Greuze's
picture. It was rejected. When Fougeres heard of the fatal decision,
he did not fall into one of those fits of epileptic self-love to which
strong natures give themselves up, and which sometimes end in challenges
sent to the director or the secretary of the Museum, or even by threats
of assassination. Fougeres quietly fetched his canvas, wrapped it in
a handkerchief, and brought it home, vowing in his heart that he would
still make himself a great painter. He placed his picture on the easel,
and went to one of his former masters, a man of immense talent,--to
Schinner, a kind and patient artist, whose triumph at that year's Salon
was complete. Fougeres asked him to come and criticise the rejected
work. The great painter left everything and went at once. When poor
Fougeres had placed the work before him Schinner, after a glance,
pressed Fougeres' hand.
"You are a fine fellow," he said; "you've a heart of gold, and I must
not deceive you. Listen; you are fulfilling all the promises you made in
the studios. When you find such things as that at the tip of your brush,
my good Fougeres, you had better leave colors with Brullon, and not take
the canvas of others. Go home early, put on your cotton night-cap, and
be in bed by nine o'clock. The next morning early go to some government
office, ask for a place, and give up art."
"My dear friend," said Fougeres, "my picture is already condemned; it is
not a verdict that I want of you, but the cause of that verdict."
"Well--you paint gray and sombre; you see nature being a crape veil;
your drawing is heavy, pasty; your composition is a medley of Greuze,
who only redeemed his defects by the qualities which you lack."
While detailing these faults of the picture Schinner saw on Fougeres'
face so deep an expression of sadness that he carried him off to dinner
and tried to console him. The next morning at seven o'clock Fougeres was
at his easel working over the rejected picture; he warmed the colors; he
made the corrections suggested by Schinner, he touched up his figures.
Then, disgusted with such patching, he carried the picture to Elie
Magus. Elie Magus, a sort of Dutch-Flemish-Belgian, had three reasons
for being what he became,--rich and avaricious. Coming last from
Bordeaux, he was just starting in Paris, selling old pictures and living
on the boulevard Bonne-Nouvelle. Fougeres, who relied on his palette
to go to the baker's, bravely ate bread and nuts, or bread and milk, or
bread and cherries, or bread and cheese, according to the seasons. Elie
Magus, to whom Pierre offered his first picture, eyed it for some time
and then gave him fifteen francs.
"With fifteen francs a year coming in, and a thousand francs for
expenses," said Fougeres, smiling, "a man will go fast and far."
Elie Magus made a gesture; he bit his thumbs, thinking that he might
have had that picture for five francs.
For several days Pierre walked down from the rue des Martyrs and
stationed himself at the corner of the boulevard opposite to Elie's
shop, whence his eye could rest upon his picture, which did not obtain
any notice from the eyes of the passers along the street. At the end of
a week the picture disappeared; Fougeres walked slowly up and approached
the dealer's shop in a lounging manner. The Jew was at his door.
"Well, I see you have sold my picture."
"No, here it is," said Magus; "I've framed it, to show it to some one
who fancies he knows about painting."
Fougeres had not the heart to return to the boulevard. He set about
another picture, and spent two months upon it,--eating mouse's meals and
working like a galley-slave.
One evening he went to the boulevard, his feet leading him fatefully to
the dealer's shop. His picture was not to be seen.
"I've sold your picture," said Elie Magus, seeing him.
"For how much?"
"I got back what I gave and a small interest. Make me some Flemish
interiors, a lesson of anatomy, landscapes, and such like, and I'll buy
them of you," said Elie.
Fougeres would fain have taken old Magus in his arms; he regarded him as
a father. He went home with joy in his heart; the great painter Schinner
was mistaken after all! In that immense city of Paris there were some
hearts that beat in unison with Pierre's; his talent was understood and
appreciated. The poor fellow of twenty-seven had the innocence of a lad
of sixteen. Another man, one of those distrustful, surly artists, would
have noticed the diabolical look on Elie's face and seen the twitching
of the hairs of his beard, the irony of his moustache, and the movement
of his shoulders which betrayed the satisfaction of Walter Scott's Jew
in swindling a Christian.
Fougeres marched along the boulevard in a state of joy which gave to his
honest face an expression of pride. He was like a schoolboy protecting
a woman. He met Joseph Bridau, one of his comrades, and one of those
eccentric geniuses destined to fame and sorrow. Joseph Bridau, who had,
to use his own expression, a few sous in his pocket, took Fougeres to
the Opera. But Fougeres didn't see the ballet, didn't hear the music; he
was imagining pictures, he was painting. He left Joseph in the middle
of the evening, and ran home to make sketches by lamp-light. He invented
thirty pictures, all reminiscence, and felt himself a man of genius. The
next day he bought colors, and canvases of various dimensions; he piled
up bread and cheese on his table, he filled a water-pot with water,
he laid in a provision of wood for his stove; then, to use a studio
expression, he dug at his pictures. He hired several models and Magus
lent him stuffs.
After two months' seclusion the Breton had finished four pictures. Again
he asked counsel of Schinner, this time adding Bridau to the invitation.
The two painters saw in three of these pictures a servile imitation
of Dutch landscapes and interiors by Metzu, in the fourth a copy of
Rembrandt's "Lesson of Anatomy."
"Still imitating!" said Schinner. "Ah! Fougeres can't manage to be
original."
"You ought to do something else than painting," said Bridau.
"What?" asked Fougeres.
"Fling yourself into literature."
Fougeres lowered his head like a sheep when it rains. Then he asked and
obtained certain useful advice, and retouched his pictures before taking
them to Elie Magus. Elie paid him twenty-five francs apiece. At that
price of course Fougeres earned nothing; neither did he lose, thanks to
his sober living. He made a few excursions to the boulevard to see what
became of his pictures, and there he underwent a singular hallucination.
His neat, clean paintings, hard as tin and shiny as porcelain, were
covered with a sort of mist; they looked like old daubs. Magus was out,
and Pierre could obtain no information on this phenomenon. He fancied
something was wrong with his eyes.
The painter went back to his studio and made more pictures. After seven
years of continued toil Fougeres managed to compose and execute quite
passable work. He did as well as any artist of the second class.
Elie bought and sold all the paintings of the poor Breton, who earned
laboriously about two thousand francs a year while he spent but twelve
hundred.
At the Exhibition of 1829, Leon de Lora, Schinner, and Bridau, who all
three occupied a great position and were, in fact, at the head of the
art movement, were filled with pity for the perseverance and the poverty
of their old friend; and they caused to be admitted into the grand salon
of the Exhibition, a picture by Fougeres. This picture, powerful in
interest but derived from Vigneron as to sentiment and from Dubufe's
first manner as to execution, represented a young man in prison, whose
hair was being cut around the nape of the neck. On one side was
a priest, on the other two women, one old, one young, in tears. A
sheriff's clerk was reading aloud a document. On a wretched table was a
meal, untouched. The light came in through the bars of a window near
the ceiling. It was a picture fit to make the bourgeois shudder, and
the bourgeois shuddered. Fougeres had simply been inspired by the
masterpiece of Gerard Douw; he had turned the group of the "Dropsical
Woman" toward the window, instead of presenting it full front. The
condemned man was substituted for the dying woman--same pallor, same
glance, same appeal to God. Instead of the Dutch doctor, he had painted
the cold, official figure of the sheriff's clerk attired in black; but
he had added an old woman to the young one of Gerard Douw. The cruelly
simple and good-humored face of the executioner completed and dominated
the group. This plagiarism, very cleverly disguised, was not discovered.
The catalogue contained the following:--
510. Grassou de Fougeres (Pierre), rue de Navarin, 2.
Death-toilet of a Chouan, condemned to execution in 1809.
Though wholly second-rate, the picture had immense success, for it
recalled the affair of the "chauffeurs," of Mortagne. A crowd collected
every day before the now fashionable canvas; even Charles X. paused to
look at it. "Madame," being told of the patient life of the poor Breton,
became enthusiastic over him. The Duc d'Orleans asked the price of
the picture. The clergy told Madame la Dauphine that the subject was
suggestive of good thoughts; and there was, in truth, a most satisfying
religious tone about it. Monseigneur the Dauphin admired the dust on
the stone-floor,--a huge blunder, by the way, for Fougeres had painted
greenish tones suggestive of mildew along the base of the walls.
"Madame" finally bought the picture for a thousand francs, and the
Dauphin ordered another like it. Charles X. gave the cross of the Legion
of honor to this son of a peasant who had fought for the royal cause
in 1799. (Joseph Bridau, the great painter, was not yet decorated.) The
minister of the Interior ordered two church pictures of Fougeres.
This Salon of 1829 was to Pierre Grassou his whole fortune, fame,
future, and life. Be original, invent, and you die by inches; copy,
imitate, and you'll live. After this discovery of a gold mine, Grassou
de Fougeres obtained his benefit of the fatal principle to which society
owes the wretched mediocrities to whom are intrusted in these days the
election of leaders in all social classes; who proceed, naturally, to
elect themselves and who wage a bitter war against all true talent. The
principle of election applied indiscriminately is false, and France will
some day abandon it.
Nevertheless the modesty, simplicity, and genuine surprise of the good
and gentle Fougeres silenced all envy and all recriminations. Besides,
he had on his side all of his clan who had succeeded, and all who
expected to succeed. Some persons, touched by the persistent energy of a
man whom nothing had discouraged, talked of Domenichino and said:--
"Perseverance in the arts should be rewarded. Grassou hasn't stolen his
successes; he has delved for ten years, the poor dear man!"
That exclamation of "poor dear man!" counted for half in the support
and the congratulations which the painter received. Pity sets up
mediocrities as envy pulls down great talents, and in equal numbers.
The newspapers, it is true, did not spare criticism, but the chevalier
Fougeres digested them as he had digested the counsel of his friends,
with angelic patience.
Possessing, by this time, fifteen thousand francs, laboriously earned,
he furnished an apartment and studio in the rue de Navarin, and painted
the picture ordered by Monseigneur the Dauphin, also the two church
pictures, and delivered them at the time agreed on, with a punctuality
that was very discomforting to the exchequer of the ministry, accustomed
to a different course of action. But--admire the good fortune of men who
are methodical--if Grassou, belated with his work, had been caught by
the revolution of July he would not have got his money.
By the time he was thirty-seven Fougeres had manufactured for Elie Magus
some two hundred pictures, all of them utterly unknown, by the help of
which he had attained to that satisfying manner, that point of execution
before which the true artist shrugs his shoulders and the bourgeoisie
worships. Fougeres was dear to friends for rectitude of ideas, for
steadiness of sentiment, absolute kindliness, and great loyalty; though
they had no esteem for his palette, they loved the man who held it.
"What a misfortune it is that Fougeres has the vice of painting!" said
his comrades.
But for all this, Grassou gave excellent counsel, like those
feuilletonists incapable of writing a book who know very well where a
book is wanting. There was this difference, however, between literary
critics and Fougeres; he was eminently sensitive to beauties; he felt
them, he acknowledged them, and his advice was instinct with a spirit
of justice that made the justness of his remarks acceptable. After
the revolution of July, Fougeres sent about ten pictures a year to the
Salon, of which the jury admitted four or five. He lived with the most
rigid economy, his household being managed solely by an old charwoman.
For all amusement he visited his friends, he went to see works of art,
he allowed himself a few little trips about France, and he planned to go
to Switzerland in search of inspiration. This detestable artist was an
excellent citizen; he mounted guard duly, went to reviews, and paid his
rent and provision-bills with bourgeois punctuality.
Having lived all his life in toil and poverty, he had never had the time
to love. Poor and a bachelor, until now he did not desire to complicate
his simple life. Incapable of devising any means of increasing his
little fortune, he carried, every three months, to his notary, Cardot,
his quarterly earnings and economies. When the notary had received
about three thousand francs he invested them in some first mortgage, the
interest of which he drew himself and added to the quarterly payments
made to him by Fougeres. The painter was awaiting the fortunate moment
when his property thus laid by would give him the imposing income of two
thousand francs, to allow himself the otium cum dignitate of the
artist and paint pictures; but oh! what pictures! true pictures! each a
finished picture! chouette, Koxnoff, chocnosoff! His future, his dreams
of happiness, the superlative of his hopes--do you know what it was?
To enter the Institute and obtain the grade of officer of the Legion
of honor; to sit down beside Schinner and Leon de Lora, to reach the
Academy before Bridau, to wear a rosette in his buttonhole! What a
dream! It is only commonplace men who think of everything.
Hearing the sound of several steps on the staircase, Fougeres rubbed up
his hair, buttoned his jacket of bottle-green velveteen, and was not a
little amazed to see, entering his doorway, a simpleton face vulgarly
called in studio slang a "melon." This fruit surmounted a pumpkin,
clothed in blue cloth adorned with a bunch of tintinnabulating baubles.
The melon puffed like a walrus; the pumpkin advanced on turnips,
improperly called legs. A true painter would have turned the little
bottle-vendor off at once, assuring him that he didn't paint vegetables.
This painter looked at his client without a smile, for Monsieur Vervelle
wore a three-thousand-franc diamond in the bosom of his shirt.
Fougeres glanced at Magus and said: "There's fat in it!" using a slang
term then much in vogue in the studios.
Hearing those words Monsieur Vervelle frowned. The worthy bourgeois drew
after him another complication of vegetables in the persons of his wife
and daughter. The wife had a fine veneer of mahogany on her face, and
in figure she resembled a cocoa-nut, surmounted by a head and tied in
around the waist. She pivoted on her legs, which were tap-rooted,
and her gown was yellow with black stripes. She proudly exhibited
unutterable mittens on a puffy pair of hands; the plumes of a
first-class funeral floated on an over-flowing bonnet; laces adorned
her shoulders, as round behind as they were before; consequently, the
spherical form of the cocoa-nut was perfect. Her feet, of a kind that
painters call abatis, rose above the varnished leather of the shoes in a
swelling that was some inches high. How the feet were ever got into the
shoes, no one knows.
Following these vegetable parents was a young asparagus, who presented
a tiny head with smoothly banded hair of the yellow-carroty tone that a
Roman adores, long, stringy arms, a fairly white skin with reddish spots
upon it, large innocent eyes, and white lashes, scarcely any brows, a
leghorn bonnet bound with white satin and adorned with two honest bows
of the same satin, hands virtuously red, and the feet of her mother. The
faces of these three beings wore, as they looked round the studio, an
air of happiness which bespoke in them a respectable enthusiasm for Art.
"So it is you, monsieur, who are going to take our likenesses?" said the
father, assuming a jaunty air.
"Yes, monsieur," replied Grassou.
"Vervelle, he has the cross!" whispered the wife to the husband while
the painter's back was turned.
"Should I be likely to have our portraits painted by an artist who
wasn't decorated?" returned the former bottle-dealer.
Elie Magus here bowed to the Vervelle family and went away. Grassou
accompanied him to the landing.
"There's no one but you who would fish up such whales."
"One hundred thousand francs of 'dot'!"
"Yes, but what a family!"
"Three hundred thousand francs of expectations, a house in the rue
Boucherat, and a country-house at Ville d'Avray!"
"Bottles and corks! bottles and corks!" said the painter; "they set my
teeth on edge."
"Safe from want for the rest of your days," said Elie Magus as he
departed.
That idea entered the head of Pierre Grassou as the daylight had burst
into his garret that morning.
While he posed the father of the young person, he thought the
bottle-dealer had a good countenance, and he admired the face full
of violent tones. The mother and daughter hovered about the easel,
marvelling at all his preparations; they evidently thought him a
demigod. This visible admiration pleased Fougeres. The golden calf threw
upon the family its fantastic reflections.
"You must earn lots of money; but of course you don't spend it as you
get it," said the mother.
"No, madame," replied the painter; "I don't spend it; I have not the
means to amuse myself. My notary invests my money; he knows what I have;
as soon as I have taken him the money I never think of it again."
"I've always been told," cried old Vervelle, "that artists were baskets
with holes in them."
"Who is your notary--if it is not indiscreet to ask?" said Madame
Vervelle.
"A good fellow, all round," replied Grassou. "His name is Cardot."
"Well, well! if that isn't a joke!" exclaimed Vervelle. "Cardot is our
notary too."
"Take care! don't move," said the painter.
"Do pray hold still, Antenor," said the wife. "If you move about you'll
make monsieur miss; you should just see him working, and then you'd
understand."
"Oh! why didn't you have me taught the arts?" said Mademoiselle Vervelle
to her parents.
"Virginie," said her mother, "a young person ought not to learn certain
things. When you are married--well, till then, keep quiet."
During this first sitting the Vervelle family became almost intimate
with the worthy artist. They were to come again two days later. As they
went away the father told Virginie to walk in front; but in spite of
this separation, she overheard the following words, which naturally
awakened her curiosity.
"Decorated--thirty-seven years old--an artist who gets orders--puts his
money with our notary. We'll consult Cardot. Hein! Madame de Fougeres!
not a bad name--doesn't look like a bad man either! One might prefer a
merchant; but before a merchant retires from business one can never know
what one's daughter may come to; whereas an economical artist--and then
you know we love Art--Well, we'll see!"
While the Vervelle family discussed Pierre Grassou, Pierre Grassou
discussed in his own mind the Vervelle family. He found it impossible to
stay peacefully in his studio, so he took a walk on the boulevard, and
looked at all the red-haired women who passed him. He made a series of
the oddest reasonings to himself: gold was the handsomest of metals; a
tawny yellow represented gold; the Romans were fond of red-haired women,
and he turned Roman, etc. After two years of marriage what man would
ever care about the color of his wife's hair? Beauty fades,--but
ugliness remains! Money is one-half of all happiness. That night when he
went to bed the painter had come to think Virginie Vervelle charming.
When the three Vervelles arrived on the day of the second sitting the
artist received them with smiles. The rascal had shaved and put on clean
linen; he had also arranged his hair in a pleasing manner, and chosen
a very becoming pair of trousers and red leather slippers with pointed
toes. The family replied with smiles as flattering as those of the
artist. Virginie became the color of her hair, lowered her eyes, and
turned aside her head to look at the sketches. Pierre Grassou thought
these little affectations charming, Virginie had such grace; happily she
didn't look like her father or her mother; but whom did she look like?
During this sitting there were little skirmishes between the family
and the painter, who had the audacity to call pere Vervelle witty. This
flattery brought the family on the double-quick to the heart of the
artist; he gave a drawing to the daughter, and a sketch to the mother.
"What! for nothing?" they said.
Pierre Grassou could not help smiling.
"You shouldn't give away your pictures in that way; they are money,"
said old Vervelle.
At the third sitting pere Vervelle mentioned a fine gallery of pictures
which he had in his country-house at Ville d'Avray--Rubens, Gerard Douw,
Mieris, Terburg, Rembrandt, Titian, Paul Potter, etc.
"Monsieur Vervelle has been very extravagant," said Madame Vervelle,
ostentatiously. "He has over one hundred thousand francs' worth of
pictures."
"I love Art," said the former bottle-dealer.
When Madame Vervelle's portrait was begun that of her husband was nearly
finished, and the enthusiasm of the family knew no bounds. The notary
had spoken in the highest praise of the painter. Pierre Grassou was, he
said, one of the most honest fellows on earth; he had laid by thirty-six
thousand francs; his days of poverty were over; he now saved about ten
thousand francs a year and capitalized the interest; in short, he was
incapable of making a woman unhappy. This last remark had enormous
weight in the scales. Vervelle's friends now heard of nothing but the
celebrated painter Fougeres.
The day on which Fougeres began the portrait of Mademoiselle Virginie,
he was virtually son-in-law to the Vervelle family. The three Vervelles
bloomed out in this studio, which they were now accustomed to consider
as one of their residences; there was to them an inexplicable attraction
in this clean, neat, pretty, and artistic abode. Abyssus abyssum, the
commonplace attracts the commonplace. Toward the end of the sitting the
stairway shook, the door was violently thrust open by Joseph Bridau; he
came like a whirlwind, his hair flying. He showed his grand haggard face
as he looked about him, casting everywhere the lightning of his glance;
then he walked round the whole studio, and returned abruptly to Grassou,
pulling his coat together over the gastric region, and endeavouring, but
in vain, to button it, the button mould having escaped from its capsule
of cloth.
"Wood is dear," he said to Grassou.
"Ah!"
"The British are after me" (slang term for creditors) "Gracious! do you
paint such things as that?"
"Hold your tongue!"
"Ah! to be sure, yes."
The Vervelle family, extremely shocked by this extraordinary apparition,
passed from its ordinary red to a cherry-red, two shades deeper.
"Brings in, hey?" continued Joseph. "Any shot in your locker?"
"How much do you want?"
"Five hundred. I've got one of those bull-dog dealers after me, and if
the fellow once gets his teeth in he won't let go while there's a bit of
me left. What a crew!"
"I'll write you a line for my notary."
"Have you got a notary?"
"Yes."
"That explains to me why you still make cheeks with pink tones like a
perfumer's sign."
Grassou could not help coloring, for Virginie was sitting.
"Take Nature as you find her," said the great painter, going on with his
lecture. "Mademoiselle is red-haired. Well, is that a sin? All things
are magnificent in painting. Put some vermillion on your palette, and
warm up those cheeks; touch in those little brown spots; come, butter it
well in. Do you pretend to have more sense than Nature?"
"Look here," said Fougeres, "take my place while I go and write that
note."
Vervelle rolled to the table and whispered in Grassou's ear:--
"Won't that country lout spoilt it?"
"If he would only paint the portrait of your Virginie it would be worth
a thousand times more than mine," replied Fougeres, vehemently.
Hearing that reply the bourgeois beat a quiet retreat to his wife, who
was stupefied by the invasion of this ferocious animal, and very uneasy
at his co-operation in her daughter's portrait.
"Here, follow these indications," said Bridau, returning the palette,
and taking the note. "I won't thank you. I can go back now to d'Arthez'
chateau, where I am doing a dining-room, and Leon de Lora the tops of
the doors--masterpieces! Come and see us."
And off he went without taking leave, having had enough of looking at
Virginie.
"Who is that man?" asked Madame Vervelle.
"A great artist," answered Grassou.
There was silence for a moment.
"Are you quite sure," said Virginie, "that he has done no harm to my
portrait? He frightened me."
"He has only done it good," replied Grassou.
"Well, if he is a great artist, I prefer a great artist like you," said
Madame Vervelle.
The ways of genius had ruffled up these orderly bourgeois.
The phase of autumn so pleasantly named "Saint Martin's summer" was
just beginning. With the timidity of a neophyte in presence of a man of
genius, Vervelle risked giving Fougeres an invitation to come out to
his country-house on the following Sunday. He knew, he said, how little
attraction a plain bourgeois family could offer to an artist.
"You artists," he continued, "want emotions, great scenes, and witty
talk; but you'll find good wines, and I rely on my collection of
pictures to compensate an artist like you for the bore of dining with
mere merchants."
This form of idolatry, which stroked his innocent self-love, was
charming to our poor Pierre Grassou, so little accustomed to such
compliments. The honest artist, that atrocious mediocrity, that heart
of gold, that loyal soul, that stupid draughtsman, that worthy fellow,
decorated by royalty itself with the Legion of honor, put himself under
arms to go out to Ville d'Avray and enjoy the last fine days of the
year. The painter went modestly by public conveyance, and he could not
but admire the beautiful villa of the bottle-dealer, standing in a park
of five acres at the summit of Ville d'Avray, commanding a noble view
of the landscape. Marry Virginie, and have that beautiful villa some day
for his own!
He was received by the Vervelles with an enthusiasm, a joy, a
kindliness, a frank bourgeois absurdity which confounded him. It was
indeed a day of triumph. The prospective son-in-law was marched about
the grounds on the nankeen-colored paths, all raked as they should be
for the steps of so great a man. The trees themselves looked brushed and
combed, and the lawns had just been mown. The pure country air wafted
to the nostrils a most enticing smell of cooking. All things about the
mansion seemed to say:
"We have a great artist among us."
Little old Vervelle himself rolled like an apple through his park, the
daughter meandered like an eel, the mother followed with dignified step.
These three beings never let go of Pierre Grassou for one moment
during seven hours. After dinner, the length of which equalled its
magnificence, Monsieur and Madame Vervelle reached the moment of their
grand theatrical effect,--the opening of the picture gallery illuminated
by lamps, the reflections of which were managed with the utmost care.
Three neighbours, also retired merchants, an old uncle (from whom were
expectations), an elderly Demoiselle Vervelle, and a number of other
guests invited to be present at this ovation to a great artist followed
Grassou into the picture gallery, all curious to hear his opinion of the
famous collection of pere Vervelle, who was fond of oppressing them with
the fabulous value of his paintings. The bottle-merchant seemed to have
the idea of competing with King Louis-Philippe and the galleries of
Versailles.
The pictures, magnificently framed, each bore labels on which was read
in black letters on a gold ground:
Rubens
Dance of fauns and nymphs
Rembrandt
Interior of a dissecting room. The physician van Tromp
instructing his pupils.
In all, there were one hundred and fifty pictures, varnished and dusted.
Some were covered with green baize curtains which were not undrawn in
the presence of young ladies.
Pierre Grassou stood with arms pendent, gaping mouth, and no word upon
his lips as he recognized half his own pictures in these works of art.
He was Rubens, he was Rembrandt, Mieris, Metzu, Paul Potter, Gerard
Douw! He was twenty great masters all by himself.
"What is the matter? You've turned pale!"
"Daughter, a glass of water! quick!" cried Madame Vervelle. The painter
took pere Vervelle by the button of his coat and led him to a corner on
pretence of looking at a Murillo. Spanish pictures were then the rage.
"You bought your pictures from Elie Magus?"
"Yes, all originals."
"Between ourselves, tell me what he made you pay for those I shall point
out to you."
Together they walked round the gallery. The guests were amazed at the
gravity with which the artist proceeded, in company with the host, to
examine each picture.
"Three thousand francs," said Vervelle in a whisper, as they reached the
last, "but I tell everybody forty thousand."
"Forty thousand for a Titian!" said the artist, aloud. "Why, it is
nothing at all!"
"Didn't I tell you," said Vervelle, "that I had three hundred thousand
francs' worth of pictures?"
"I painted those pictures," said Pierre Grassou in Vervelle's ear, "and
I sold them one by one to Elie Magus for less than ten thousand francs
the whole lot."
"Prove it to me," said the bottle-dealer, "and I double my daughter's
'dot,' for if it is so, you are Rubens, Rembrandt, Titian, Gerard Douw!"
"And Magus is a famous picture-dealer!" said the painter, who now saw
the meaning of the misty and aged look imparted to his pictures in
Elie's shop, and the utility of the subjects the picture-dealer had
required of him.
Far from losing the esteem of his admiring bottle-merchant, Monsieur
de Fougeres (for so the family persisted in calling Pierre Grassou)
advanced so much that when the portraits were finished he presented them
gratuitously to his father-in-law, his mother-in-law and his wife.
At the present day, Pierre Grassou, who never misses exhibiting at the
Salon, passes in bourgeois regions for a fine portrait-painter. He earns
some twenty thousand francs a year and spoils a thousand francs' worth
of canvas. His wife has six thousand francs a year in dowry, and he
lives with his father-in-law. The Vervelles and the Grassous, who agree
delightfully, keep a carriage, and are the happiest people on earth.
Pierre Grassou never emerges from the bourgeois circle, in which he
is considered one of the greatest artists of the period. Not a family
portrait is painted between the barrier du Trone and the rue du Temple
that is not done by this great painter; none of them costs less than
five hundred francs. The great reason which the bourgeois families have
for employing him is this:--
"Say what you will of him, he lays by twenty thousand francs a year with
his notary."
As Grassou took a creditable part on the occasion of the riots of May
12th, he was appointed an officer of the Legion of honor. He is a major
in the National Guard. The Museum of Versailles felt bound to order a
battle-piece from so excellent a citizen, who thereupon walked
about Paris to meet his old comrades and have the happiness of saying to
them:--
"The King has given me an order for the Museum of Versailles."
Madame de Fougeres adores her husband, to whom she has presented two
children. This painter, a good father and a good husband, is unable to
eradicate from his heart a fatal thought, namely, that artists laugh at
his work; that his name is a term of contempt in the studios; and that
the feuilletons take no notice of his pictures. But he still works on;
he aims for the Academy, which, undoubtedly, he will enter. And--oh!
vengeance which dilates his heart!--he buys the pictures of celebrated
artists who are pinched for means, and he substitutes these true works
of art that are not his own for the wretched daubs in the collection at
Ville d'Avray.
There are many mediocrities more aggressive and more mischievous than
that of Pierre Grassou, who is, moreover, anonymously benevolent and
truly obliging.
ADDENDUM
The following personages appear in other stories of the Human Comedy.
Bridau, Joseph
The Purse
A Bachelor's Establishment
A Distinguished Provincial at Paris
A Start in Life
Modeste Mignon
Another Study of Woman
Letters of Two Brides
Cousin Betty
The Member for Arcis
Cardot (Parisian notary)
The Muse of the Department
A Man of Business
Jealousies of a Country Town
The Middle Classes
Cousin Pons
Grassou, Pierre
A Bachelor's Establishment
Cousin Betty
The Middle Classes
Cousin Pons
Lora, Leon de
The Unconscious Humorists
A Bachelor's Establishment
A Start in Life
Honorine
Cousin Betty
Beatrix
Magus, Elie
The Vendetta
A Marriage Settlement
A Bachelor's Establishment
Cousin Pons
Schinner, Hippolyte
The Purse
A Bachelor's Establishment
A Start in Life
Albert Savarus
The Government Clerks
Modeste Mignon
The Imaginary Mistress
The Unconscious Humorists
End of the Project Gutenberg EBook of Pierre Grassou, by Honore de Balzac
This etext was prepared by Sue Asscher
CRITO
by Plato
Translated by Benjamin Jowett
INTRODUCTION.
The Crito seems intended to exhibit the character of Socrates in one light
only, not as the philosopher, fulfilling a divine mission and trusting in
the will of heaven, but simply as the good citizen, who having been
unjustly condemned is willing to give up his life in obedience to the laws
of the state...
The days of Socrates are drawing to a close; the fatal ship has been seen
off Sunium, as he is informed by his aged friend and contemporary Crito,
who visits him before the dawn has broken; he himself has been warned in a
dream that on the third day he must depart. Time is precious, and Crito
has come early in order to gain his consent to a plan of escape. This can
be easily accomplished by his friends, who will incur no danger in making
the attempt to save him, but will be disgraced for ever if they allow him
to perish. He should think of his duty to his children, and not play into
the hands of his enemies. Money is already provided by Crito as well as by
Simmias and others, and he will have no difficulty in finding friends in
Thessaly and other places.
Socrates is afraid that Crito is but pressing upon him the opinions of the
many: whereas, all his life long he has followed the dictates of reason
only and the opinion of the one wise or skilled man. There was a time when
Crito himself had allowed the propriety of this. And although some one
will say 'the many can kill us,' that makes no difference; but a good life,
in other words, a just and honourable life, is alone to be valued. All
considerations of loss of reputation or injury to his children should be
dismissed: the only question is whether he would be right in attempting to
escape. Crito, who is a disinterested person not having the fear of death
before his eyes, shall answer this for him. Before he was condemned they
had often held discussions, in which they agreed that no man should either
do evil, or return evil for evil, or betray the right. Are these
principles to be altered because the circumstances of Socrates are altered?
Crito admits that they remain the same. Then is his escape consistent with
the maintenance of them? To this Crito is unable or unwilling to reply.
Socrates proceeds:--Suppose the Laws of Athens to come and remonstrate with
him: they will ask 'Why does he seek to overturn them?' and if he replies,
'they have injured him,' will not the Laws answer, 'Yes, but was that the
agreement? Has he any objection to make to them which would justify him in
overturning them? Was he not brought into the world and educated by their
help, and are they not his parents? He might have left Athens and gone
where he pleased, but he has lived there for seventy years more constantly
than any other citizen.' Thus he has clearly shown that he acknowledged
the agreement, which he cannot now break without dishonour to himself and
danger to his friends. Even in the course of the trial he might have
proposed exile as the penalty, but then he declared that he preferred death
to exile. And whither will he direct his footsteps? In any well-ordered
state the Laws will consider him as an enemy. Possibly in a land of
misrule like Thessaly he may be welcomed at first, and the unseemly
narrative of his escape will be regarded by the inhabitants as an amusing
tale. But if he offends them he will have to learn another sort of lesson.
Will he continue to give lectures in virtue? That would hardly be decent.
And how will his children be the gainers if he takes them into Thessaly,
and deprives them of Athenian citizenship? Or if he leaves them behind,
does he expect that they will be better taken care of by his friends
because he is in Thessaly? Will not true friends care for them equally
whether he is alive or dead?
Finally, they exhort him to think of justice first, and of life and
children afterwards. He may now depart in peace and innocence, a sufferer
and not a doer of evil. But if he breaks agreements, and returns evil for
evil, they will be angry with him while he lives; and their brethren the
Laws of the world below will receive him as an enemy. Such is the mystic
voice which is always murmuring in his ears.
That Socrates was not a good citizen was a charge made against him during
his lifetime, which has been often repeated in later ages. The crimes of
Alcibiades, Critias, and Charmides, who had been his pupils, were still
recent in the memory of the now restored democracy. The fact that he had
been neutral in the death-struggle of Athens was not likely to conciliate
popular good-will. Plato, writing probably in the next generation,
undertakes the defence of his friend and master in this particular, not to
the Athenians of his day, but to posterity and the world at large.
Whether such an incident ever really occurred as the visit of Crito and the
proposal of escape is uncertain: Plato could easily have invented far more
than that (Phaedr.); and in the selection of Crito, the aged friend, as the
fittest person to make the proposal to Socrates, we seem to recognize the
hand of the artist. Whether any one who has been subjected by the laws of
his country to an unjust judgment is right in attempting to escape, is a
thesis about which casuists might disagree. Shelley (Prose Works) is of
opinion that Socrates 'did well to die,' but not for the 'sophistical'
reasons which Plato has put into his mouth. And there would be no
difficulty in arguing that Socrates should have lived and preferred to a
glorious death the good which he might still be able to perform. 'A
rhetorician would have had much to say upon that point.' It may be
observed however that Plato never intended to answer the question of
casuistry, but only to exhibit the ideal of patient virtue which refuses to
do the least evil in order to avoid the greatest, and to show his master
maintaining in death the opinions which he had professed in his life. Not
'the world,' but the 'one wise man,' is still the paradox of Socrates in
his last hours. He must be guided by reason, although her conclusions may
be fatal to him. The remarkable sentiment that the wicked can do neither
good nor evil is true, if taken in the sense, which he means, of moral
evil; in his own words, 'they cannot make a man wise or foolish.'
This little dialogue is a perfect piece of dialectic, in which granting the
'common principle,' there is no escaping from the conclusion. It is
anticipated at the beginning by the dream of Socrates and the parody of
Homer. The personification of the Laws, and of their brethren the Laws in
the world below, is one of the noblest and boldest figures of speech which
occur in Plato.
CRITO
by
Plato
Translated by Benjamin Jowett
PERSONS OF THE DIALOGUE: Socrates, Crito.
SCENE: The Prison of Socrates.
SOCRATES: Why have you come at this hour, Crito? it must be quite early.
CRITO: Yes, certainly.
SOCRATES: What is the exact time?
CRITO: The dawn is breaking.
SOCRATES: I wonder that the keeper of the prison would let you in.
CRITO: He knows me because I often come, Socrates; moreover, I have done
him a kindness.
SOCRATES: And are you only just arrived?
CRITO: No, I came some time ago.
SOCRATES: Then why did you sit and say nothing, instead of at once
awakening me?
CRITO: I should not have liked myself, Socrates, to be in such great
trouble and unrest as you are--indeed I should not: I have been watching
with amazement your peaceful slumbers; and for that reason I did not awake
you, because I wished to minimize the pain. I have always thought you to
be of a happy disposition; but never did I see anything like the easy,
tranquil manner in which you bear this calamity.
SOCRATES: Why, Crito, when a man has reached my age he ought not to be
repining at the approach of death.
CRITO: And yet other old men find themselves in similar misfortunes, and
age does not prevent them from repining.
SOCRATES: That is true. But you have not told me why you come at this
early hour.
CRITO: I come to bring you a message which is sad and painful; not, as I
believe, to yourself, but to all of us who are your friends, and saddest of
all to me.
SOCRATES: What? Has the ship come from Delos, on the arrival of which I
am to die?
CRITO: No, the ship has not actually arrived, but she will probably be
here to-day, as persons who have come from Sunium tell me that they have
left her there; and therefore to-morrow, Socrates, will be the last day of
your life.
SOCRATES: Very well, Crito; if such is the will of God, I am willing; but
my belief is that there will be a delay of a day.
CRITO: Why do you think so?
SOCRATES: I will tell you. I am to die on the day after the arrival of
the ship?
CRITO: Yes; that is what the authorities say.
SOCRATES: But I do not think that the ship will be here until to-morrow;
this I infer from a vision which I had last night, or rather only just now,
when you fortunately allowed me to sleep.
CRITO: And what was the nature of the vision?
SOCRATES: There appeared to me the likeness of a woman, fair and comely,
clothed in bright raiment, who called to me and said: O Socrates,
'The third day hence to fertile Phthia shalt thou go.' (Homer, Il.)
CRITO: What a singular dream, Socrates!
SOCRATES: There can be no doubt about the meaning, Crito, I think.
CRITO: Yes; the meaning is only too clear. But, oh! my beloved Socrates,
let me entreat you once more to take my advice and escape. For if you die
I shall not only lose a friend who can never be replaced, but there is
another evil: people who do not know you and me will believe that I might
have saved you if I had been willing to give money, but that I did not
care. Now, can there be a worse disgrace than this--that I should be
thought to value money more than the life of a friend? For the many will
not be persuaded that I wanted you to escape, and that you refused.
SOCRATES: But why, my dear Crito, should we care about the opinion of the
many? Good men, and they are the only persons who are worth considering,
will think of these things truly as they occurred.
CRITO: But you see, Socrates, that the opinion of the many must be
regarded, for what is now happening shows that they can do the greatest
evil to any one who has lost their good opinion.
SOCRATES: I only wish it were so, Crito; and that the many could do the
greatest evil; for then they would also be able to do the greatest good--
and what a fine thing this would be! But in reality they can do neither;
for they cannot make a man either wise or foolish; and whatever they do is
the result of chance.
CRITO: Well, I will not dispute with you; but please to tell me, Socrates,
whether you are not acting out of regard to me and your other friends: are
you not afraid that if you escape from prison we may get into trouble with
the informers for having stolen you away, and lose either the whole or a
great part of our property; or that even a worse evil may happen to us?
Now, if you fear on our account, be at ease; for in order to save you, we
ought surely to run this, or even a greater risk; be persuaded, then, and
do as I say.
SOCRATES: Yes, Crito, that is one fear which you mention, but by no means
the only one.
CRITO: Fear not--there are persons who are willing to get you out of
prison at no great cost; and as for the informers they are far from being
exorbitant in their demands--a little money will satisfy them. My means,
which are certainly ample, are at your service, and if you have a scruple
about spending all mine, here are strangers who will give you the use of
theirs; and one of them, Simmias the Theban, has brought a large sum of
money for this very purpose; and Cebes and many others are prepared to
spend their money in helping you to escape. I say, therefore, do not
hesitate on our account, and do not say, as you did in the court (compare
Apol.), that you will have a difficulty in knowing what to do with yourself
anywhere else. For men will love you in other places to which you may go,
and not in Athens only; there are friends of mine in Thessaly, if you like
to go to them, who will value and protect you, and no Thessalian will give
you any trouble. Nor can I think that you are at all justified, Socrates,
in betraying your own life when you might be saved; in acting thus you are
playing into the hands of your enemies, who are hurrying on your
destruction. And further I should say that you are deserting your own
children; for you might bring them up and educate them; instead of which
you go away and leave them, and they will have to take their chance; and if
they do not meet with the usual fate of orphans, there will be small thanks
to you. No man should bring children into the world who is unwilling to
persevere to the end in their nurture and education. But you appear to be
choosing the easier part, not the better and manlier, which would have been
more becoming in one who professes to care for virtue in all his actions,
like yourself. And indeed, I am ashamed not only of you, but of us who are
your friends, when I reflect that the whole business will be attributed
entirely to our want of courage. The trial need never have come on, or
might have been managed differently; and this last act, or crowning folly,
will seem to have occurred through our negligence and cowardice, who might
have saved you, if we had been good for anything; and you might have saved
yourself, for there was no difficulty at all. See now, Socrates, how sad
and discreditable are the consequences, both to us and you. Make up your
mind then, or rather have your mind already made up, for the time of
deliberation is over, and there is only one thing to be done, which must be
done this very night, and if we delay at all will be no longer practicable
or possible; I beseech you therefore, Socrates, be persuaded by me, and do
as I say.
SOCRATES: Dear Crito, your zeal is invaluable, if a right one; but if
wrong, the greater the zeal the greater the danger; and therefore we ought
to consider whether I shall or shall not do as you say. For I am and
always have been one of those natures who must be guided by reason,
whatever the reason may be which upon reflection appears to me to be the
best; and now that this chance has befallen me, I cannot repudiate my own
words: the principles which I have hitherto honoured and revered I still
honour, and unless we can at once find other and better principles, I am
certain not to agree with you; no, not even if the power of the multitude
could inflict many more imprisonments, confiscations, deaths, frightening
us like children with hobgoblin terrors (compare Apol.). What will be the
fairest way of considering the question? Shall I return to your old
argument about the opinions of men?--we were saying that some of them are
to be regarded, and others not. Now were we right in maintaining this
before I was condemned? And has the argument which was once good now
proved to be talk for the sake of talking--mere childish nonsense? That is
what I want to consider with your help, Crito:--whether, under my present
circumstances, the argument appears to be in any way different or not; and
is to be allowed by me or disallowed. That argument, which, as I believe,
is maintained by many persons of authority, was to the effect, as I was
saying, that the opinions of some men are to be regarded, and of other men
not to be regarded. Now you, Crito, are not going to die to-morrow--at
least, there is no human probability of this, and therefore you are
disinterested and not liable to be deceived by the circumstances in which
you are placed. Tell me then, whether I am right in saying that some
opinions, and the opinions of some men only, are to be valued, and that
other opinions, and the opinions of other men, are not to be valued. I ask
you whether I was right in maintaining this?
CRITO: Certainly.
SOCRATES: The good are to be regarded, and not the bad?
CRITO: Yes.
SOCRATES: And the opinions of the wise are good, and the opinions of the
unwise are evil?
CRITO: Certainly.
SOCRATES: And what was said about another matter? Is the pupil who
devotes himself to the practice of gymnastics supposed to attend to the
praise and blame and opinion of every man, or of one man only--his
physician or trainer, whoever he may be?
CRITO: Of one man only.
SOCRATES: And he ought to fear the censure and welcome the praise of that
one only, and not of the many?
CRITO: Clearly so.
SOCRATES: And he ought to act and train, and eat and drink in the way
which seems good to his single master who has understanding, rather than
according to the opinion of all other men put together?
CRITO: True.
SOCRATES: And if he disobeys and disregards the opinion and approval of
the one, and regards the opinion of the many who have no understanding,
will he not suffer evil?
CRITO: Certainly he will.
SOCRATES: And what will the evil be, whither tending and what affecting,
in the disobedient person?
CRITO: Clearly, affecting the body; that is what is destroyed by the evil.
SOCRATES: Very good; and is not this true, Crito, of other things which we
need not separately enumerate? In questions of just and unjust, fair and
foul, good and evil, which are the subjects of our present consultation,
ought we to follow the opinion of the many and to fear them; or the opinion
of the one man who has understanding? ought we not to fear and reverence
him more than all the rest of the world: and if we desert him shall we not
destroy and injure that principle in us which may be assumed to be improved
by justice and deteriorated by injustice;--there is such a principle?
CRITO: Certainly there is, Socrates.
SOCRATES: Take a parallel instance:--if, acting under the advice of those
who have no understanding, we destroy that which is improved by health and
is deteriorated by disease, would life be worth having? And that which has
been destroyed is--the body?
CRITO: Yes.
SOCRATES: Could we live, having an evil and corrupted body?
CRITO: Certainly not.
SOCRATES: And will life be worth having, if that higher part of man be
destroyed, which is improved by justice and depraved by injustice? Do we
suppose that principle, whatever it may be in man, which has to do with
justice and injustice, to be inferior to the body?
CRITO: Certainly not.
SOCRATES: More honourable than the body?
CRITO: Far more.
SOCRATES: Then, my friend, we must not regard what the many say of us:
but what he, the one man who has understanding of just and unjust, will
say, and what the truth will say. And therefore you begin in error when
you advise that we should regard the opinion of the many about just and
unjust, good and evil, honourable and dishonourable.--'Well,' some one will
say, 'but the many can kill us.'
CRITO: Yes, Socrates; that will clearly be the answer.
SOCRATES: And it is true; but still I find with surprise that the old
argument is unshaken as ever. And I should like to know whether I may say
the same of another proposition--that not life, but a good life, is to be
chiefly valued?
CRITO: Yes, that also remains unshaken.
SOCRATES: And a good life is equivalent to a just and honourable one--that
holds also?
CRITO: Yes, it does.
SOCRATES: From these premisses I proceed to argue the question whether I
ought or ought not to try and escape without the consent of the Athenians:
and if I am clearly right in escaping, then I will make the attempt; but if
not, I will abstain. The other considerations which you mention, of money
and loss of character and the duty of educating one's children, are, I
fear, only the doctrines of the multitude, who would be as ready to restore
people to life, if they were able, as they are to put them to death--and
with as little reason. But now, since the argument has thus far prevailed,
the only question which remains to be considered is, whether we shall do
rightly either in escaping or in suffering others to aid in our escape and
paying them in money and thanks, or whether in reality we shall not do
rightly; and if the latter, then death or any other calamity which may
ensue on my remaining here must not be allowed to enter into the
calculation.
CRITO: I think that you are right, Socrates; how then shall we proceed?
SOCRATES: Let us consider the matter together, and do you either refute me
if you can, and I will be convinced; or else cease, my dear friend, from
repeating to me that I ought to escape against the wishes of the Athenians:
for I highly value your attempts to persuade me to do so, but I may not be
persuaded against my own better judgment. And now please to consider my
first position, and try how you can best answer me.
CRITO: I will.
SOCRATES: Are we to say that we are never intentionally to do wrong, or
that in one way we ought and in another way we ought not to do wrong, or is
doing wrong always evil and dishonourable, as I was just now saying, and as
has been already acknowledged by us? Are all our former admissions which
were made within a few days to be thrown away? And have we, at our age,
been earnestly discoursing with one another all our life long only to
discover that we are no better than children? Or, in spite of the opinion
of the many, and in spite of consequences whether better or worse, shall we
insist on the truth of what was then said, that injustice is always an evil
and dishonour to him who acts unjustly? Shall we say so or not?
CRITO: Yes.
SOCRATES: Then we must do no wrong?
CRITO: Certainly not.
SOCRATES: Nor when injured injure in return, as the many imagine; for we
must injure no one at all? (E.g. compare Rep.)
CRITO: Clearly not.
SOCRATES: Again, Crito, may we do evil?
CRITO: Surely not, Socrates.
SOCRATES: And what of doing evil in return for evil, which is the morality
of the many--is that just or not?
CRITO: Not just.
SOCRATES: For doing evil to another is the same as injuring him?
CRITO: Very true.
SOCRATES: Then we ought not to retaliate or render evil for evil to any
one, whatever evil we may have suffered from him. But I would have you
consider, Crito, whether you really mean what you are saying. For this
opinion has never been held, and never will be held, by any considerable
number of persons; and those who are agreed and those who are not agreed
upon this point have no common ground, and can only despise one another
when they see how widely they differ. Tell me, then, whether you agree
with and assent to my first principle, that neither injury nor retaliation
nor warding off evil by evil is ever right. And shall that be the premiss
of our argument? Or do you decline and dissent from this? For so I have
ever thought, and continue to think; but, if you are of another opinion,
let me hear what you have to say. If, however, you remain of the same mind
as formerly, I will proceed to the next step.
CRITO: You may proceed, for I have not changed my mind.
SOCRATES: Then I will go on to the next point, which may be put in the
form of a question:--Ought a man to do what he admits to be right, or ought
he to betray the right?
CRITO: He ought to do what he thinks right.
SOCRATES: But if this is true, what is the application? In leaving the
prison against the will of the Athenians, do I wrong any? or rather do I
not wrong those whom I ought least to wrong? Do I not desert the
principles which were acknowledged by us to be just--what do you say?
CRITO: I cannot tell, Socrates, for I do not know.
SOCRATES: Then consider the matter in this way:--Imagine that I am about
to play truant (you may call the proceeding by any name which you like),
and the laws and the government come and interrogate me: 'Tell us,
Socrates,' they say; 'what are you about? are you not going by an act of
yours to overturn us--the laws, and the whole state, as far as in you lies?
Do you imagine that a state can subsist and not be overthrown, in which the
decisions of law have no power, but are set aside and trampled upon by
individuals?' What will be our answer, Crito, to these and the like words?
Any one, and especially a rhetorician, will have a good deal to say on
behalf of the law which requires a sentence to be carried out. He will
argue that this law should not be set aside; and shall we reply, 'Yes; but
the state has injured us and given an unjust sentence.' Suppose I say
that?
CRITO: Very good, Socrates.
SOCRATES: 'And was that our agreement with you?' the law would answer; 'or
were you to abide by the sentence of the state?' And if I were to express
my astonishment at their words, the law would probably add: 'Answer,
Socrates, instead of opening your eyes--you are in the habit of asking and
answering questions. Tell us,--What complaint have you to make against us
which justifies you in attempting to destroy us and the state? In the
first place did we not bring you into existence? Your father married your
mother by our aid and begat you. Say whether you have any objection to
urge against those of us who regulate marriage?' None, I should reply.
'Or against those of us who after birth regulate the nurture and education
of children, in which you also were trained? Were not the laws, which have
the charge of education, right in commanding your father to train you in
music and gymnastic?' Right, I should reply. 'Well then, since you were
brought into the world and nurtured and educated by us, can you deny in the
first place that you are our child and slave, as your fathers were before
you? And if this is true you are not on equal terms with us; nor can you
think that you have a right to do to us what we are doing to you. Would
you have any right to strike or revile or do any other evil to your father
or your master, if you had one, because you have been struck or reviled by
him, or received some other evil at his hands?--you would not say this?
And because we think right to destroy you, do you think that you have any
right to destroy us in return, and your country as far as in you lies?
Will you, O professor of true virtue, pretend that you are justified in
this? Has a philosopher like you failed to discover that our country is
more to be valued and higher and holier far than mother or father or any
ancestor, and more to be regarded in the eyes of the gods and of men of
understanding? also to be soothed, and gently and reverently entreated when
angry, even more than a father, and either to be persuaded, or if not
persuaded, to be obeyed? And when we are punished by her, whether with
imprisonment or stripes, the punishment is to be endured in silence; and if
she lead us to wounds or death in battle, thither we follow as is right;
neither may any one yield or retreat or leave his rank, but whether in
battle or in a court of law, or in any other place, he must do what his
city and his country order him; or he must change their view of what is
just: and if he may do no violence to his father or mother, much less may
he do violence to his country.' What answer shall we make to this, Crito?
Do the laws speak truly, or do they not?
CRITO: I think that they do.
SOCRATES: Then the laws will say: 'Consider, Socrates, if we are speaking
truly that in your present attempt you are going to do us an injury. For,
having brought you into the world, and nurtured and educated you, and given
you and every other citizen a share in every good which we had to give, we
further proclaim to any Athenian by the liberty which we allow him, that if
he does not like us when he has become of age and has seen the ways of the
city, and made our acquaintance, he may go where he pleases and take his
goods with him. None of us laws will forbid him or interfere with him.
Any one who does not like us and the city, and who wants to emigrate to a
colony or to any other city, may go where he likes, retaining his property.
But he who has experience of the manner in which we order justice and
administer the state, and still remains, has entered into an implied
contract that he will do as we command him. And he who disobeys us is, as
we maintain, thrice wrong: first, because in disobeying us he is
disobeying his parents; secondly, because we are the authors of his
education; thirdly, because he has made an agreement with us that he will
duly obey our commands; and he neither obeys them nor convinces us that our
commands are unjust; and we do not rudely impose them, but give him the
alternative of obeying or convincing us;--that is what we offer, and he
does neither.
'These are the sort of accusations to which, as we were saying, you,
Socrates, will be exposed if you accomplish your intentions; you, above all
other Athenians.' Suppose now I ask, why I rather than anybody else? they
will justly retort upon me that I above all other men have acknowledged the
agreement. 'There is clear proof,' they will say, 'Socrates, that we and
the city were not displeasing to you. Of all Athenians you have been the
most constant resident in the city, which, as you never leave, you may be
supposed to love (compare Phaedr.). For you never went out of the city
either to see the games, except once when you went to the Isthmus, or to
any other place unless when you were on military service; nor did you
travel as other men do. Nor had you any curiosity to know other states or
their laws: your affections did not go beyond us and our state; we were
your especial favourites, and you acquiesced in our government of you; and
here in this city you begat your children, which is a proof of your
satisfaction. Moreover, you might in the course of the trial, if you had
liked, have fixed the penalty at banishment; the state which refuses to let
you go now would have let you go then. But you pretended that you
preferred death to exile (compare Apol.), and that you were not unwilling
to die. And now you have forgotten these fine sentiments, and pay no
respect to us the laws, of whom you are the destroyer; and are doing what
only a miserable slave would do, running away and turning your back upon
the compacts and agreements which you made as a citizen. And first of all
answer this very question: Are we right in saying that you agreed to be
governed according to us in deed, and not in word only? Is that true or
not?' How shall we answer, Crito? Must we not assent?
CRITO: We cannot help it, Socrates.
SOCRATES: Then will they not say: 'You, Socrates, are breaking the
covenants and agreements which you made with us at your leisure, not in any
haste or under any compulsion or deception, but after you have had seventy
years to think of them, during which time you were at liberty to leave the
city, if we were not to your mind, or if our covenants appeared to you to
be unfair. You had your choice, and might have gone either to Lacedaemon
or Crete, both which states are often praised by you for their good
government, or to some other Hellenic or foreign state. Whereas you, above
all other Athenians, seemed to be so fond of the state, or, in other words,
of us her laws (and who would care about a state which has no laws?), that
you never stirred out of her; the halt, the blind, the maimed, were not
more stationary in her than you were. And now you run away and forsake
your agreements. Not so, Socrates, if you will take our advice; do not
make yourself ridiculous by escaping out of the city.
'For just consider, if you transgress and err in this sort of way, what
good will you do either to yourself or to your friends? That your friends
will be driven into exile and deprived of citizenship, or will lose their
property, is tolerably certain; and you yourself, if you fly to one of the
neighbouring cities, as, for example, Thebes or Megara, both of which are
well governed, will come to them as an enemy, Socrates, and their
government will be against you, and all patriotic citizens will cast an
evil eye upon you as a subverter of the laws, and you will confirm in the
minds of the judges the justice of their own condemnation of you. For he
who is a corrupter of the laws is more than likely to be a corrupter of the
young and foolish portion of mankind. Will you then flee from well-ordered
cities and virtuous men? and is existence worth having on these terms? Or
will you go to them without shame, and talk to them, Socrates? And what
will you say to them? What you say here about virtue and justice and
institutions and laws being the best things among men? Would that be
decent of you? Surely not. But if you go away from well-governed states
to Crito's friends in Thessaly, where there is great disorder and licence,
they will be charmed to hear the tale of your escape from prison, set off
with ludicrous particulars of the manner in which you were wrapped in a
goatskin or some other disguise, and metamorphosed as the manner is of
runaways; but will there be no one to remind you that in your old age you
were not ashamed to violate the most sacred laws from a miserable desire of
a little more life? Perhaps not, if you keep them in a good temper; but if
they are out of temper you will hear many degrading things; you will live,
but how?--as the flatterer of all men, and the servant of all men; and
doing what?--eating and drinking in Thessaly, having gone abroad in order
that you may get a dinner. And where will be your fine sentiments about
justice and virtue? Say that you wish to live for the sake of your
children--you want to bring them up and educate them--will you take them
into Thessaly and deprive them of Athenian citizenship? Is this the
benefit which you will confer upon them? Or are you under the impression
that they will be better cared for and educated here if you are still
alive, although absent from them; for your friends will take care of them?
Do you fancy that if you are an inhabitant of Thessaly they will take care
of them, and if you are an inhabitant of the other world that they will not
take care of them? Nay; but if they who call themselves friends are good
for anything, they will--to be sure they will.
'Listen, then, Socrates, to us who have brought you up. Think not of life
and children first, and of justice afterwards, but of justice first, that
you may be justified before the princes of the world below. For neither
will you nor any that belong to you be happier or holier or juster in this
life, or happier in another, if you do as Crito bids. Now you depart in
innocence, a sufferer and not a doer of evil; a victim, not of the laws,
but of men. But if you go forth, returning evil for evil, and injury for
injury, breaking the covenants and agreements which you have made with us,
and wronging those whom you ought least of all to wrong, that is to say,
yourself, your friends, your country, and us, we shall be angry with you
while you live, and our brethren, the laws in the world below, will receive
you as an enemy; for they will know that you have done your best to destroy
us. Listen, then, to us and not to Crito.'
This, dear Crito, is the voice which I seem to hear murmuring in my ears,
like the sound of the flute in the ears of the mystic; that voice, I say,
is humming in my ears, and prevents me from hearing any other. And I know
that anything more which you may say will be vain. Yet speak, if you have
anything to say.
CRITO: I have nothing to say, Socrates.
SOCRATES: Leave me then, Crito, to fulfil the will of God, and to follow
whither he leads.
Produced by John Bickers and Dagny
PIERRE GRASSOU
By Honore De Balzac
Translated by Katharine Prescott Wormeley
Dedication
To The Lieutenant-Colonel of Artillery, Periollas, As a Testimony of the
Affectionate Esteem of the Author,
De Balzac
PIERRE GRASSOU
Whenever you have gone to take a serious look at the exhibition of works
of sculpture and painting, such as it has been since the revolution
of 1830, have you not been seized by a sense of uneasiness, weariness,
sadness, at the sight of those long and over-crowded galleries? Since
1830, the true Salon no longer exists. The Louvre has again been taken
by assault,--this time by a populace of artists who have maintained
themselves in it.
In other days, when the Salon presented only the choicest works of art,
it conferred the highest honor on the creations there exhibited. Among
the two hundred selected paintings, the public could still choose: a
crown was awarded to the masterpiece by hands unseen. Eager, impassioned
discussions arose about some picture. The abuse showered on Delacroix,
on Ingres, contributed no less to their fame than the praises and
fanaticism of their adherents. To-day, neither the crowd nor the
criticism grows impassioned about the products of that bazaar. Forced to
make the selection for itself, which in former days the examining
jury made for it, the attention of the public is soon wearied and the
exhibition closes. Before the year 1817 the pictures admitted never went
beyond the first two columns of the long gallery of the old masters; but
in that year, to the great astonishment of the public, they filled the
whole space. Historical, high-art, genre paintings, easel pictures,
landscapes, flowers, animals, and water-colors,--these eight specialties
could surely not offer more than twenty pictures in one year worthy of
the eyes of the public, which, indeed, cannot give its attention to a
greater number of such works. The more the number of artists increases,
the more careful and exacting the jury of admission ought to be.
The true character of the Salon was lost as soon as it spread along
the galleries. The Salon should have remained within fixed limits of
inflexible proportions, where each distinct specialty could show its
masterpieces only. An experience of ten years has shown the excellence
of the former institution. Now, instead of a tournament, we have a mob;
instead of a noble exhibition, we have a tumultuous bazaar; instead of
a choice selection we have a chaotic mass. What is the result? A great
artist is swamped. Decamps' "Turkish Cafe," "Children at a Fountain,"
"Joseph," and "The Torture," would have redounded far more to his credit
if the four pictures had been exhibited in the great Salon with the
hundred good pictures of that year, than his twenty pictures could,
among three thousand others, jumbled together in six galleries.
By some strange contradiction, ever since the doors are open to every
one there has been much talk of unknown and unrecognized genius. When,
twelve years earlier, Ingres' "Courtesan," and that of Sigalon, the
"Medusa" of Gericault, the "Massacre of Scio" by Delacroix, the "Baptism
of Henri IV." by Eugene Deveria, admitted by celebrated artists accused
of jealousy, showed the world, in spite of the denials of criticism,
that young and vigorous palettes existed, no such complaint was made.
Now, when the veriest dauber of canvas can send in his work, the whole
talk is of genius neglected! Where judgment no longer exists, there is
no longer anything judged. But whatever artists may be doing now, they
will come back in time to the examination and selection which presents
their works to the admiration of the crowd for whom they work. Without
selection by the Academy there will be no Salon, and without the Salon
art may perish.
Ever since the catalogue has grown into a book, many names have appeared
in it which still remain in their native obscurity, in spite of the ten
or a dozen pictures attached to them. Among these names perhaps the most
unknown to fame is that of an artist named Pierre Grassou, coming from
Fougeres, and called simply "Fougeres" among his brother-artists, who,
at the present moment holds a place, as the saying is, "in the sun," and
who suggested the rather bitter reflections by which this sketch of
his life is introduced,--reflections that are applicable to many other
individuals of the tribe of artists.
In 1832, Fougeres lived in the rue de Navarin, on the fourth floor of
one of those tall, narrow houses which resemble the obelisk of Luxor,
and possess an alley, a dark little stairway with dangerous turnings,
three windows only on each floor, and, within the building, a courtyard,
or, to speak more correctly, a square pit or well. Above the three or
four rooms occupied by Grassou of Fougeres was his studio, looking over
to Montmartre. This studio was painted in brick-color, for a background;
the floor was tinted brown and well frotted; each chair was furnished
with a bit of carpet bound round the edges; the sofa, simple enough, was
clean as that in the bedroom of some worthy bourgeoise. All these things
denoted the tidy ways of a small mind and the thrift of a poor man. A
bureau was there, in which to put away the studio implements, a table
for breakfast, a sideboard, a secretary; in short, all the articles
necessary to a painter, neatly arranged and very clean. The stove
participated in this Dutch cleanliness, which was all the more visible
because the pure and little changing light from the north flooded with
its cold clear beams the vast apartment. Fougeres, being merely a genre
painter, does not need the immense machinery and outfit which ruin
historical painters; he has never recognized within himself sufficient
faculty to attempt high-art, and he therefore clings to easel painting.
At the beginning of the month of December of that year, a season at
which the bourgeois of Paris conceive, periodically, the burlesque idea
of perpetuating their forms and figures already too bulky in themselves,
Pierre Grassou, who had risen early, prepared his palette, and lighted
his stove, was eating a roll steeped in milk, and waiting till the frost
on his windows had melted sufficiently to let the full light in. The
weather was fine and dry. At this moment the artist, who ate his bread
with that patient, resigned air that tells so much, heard and recognized
the step of a man who had upon his life the influence such men have
on the lives of nearly all artists,--the step of Elie Magus, a
picture-dealer, a usurer in canvas. The next moment Elie Magus entered
and found the painter in the act of beginning his work in the tidy
studio.
"How are you, old rascal?" said the painter.
Fougeres had the cross of the Legion of honor, and Elie Magus bought his
pictures at two and three hundred francs apiece, so he gave himself the
airs of a fine artist.
"Business is very bad," replied Elie. "You artists have such
pretensions! You talk of two hundred francs when you haven't put six
sous' worth of color on a canvas. However, you are a good fellow, I'll
say that. You are steady; and I've come to put a good bit of business in
your way."
"Timeo Danaos et dona ferentes," said Fougeres. "Do you know Latin?"
"No."
"Well, it means that the Greeks never proposed a good bit of business
to the Trojans without getting their fair share of it. In the olden time
they used to say, 'Take my horse.' Now we say, 'Take my bear.' Well,
what do you want, Ulysses-Lagingeole-Elie Magus?"
These words will give an idea of the mildness and wit with which
Fougeres employed what painters call studio fun.
"Well, I don't deny that you are to paint me two pictures for nothing."
"Oh! oh!"
"I'll leave you to do it, or not; I don't ask it. But you're an honest
man."
"Come, out with it!"
"Well, I'm prepared to bring you a father, mother, and only daughter."
"All for me?"
"Yes--they want their portraits taken. These bourgeois--they are crazy
about art--have never dared to enter a studio. The girl has a 'dot' of a
hundred thousand francs. You can paint all three,--perhaps they'll turn
out family portraits."
And with that the old Dutch log of wood who passed for a man and who was
called Elie Magus, interrupted himself to laugh an uncanny laugh which
frightened the painter. He fancied he heard Mephistopheles talking
marriage.
"Portraits bring five hundred francs apiece," went on Elie; "so you can
very well afford to paint me three pictures."
"True for you!" cried Fougeres, gleefully.
"And if you marry the girl, you won't forget me."
"Marry! I?" cried Pierre Grassou,--"I, who have a habit of sleeping
alone; and get up at cock-crow, and all my life arranged--"
"One hundred thousand francs," said Magus, "and a quiet girl, full of
golden tones, as you call 'em, like a Titian."
"What class of people are they?"
"Retired merchants; just now in love with art; have a country-house at
Ville d'Avray, and ten or twelve thousand francs a year."
"What business did they do?"
"Bottles."
"Now don't say that word; it makes me think of corks and sets my teeth
on edge."
"Am I to bring them?"
"Three portraits--I could put them in the Salon; I might go in for
portrait-painting. Well, yes!"
Old Elie descended the staircase to go in search of the Vervelle family.
To know to what extent this proposition would act upon the painter, and
what effect would be produced upon him by the Sieur and Dame Vervelle,
adorned by their only daughter, it is necessary to cast an eye on the
anterior life of Pierre Grassou of Fougeres.
When a pupil, Fougeres had studied drawing with Servin, who was
thought a great draughtsman in academic circles. After that he went to
Schinner's, to learn the secrets of the powerful and magnificent color
which distinguishes that master. Master and scholars were all discreet;
at any rate Pierre discovered none of their secrets. From there he went
to Sommervieux' atelier, to acquire that portion of the art of painting
which is called composition, but composition was shy and distant to him.
Then he tried to snatch from Decamps and Granet the mystery of their
interior effects. The two masters were not robbed. Finally Fougeres
ended his education with Duval-Lecamus. During these studies and
these different transformations Fougeres' habits and ways of life were
tranquil and moral to a degree that furnished matter of jesting to the
various ateliers where he sojourned; but everywhere he disarmed his
comrades by his modesty and by the patience and gentleness of a lamblike
nature. The masters, however, had no sympathy for the good lad; masters
prefer bright fellows, eccentric spirits, droll or fiery, or else gloomy
and deeply reflective, which argue future talent. Everything about
Pierre Grassou smacked of mediocrity. His nickname "Fougeres" (that
of the painter in the play of "The Eglantine") was the source of much
teasing; but, by force of circumstances, he accepted the name of the
town in which he had first seen light.
Grassou of Fougeres resembled his name. Plump and of medium height, he
had a dull complexion, brown eyes, black hair, a turned-up nose, rather
wide mouth, and long ears. His gentle, passive, and resigned air gave a
certain relief to these leading features of a physiognomy that was full
of health, but wanting in action. This young man, born to be a virtuous
bourgeois, having left his native place and come to Paris to be clerk
with a color-merchant (formerly of Mayenne and a distant connection of
the Orgemonts) made himself a painter simply by the fact of an obstinacy
which constitutes the Breton character. What he suffered, the manner in
which he lived during those years of study, God only knows. He suffered
as much as great men suffer when they are hounded by poverty and hunted
like wild beasts by the pack of commonplace minds and by troops of
vanities athirst for vengeance.
As soon as he thought himself able to fly on his own wings, Fougeres
took a studio in the upper part of the rue des Martyrs, where he began
to delve his way. He made his first appearance in 1819. The first
picture he presented to the jury of the Exhibition at the Louvre
represented a village wedding rather laboriously copied from Greuze's
picture. It was rejected. When Fougeres heard of the fatal decision,
he did not fall into one of those fits of epileptic self-love to which
strong natures give themselves up, and which sometimes end in challenges
sent to the director or the secretary of the Museum, or even by threats
of assassination. Fougeres quietly fetched his canvas, wrapped it in
a handkerchief, and brought it home, vowing in his heart that he would
still make himself a great painter. He placed his picture on the easel,
and went to one of his former masters, a man of immense talent,--to
Schinner, a kind and patient artist, whose triumph at that year's Salon
was complete. Fougeres asked him to come and criticise the rejected
work. The great painter left everything and went at once. When poor
Fougeres had placed the work before him Schinner, after a glance,
pressed Fougeres' hand.
"You are a fine fellow," he said; "you've a heart of gold, and I must
not deceive you. Listen; you are fulfilling all the promises you made in
the studios. When you find such things as that at the tip of your brush,
my good Fougeres, you had better leave colors with Brullon, and not take
the canvas of others. Go home early, put on your cotton night-cap, and
be in bed by nine o'clock. The next morning early go to some government
office, ask for a place, and give up art."
"My dear friend," said Fougeres, "my picture is already condemned; it is
not a verdict that I want of you, but the cause of that verdict."
"Well--you paint gray and sombre; you see nature being a crape veil;
your drawing is heavy, pasty; your composition is a medley of Greuze,
who only redeemed his defects by the qualities which you lack."
While detailing these faults of the picture Schinner saw on Fougeres'
face so deep an expression of sadness that he carried him off to dinner
and tried to console him. The next morning at seven o'clock Fougeres was
at his easel working over the rejected picture; he warmed the colors; he
made the corrections suggested by Schinner, he touched up his figures.
Then, disgusted with such patching, he carried the picture to Elie
Magus. Elie Magus, a sort of Dutch-Flemish-Belgian, had three reasons
for being what he became,--rich and avaricious. Coming last from
Bordeaux, he was just starting in Paris, selling old pictures and living
on the boulevard Bonne-Nouvelle. Fougeres, who relied on his palette
to go to the baker's, bravely ate bread and nuts, or bread and milk, or
bread and cherries, or bread and cheese, according to the seasons. Elie
Magus, to whom Pierre offered his first picture, eyed it for some time
and then gave him fifteen francs.
"With fifteen francs a year coming in, and a thousand francs for
expenses," said Fougeres, smiling, "a man will go fast and far."
Elie Magus made a gesture; he bit his thumbs, thinking that he might
have had that picture for five francs.
For several days Pierre walked down from the rue des Martyrs and
stationed himself at the corner of the boulevard opposite to Elie's
shop, whence his eye could rest upon his picture, which did not obtain
any notice from the eyes of the passers along the street. At the end of
a week the picture disappeared; Fougeres walked slowly up and approached
the dealer's shop in a lounging manner. The Jew was at his door.
"Well, I see you have sold my picture."
"No, here it is," said Magus; "I've framed it, to show it to some one
who fancies he knows about painting."
Fougeres had not the heart to return to the boulevard. He set about
another picture, and spent two months upon it,--eating mouse's meals and
working like a galley-slave.
One evening he went to the boulevard, his feet leading him fatefully to
the dealer's shop. His picture was not to be seen.
"I've sold your picture," said Elie Magus, seeing him.
"For how much?"
"I got back what I gave and a small interest. Make me some Flemish
interiors, a lesson of anatomy, landscapes, and such like, and I'll buy
them of you," said Elie.
Fougeres would fain have taken old Magus in his arms; he regarded him as
a father. He went home with joy in his heart; the great painter Schinner
was mistaken after all! In that immense city of Paris there were some
hearts that beat in unison with Pierre's; his talent was understood and
appreciated. The poor fellow of twenty-seven had the innocence of a lad
of sixteen. Another man, one of those distrustful, surly artists, would
have noticed the diabolical look on Elie's face and seen the twitching
of the hairs of his beard, the irony of his moustache, and the movement
of his shoulders which betrayed the satisfaction of Walter Scott's Jew
in swindling a Christian.
Fougeres marched along the boulevard in a state of joy which gave to his
honest face an expression of pride. He was like a schoolboy protecting
a woman. He met Joseph Bridau, one of his comrades, and one of those
eccentric geniuses destined to fame and sorrow. Joseph Bridau, who had,
to use his own expression, a few sous in his pocket, took Fougeres to
the Opera. But Fougeres didn't see the ballet, didn't hear the music; he
was imagining pictures, he was painting. He left Joseph in the middle
of the evening, and ran home to make sketches by lamp-light. He invented
thirty pictures, all reminiscence, and felt himself a man of genius. The
next day he bought colors, and canvases of various dimensions; he piled
up bread and cheese on his table, he filled a water-pot with water,
he laid in a provision of wood for his stove; then, to use a studio
expression, he dug at his pictures. He hired several models and Magus
lent him stuffs.
After two months' seclusion the Breton had finished four pictures. Again
he asked counsel of Schinner, this time adding Bridau to the invitation.
The two painters saw in three of these pictures a servile imitation
of Dutch landscapes and interiors by Metzu, in the fourth a copy of
Rembrandt's "Lesson of Anatomy."
"Still imitating!" said Schinner. "Ah! Fougeres can't manage to be
original."
"You ought to do something else than painting," said Bridau.
"What?" asked Fougeres.
"Fling yourself into literature."
Fougeres lowered his head like a sheep when it rains. Then he asked and
obtained certain useful advice, and retouched his pictures before taking
them to Elie Magus. Elie paid him twenty-five francs apiece. At that
price of course Fougeres earned nothing; neither did he lose, thanks to
his sober living. He made a few excursions to the boulevard to see what
became of his pictures, and there he underwent a singular hallucination.
His neat, clean paintings, hard as tin and shiny as porcelain, were
covered with a sort of mist; they looked like old daubs. Magus was out,
and Pierre could obtain no information on this phenomenon. He fancied
something was wrong with his eyes.
The painter went back to his studio and made more pictures. After seven
years of continued toil Fougeres managed to compose and execute quite
passable work. He did as well as any artist of the second class.
Elie bought and sold all the paintings of the poor Breton, who earned
laboriously about two thousand francs a year while he spent but twelve
hundred.
At the Exhibition of 1829, Leon de Lora, Schinner, and Bridau, who all
three occupied a great position and were, in fact, at the head of the
art movement, were filled with pity for the perseverance and the poverty
of their old friend; and they caused to be admitted into the grand salon
of the Exhibition, a picture by Fougeres. This picture, powerful in
interest but derived from Vigneron as to sentiment and from Dubufe's
first manner as to execution, represented a young man in prison, whose
hair was being cut around the nape of the neck. On one side was
a priest, on the other two women, one old, one young, in tears. A
sheriff's clerk was reading aloud a document. On a wretched table was a
meal, untouched. The light came in through the bars of a window near
the ceiling. It was a picture fit to make the bourgeois shudder, and
the bourgeois shuddered. Fougeres had simply been inspired by the
masterpiece of Gerard Douw; he had turned the group of the "Dropsical
Woman" toward the window, instead of presenting it full front. The
condemned man was substituted for the dying woman--same pallor, same
glance, same appeal to God. Instead of the Dutch doctor, he had painted
the cold, official figure of the sheriff's clerk attired in black; but
he had added an old woman to the young one of Gerard Douw. The cruelly
simple and good-humored face of the executioner completed and dominated
the group. This plagiarism, very cleverly disguised, was not discovered.
The catalogue contained the following:--
510. Grassou de Fougeres (Pierre), rue de Navarin, 2.
Death-toilet of a Chouan, condemned to execution in 1809.
Though wholly second-rate, the picture had immense success, for it
recalled the affair of the "chauffeurs," of Mortagne. A crowd collected
every day before the now fashionable canvas; even Charles X. paused to
look at it. "Madame," being told of the patient life of the poor Breton,
became enthusiastic over him. The Duc d'Orleans asked the price of
the picture. The clergy told Madame la Dauphine that the subject was
suggestive of good thoughts; and there was, in truth, a most satisfying
religious tone about it. Monseigneur the Dauphin admired the dust on
the stone-floor,--a huge blunder, by the way, for Fougeres had painted
greenish tones suggestive of mildew along the base of the walls.
"Madame" finally bought the picture for a thousand francs, and the
Dauphin ordered another like it. Charles X. gave the cross of the Legion
of honor to this son of a peasant who had fought for the royal cause
in 1799. (Joseph Bridau, the great painter, was not yet decorated.) The
minister of the Interior ordered two church pictures of Fougeres.
This Salon of 1829 was to Pierre Grassou his whole fortune, fame,
future, and life. Be original, invent, and you die by inches; copy,
imitate, and you'll live. After this discovery of a gold mine, Grassou
de Fougeres obtained his benefit of the fatal principle to which society
owes the wretched mediocrities to whom are intrusted in these days the
election of leaders in all social classes; who proceed, naturally, to
elect themselves and who wage a bitter war against all true talent. The
principle of election applied indiscriminately is false, and France will
some day abandon it.
Nevertheless the modesty, simplicity, and genuine surprise of the good
and gentle Fougeres silenced all envy and all recriminations. Besides,
he had on his side all of his clan who had succeeded, and all who
expected to succeed. Some persons, touched by the persistent energy of a
man whom nothing had discouraged, talked of Domenichino and said:--
"Perseverance in the arts should be rewarded. Grassou hasn't stolen his
successes; he has delved for ten years, the poor dear man!"
That exclamation of "poor dear man!" counted for half in the support
and the congratulations which the painter received. Pity sets up
mediocrities as envy pulls down great talents, and in equal numbers.
The newspapers, it is true, did not spare criticism, but the chevalier
Fougeres digested them as he had digested the counsel of his friends,
with angelic patience.
Possessing, by this time, fifteen thousand francs, laboriously earned,
he furnished an apartment and studio in the rue de Navarin, and painted
the picture ordered by Monseigneur the Dauphin, also the two church
pictures, and delivered them at the time agreed on, with a punctuality
that was very discomforting to the exchequer of the ministry, accustomed
to a different course of action. But--admire the good fortune of men who
are methodical--if Grassou, belated with his work, had been caught by
the revolution of July he would not have got his money.
By the time he was thirty-seven Fougeres had manufactured for Elie Magus
some two hundred pictures, all of them utterly unknown, by the help of
which he had attained to that satisfying manner, that point of execution
before which the true artist shrugs his shoulders and the bourgeoisie
worships. Fougeres was dear to friends for rectitude of ideas, for
steadiness of sentiment, absolute kindliness, and great loyalty; though
they had no esteem for his palette, they loved the man who held it.
"What a misfortune it is that Fougeres has the vice of painting!" said
his comrades.
But for all this, Grassou gave excellent counsel, like those
feuilletonists incapable of writing a book who know very well where a
book is wanting. There was this difference, however, between literary
critics and Fougeres; he was eminently sensitive to beauties; he felt
them, he acknowledged them, and his advice was instinct with a spirit
of justice that made the justness of his remarks acceptable. After
the revolution of July, Fougeres sent about ten pictures a year to the
Salon, of which the jury admitted four or five. He lived with the most
rigid economy, his household being managed solely by an old charwoman.
For all amusement he visited his friends, he went to see works of art,
he allowed himself a few little trips about France, and he planned to go
to Switzerland in search of inspiration. This detestable artist was an
excellent citizen; he mounted guard duly, went to reviews, and paid his
rent and provision-bills with bourgeois punctuality.
Having lived all his life in toil and poverty, he had never had the time
to love. Poor and a bachelor, until now he did not desire to complicate
his simple life. Incapable of devising any means of increasing his
little fortune, he carried, every three months, to his notary, Cardot,
his quarterly earnings and economies. When the notary had received
about three thousand francs he invested them in some first mortgage, the
interest of which he drew himself and added to the quarterly payments
made to him by Fougeres. The painter was awaiting the fortunate moment
when his property thus laid by would give him the imposing income of two
thousand francs, to allow himself the otium cum dignitate of the
artist and paint pictures; but oh! what pictures! true pictures! each a
finished picture! chouette, Koxnoff, chocnosoff! His future, his dreams
of happiness, the superlative of his hopes--do you know what it was?
To enter the Institute and obtain the grade of officer of the Legion
of honor; to sit down beside Schinner and Leon de Lora, to reach the
Academy before Bridau, to wear a rosette in his buttonhole! What a
dream! It is only commonplace men who think of everything.
Hearing the sound of several steps on the staircase, Fougeres rubbed up
his hair, buttoned his jacket of bottle-green velveteen, and was not a
little amazed to see, entering his doorway, a simpleton face vulgarly
called in studio slang a "melon." This fruit surmounted a pumpkin,
clothed in blue cloth adorned with a bunch of tintinnabulating baubles.
The melon puffed like a walrus; the pumpkin advanced on turnips,
improperly called legs. A true painter would have turned the little
bottle-vendor off at once, assuring him that he didn't paint vegetables.
This painter looked at his client without a smile, for Monsieur Vervelle
wore a three-thousand-franc diamond in the bosom of his shirt.
Fougeres glanced at Magus and said: "There's fat in it!" using a slang
term then much in vogue in the studios.
Hearing those words Monsieur Vervelle frowned. The worthy bourgeois drew
after him another complication of vegetables in the persons of his wife
and daughter. The wife had a fine veneer of mahogany on her face, and
in figure she resembled a cocoa-nut, surmounted by a head and tied in
around the waist. She pivoted on her legs, which were tap-rooted,
and her gown was yellow with black stripes. She proudly exhibited
unutterable mittens on a puffy pair of hands; the plumes of a
first-class funeral floated on an over-flowing bonnet; laces adorned
her shoulders, as round behind as they were before; consequently, the
spherical form of the cocoa-nut was perfect. Her feet, of a kind that
painters call abatis, rose above the varnished leather of the shoes in a
swelling that was some inches high. How the feet were ever got into the
shoes, no one knows.
Following these vegetable parents was a young asparagus, who presented
a tiny head with smoothly banded hair of the yellow-carroty tone that a
Roman adores, long, stringy arms, a fairly white skin with reddish spots
upon it, large innocent eyes, and white lashes, scarcely any brows, a
leghorn bonnet bound with white satin and adorned with two honest bows
of the same satin, hands virtuously red, and the feet of her mother. The
faces of these three beings wore, as they looked round the studio, an
air of happiness which bespoke in them a respectable enthusiasm for Art.
"So it is you, monsieur, who are going to take our likenesses?" said the
father, assuming a jaunty air.
"Yes, monsieur," replied Grassou.
"Vervelle, he has the cross!" whispered the wife to the husband while
the painter's back was turned.
"Should I be likely to have our portraits painted by an artist who
wasn't decorated?" returned the former bottle-dealer.
Elie Magus here bowed to the Vervelle family and went away. Grassou
accompanied him to the landing.
"There's no one but you who would fish up such whales."
"One hundred thousand francs of 'dot'!"
"Yes, but what a family!"
"Three hundred thousand francs of expectations, a house in the rue
Boucherat, and a country-house at Ville d'Avray!"
"Bottles and corks! bottles and corks!" said the painter; "they set my
teeth on edge."
"Safe from want for the rest of your days," said Elie Magus as he
departed.
That idea entered the head of Pierre Grassou as the daylight had burst
into his garret that morning.
While he posed the father of the young person, he thought the
bottle-dealer had a good countenance, and he admired the face full
of violent tones. The mother and daughter hovered about the easel,
marvelling at all his preparations; they evidently thought him a
demigod. This visible admiration pleased Fougeres. The golden calf threw
upon the family its fantastic reflections.
"You must earn lots of money; but of course you don't spend it as you
get it," said the mother.
"No, madame," replied the painter; "I don't spend it; I have not the
means to amuse myself. My notary invests my money; he knows what I have;
as soon as I have taken him the money I never think of it again."
"I've always been told," cried old Vervelle, "that artists were baskets
with holes in them."
"Who is your notary--if it is not indiscreet to ask?" said Madame
Vervelle.
"A good fellow, all round," replied Grassou. "His name is Cardot."
"Well, well! if that isn't a joke!" exclaimed Vervelle. "Cardot is our
notary too."
"Take care! don't move," said the painter.
"Do pray hold still, Antenor," said the wife. "If you move about you'll
make monsieur miss; you should just see him working, and then you'd
understand."
"Oh! why didn't you have me taught the arts?" said Mademoiselle Vervelle
to her parents.
"Virginie," said her mother, "a young person ought not to learn certain
things. When you are married--well, till then, keep quiet."
During this first sitting the Vervelle family became almost intimate
with the worthy artist. They were to come again two days later. As they
went away the father told Virginie to walk in front; but in spite of
this separation, she overheard the following words, which naturally
awakened her curiosity.
"Decorated--thirty-seven years old--an artist who gets orders--puts his
money with our notary. We'll consult Cardot. Hein! Madame de Fougeres!
not a bad name--doesn't look like a bad man either! One might prefer a
merchant; but before a merchant retires from business one can never know
what one's daughter may come to; whereas an economical artist--and then
you know we love Art--Well, we'll see!"
While the Vervelle family discussed Pierre Grassou, Pierre Grassou
discussed in his own mind the Vervelle family. He found it impossible to
stay peacefully in his studio, so he took a walk on the boulevard, and
looked at all the red-haired women who passed him. He made a series of
the oddest reasonings to himself: gold was the handsomest of metals; a
tawny yellow represented gold; the Romans were fond of red-haired women,
and he turned Roman, etc. After two years of marriage what man would
ever care about the color of his wife's hair? Beauty fades,--but
ugliness remains! Money is one-half of all happiness. That night when he
went to bed the painter had come to think Virginie Vervelle charming.
When the three Vervelles arrived on the day of the second sitting the
artist received them with smiles. The rascal had shaved and put on clean
linen; he had also arranged his hair in a pleasing manner, and chosen
a very becoming pair of trousers and red leather slippers with pointed
toes. The family replied with smiles as flattering as those of the
artist. Virginie became the color of her hair, lowered her eyes, and
turned aside her head to look at the sketches. Pierre Grassou thought
these little affectations charming, Virginie had such grace; happily she
didn't look like her father or her mother; but whom did she look like?
During this sitting there were little skirmishes between the family
and the painter, who had the audacity to call pere Vervelle witty. This
flattery brought the family on the double-quick to the heart of the
artist; he gave a drawing to the daughter, and a sketch to the mother.
"What! for nothing?" they said.
Pierre Grassou could not help smiling.
"You shouldn't give away your pictures in that way; they are money,"
said old Vervelle.
At the third sitting pere Vervelle mentioned a fine gallery of pictures
which he had in his country-house at Ville d'Avray--Rubens, Gerard Douw,
Mieris, Terburg, Rembrandt, Titian, Paul Potter, etc.
"Monsieur Vervelle has been very extravagant," said Madame Vervelle,
ostentatiously. "He has over one hundred thousand francs' worth of
pictures."
"I love Art," said the former bottle-dealer.
When Madame Vervelle's portrait was begun that of her husband was nearly
finished, and the enthusiasm of the family knew no bounds. The notary
had spoken in the highest praise of the painter. Pierre Grassou was, he
said, one of the most honest fellows on earth; he had laid by thirty-six
thousand francs; his days of poverty were over; he now saved about ten
thousand francs a year and capitalized the interest; in short, he was
incapable of making a woman unhappy. This last remark had enormous
weight in the scales. Vervelle's friends now heard of nothing but the
celebrated painter Fougeres.
The day on which Fougeres began the portrait of Mademoiselle Virginie,
he was virtually son-in-law to the Vervelle family. The three Vervelles
bloomed out in this studio, which they were now accustomed to consider
as one of their residences; there was to them an inexplicable attraction
in this clean, neat, pretty, and artistic abode. Abyssus abyssum, the
commonplace attracts the commonplace. Toward the end of the sitting the
stairway shook, the door was violently thrust open by Joseph Bridau; he
came like a whirlwind, his hair flying. He showed his grand haggard face
as he looked about him, casting everywhere the lightning of his glance;
then he walked round the whole studio, and returned abruptly to Grassou,
pulling his coat together over the gastric region, and endeavouring, but
in vain, to button it, the button mould having escaped from its capsule
of cloth.
"Wood is dear," he said to Grassou.
"Ah!"
"The British are after me" (slang term for creditors) "Gracious! do you
paint such things as that?"
"Hold your tongue!"
"Ah! to be sure, yes."
The Vervelle family, extremely shocked by this extraordinary apparition,
passed from its ordinary red to a cherry-red, two shades deeper.
"Brings in, hey?" continued Joseph. "Any shot in your locker?"
"How much do you want?"
"Five hundred. I've got one of those bull-dog dealers after me, and if
the fellow once gets his teeth in he won't let go while there's a bit of
me left. What a crew!"
"I'll write you a line for my notary."
"Have you got a notary?"
"Yes."
"That explains to me why you still make cheeks with pink tones like a
perfumer's sign."
Grassou could not help coloring, for Virginie was sitting.
"Take Nature as you find her," said the great painter, going on with his
lecture. "Mademoiselle is red-haired. Well, is that a sin? All things
are magnificent in painting. Put some vermillion on your palette, and
warm up those cheeks; touch in those little brown spots; come, butter it
well in. Do you pretend to have more sense than Nature?"
"Look here," said Fougeres, "take my place while I go and write that
note."
Vervelle rolled to the table and whispered in Grassou's ear:--
"Won't that country lout spoilt it?"
"If he would only paint the portrait of your Virginie it would be worth
a thousand times more than mine," replied Fougeres, vehemently.
Hearing that reply the bourgeois beat a quiet retreat to his wife, who
was stupefied by the invasion of this ferocious animal, and very uneasy
at his co-operation in her daughter's portrait.
"Here, follow these indications," said Bridau, returning the palette,
and taking the note. "I won't thank you. I can go back now to d'Arthez'
chateau, where I am doing a dining-room, and Leon de Lora the tops of
the doors--masterpieces! Come and see us."
And off he went without taking leave, having had enough of looking at
Virginie.
"Who is that man?" asked Madame Vervelle.
"A great artist," answered Grassou.
There was silence for a moment.
"Are you quite sure," said Virginie, "that he has done no harm to my
portrait? He frightened me."
"He has only done it good," replied Grassou.
"Well, if he is a great artist, I prefer a great artist like you," said
Madame Vervelle.
The ways of genius had ruffled up these orderly bourgeois.
The phase of autumn so pleasantly named "Saint Martin's summer" was
just beginning. With the timidity of a neophyte in presence of a man of
genius, Vervelle risked giving Fougeres an invitation to come out to
his country-house on the following Sunday. He knew, he said, how little
attraction a plain bourgeois family could offer to an artist.
"You artists," he continued, "want emotions, great scenes, and witty
talk; but you'll find good wines, and I rely on my collection of
pictures to compensate an artist like you for the bore of dining with
mere merchants."
This form of idolatry, which stroked his innocent self-love, was
charming to our poor Pierre Grassou, so little accustomed to such
compliments. The honest artist, that atrocious mediocrity, that heart
of gold, that loyal soul, that stupid draughtsman, that worthy fellow,
decorated by royalty itself with the Legion of honor, put himself under
arms to go out to Ville d'Avray and enjoy the last fine days of the
year. The painter went modestly by public conveyance, and he could not
but admire the beautiful villa of the bottle-dealer, standing in a park
of five acres at the summit of Ville d'Avray, commanding a noble view
of the landscape. Marry Virginie, and have that beautiful villa some day
for his own!
He was received by the Vervelles with an enthusiasm, a joy, a
kindliness, a frank bourgeois absurdity which confounded him. It was
indeed a day of triumph. The prospective son-in-law was marched about
the grounds on the nankeen-colored paths, all raked as they should be
for the steps of so great a man. The trees themselves looked brushed and
combed, and the lawns had just been mown. The pure country air wafted
to the nostrils a most enticing smell of cooking. All things about the
mansion seemed to say:
"We have a great artist among us."
Little old Vervelle himself rolled like an apple through his park, the
daughter meandered like an eel, the mother followed with dignified step.
These three beings never let go for one moment of Pierre Grassou
for seven hours. After dinner, the length of which equalled its
magnificence, Monsieur and Madame Vervelle reached the moment of their
grand theatrical effect,--the opening of the picture gallery illuminated
by lamps, the reflections of which were managed with the utmost care.
Three neighbours, also retired merchants, an old uncle (from whom were
expectations), an elderly Demoiselle Vervelle, and a number of other
guests invited to be present at this ovation to a great artist followed
Grassou into the picture gallery, all curious to hear his opinion of the
famous collection of pere Vervelle, who was fond of oppressing them with
the fabulous value of his paintings. The bottle-merchant seemed to have
the idea of competing with King Louis-Philippe and the galleries of
Versailles.
The pictures, magnificently framed, each bore labels on which was read
in black letters on a gold ground:
Rubens
Dance of fauns and nymphs
Rembrandt
Interior of a dissecting room. The physician van Tromp
instructing his pupils.
In all, there were one hundred and fifty pictures, varnished and dusted.
Some were covered with green baize curtains which were not undrawn in
presence of young ladies.
Pierre Grassou stood with arms pendent, gaping mouth, and no word upon
his lips as he recognized half his own pictures in these works of art.
He was Rubens, he was Rembrandt, Mieris, Metzu, Paul Potter, Gerard
Douw! He was twenty great masters all by himself.
"What is the matter? You've turned pale!"
"Daughter, a glass of water! quick!" cried Madame Vervelle. The painter
took pere Vervelle by the button of his coat and led him to a corner on
pretence of looking at a Murillo. Spanish pictures were then the rage.
"You bought your pictures from Elie Magus?"
"Yes, all originals."
"Between ourselves, tell me what he made you pay for those I shall point
out to you."
Together they walked round the gallery. The guests were amazed at the
gravity in which the artist proceeded, in company with the host, to
examine each picture.
"Three thousand francs," said Vervelle in a whisper, as they reached the
last, "but I tell everybody forty thousand."
"Forty thousand for a Titian!" said the artist, aloud. "Why, it is
nothing at all!"
"Didn't I tell you," said Vervelle, "that I had three hundred thousand
francs' worth of pictures?"
"I painted those pictures," said Pierre Grassou in Vervelle's ear, "and
I sold them one by one to Elie Magus for less than ten thousand francs
the whole lot."
"Prove it to me," said the bottle-dealer, "and I double my daughter's
'dot,' for if it is so, you are Rubens, Rembrandt, Titian, Gerard Douw!"
"And Magus is a famous picture-dealer!" said the painter, who now saw
the meaning of the misty and aged look imparted to his pictures in
Elie's shop, and the utility of the subjects the picture-dealer had
required of him.
Far from losing the esteem of his admiring bottle-merchant, Monsieur
de Fougeres (for so the family persisted in calling Pierre Grassou)
advanced so much that when the portraits were finished he presented them
gratuitously to his father-in-law, his mother-in-law and his wife.
At the present day, Pierre Grassou, who never misses exhibiting at the
Salon, passes in bourgeois regions for a fine portrait-painter. He earns
some twenty thousand francs a year and spoils a thousand francs' worth
of canvas. His wife has six thousand francs a year in dowry, and he
lives with his father-in-law. The Vervelles and the Grassous, who agree
delightfully, keep a carriage, and are the happiest people on earth.
Pierre Grassou never emerges from the bourgeois circle, in which he
is considered one of the greatest artists of the period. Not a family
portrait is painted between the barrier du Trone and the rue du Temple
that is not done by this great painter; none of them costs less than
five hundred francs. The great reason which the bourgeois families have
for employing him is this:--
"Say what you will of him, he lays by twenty thousand francs a year with
his notary."
As Grassou took a creditable part on the occasion of the riots of May
12th he was appointed an officer of the Legion of honor. He is a major
in the National Guard. The Museum of Versailles felt it incumbent to
order a battle-piece of so excellent a citizen, who thereupon walked
about Paris to meet his old comrades and have the happiness of saying to
them:--
"The King has given me an order for the Museum of Versailles."
Madame de Fougeres adores her husband, to whom she has presented two
children. This painter, a good father and a good husband, is unable to
eradicate from his heart a fatal thought, namely, that artists laugh at
his work; that his name is a term of contempt in the studios; and that
the feuilletons take no notice of his pictures. But he still works on;
he aims for the Academy, where, undoubtedly, he will enter. And--oh!
vengeance which dilates his heart!--he buys the pictures of celebrated
artists who are pinched for means, and he substitutes these true works
of art that are not his own for the wretched daubs in the collection at
Ville d'Avray.
There are many mediocrities more aggressive and more mischievous than
that of Pierre Grassou, who is, moreover, anonymously benevolent and
truly obliging.
ADDENDUM
The following personages appear in other stories of the Human Comedy.
Bridau, Joseph
The Purse
A Bachelor's Establishment
A Distinguished Provincial at Paris
A Start in Life
Modeste Mignon
Another Study of Woman
Letters of Two Brides
Cousin Betty
The Member for Arcis
Cardot (Parisian notary)
The Muse of the Department
A Man of Business
Jealousies of a Country Town
The Middle Classes
Cousin Pons
Grassou, Pierre
A Bachelor's Establishment
Cousin Betty
The Middle Classes
Cousin Pons
Lora, Leon de
The Unconscious Humorists
A Bachelor's Establishment
A Start in Life
Honorine
Cousin Betty
Beatrix
Magus, Elie
The Vendetta
A Marriage Settlement
A Bachelor's Establishment
Cousin Pons
Schinner, Hippolyte
The Purse
A Bachelor's Establishment
A Start in Life
Albert Savarus
The Government Clerks
Modeste Mignon
The Imaginary Mistress
The Unconscious Humorists
End of the Project Gutenberg EBook of Pierre Grassou, by Honore de Balzac
any notice from the eyes of the passers along the street. At the end of
a week the picture disappeared; Fougeres walked slowly up and approached
the dealer's shop in a lounging manner. The Jew was at his door.
"Well, I see you have sold my picture."
"No, here it is," said Magus; "I've framed it, to show it to some one
who fancies he knows about painting."
Fougeres had not the heart to return to the boulevard. He set about
another picture, and spent two months upon it,--eating mouse's meals and
working like a galley-slave.
One evening he went to the boulevard, his feet leading him fatefully to
the dealer's shop. His picture was not to be seen.
"I've sold your picture," said Elie Magus, seeing him.
"For how much?"
"I got back what I gave and a small interest. Make me some Flemish
interiors, a lesson of anatomy, landscapes, and such like, and I'll buy
them of you," said Elie.
Fougeres would fain have taken old Magus in his arms; he regarded him as
a father. He went home with joy in his heart; the great painter Schinner
was mistaken after all! In that immense city of Paris there were some
hearts that beat in unison with Pierre's; his talent was understood and
appreciated. The poor fellow of twenty-seven had the innocence of a lad
of sixteen. Another man, one of those distrustful, surly artists, would
have noticed the diabolical look on Elie's face and seen the twitching
of the hairs of his beard, the irony of his moustache, and the movement
of his shoulders which betrayed the satisfaction of Walter Scott's Jew
in swindling a Christian.
Fougeres marched along the boulevard in a state of joy which gave to his
honest face an expression of pride. He was like a schoolboy protecting
a woman. He met Joseph Bridau, one of his comrades, and one of those
eccentric geniuses destined to fame and sorrow. Joseph Bridau, who had,
to use his own expression, a few sous in his pocket, took Fougeres to
the Opera. But Fougeres didn't see the ballet, didn't hear the music; he
was imagining pictures, he was painting. He left Joseph in the middle
of the evening, and ran home to make sketches by lamp-light. He invented
thirty pictures, all reminiscence, and felt himself a man of genius. The
next day he bought colors, and canvases of various dimensions; he piled
up bread and cheese on his table, he filled a water-pot with water,
he laid in a provision of wood for his stove; then, to use a studio
expression, he dug at his pictures. He hired several models and Magus
lent him stuffs.
After two months' seclusion the Breton had finished four pictures. Again
he asked counsel of Schinner, this time adding Bridau to the invitation.
The two painters saw in three of these pictures a servile imitation
of Dutch landscapes and interiors by Metzu, in the fourth a copy of
Rembrandt's "Lesson of Anatomy."
"Still imitating!" said Schinner. "Ah! Fougeres can't manage to be
original."
"You ought to do something else than painting," said Bridau.
"What?" asked Fougeres.
"Fling yourself into literature."
Fougeres lowered his head like a sheep when it rains. Then he asked and
obtained certain useful advice, and retouched his pictures before taking
them to Elie Magus. Elie paid him twenty-five francs apiece. At that
price of course Fougeres earned nothing; neither did he lose, thanks to
his sober living. He made a few excursions to the boulevard to see what
became of his pictures, and there he underwent a singular hallucination.
His neat, clean paintings, hard as tin and shiny as porcelain, were
covered with a sort of mist; they looked like old daubs. Magus was out,
and Pierre could obtain no information on this phenomenon. He fancied
something was wrong with his eyes.
The painter went back to his studio and made more pictures. After seven
years of continued toil Fougeres managed to compose and execute quite
passable work. He did as well as any artist of the second class.
Elie bought and sold all the paintings of the poor Breton, who earned
laboriously about two thousand francs a year while he spent but twelve
hundred.
At the Exhibition of 1829, Leon de Lora, Schinner, and Bridau, who all
three occupied a great position and were, in fact, at the head of the
art movement, were filled with pity for the perseverance and the poverty
of their old friend; and they caused to be admitted into the grand salon
of the Exhibition, a picture by Fougeres. This picture, powerful in
interest but derived from Vigneron as to sentiment and from Dubufe's
first manner as to execution, represented a young man in prison, whose
hair was being cut around the nape of the neck. On one side was
a priest, on the other two women, one old, one young, in tears. A
sheriff's clerk was reading aloud a document. On a wretched table was a
meal, untouched. The light came in through the bars of a window near
the ceiling. It was a picture fit to make the bourgeois shudder, and
the bourgeois shuddered. Fougeres had simply been inspired by the
masterpiece of Gerard Douw; he had turned the group of the "Dropsical
Woman" toward the window, instead of presenting it full front. The
condemned man was substituted for the dying woman--same pallor, same
glance, same appeal to God. Instead of the Dutch doctor, he had painted
the cold, official figure of the sheriff's clerk attired in black; but
he had added an old woman to the young one of Gerard Douw. The cruelly
simple and good-humored face of the executioner completed and dominated
the group. This plagiarism, very cleverly disguised, was not discovered.
The catalogue contained the following:--
510. Grassou de Fougeres (Pierre), rue de Navarin, 2.
Death-toilet of a Chouan, condemned to execution in 1809.
Though wholly second-rate, the picture had immense success, for it
recalled the affair of the "chauffeurs," of Mortagne. A crowd collected
every day before the now fashionable canvas; even Charles X. paused to
look at it. "Madame," being told of the patient life of the poor Breton,
became enthusiastic over him. The Duc d'Orleans asked the price of
the picture. The clergy told Madame la Dauphine that the subject was
suggestive of good thoughts; and there was, in truth, a most satisfying
religious tone about it. Monseigneur the Dauphin admired the dust on
the stone-floor,--a huge blunder, by the way, for Fougeres had painted
greenish tones suggestive of mildew along the base of the walls.
"Madame" finally bought the picture for a thousand francs, and the
Dauphin ordered another like it. Charles X. gave the cross of the Legion
of honor to this son of a peasant who had fought for the royal cause
in 1799. (Joseph Bridau, the great painter, was not yet decorated.) The
minister of the Interior ordered two church pictures of Fougeres.
This Salon of 1829 was to Pierre Grassou his whole fortune, fame,
future, and life. Be original, invent, and you die by inches; copy,
imitate, and you'll live. After this discovery of a gold mine, Grassou
de Fougeres obtained his benefit of the fatal principle to which society
owes the wretched mediocrities to whom are intrusted in these days the
election of leaders in all social classes; who proceed, naturally, to
elect themselves and who wage a bitter war against all true talent. The
principle of election applied indiscriminately is false, and France will
some day abandon it.
Nevertheless the modesty, simplicity, and genuine surprise of the good
and gentle Fougeres silenced all envy and all recriminations. Besides,
he had on his side all of his clan who had succeeded, and all who
expected to succeed. Some persons, touched by the persistent energy of a
man whom nothing had discouraged, talked of Domenichino and said:--
"Perseverance in the arts should be rewarded. Grassou hasn't stolen his
successes; he has delved for ten years, the poor dear man!"
That exclamation of "poor dear man!" counted for half in the support
and the congratulations which the painter received. Pity sets up
mediocrities as envy pulls down great talents, and in equal numbers.
The newspapers, it is true, did not spare criticism, but the chevalier
Fougeres digested them as he had digested the counsel of his friends,
with angelic patience.
Possessing, by this time, fifteen thousand francs, laboriously earned,
he furnished an apartment and studio in the rue de Navarin, and painted
the picture ordered by Monseigneur the Dauphin, also the two church
pictures, and delivered them at the time agreed on, with a punctuality
that was very discomforting to the exchequer of the ministry, accustomed
to a different course of action. But--admire the good fortune of men who
are methodical--if Grassou, belated with his work, had been caught by
the revolution of July he would not have got his money.
By the time he was thirty-seven Fougeres had manufactured for Elie Magus
some two hundred pictures, all of them utterly unknown, by the help of
which he had attained to that satisfying manner, that point of execution
before which the true artist shrugs his shoulders and the bourgeoisie
worships. Fougeres was dear to friends for rectitude of ideas, for
steadiness of sentiment, absolute kindliness, and great loyalty; though
they had no esteem for his palette, they loved the man who held it.
"What a misfortune it is that Fougeres has the vice of painting!" said
his comrades.
But for all this, Grassou gave excellent counsel, like those
feuilletonists incapable of writing a book who know very well where a
book is wanting. There was this difference, however, between literary
critics and Fougeres; he was eminently sensitive to beauties; he felt
them, he acknowledged them, and his advice was instinct with a spirit
of justice that made the justness of his remarks acceptable. After
the revolution of July, Fougeres sent about ten pictures a year to the
Salon, of which the jury admitted four or five. He lived with the most
rigid economy, his household being managed solely by an old charwoman.
For all amusement he visited his friends, he went to see works of art,
he allowed himself a few little trips about France, and he planned to go
to Switzerland in search of inspiration. This detestable artist was an
excellent citizen; he mounted guard duly, went to reviews, and paid his
rent and provision-bills with bourgeois punctuality.
Having lived all his life in toil and poverty, he had never had the time
to love. Poor and a bachelor, until now he did not desire to complicate
his simple life. Incapable of devising any means of increasing his
little fortune, he carried, every three months, to his notary, Cardot,
his quarterly earnings and economies. When the notary had received
about three thousand francs he invested them in some first mortgage, the
interest of which he drew himself and added to the quarterly payments
made to him by Fougeres. The painter was awaiting the fortunate moment
when his property thus laid by would give him the imposing income of two
thousand francs, to allow himself the otium cum dignitate of the
artist and paint pictures; but oh! what pictures! true pictures! each a
finished picture! chouette, Koxnoff, chocnosoff! His future, his dreams
of happiness, the superlative of his hopes--do you know what it was?
To enter the Institute and obtain the grade of officer of the Legion
of honor; to sit down beside Schinner and Leon de Lora, to reach the
Academy before Bridau, to wear a rosette in his buttonhole! What a
dream! It is only commonplace men who think of everything.
Hearing the sound of several steps on the staircase, Fougeres rubbed up
his hair, buttoned his jacket of bottle-green velveteen, and was not a
little amazed to see, entering his doorway, a simpleton face vulgarly
called in studio slang a "melon." This fruit surmounted a pumpkin,
clothed in blue cloth adorned with a bunch of tintinnabulating baubles.
The melon puffed like a walrus; the pumpkin advanced on turnips,
improperly called legs. A true painter would have turned the little
bottle-vendor off at once, assuring him that he didn't paint vegetables.
This painter looked at his client without a smile, for Monsieur Vervelle
wore a three-thousand-franc diamond in the bosom of his shirt.
Fougeres glanced at Magus and said: "There's fat in it!" using a slang
term then much in vogue in the studios.
Hearing those words Monsieur Vervelle frowned. The worthy bourgeois drew
after him another complication of vegetables in the persons of his wife
and daughter. The wife had a fine veneer of mahogany on her face, and
in figure she resembled a cocoa-nut, surmounted by a head and tied in
around the waist. She pivoted on her legs, which were tap-rooted,
and her gown was yellow with black stripes. She proudly exhibited
unutterable mittens on a puffy pair of hands; the plumes of a
first-class funeral floated on an over-flowing bonnet; laces adorned
her shoulders, as round behind as they were before; consequently, the
spherical form of the cocoa-nut was perfect. Her feet, of a kind that
painters call abatis, rose above the varnished leather of the shoes in a
swelling that was some inches high. How the feet were ever got into the
shoes, no one knows.
Following these vegetable parents was a young asparagus, who presented
a tiny head with smoothly banded hair of the yellow-carroty tone that a
Roman adores, long, stringy arms, a fairly white skin with reddish spots
upon it, large innocent eyes, and white lashes, scarcely any brows, a
leghorn bonnet bound with white satin and adorned with two honest bows
of the same satin, hands virtuously red, and the feet of her mother. The
faces of these three beings wore, as they looked round the studio, an
air of happiness which bespoke in them a respectable enthusiasm for Art.
"So it is you, monsieur, who are going to take our likenesses?" said the
father, assuming a jaunty air.
"Yes, monsieur," replied Grassou.
"Vervelle, he has the cross!" whispered the wife to the husband while
the painter's back was turned.
"Should I be likely to have our portraits painted by an artist who
wasn't decorated?" returned the former bottle-dealer.
Elie Magus here bowed to the Vervelle family and went away. Grassou
accompanied him to the landing.
"There's no one but you who would fish up such whales."
"One hundred thousand francs of 'dot'!"
"Yes, but what a family!"
"Three hundred thousand francs of expectations, a house in the rue
Boucherat, and a country-house at Ville d'Avray!"
"Bottles and corks! bottles and corks!" said the painter; "they set my
teeth on edge."
"Safe from want for the rest of your days," said Elie Magus as he
departed.
That idea entered the head of Pierre Grassou as the daylight had burst
into his garret that morning.
While he posed the father of the young person, he thought the
bottle-dealer had a good countenance, and he admired the face full
of violent tones. The mother and daughter hovered about the easel,
marvelling at all his preparations; they evidently thought him a
demigod. This visible admiration pleased Fougeres. The golden calf threw
upon the family its fantastic reflections.
"You must earn lots of money; but of course you don't spend it as you
get it," said the mother.
"No, madame," replied the painter; "I don't spend it; I have not the
means to amuse myself. My notary invests my money; he knows what I have;
as soon as I have taken him the money I never think of it again."
"I've always been told," cried old Vervelle, "that artists were baskets
with holes in them."
"Who is your notary--if it is not indiscreet to ask?" said Madame
Vervelle.
"A good fellow, all round," replied Grassou. "His name is Cardot."
"Well, well! if that isn't a joke!" exclaimed Vervelle. "Cardot is our
notary too."
"Take care! don't move," said the painter.
"Do pray hold still, Antenor," said the wife. "If you move about you'll
make monsieur miss; you should just see him working, and then you'd
understand."
"Oh! why didn't you have me taught the arts?" said Mademoiselle Vervelle
to her parents.
"Virginie," said her mother, "a young person ought not to learn certain
things. When you are married--well, till then, keep quiet."
During this first sitting the Vervelle family became almost intimate
with the worthy artist. They were to come again two days later. As they
went away the father told Virginie to walk in front; but in spite of
this separation, she overheard the following words, which naturally
awakened her curiosity.
"Decorated--thirty-seven years old--an artist who gets orders--puts his
money with our notary. We'll consult Cardot. Hein! Madame de Fougeres!
not a bad name--doesn't look like a bad man either! One might prefer a
merchant; but before a merchant retires from business one can never know
what one's daughter may come to; whereas an economical artist--and then
you know we love Art--Well, we'll see!"
While the Vervelle family discussed Pierre Grassou, Pierre Grassou
discussed in his own mind the Vervelle family. He found it impossible to
stay peacefully in his studio, so he took a walk on the boulevard, and
looked at all the red-haired women who passed him. He made a series of
the oddest reasonings to himself: gold was the handsomest of metals; a
tawny yellow represented gold; the Romans were fond of red-haired women,
and he turned Roman, etc. After two years of marriage what man would
ever care about the color of his wife's hair? Beauty fades,--but
ugliness remains! Money is one-half of all happiness. That night when he
went to bed the painter had come to think Virginie Vervelle charming.
When the three Vervelles arrived on the day of the second sitting the
artist received them with smiles. The rascal had shaved and put on clean
linen; he had also arranged his hair in a pleasing manner, and chosen
a very becoming pair of trousers and red leather slippers with pointed
toes. The family replied with smiles as flattering as those of the
artist. Virginie became the color of her hair, lowered her eyes, and
turned aside her head to look at the sketches. Pierre Grassou thought
these little affectations charming; Virginie had such grace; happily she
didn't look like her father or her mother; but whom did she look like?
During this sitting there were little skirmishes between the family
and the painter, who had the audacity to call pere Vervelle witty. This
flattery brought the family on the double-quick to the heart of the
artist; he gave a drawing to the daughter, and a sketch to the mother.
"What! for nothing?" they said.
Pierre Grassou could not help smiling.
"You shouldn't give away your pictures in that way; they are money,"
said old Vervelle.
At the third sitting pere Vervelle mentioned a fine gallery of pictures
which he had in his country-house at Ville d'Avray--Rubens, Gerard Douw,
Mieris, Terburg, Rembrandt, Titian, Paul Potter, etc.
"Monsieur Vervelle has been very extravagant," said Madame Vervelle,
ostentatiously. "He has over one hundred thousand francs' worth of
pictures."
"I love Art," said the former bottle-dealer.
When Madame Vervelle's portrait was begun that of her husband was nearly
finished, and the enthusiasm of the family knew no bounds. The notary
had spoken in the highest praise of the painter. Pierre Grassou was, he
said, one of the most honest fellows on earth; he had laid by thirty-six
thousand francs; his days of poverty were over; he now saved about ten
thousand francs a year and capitalized the interest; in short, he was
incapable of making a woman unhappy. This last remark had enormous
weight in the scales. Vervelle's friends now heard of nothing but the
celebrated painter Fougeres.
The day on which Fougeres began the portrait of Mademoiselle Virginie,
he was virtually son-in-law to the Vervelle family. The three Vervelles
bloomed out in this studio, which they were now accustomed to consider
as one of their residences; there was to them an inexplicable attraction
in this clean, neat, pretty, and artistic abode. Abyssus abyssum, the
commonplace attracts the commonplace. Toward the end of the sitting the
stairway shook, the door was violently thrust open by Joseph Bridau; he
came like a whirlwind, his hair flying. He showed his grand haggard face
as he looked about him, casting everywhere the lightning of his glance;
then he walked round the whole studio, and returned abruptly to Grassou,
pulling his coat together over the gastric region, and endeavouring, but
in vain, to button it, the button mould having escaped from its capsule
of cloth.
"Wood is dear," he said to Grassou.
"Ah!"
"The British are after me" (slang term for creditors) "Gracious! do you
paint such things as that?"
"Hold your tongue!"
"Ah! to be sure, yes."
The Vervelle family, extremely shocked by this extraordinary apparition,
passed from its ordinary red to a cherry-red, two shades deeper.
"Brings in, hey?" continued Joseph. "Any shot in your locker?"
"How much do you want?"
"Five hundred. I've got one of those bull-dog dealers after me, and if
the fellow once gets his teeth in he won't let go while there's a bit of
me left. What a crew!"
"I'll write you a line for my notary."
"Have you got a notary?"
"Yes."
"That explains to me why you still make cheeks with pink tones like a
perfumer's sign."
Grassou could not help coloring, for Virginie was sitting.
"Take Nature as you find her," said the great painter, going on with his
lecture. "Mademoiselle is red-haired. Well, is that a sin? All things
are magnificent in painting. Put some vermillion on your palette, and
warm up those cheeks; touch in those little brown spots; come, butter it
well in. Do you pretend to have more sense than Nature?"
"Look here," said Fougeres, "take my place while I go and write that
note."
Vervelle rolled to the table and whispered in Grassou's ear:--
"Won't that country lout spoilt it?"
"If he would only paint the portrait of your Virginie it would be worth
a thousand times more than mine," replied Fougeres, vehemently.
Hearing that reply the bourgeois beat a quiet retreat to his wife, who
was stupefied by the invasion of this ferocious animal, and very uneasy
at his co-operation in her daughter's portrait.
"Here, follow these indications," said Bridau, returning the palette,
and taking the note. "I won't thank you. I can go back now to d'Arthez'
chateau, where I am doing a dining-room, and Leon de Lora the tops of
the doors--masterpieces! Come and see us."
And off he went without taking leave, having had enough of looking at
Virginie.
"Who is that man?" asked Madame Vervelle.
"A great artist," answered Grassou.
There was silence for a moment.
"Are you quite sure," said Virginie, "that he has done no harm to my
portrait? He frightened me."
"He has only done it good," replied Grassou.
"Well, if he is a great artist, I prefer a great artist like you," said
Madame Vervelle.
The ways of genius had ruffled up these orderly bourgeois.
The phase of autumn so pleasantly named "Saint Martin's summer" was
just beginning. With the timidity of a neophyte in presence of a man of
genius, Vervelle risked giving Fougeres an invitation to come out to
his country-house on the following Sunday. He knew, he said, how little
attraction a plain bourgeois family could offer to an artist.
"You artists," he continued, "want emotions, great scenes, and witty
talk; but you'll find good wines, and I rely on my collection of
pictures to compensate an artist like you for the bore of dining with
mere merchants."
This form of idolatry, which stroked his innocent self-love, was
charming to our poor Pierre Grassou, so little accustomed to such
compliments. The honest artist, that atrocious mediocrity, that heart
of gold, that loyal soul, that stupid draughtsman, that worthy fellow,
decorated by royalty itself with the Legion of honor, put himself under
arms to go out to Ville d'Avray and enjoy the last fine days of the
year. The painter went modestly by public conveyance, and he could not
but admire the beautiful villa of the bottle-dealer, standing in a park
of five acres at the summit of Ville d'Avray, commanding a noble view
of the landscape. Marry Virginie, and have that beautiful villa some day
for his own!
He was received by the Vervelles with an enthusiasm, a joy, a
kindliness, a frank bourgeois absurdity which confounded him. It was
indeed a day of triumph. The prospective son-in-law was marched about
the grounds on the nankeen-colored paths, all raked as they should be
for the steps of so great a man. The trees themselves looked brushed and
combed, and the lawns had just been mown. The pure country air wafted
to the nostrils a most enticing smell of cooking. All things about the
mansion seemed to say:
"We have a great artist among us."
Little old Vervelle himself rolled like an apple through his park, the
daughter meandered like an eel, the mother followed with dignified step.
These three beings never let go for one moment of Pierre Grassou
for seven hours. After dinner, the length of which equalled its
magnificence, Monsieur and Madame Vervelle reached the moment of their
grand theatrical effect,--the opening of the picture gallery illuminated
by lamps, the reflections of which were managed with the utmost care.
Three neighbours, also retired merchants, an old uncle (from whom were
expectations), an elderly Demoiselle Vervelle, and a number of other
guests invited to be present at this ovation to a great artist followed
Grassou into the picture gallery, all curious to hear his opinion of the
famous collection of pere Vervelle, who was fond of oppressing them with
the fabulous value of his paintings. The bottle-merchant seemed to have
the idea of competing with King Louis-Philippe and the galleries of
Versailles.
The pictures, magnificently framed, each bore labels on which was read
in black letters on a gold ground:
Rubens
Dance of fauns and nymphs
Rembrandt
Interior of a dissecting room. The physician van Tromp
instructing his pupils.
In all, there were one hundred and fifty pictures, varnished and dusted.
Some were covered with green baize curtains which were not undrawn in
presence of young ladies.
Pierre Grassou stood with arms pendent, gaping mouth, and no word upon
his lips as he recognized half his own pictures in these works of art.
He was Rubens, he was Rembrandt, Mieris, Metzu, Paul Potter, Gerard
Douw! He was twenty great masters all by himself.
"What is the matter? You've turned pale!"
"Daughter, a glass of water! quick!" cried Madame Vervelle. The painter
took pere Vervelle by the button of his coat and led him to a corner on
pretence of looking at a Murillo. Spanish pictures were then the rage.
"You bought your pictures from Elie Magus?"
"Yes, all originals."
"Between ourselves, tell me what he made you pay for those I shall point
out to you."
Together they walked round the gallery. The guests were amazed at the
gravity with which the artist proceeded, in company with the host, to
examine each picture.
"Three thousand francs," said Vervelle in a whisper, as they reached the
last, "but I tell everybody forty thousand."
"Forty thousand for a Titian!" said the artist, aloud. "Why, it is
nothing at all!"
"Didn't I tell you," said Vervelle, "that I had three hundred thousand
francs' worth of pictures?"
"I painted those pictures," said Pierre Grassou in Vervelle's ear, "and
I sold them one by one to Elie Magus for less than ten thousand francs
the whole lot."
"Prove it to me," said the bottle-dealer, "and I double my daughter's
'dot,' for if it is so, you are Rubens, Rembrandt, Titian, Gerard Douw!"
"And Magus is a famous picture-dealer!" said the painter, who now saw
the meaning of the misty and aged look imparted to his pictures in
Elie's shop, and the utility of the subjects the picture-dealer had
required of him.
Far from losing the esteem of his admiring bottle-merchant, Monsieur
de Fougeres (for so the family persisted in calling Pierre Grassou)
advanced so much that when the portraits were finished he presented them
gratuitously to his father-in-law, his mother-in-law and his wife.
At the present day, Pierre Grassou, who never misses exhibiting at the
Salon, passes in bourgeois regions for a fine portrait-painter. He earns
some twenty thousand francs a year and spoils a thousand francs' worth
of canvas. His wife has six thousand francs a year in dowry, and he
lives with his father-in-law. The Vervelles and the Grassous, who agree
delightfully, keep a carriage, and are the happiest people on earth.
Pierre Grassou never emerges from the bourgeois circle, in which he
is considered one of the greatest artists of the period. Not a family
portrait is painted between the barrier du Trone and the rue du Temple
that is not done by this great painter; none of them costs less than
five hundred francs. The great reason which the bourgeois families have
for employing him is this:--
"Say what you will of him, he lays by twenty thousand francs a year with
his notary."
As Grassou took a creditable part on the occasion of the riots of May
12th, he was appointed an officer of the Legion of honor. He is a major
in the National Guard. The Museum of Versailles felt it incumbent on itself
to order a battle-piece from so excellent a citizen, who thereupon walked
about Paris to meet his old comrades and have the happiness of saying to
them:--
"The King has given me an order for the Museum of Versailles."
Madame de Fougeres adores her husband, to whom she has presented two
children. This painter, a good father and a good husband, is unable to
eradicate from his heart a fatal thought, namely, that artists laugh at
his work; that his name is a term of contempt in the studios; and that
the feuilletons take no notice of his pictures. But he still works on;
he aims for the Academy, where, undoubtedly, he will enter. And--oh!
vengeance which dilates his heart!--he buys the pictures of celebrated
artists who are pinched for means, and he substitutes these true works
of art that are not his own for the wretched daubs in the collection at
Ville d'Avray.
There are many mediocrities more aggressive and more mischievous than
that of Pierre Grassou, who is, moreover, anonymously benevolent and
truly obliging.
ADDENDUM
The following personages appear in other stories of the Human Comedy.
Bridau, Joseph
The Purse
A Bachelor's Establishment
A Distinguished Provincial at Paris
A Start in Life
Modeste Mignon
Another Study of Woman
Letters of Two Brides
Cousin Betty
The Member for Arcis
Cardot (Parisian notary)
The Muse of the Department
A Man of Business
Jealousies of a Country Town
The Middle Classes
Cousin Pons
Grassou, Pierre
A Bachelor's Establishment
Cousin Betty
The Middle Classes
Cousin Pons
Lora, Leon de
The Unconscious Humorists
A Bachelor's Establishment
A Start in Life
Honorine
Cousin Betty
Beatrix
Magus, Elie
The Vendetta
A Marriage Settlement
A Bachelor's Establishment
Cousin Pons
Schinner, Hippolyte
The Purse
A Bachelor's Establishment
A Start in Life
Albert Savarus
The Government Clerks
Modeste Mignon
The Imaginary Mistress
The Unconscious Humorists
End of the Project Gutenberg EBook of Pierre Grassou, by Honore de Balzac
Produced by John Bickers, and Dagny
LA GRANDE BRETECHE
(Sequel to "Another Study of Woman.")
By Honore De Balzac
Translated by Ellen Marriage and Clara Bell
LA GRANDE BRETECHE
"Ah! madame," replied the doctor, "I have some appalling stories in my
collection. But each one has its proper hour in a conversation--you know
the pretty jest recorded by Chamfort, and said to the Duc de Fronsac:
'Between your sally and the present moment lie ten bottles of
champagne.'"
"But it is two in the morning, and the story of Rosina has prepared us,"
said the mistress of the house.
"Tell us, Monsieur Bianchon!" was the cry on every side.
The obliging doctor bowed, and silence reigned.
"At about a hundred paces from Vendome, on the banks of the Loir," said
he, "stands an old brown house, crowned with very high roofs, and so
completely isolated that there is nothing near it, not even a fetid
tannery or a squalid tavern, such as are commonly seen outside small
towns. In front of this house is a garden down to the river, where the
box shrubs, formerly clipped close to edge the walks, now straggle
at their own will. A few willows, rooted in the stream, have grown
up quickly like an enclosing fence, and half hide the house. The
wild plants we call weeds have clothed the bank with their beautiful
luxuriance. The fruit-trees, neglected for these ten years past,
no longer bear a crop, and their suckers have formed a thicket. The
espaliers are like a copse. The paths, once graveled, are overgrown with
purslane; but, to be accurate, there is no trace of a path.
"Looking down from the hilltop, to which cling the ruins of the old
castle of the Dukes of Vendome, the only spot whence the eye can
see into this enclosure, we think that at a time, difficult now to
determine, this spot of earth must have been the joy of some country
gentleman devoted to roses and tulips, in a word, to horticulture, but
above all a lover of choice fruit. An arbor is visible, or rather
the wreck of an arbor, and under it a table still stands not entirely
destroyed by time. At the aspect of this garden that is no more, the
negative joys of the peaceful life of the provinces may be divined as we
divine the history of a worthy tradesman when we read the epitaph on his
tomb. To complete the mournful and tender impressions which seize the
soul, on one of the walls there is a sundial graced with this homely
Christian motto, '_Ultimam cogita_.'
"The roof of this house is dreadfully dilapidated; the outside shutters
are always closed; the balconies are hung with swallows' nests; the
doors are for ever shut. Straggling grasses have outlined the flagstones
of the steps with green; the ironwork is rusty. Moon and sun, winter,
summer, and snow have eaten into the wood, warped the boards, peeled
off the paint. The dreary silence is broken only by birds and cats,
polecats, rats, and mice, free to scamper round, and fight, and eat each
other. An invisible hand has written over it all: 'Mystery.'
"If, prompted by curiosity, you go to look at this house from the
street, you will see a large gate, with a round-arched top; the children
have made many holes in it. I learned later that this door had been
blocked for ten years. Through these irregular breaches you will see
that the side towards the courtyard is in perfect harmony with the side
towards the garden. The same ruin prevails. Tufts of weeds outline
the paving-stones; the walls are scored by enormous cracks, and the
blackened coping is laced with a thousand festoons of pellitory. The
stone steps are disjointed; the bell-cord is rotten; the gutter-spouts
broken. What fire from heaven could have fallen there? By what decree
has salt been sown on this dwelling? Has God been mocked here? Or was
France betrayed? These are the questions we ask ourselves. Reptiles
crawl over it, but give no reply. This empty and deserted house is a
vast enigma of which the answer is known to none.
"It was formerly a little domain, held in fief, and is known as La
Grande Breteche. During my stay at Vendome, where Despleins had left me
in charge of a rich patient, the sight of this strange dwelling became
one of my keenest pleasures. Was it not far better than a ruin? Certain
memories of indisputable authenticity attach themselves to a ruin; but
this house, still standing, though being slowly destroyed by an avenging
hand, contained a secret, an unrevealed thought. At the very least,
it testified to a caprice. More than once in the evening I boarded the
hedge, run wild, which surrounded the enclosure. I braved scratches, I
got into this ownerless garden, this plot which was no longer public or
private; I lingered there for hours gazing at the disorder. I would not,
as the price of the story to which this strange scene no doubt was due,
have asked a single question of any gossiping native. On that spot I
wove delightful romances, and abandoned myself to little debauches of
melancholy which enchanted me. If I had known the reason--perhaps quite
commonplace--of this neglect, I should have lost the unwritten poetry
which intoxicated me. To me this refuge represented the most various
phases of human life, shadowed by misfortune; sometimes the peace of the
graveyard without the dead, who speak in the language of epitaphs; one
day I saw in it the home of lepers; another, the house of the Atridae;
but, above all, I found there provincial life, with its contemplative
ideas, its hour-glass existence. I often wept there, I never laughed.
"More than once I felt involuntary terrors as I heard overhead the dull
hum of the wings of some hurrying wood-pigeon. The earth is dank; you
must be on the watch for lizards, vipers, and frogs, wandering about
with the wild freedom of nature; above all, you must have no fear
of cold, for in a few moments you feel an icy cloak settle on your
shoulders, like the Commendatore's hand on Don Giovanni's neck.
"One evening I felt a shudder; the wind had turned an old rusty
weathercock, and the creaking sounded like a cry from the house, at
the very moment when I was finishing a gloomy drama to account for
this monumental embodiment of woe. I returned to my inn, lost in gloomy
thoughts. When I had supped, the hostess came into my room with an air
of mystery, and said, 'Monsieur, here is Monsieur Regnault.'
"'Who is Monsieur Regnault?'
"'What, sir, do you not know Monsieur Regnault?--Well, that's odd,' said
she, leaving the room.
"On a sudden I saw a man appear, tall, slim, dressed in black, hat
in hand, who came in like a ram ready to butt his opponent, showing a
receding forehead, a small pointed head, and a colorless face of the hue
of a glass of dirty water. You would have taken him for an usher. The
stranger wore an old coat, much worn at the seams; but he had a diamond
in his shirt frill, and gold rings in his ears.
"'Monsieur,' said I, 'whom have I the honor of addressing?'--He took a
chair, placed himself in front of my fire, put his hat on my table,
and answered while he rubbed his hands: 'Dear me, it is very
cold.--Monsieur, I am Monsieur Regnault.'
"I was encouraging myself by saying to myself, '_Il bondo cani!_ Seek!'
"'I am,' he went on, 'notary at Vendome.'
"'I am delighted to hear it, monsieur,' I exclaimed. 'But I am not in a
position to make a will for reasons best known to myself.'
"'One moment!' said he, holding up his hand as though to gain silence.
'Allow me, monsieur, allow me! I am informed that you sometimes go to
walk in the garden of la Grande Breteche.'
"'Yes, monsieur.'
"'One moment!' said he, repeating his gesture. 'That constitutes a
misdemeanor. Monsieur, as executor under the will of the late Comtesse
de Merret, I come in her name to beg you to discontinue the practice.
One moment! I am not a Turk, and do not wish to make a crime of it. And
besides, you are free to be ignorant of the circumstances which
compel me to leave the finest mansion in Vendome to fall into ruin.
Nevertheless, monsieur, you must be a man of education, and you should
know that the laws forbid, under heavy penalties, any trespass on
enclosed property. A hedge is the same as a wall. But, the state in
which the place is left may be an excuse for your curiosity. For my
part, I should be quite content to make you free to come and go in the
house; but being bound to respect the will of the testatrix, I have
the honor, monsieur, to beg that you will go into the garden no more.
I myself, monsieur, since the will was read, have never set foot in the
house, which, as I had the honor of informing you, is part of the estate
of the late Madame de Merret. We have done nothing there but verify the
number of doors and windows to assess the taxes I have to pay annually
out of the funds left for that purpose by the late Madame de Merret. Ah!
my dear sir, her will made a great commotion in the town.'
"The good man paused to blow his nose. I respected his volubility,
perfectly understanding that the administration of Madame de Merret's
estate had been the most important event of his life, his reputation,
his glory, his Restoration. As I was forced to bid farewell to my
beautiful reveries and romances, I was not loath to learn the truth on
official authority.
"'Monsieur,' said I, 'would it be indiscreet if I were to ask you the
reasons for such eccentricity?'
"At these words an expression, which revealed all the pleasure which
men feel who are accustomed to ride a hobby, overspread the lawyer's
countenance. He pulled up the collar of his shirt with an air, took out
his snuffbox, opened it, and offered me a pinch; on my refusing, he took
a large one. He was happy! A man who has no hobby does not know all
the good to be got out of life. A hobby is the happy medium between a
passion and a monomania. At this moment I understood the whole bearing
of Sterne's charming passion, and had a perfect idea of the delight with
which my uncle Toby, encouraged by Trim, bestrode his hobby-horse.
"'Monsieur,' said Monsieur Regnault, 'I was head-clerk in Monsieur
Roguin's office, in Paris. A first-rate house, which you may have heard
mentioned? No! An unfortunate bankruptcy made it famous.--Not having
money enough to purchase a practice in Paris at the price to which they
were run up in 1816, I came here and bought my predecessor's business.
I had relations in Vendome; among others, a wealthy aunt, who allowed
me to marry her daughter.--Monsieur,' he went on after a little pause,
'three months after being licensed by the Keeper of the Seals, one
evening, as I was going to bed--it was before my marriage--I was sent
for by Madame la Comtesse de Merret, to her Chateau of Merret. Her maid,
a good girl, who is now a servant in this inn, was waiting at my door
with the Countess' own carriage. Ah! one moment! I ought to tell you
that Monsieur le Comte de Merret had gone to Paris to die two months
before I came here. He came to a miserable end, flinging himself into
every kind of dissipation. You understand?
"'On the day when he left, Madame la Comtesse had quitted la Grand
Breteche, having dismantled it. Some people even say that she had
burnt all the furniture, the hangings--in short, all the chattels and
furniture whatever used in furnishing the premises now let by the
said M.--(Dear, what am I saying? I beg your pardon, I thought I was
dictating a lease.)--In short, that she burnt everything in the meadow
at Merret. Have you been to Merret, monsieur?--No,' said he, answering
himself, 'Ah, it is a very fine place.'
"'For about three months previously,' he went on, with a jerk of his
head, 'the Count and Countess had lived in a very eccentric way; they
admitted no visitors; Madame lived on the ground-floor, and Monsieur on
the first floor. When the Countess was left alone, she was never seen
excepting at church. Subsequently, at home, at the chateau, she refused
to see the friends, whether gentlemen or ladies, who went to call on
her. She was already very much altered when she left la Grande Breteche
to go to Merret. That dear lady--I say dear lady, for it was she who
gave me this diamond, but indeed I saw her but once--that kind lady was
very ill; she had, no doubt, given up all hope, for she died without
choosing to send for a doctor; indeed, many of our ladies fancied she
was not quite right in her head. Well, sir, my curiosity was strangely
excited by hearing that Madame de Merret had need of my services. Nor
was I the only person who took an interest in the affair. That very
night, though it was already late, all the town knew that I was going to
Merret.
"'The waiting-woman replied but vaguely to the questions I asked her on
the way; nevertheless, she told me that her mistress had received the
Sacrament in the course of the day at the hands of the Cure of Merret,
and seemed unlikely to live through the night. It was about eleven when
I reached the chateau. I went up the great staircase. After crossing
some large, lofty, dark rooms, diabolically cold and damp, I reached the
state bedroom where the Countess lay. From the rumors that were current
concerning this lady (monsieur, I should never end if I were to repeat
all the tales that were told about her), I had imagined her a coquette.
Imagine, then, that I had great difficulty in seeing her in the great
bed where she was lying. To be sure, to light this enormous room, with
old-fashioned heavy cornices, and so thick with dust that merely to see
it was enough to make you sneeze, she had only an old Argand lamp. Ah!
but you have not been to Merret. Well, the bed is one of those old world
beds, with a high tester hung with flowered chintz. A small table stood
by the bed, on which I saw an "Imitation of Christ," which, by the
way, I bought for my wife, as well as the lamp. There were also a deep
armchair for her confidential maid, and two small chairs. There was no
fire. That was all the furniture, not enough to fill ten lines in an
inventory.
"'My dear sir, if you had seen, as I then saw, that vast room, papered
and hung with brown, you would have felt yourself transported into a
scene of a romance. It was icy, nay more, funereal,' and he lifted his
hand with a theatrical gesture and paused.
"'By dint of seeking, as I approached the bed, at last I saw Madame de
Merret, under the glimmer of the lamp, which fell on the pillows.
Her face was as yellow as wax, and as narrow as two folded hands. The
Countess had a lace cap showing her abundant hair, but as white as linen
thread. She was sitting up in bed, and seemed to keep upright with
great difficulty. Her large black eyes, dimmed by fever, no doubt,
and half-dead already, hardly moved under the bony arch of her
eyebrows.--There,' he added, pointing to his own brow. 'Her forehead was
clammy; her fleshless hands were like bones covered with soft skin;
the veins and muscles were perfectly visible. She must have been very
handsome; but at this moment I was startled into an indescribable
emotion at the sight. Never, said those who wrapped her in her shroud,
had any living creature been so emaciated and lived. In short, it was
awful to behold! Sickness so consumed that woman, that she was no more
than a phantom. Her lips, which were pale violet, seemed to me not to
move when she spoke to me.
"'Though my profession has familiarized me with such spectacles, by
calling me not infrequently to the bedside of the dying to record their
last wishes, I confess that families in tears and the agonies I have
seen were as nothing in comparison with this lonely and silent woman in
her vast chateau. I heard not the least sound, I did not perceive the
movement which the sufferer's breathing ought to have given to the
sheets that covered her, and I stood motionless, absorbed in looking at
her in a sort of stupor. In fancy I am there still. At last her large
eyes moved; she tried to raise her right hand, but it fell back on the
bed, and she uttered these words, which came like a breath, for her
voice was no longer a voice: "I have waited for you with the greatest
impatience." A bright flush rose to her cheeks. It was a great effort to
her to speak.
"'"Madame," I began. She signed to me to be silent. At that moment
the old housekeeper rose and said in my ear, "Do not speak; Madame la
Comtesse is not in a state to bear the slightest noise, and what you say
might agitate her."
"'I sat down. A few instants after, Madame de Merret collected all her
remaining strength to move her right hand, and slipped it, not without
infinite difficulty, under the bolster; she then paused a moment. With
a last effort she withdrew her hand; and when she brought out a sealed
paper, drops of perspiration rolled from her brow. "I place my will in
your hands--Oh! God! Oh!" and that was all. She clutched a crucifix that
lay on the bed, lifted it hastily to her lips, and died.
"'The expression of her eyes still makes me shudder as I think of it.
She must have suffered much! There was joy in her last glance, and it
remained stamped on her dead eyes.
"'I brought away the will, and when it was opened I found that Madame de
Merret had appointed me her executor. She left the whole of her property
to the hospital at Vendome excepting a few legacies. But these were her
instructions as relating to la Grande Breteche: She ordered me to leave
the place, for fifty years counting from the day of her death, in the
state in which it might be at the time of her death, forbidding any one,
whoever he might be, to enter the apartments, prohibiting any repairs
whatever, and even settling a salary to pay watchmen if it were needful
to secure the absolute fulfilment of her intentions. At the expiration
of that term, if the will of the testatrix has been duly carried out,
the house is to become the property of my heirs, for, as you know, a
notary cannot take a bequest. Otherwise la Grande Breteche reverts to
the heirs-at-law, but on condition of fulfilling certain conditions
set forth in a codicil to the will, which is not to be opened till
the expiration of the said term of fifty years. The will has not been
disputed, so----' And without finishing his sentence, the lanky notary
looked at me with an air of triumph; I made him quite happy by offering
him my congratulations.
"'Monsieur,' I said in conclusion, 'you have so vividly impressed
me that I fancy I see the dying woman whiter than her sheets; her
glittering eyes frighten me; I shall dream of her to-night.--But you
must have formed some idea as to the instructions contained in that
extraordinary will.'
"'Monsieur,' said he, with comical reticence, 'I never allow myself
to criticise the conduct of a person who honors me with the gift of a
diamond.'
"However, I soon loosened the tongue of the discreet notary of Vendome,
who communicated to me, not without long digressions, the opinions of
the deep politicians of both sexes whose judgments are law in Vendome.
But these opinions were so contradictory, so diffuse, that I was
near falling asleep in spite of the interest I felt in this authentic
history. The notary's ponderous voice and monotonous accent, accustomed
no doubt to listen to himself and to make himself listened to by his
clients or fellow-townsmen, were too much for my curiosity. Happily, he
soon went away.
"'Ah, ha, monsieur,' said he on the stairs, 'a good many persons would
be glad to live five-and-forty years longer; but--one moment!' and he
laid the first finger of his right hand to his nostril with a cunning
look, as much as to say, 'Mark my words!--To last as long as that--as
long as that,' said he, 'you must not be past sixty now.'
"I closed my door, having been roused from my apathy by this last
speech, which the notary thought very funny; then I sat down in my
armchair, with my feet on the fire-dogs. I had lost myself in a romance
_a la_ Radcliffe, constructed on the juridical base given me by Monsieur
Regnault, when the door, opened by a woman's cautious hand, turned on
the hinges. I saw my landlady come in, a buxom, florid dame, always
good-humored, who had missed her calling in life. She was a Fleming, who
ought to have seen the light in a picture by Teniers.
"'Well, monsieur,' said she, 'Monsieur Regnault has no doubt been giving
you his history of la Grande Breteche?'
"'Yes, Madame Lepas.'
"'And what did he tell you?'
"I repeated in a few words the creepy and sinister story of Madame de
Merret. At each sentence my hostess put her head forward, looking at
me with an innkeeper's keen scrutiny, a happy compromise between the
instinct of a police constable, the astuteness of a spy, and the cunning
of a dealer.
"'My good Madame Lepas,' said I as I ended, 'you seem to know more about
it. Heh? If not, why have you come up to me?'
"'On my word, as an honest woman----'
"'Do not swear; your eyes are big with a secret. You knew Monsieur de
Merret; what sort of man was he?'
"'Monsieur de Merret--well, you see he was a man you never could see
the top of, he was so tall! A very good gentleman, from Picardy, and who
had, as we say, his head close to his cap. He paid for everything down,
so as never to have difficulties with any one. He was hot-tempered, you
see! All our ladies liked him very much.'
"'Because he was hot-tempered?' I asked her.
"'Well, may be,' said she; 'and you may suppose, sir, that a man had to
have something to show for a figurehead before he could marry Madame de
Merret, who, without any reflection on others, was the handsomest and
richest heiress in our parts. She had about twenty thousand francs
a year. All the town was at the wedding; the bride was pretty and
sweet-looking, quite a gem of a woman. Oh, they were a handsome couple
in their day!'
"'And were they happy together?'
"'Hm, hm! so-so--so far as can be guessed, for, as you may suppose, we
of the common sort were not hail-fellow-well-met with them.--Madame de
Merret was a kind woman and very pleasant, who had no doubt sometimes to
put up with her husband's tantrums. But though he was rather haughty, we
were fond of him. After all, it was his place to behave so. When a man
is a born nobleman, you see----'
"'Still, there must have been some catastrophe for Monsieur and Madame
de Merret to part so violently?'
"'I did not say there was any catastrophe, sir. I know nothing about
it.'
"'Indeed. Well, now, I am sure you know everything.'
"'Well, sir, I will tell you the whole story.--When I saw Monsieur
Regnault go up to see you, it struck me that he would speak to you about
Madame de Merret as having to do with la Grande Breteche. That put it
into my head to ask your advice, sir, seeming to me that you are a
man of good judgment and incapable of playing a poor woman like me
false--for I never did any one a wrong, and yet I am tormented by my
conscience. Up to now I have never dared to say a word to the people of
these parts; they are all chatter-mags, with tongues like knives. And
never till now, sir, have I had any traveler here who stayed so long in
the inn as you have, and to whom I could tell the history of the fifteen
thousand francs----'
"'My dear Madame Lepas, if there is anything in your story of a nature
to compromise me,' I said, interrupting the flow of her words, 'I would
not hear it for all the world.'
"'You need have no fears,' said she; 'you will see.'
"Her eagerness made me suspect that I was not the only person to whom
my worthy landlady had communicated the secret of which I was to be the
sole possessor, but I listened.
"'Monsieur,' said she, 'when the Emperor sent the Spaniards here,
prisoners of war and others, I was required to lodge at the charge
of the Government a young Spaniard sent to Vendome on parole.
Notwithstanding his parole, he had to show himself every day to the
sub-prefect. He was a Spanish grandee--neither more nor less. He had
a name in _os_ and _dia_, something like Bagos de Feredia. I wrote his
name down in my books, and you may see it if you like. Ah! he was a
handsome young fellow for a Spaniard, who are all ugly they say. He was
not more than five feet two or three in height, but so well made; and he
had little hands that he kept so beautifully! Ah! you should have
seen them. He had as many brushes for his hands as a woman has for her
toilet. He had thick, black hair, a flame in his eye, a somewhat coppery
complexion, but which I admired all the same. He wore the finest linen
I have ever seen, though I have had princesses to lodge here, and, among
others, General Bertrand, the Duc and Duchesse d'Abrantes, Monsieur
Descazes, and the King of Spain. He did not eat much, but he had such
polite and amiable ways that it was impossible to owe him a grudge for
that. Oh! I was very fond of him, though he did not say four words to me
in a day, and it was impossible to have the least bit of talk with him;
if he was spoken to, he did not answer; it is a way, a mania they all
have, it would seem.
"'He read his breviary like a priest, and went to mass and all the
services quite regularly. And where did he post himself?--we found this
out later.--Within two yards of Madame de Merret's chapel. As he took
that place the very first time he entered the church, no one imagined
that there was any purpose in it. Besides, he never raised his nose
above his book, poor young man! And then, monsieur, of an evening he
went for a walk on the hill among the ruins of the old castle. It was
his only amusement, poor man; it reminded him of his native land. They
say that Spain is all hills!
"'One evening, a few days after he was sent here, he was out very late.
I was rather uneasy when he did not come in till just on the stroke of
midnight; but we all got used to his whims; he took the key of the door,
and we never sat up for him. He lived in a house belonging to us in the
Rue des Casernes. Well, then, one of our stable-boys told us one evening
that, going down to wash the horses in the river, he fancied he had seen
the Spanish Grandee swimming some little way off, just like a fish. When
he came in, I told him to be careful of the weeds, and he seemed put out
at having been seen in the water.
"'At last, monsieur, one day, or rather one morning, we did not find
him in his room; he had not come back. By hunting through his things, I
found a written paper in the drawer of his table, with fifty pieces of
Spanish gold of the kind they call doubloons, worth about five thousand
francs; and in a little sealed box ten thousand francs worth of
diamonds. The paper said that in case he should not return, he left us
this money and these diamonds in trust to found masses to thank God for
his escape and for his salvation.
"'At that time I still had my husband, who ran off in search of him.
And this is the queer part of the story: he brought back the Spaniard's
clothes, which he had found under a big stone on a sort of breakwater
along the river bank, nearly opposite la Grande Breteche. My husband
went so early that no one saw him. After reading the letter, he burnt
the clothes, and, in obedience to Count Feredia's wish, we announced
that he had escaped.
"'The sub-prefect set all the constabulary at his heels; but, pshaw! he
was never caught. Lepas believed that the Spaniard had drowned himself.
I, sir, have never thought so; I believe, on the contrary, that he had
something to do with the business about Madame de Merret, seeing that
Rosalie told me that the crucifix her mistress was so fond of that she
had it buried with her, was made of ebony and silver; now in the early
days of his stay here, Monsieur Feredia had one of ebony and silver
which I never saw later.--And now, monsieur, do not you say that I need
have no remorse about the Spaniard's fifteen thousand francs? Are they
not really and truly mine?'
"'Certainly.--But have you never tried to question Rosalie?' said I.
"'Oh, to be sure I have, sir. But what is to be done? That girl is like
a wall. She knows something, but it is impossible to make her talk.'
"After chatting with me for a few minutes, my hostess left me a prey
to vague and sinister thoughts, to romantic curiosity, and a religious
dread, not unlike the deep emotion which comes upon us when we go into a
dark church at night and discern a feeble light glimmering under a lofty
vault--a dim figure glides across--the sweep of a gown or of a priest's
cassock is audible--and we shiver! La Grande Breteche, with its rank
grasses, its shuttered windows, its rusty iron-work, its locked doors,
its deserted rooms, suddenly rose before me in fantastic vividness. I
tried to get into the mysterious dwelling to search out the heart of
this solemn story, this drama which had killed three persons.
"Rosalie became in my eyes the most interesting being in Vendome. As
I studied her, I detected signs of an inmost thought, in spite of the
blooming health that glowed in her dimpled face. There was in her soul
some element of ruth or of hope; her manner suggested a secret, like
the expression of devout souls who pray in excess, or of a girl who has
killed her child and for ever hears its last cry. Nevertheless, she was
simple and clumsy in her ways; her vacant smile had nothing criminal
in it, and you would have pronounced her innocent only from seeing the
large red and blue checked kerchief that covered her stalwart bust,
tucked into the tight-laced bodice of a lilac- and white-striped gown.
'No,' said I to myself, 'I will not quit Vendome without knowing the
whole history of la Grande Breteche. To achieve this end, I will make
love to Rosalie if it proves necessary.'
"'Rosalie!' said I one evening.
"'Your servant, sir?'
"'You are not married?' She started a little.
"'Oh! there is no lack of men if ever I take a fancy to be miserable!'
she replied, laughing. She got over her agitation at once; for every
woman, from the highest lady to the inn-servant inclusive, has a native
presence of mind.
"'Yes; you are fresh and good-looking enough never to lack lovers! But
tell me, Rosalie, why did you become an inn-servant on leaving Madame de
Merret? Did she not leave you some little annuity?'
"'Oh yes, sir. But my place here is the best in all the town of
Vendome.'
"This reply was such an one as judges and attorneys call evasive.
Rosalie, as it seemed to me, held in this romantic affair the place of
the middle square of the chess-board: she was at the very centre of the
interest and of the truth; she appeared to me to be tied into the knot
of it. It was not a case for ordinary love-making; this girl contained
the last chapter of a romance, and from that moment all my attentions
were devoted to Rosalie. By dint of studying the girl, I observed in
her, as in every woman whom we make our ruling thought, a variety of
good qualities; she was clean and neat; she was handsome, I need not
say; she soon was possessed of every charm that desire can lend to a
woman in whatever rank of life. A fortnight after the notary's visit,
one evening, or rather one morning, in the small hours, I said to
Rosalie:
"'Come, tell me all you know about Madame de Merret.'
"'Oh!' she said, 'I will tell you; but keep the secret carefully.'
"'All right, my child; I will keep all your secrets with a thief's
honor, which is the most loyal known.'
"'If it is all the same to you,' said she, 'I would rather it should be
with your own.'
"Thereupon she set her head-kerchief straight, and settled herself to
tell the tale; for there is no doubt a particular attitude of confidence
and security is necessary to the telling of a narrative. The best tales
are told at a certain hour--just as we are all here at table. No one
ever told a story well standing up, or fasting.
"If I were to reproduce exactly Rosalie's diffuse eloquence, a whole
volume would scarcely contain it. Now, as the event of which she gave me
a confused account stands exactly midway between the notary's gossip and
that of Madame Lepas, as precisely as the middle term of a rule-of-three
sum stands between the first and third, I have only to relate it in as
few words as may be. I shall therefore be brief.
"The room at la Grande Breteche in which Madame de Merret slept was on
the ground floor; a little cupboard in the wall, about four feet deep,
served her to hang her dresses in. Three months before the evening of
which I have to relate the events, Madame de Merret had been seriously
ailing, so much so that her husband had left her to herself, and had his
own bedroom on the first floor. By one of those accidents which it is
impossible to foresee, he came in that evening two hours later than
usual from the club, where he went to read the papers and talk politics
with the residents in the neighborhood. His wife supposed him to have
come in, to be in bed and asleep. But the invasion of France had been
the subject of a very animated discussion; the game of billiards had
waxed vehement; he had lost forty francs, an enormous sum at Vendome,
where everybody is thrifty, and where social habits are restrained
within the bounds of a simplicity worthy of all praise, and the
foundation perhaps of a form of true happiness which no Parisian would
care for.
"For some time past Monsieur de Merret had been satisfied to ask Rosalie
whether his wife was in bed; on the girl's replying always in the
affirmative, he at once went to his own room, with the good faith that
comes of habit and confidence. But this evening, on coming in, he took
it into his head to go to see Madame de Merret, to tell her of his
ill-luck, and perhaps to find consolation. During dinner he had observed
that his wife was very becomingly dressed; he reflected as he came
home from the club that his wife was certainly much better, that
convalescence had improved her beauty, discovering it, as husbands
discover everything, a little too late. Instead of calling Rosalie,
who was in the kitchen at the moment watching the cook and the coachman
playing a puzzling hand at cards, Monsieur de Merret made his way to his
wife's room by the light of his lantern, which he set down at the lowest
step of the stairs. His step, easy to recognize, rang under the vaulted
passage.
"At the instant when the gentleman turned the key to enter his wife's
room, he fancied he heard the door shut of the closet of which I have
spoken; but when he went in, Madame de Merret was alone, standing in
front of the fireplace. The unsuspecting husband fancied that Rosalie
was in the cupboard; nevertheless, a doubt, ringing in his ears like a
peal of bells, put him on his guard; he looked at his wife, and read in
her eyes an indescribably anxious and haunted expression.
"'You are very late,' said she.--Her voice, usually so clear and sweet,
struck him as being slightly husky.
"Monsieur de Merret made no reply, for at this moment Rosalie came in.
This was like a thunder-clap. He walked up and down the room, going from
one window to another at a regular pace, his arms folded.
"'Have you had bad news, or are you ill?' his wife asked him timidly,
while Rosalie helped her to undress. He made no reply.
"'You can go, Rosalie,' said Madame de Merret to her maid; 'I can put in
my curl-papers myself.'--She scented disaster at the mere aspect of her
husband's face, and wished to be alone with him. As soon as Rosalie
was gone, or supposed to be gone, for she lingered a few minutes in the
passage, Monsieur de Merret came and stood facing his wife, and said
coldly, 'Madame, there is some one in your cupboard!' She looked at her
husband calmly, and replied quite simply, 'No, monsieur.'
"This 'No' wrung Monsieur de Merret's heart; he did not believe it; and
yet his wife had never appeared purer or more saintly than she seemed
to be at this moment. He rose to go and open the closet door. Madame de
Merret took his hand, stopped him, looked at him sadly, and said in a
voice of strange emotion, 'Remember, if you should find no one there,
everything must be at an end between you and me.'
"The extraordinary dignity of his wife's attitude filled him with deep
esteem for her, and inspired him with one of those resolves which need
only a grander stage to become immortal.
"'No, Josephine,' he said, 'I will not open it. In either event we
should be parted for ever. Listen; I know all the purity of your soul, I
know you lead a saintly life, and would not commit a deadly sin to save
your life.'--At these words Madame de Merret looked at her husband with
a haggard stare.--'See, here is your crucifix,' he went on. 'Swear to
me before God that there is no one in there; I will believe you--I will
never open that door.'
"Madame de Merret took up the crucifix and said, 'I swear it.'
"'Louder,' said her husband; 'and repeat: "I swear before God that there
is nobody in that closet."' She repeated the words without flinching.
"'That will do,' said Monsieur de Merret coldly. After a moment's
silence: 'You have there a fine piece of work which I never saw before,'
said he, examining the crucifix of ebony and silver, very artistically
wrought.
"'I found it at Duvivier's; last year when that troop of Spanish
prisoners came through Vendome, he bought it of a Spanish monk.'
"'Indeed,' said Monsieur de Merret, hanging the crucifix on its nail;
and he rang the bell.
"He had to wait for Rosalie. Monsieur de Merret went forward quickly
to meet her, led her into the bay of the window that looked on to the
garden, and said to her in an undertone:
"'I know that Gorenflot wants to marry you, that poverty alone prevents
your setting up house, and that you told him you would not be his wife
till he found means to become a master mason.--Well, go and fetch him;
tell him to come here with his trowel and tools. Contrive to wake no one
in his house but himself. His reward will be beyond your wishes. Above
all, go out without saying a word--or else!' and he frowned.
"Rosalie was going, and he called her back. 'Here, take my latch-key,'
said he.
"'Jean!' Monsieur de Merret called in a voice of thunder down the
passage. Jean, who was both coachman and confidential servant, left his
cards and came.
"'Go to bed, all of you,' said his master, beckoning him to come close;
and the gentleman added in a whisper, 'When they are all asleep--mind,
_asleep_--you understand?--come down and tell me.'
"Monsieur de Merret, who had never lost sight of his wife while giving
his orders, quietly came back to her at the fireside, and began to tell
her the details of the game of billiards and the discussion at the club.
When Rosalie returned she found Monsieur and Madame de Merret conversing
amiably.
"Not long before this Monsieur de Merret had had new ceilings made to
all the reception-rooms on the ground floor. Plaster is very scarce at
Vendome; the price is enhanced by the cost of carriage; the gentleman
had therefore had a considerable quantity delivered to him, knowing
that he could always find purchasers for what might be left. It was this
circumstance which suggested the plan he carried out.
"'Gorenflot is here, sir,' said Rosalie in a whisper.
"'Tell him to come in,' said her master aloud.
"Madame de Merret turned paler when she saw the mason.
"'Gorenflot,' said her husband, 'go and fetch some bricks from the
coach-house; bring enough to wall up the door of this cupboard; you can
use the plaster that is left for cement.' Then, dragging Rosalie and the
workman close to him--'Listen, Gorenflot,' said he, in a low voice,
'you are to sleep here to-night; but to-morrow morning you shall have a
passport to take you abroad to a place I will tell you of. I will give
you six thousand francs for your journey. You must live in that town for
ten years; if you find you do not like it, you may settle in another,
but it must be in the same country. Go through Paris and wait there till
I join you. I will there give you an agreement for six thousand francs
more, to be paid to you on your return, provided you have carried out
the conditions of the bargain. For that price you are to keep perfect
silence as to what you have to do this night. To you, Rosalie, I will
secure ten thousand francs, which will not be paid to you till your
wedding day, and on condition of your marrying Gorenflot; but, to get
married, you must hold your tongue. If not, no wedding gift!'
"'Rosalie,' said Madame de Merret, 'come and brush my hair.'
"Her husband quietly walked up and down the room, keeping an eye on the
door, on the mason, and on his wife, but without any insulting display
of suspicion. Gorenflot could not help making some noise. Madame de
Merret seized a moment when he was unloading some bricks, and when her
husband was at the other end of the room to say to Rosalie: 'My dear
child, I will give you a thousand francs a year if only you will tell
Gorenflot to leave a crack at the bottom.' Then she added aloud quite
coolly: 'You had better help him.'
"Monsieur and Madame de Merret were silent all the time while Gorenflot
was walling up the door. This silence was intentional on the husband's
part; he did not wish to give his wife the opportunity of saying
anything with a double meaning. On Madame de Merret's side it was pride
or prudence. When the wall was half built up the cunning mason took
advantage of his master's back being turned to break one of the two
panes in the top of the door with a blow of his pick. By this Madame de
Merret understood that Rosalie had spoken to Gorenflot. They all three
then saw the face of a dark, gloomy-looking man, with black hair and
flaming eyes.
"Before her husband turned round again the poor woman had nodded to the
stranger, to whom the signal was meant to convey, 'Hope.'
"At four o'clock, as the day was dawning, for it was the month of
September, the work was done. The mason was placed in charge of Jean,
and Monsieur de Merret slept in his wife's room.
"Next morning when he got up he said with apparent carelessness, 'Oh,
by the way, I must go to the Maire for the passport.' He put on his hat,
took two or three steps towards the door, paused, and took the crucifix.
His wife was trembling with joy.
"'He will go to Duvivier's,' thought she.
"As soon as he had left, Madame de Merret rang for Rosalie, and then in
a terrible voice she cried: 'The pick! Bring the pick! and set to work.
I saw how Gorenflot did it yesterday; we shall have time to make a gap
and build it up again.'
"In an instant Rosalie had brought her mistress a sort of cleaver; she,
with a vehemence of which no words can give an idea, set to work to
demolish the wall. She had already got out a few bricks, when, turning
to deal a stronger blow than before, she saw behind her Monsieur de
Merret. She fainted away.
"'Lay madame on her bed,' said he coldly.
"Foreseeing what would certainly happen in his absence, he had laid
this trap for his wife; he had merely written to the Maire and sent for
Duvivier. The jeweler arrived just as the disorder in the room had been
repaired.
"'Duvivier,' asked Monsieur de Merret, 'did not you buy some crucifixes
of the Spaniards who passed through the town?'
"'No, monsieur.'
"'Very good; thank you,' said he, flashing a tiger's glare at his wife.
'Jean,' he added, turning to his confidential valet, 'you can serve my
meals here in Madame de Merret's room. She is ill, and I shall not leave
her till she recovers.'
"The cruel man remained in his wife's room for twenty days. During
the earlier time, when there was some little noise in the closet,
and Josephine wanted to intercede for the dying man, he said, without
allowing her to utter a word, 'You swore on the Cross that there was no
one there.'"
After this story all the ladies rose from table, and thus the spell
under which Bianchon had held them was broken. But there were some among
them who had almost shivered at the last words.
ADDENDUM
The following personage appears in other stories of the Human Comedy.
Bianchon, Horace
Father Goriot
The Atheist's Mass
Cesar Birotteau
The Commission in Lunacy
Lost Illusions
A Distinguished Provincial at Paris
A Bachelor's Establishment
The Secrets of a Princess
The Government Clerks
Pierrette
A Study of Woman
Scenes from a Courtesan's Life
Honorine
The Seamy Side of History
The Magic Skin
A Second Home
A Prince of Bohemia
Letters of Two Brides
The Muse of the Department
The Imaginary Mistress
The Middle Classes
Cousin Betty
The Country Parson
In addition, M. Bianchon narrated the following:
Another Study of Woman
all the reception-rooms on the ground floor. Plaster is very scarce at
Vendome; the price is enhanced by the cost of carriage; the gentleman
had therefore had a considerable quantity delivered to him, knowing
that he could always find purchasers for what might be left. It was this
circumstance which suggested the plan he carried out.
"'Gorenflot is here, sir,' said Rosalie in a whisper.
"'Tell him to come in,' said her master aloud.
"Madame de Merret turned paler when she saw the mason.
"'Gorenflot,' said her husband, 'go and fetch some bricks from the
coach-house; bring enough to wall up the door of this cupboard; you can
use the plaster that is left for cement.' Then, dragging Rosalie and the
workman close to him--'Listen, Gorenflot,' said he, in a low voice,
'you are to sleep here to-night; but to-morrow morning you shall have a
passport to take you abroad to a place I will tell you of. I will give
you six thousand francs for your journey. You must live in that town for
ten years; if you find you do not like it, you may settle in another,
but it must be in the same country. Go through Paris and wait there till
I join you. I will there give you an agreement for six thousand francs
more, to be paid to you on your return, provided you have carried out
the conditions of the bargain. For that price you are to keep perfect
silence as to what you have to do this night. To you, Rosalie, I will
secure ten thousand francs, which will not be paid to you till your
wedding day, and on condition of your marrying Gorenflot; but, to get
married, you must hold your tongue. If not, no wedding gift!'
"'Rosalie,' said Madame de Merret, 'come and brush my hair.'
"Her husband quietly walked up and down the room, keeping an eye on the
door, on the mason, and on his wife, but without any insulting display
of suspicion. Gorenflot could not help making some noise. Madame de
Merret seized a moment when he was unloading some bricks, and when her
husband was at the other end of the room to say to Rosalie: 'My dear
child, I will give you a thousand francs a year if only you will tell
Gorenflot to leave a crack at the bottom.' Then she added aloud quite
coolly: 'You had better help him.'
"Monsieur and Madame de Merret were silent all the time while Gorenflot
was walling up the door. This silence was intentional on the husband's
part; he did not wish to give his wife the opportunity of saying
anything with a double meaning. On Madame de Merret's side it was pride
or prudence. When the wall was half built up the cunning mason took
advantage of his master's back being turned to break one of the two
panes in the top of the door with a blow of his pick. By this Madame de
Merret understood that Rosalie had spoken to Gorenflot. They all three
then saw the face of a dark, gloomy-looking man, with black hair and
flaming eyes.
"Before her husband turned round again the poor woman had nodded to the
stranger, to whom the signal was meant to convey, 'Hope.'
"At four o'clock, as the day was dawning, for it was the month of
September, the work was done. The mason was placed in charge of Jean,
and Monsieur de Merret slept in his wife's room.
"Next morning when he got up he said with apparent carelessness, 'Oh,
by the way, I must go to the Maire for the passport.' He put on his hat,
took two or three steps towards the door, paused, and took the crucifix.
His wife was trembling with joy.
"'He will go to Duvivier's,' thought she.
"As soon as he had left, Madame de Merret rang for Rosalie, and then in
a terrible voice she cried: 'The pick! Bring the pick! and set to work.
I saw how Gorenflot did it yesterday; we shall have time to make a gap
and build it up again.'
"In an instant Rosalie had brought her mistress a sort of cleaver; she,
with a vehemence of which no words can give an idea, set to work to
demolish the wall. She had already got out a few bricks, when, turning
to deal a stronger blow than before, she saw behind her Monsieur de
Merret. She fainted away.
"'Lay madame on her bed,' said he coldly.
"Foreseeing what would certainly happen in his absence, he had laid
this trap for his wife; he had merely written to the Maire and sent for
Duvivier. The jeweler arrived just as the disorder in the room had been
repaired.
"'Duvivier,' asked Monsieur de Merret, 'did not you buy some crucifixes
of the Spaniards who passed through the town?'
"'No, monsieur.'
"'Very good; thank you,' said he, flashing a tiger's glare at his wife.
'Jean,' he added, turning to his confidential valet, 'you can serve my
meals here in Madame de Merret's room. She is ill, and I shall not leave
her till she recovers.'
"The cruel man remained in his wife's room for twenty days. During
the earlier time, when there was some little noise in the closet,
and Josephine wanted to intercede for the dying man, he said, without
allowing her to utter a word, 'You swore on the Cross that there was no
one there.'"
After this story all the ladies rose from table, and thus the spell
under which Bianchon had held them was broken. But there were some among
them who had almost shivered at the last words.
ADDENDUM
The following personage appears in other stories of the Human Comedy.
Bianchon, Horace
Father Goriot
The Atheist's Mass
Cesar Birotteau
The Commission in Lunacy
Lost Illusions
A Distinguished Provincial at Paris
A Bachelor's Establishment
The Secrets of a Princess
The Government Clerks
Pierrette
A Study of Woman
Scenes from a Courtesan's Life
Honorine
The Seamy Side of History
The Magic Skin
A Second Home
A Prince of Bohemia
Letters of Two Brides
The Muse of the Department
The Imaginary Mistress
The Middle Classes
Cousin Betty
The Country Parson
In addition, M. Bianchon narrated the following:
Another Study of Woman
PIERRE GRASSOU
By Honore De Balzac
Translated by Katharine Prescott Wormeley
Dedication
To The Lieutenant-Colonel of Artillery, Periollas, As a Testimony of the
Affectionate Esteem of the Author,
De Balzac
PIERRE GRASSOU
Whenever you have gone to take a serious look at the exhibition of works
of sculpture and painting, such as it has been since the revolution
of 1830, have you not been seized by a sense of uneasiness, weariness,
sadness, at the sight of those long and over-crowded galleries? Since
1830, the true Salon no longer exists. The Louvre has again been taken
by assault,--this time by a populace of artists who have maintained
themselves in it.
In other days, when the Salon presented only the choicest works of art,
it conferred the highest honor on the creations there exhibited. Among
the two hundred selected paintings, the public could still choose: a
crown was awarded to the masterpiece by hands unseen. Eager, impassioned
discussions arose about some picture. The abuse showered on Delacroix,
on Ingres, contributed no less to their fame than the praises and
fanaticism of their adherents. To-day, neither the crowd nor the
criticism grows impassioned about the products of that bazaar. Forced to
make the selection for itself, which in former days the examining
jury made for it, the attention of the public is soon wearied and the
exhibition closes. Before the year 1817 the pictures admitted never went
beyond the first two columns of the long gallery of the old masters; but
in that year, to the great astonishment of the public, they filled the
whole space. Historical, high-art, genre paintings, easel pictures,
landscapes, flowers, animals, and water-colors,--these eight specialties
could surely not offer more than twenty pictures in one year worthy of
the eyes of the public, which, indeed, cannot give its attention to a
greater number of such works. The more the number of artists increases,
the more careful and exacting the jury of admission ought to be.
The true character of the Salon was lost as soon as it spread along
the galleries. The Salon should have remained within fixed limits of
inflexible proportions, where each distinct specialty could show its
masterpieces only. An experience of ten years has shown the excellence
of the former institution. Now, instead of a tournament, we have a mob;
instead of a noble exhibition, we have a tumultuous bazaar; instead of
a choice selection we have a chaotic mass. What is the result? A great
artist is swamped. Decamps' "Turkish Cafe," "Children at a Fountain,"
"Joseph," and "The Torture," would have redounded far more to his credit
if the four pictures had been exhibited in the great Salon with the
hundred good pictures of that year, than his twenty pictures could,
among three thousand others, jumbled together in six galleries.
By some strange contradiction, ever since the doors have been open to every
one there has been much talk of unknown and unrecognized genius. When,
twelve years earlier, Ingres' "Courtesan," and that of Sigalon, the
"Medusa" of Gericault, the "Massacre of Scio" by Delacroix, the "Baptism
of Henri IV." by Eugene Deveria, admitted by celebrated artists accused
of jealousy, showed the world, in spite of the denials of criticism,
that young and vigorous palettes existed, no such complaint was made.
Now, when the veriest dauber of canvas can send in his work, the whole
talk is of genius neglected! Where judgment no longer exists, there is
no longer anything judged. But whatever artists may be doing now, they
will come back in time to the examination and selection which presents
their works to the admiration of the crowd for whom they work. Without
selection by the Academy there will be no Salon, and without the Salon
art may perish.
Ever since the catalogue has grown into a book, many names have appeared
in it which still remain in their native obscurity, in spite of the ten
or a dozen pictures attached to them. Among these names perhaps the most
unknown to fame is that of an artist named Pierre Grassou, coming from
Fougeres, and called simply "Fougeres" among his brother-artists, who,
at the present moment holds a place, as the saying is, "in the sun," and
who suggested the rather bitter reflections by which this sketch of
his life is introduced,--reflections that are applicable to many other
individuals of the tribe of artists.
In 1832, Fougeres lived in the rue de Navarin, on the fourth floor of
one of those tall, narrow houses which resemble the obelisk of Luxor,
and possess an alley, a dark little stairway with dangerous turnings,
three windows only on each floor, and, within the building, a courtyard,
or, to speak more correctly, a square pit or well. Above the three or
four rooms occupied by Grassou of Fougeres was his studio, looking over
to Montmartre. This studio was painted in brick-color, for a background;
the floor was tinted brown and well frotted; each chair was furnished
with a bit of carpet bound round the edges; the sofa, simple enough, was
clean as that in the bedroom of some worthy bourgeoise. All these things
denoted the tidy ways of a small mind and the thrift of a poor man. A
bureau was there, in which to put away the studio implements, a table
for breakfast, a sideboard, a secretary; in short, all the articles
necessary to a painter, neatly arranged and very clean. The stove
participated in this Dutch cleanliness, which was all the more visible
because the pure and little changing light from the north flooded with
its cold clear beams the vast apartment. Fougeres, being merely a genre
painter, does not need the immense machinery and outfit which ruin
historical painters; he has never recognized within himself sufficient
faculty to attempt high-art, and he therefore clings to easel painting.
At the beginning of the month of December of that year, a season at
which the bourgeois of Paris conceive, periodically, the burlesque idea
of perpetuating their forms and figures already too bulky in themselves,
Pierre Grassou, who had risen early, prepared his palette, and lighted
his stove, was eating a roll steeped in milk, and waiting till the frost
on his windows had melted sufficiently to let the full light in. The
weather was fine and dry. At this moment the artist, who ate his bread
with that patient, resigned air that tells so much, heard and recognized
the step of a man who had upon his life the influence such men have
on the lives of nearly all artists,--the step of Elie Magus, a
picture-dealer, a usurer in canvas. The next moment Elie Magus entered
and found the painter in the act of beginning his work in the tidy
studio.
"How are you, old rascal?" said the painter.
Fougeres had the cross of the Legion of honor, and Elie Magus bought his
pictures at two and three hundred francs apiece, so he gave himself the
airs of a fine artist.
"Business is very bad," replied Elie. "You artists have such
pretensions! You talk of two hundred francs when you haven't put six
sous' worth of color on a canvas. However, you are a good fellow, I'll
say that. You are steady; and I've come to put a good bit of business in
your way."
"Timeo Danaos et dona ferentes," said Fougeres. "Do you know Latin?"
"No."
"Well, it means that the Greeks never proposed a good bit of business
to the Trojans without getting their fair share of it. In the olden time
they used to say, 'Take my horse.' Now we say, 'Take my bear.' Well,
what do you want, Ulysses-Lagingeole-Elie Magus?"
These words will give an idea of the mildness and wit with which
Fougeres employed what painters call studio fun.
"Well, I don't deny that you are to paint me two pictures for nothing."
"Oh! oh!"
"I'll leave you to do it, or not; I don't ask it. But you're an honest
man."
"Come, out with it!"
"Well, I'm prepared to bring you a father, mother, and only daughter."
"All for me?"
"Yes--they want their portraits taken. These bourgeois--they are crazy
about art--have never dared to enter a studio. The girl has a 'dot' of a
hundred thousand francs. You can paint all three,--perhaps they'll turn
out family portraits."
And with that the old Dutch log of wood who passed for a man and who was
called Elie Magus, interrupted himself to laugh an uncanny laugh which
frightened the painter. He fancied he heard Mephistopheles talking
marriage.
"Portraits bring five hundred francs apiece," went on Elie; "so you can
very well afford to paint me three pictures."
"True for you!" cried Fougeres, gleefully.
"And if you marry the girl, you won't forget me."
"Marry! I?" cried Pierre Grassou,--"I, who have a habit of sleeping
alone; and get up at cock-crow, and all my life arranged--"
"One hundred thousand francs," said Magus, "and a quiet girl, full of
golden tones, as you call 'em, like a Titian."
"What class of people are they?"
"Retired merchants; just now in love with art; have a country-house at
Ville d'Avray, and ten or twelve thousand francs a year."
"What business did they do?"
"Bottles."
"Now don't say that word; it makes me think of corks and sets my teeth
on edge."
"Am I to bring them?"
"Three portraits--I could put them in the Salon; I might go in for
portrait-painting. Well, yes!"
Old Elie descended the staircase to go in search of the Vervelle family.
To know to what extent this proposition would act upon the painter, and
what effect would be produced upon him by the Sieur and Dame Vervelle,
adorned by their only daughter, it is necessary to cast an eye on the
anterior life of Pierre Grassou of Fougeres.
When a pupil, Fougeres had studied drawing with Servin, who was
thought a great draughtsman in academic circles. After that he went to
Schinner's, to learn the secrets of the powerful and magnificent color
which distinguishes that master. Master and scholars were all discreet;
at any rate Pierre discovered none of their secrets. From there he went
to Sommervieux' atelier, to acquire that portion of the art of painting
which is called composition, but composition was shy and distant to him.
Then he tried to snatch from Decamps and Granet the mystery of their
interior effects. The two masters were not robbed. Finally Fougeres
ended his education with Duval-Lecamus. During these studies and
these different transformations Fougeres' habits and ways of life were
tranquil and moral to a degree that furnished matter of jesting to the
various ateliers where he sojourned; but everywhere he disarmed his
comrades by his modesty and by the patience and gentleness of a lamblike
nature. The masters, however, had no sympathy for the good lad; masters
prefer bright fellows, eccentric spirits, droll or fiery, or else gloomy
and deeply reflective, which argue future talent. Everything about
Pierre Grassou smacked of mediocrity. His nickname "Fougeres" (that
of the painter in the play of "The Eglantine") was the source of much
teasing; but, by force of circumstances, he accepted the name of the
town in which he had first seen light.
Grassou of Fougeres resembled his name. Plump and of medium height, he
had a dull complexion, brown eyes, black hair, a turned-up nose, rather
wide mouth, and long ears. His gentle, passive, and resigned air gave a
certain relief to these leading features of a physiognomy that was full
of health, but wanting in action. This young man, born to be a virtuous
bourgeois, having left his native place and come to Paris to be clerk
with a color-merchant (formerly of Mayenne and a distant connection of
the Orgemonts) made himself a painter simply by the fact of an obstinacy
which constitutes the Breton character. What he suffered, the manner in
which he lived during those years of study, God only knows. He suffered
as much as great men suffer when they are hounded by poverty and hunted
like wild beasts by the pack of commonplace minds and by troops of
vanities athirst for vengeance.
As soon as he thought himself able to fly on his own wings, Fougeres
took a studio in the upper part of the rue des Martyrs, where he began
to delve his way. He made his first appearance in 1819. The first
picture he presented to the jury of the Exhibition at the Louvre
represented a village wedding rather laboriously copied from Greuze's
picture. It was rejected. When Fougeres heard of the fatal decision,
he did not fall into one of those fits of epileptic self-love to which
strong natures give themselves up, and which sometimes end in challenges
sent to the director or the secretary of the Museum, or even by threats
of assassination. Fougeres quietly fetched his canvas, wrapped it in
a handkerchief, and brought it home, vowing in his heart that he would
still make himself a great painter. He placed his picture on the easel,
and went to one of his former masters, a man of immense talent,--to
Schinner, a kind and patient artist, whose triumph at that year's Salon
was complete. Fougeres asked him to come and criticise the rejected
work. The great painter left everything and went at once. When poor
Fougeres had placed the work before him Schinner, after a glance,
pressed Fougeres' hand.
"You are a fine fellow," he said; "you've a heart of gold, and I must
not deceive you. Listen; you are fulfilling all the promises you made in
the studios. When you find such things as that at the tip of your brush,
my good Fougeres, you had better leave colors with Brullon, and not take
the canvas of others. Go home early, put on your cotton night-cap, and
be in bed by nine o'clock. The next morning early go to some government
office, ask for a place, and give up art."
"My dear friend," said Fougeres, "my picture is already condemned; it is
not a verdict that I want of you, but the cause of that verdict."
"Well--you paint gray and sombre; you see nature being a crape veil;
your drawing is heavy, pasty; your composition is a medley of Greuze,
who only redeemed his defects by the qualities which you lack."
While detailing these faults of the picture Schinner saw on Fougeres'
face so deep an expression of sadness that he carried him off to dinner
and tried to console him. The next morning at seven o'clock Fougeres was
at his easel working over the rejected picture; he warmed the colors; he
made the corrections suggested by Schinner, he touched up his figures.
Then, disgusted with such patching, he carried the picture to Elie
Magus. Elie Magus, a sort of Dutch-Flemish-Belgian, had three reasons
for being what he became,--rich and avaricious. Coming last from
Bordeaux, he was just starting in Paris, selling old pictures and living
on the boulevard Bonne-Nouvelle. Fougeres, who relied on his palette
to go to the baker's, bravely ate bread and nuts, or bread and milk, or
bread and cherries, or bread and cheese, according to the seasons. Elie
Magus, to whom Pierre offered his first picture, eyed it for some time
and then gave him fifteen francs.
"With fifteen francs a year coming in, and a thousand francs for
expenses," said Fougeres, smiling, "a man will go fast and far."
Elie Magus made a gesture; he bit his thumbs, thinking that he might
have had that picture for five francs.
For several days Pierre walked down from the rue des Martyrs and
stationed himself at the corner of the boulevard opposite to Elie's
shop, whence his eye could rest upon his picture, which did not obtain
any notice from the eyes of the passers along the street. At the end of
a week the picture disappeared; Fougeres walked slowly up and approached
the dealer's shop in a lounging manner. The Jew was at his door.
"Well, I see you have sold my picture."
"No, here it is," said Magus; "I've framed it, to show it to some one
who fancies he knows about painting."
Fougeres had not the heart to return to the boulevard. He set about
another picture, and spent two months upon it,--eating mouse's meals and
working like a galley-slave.
One evening he went to the boulevard, his feet leading him fatefully to
the dealer's shop. His picture was not to be seen.
"I've sold your picture," said Elie Magus, seeing him.
"For how much?"
"I got back what I gave and a small interest. Make me some Flemish
interiors, a lesson of anatomy, landscapes, and such like, and I'll buy
them of you," said Elie.
Fougeres would fain have taken old Magus in his arms; he regarded him as
a father. He went home with joy in his heart; the great painter Schinner
was mistaken after all! In that immense city of Paris there were some
hearts that beat in unison with Pierre's; his talent was understood and
appreciated. The poor fellow of twenty-seven had the innocence of a lad
of sixteen. Another man, one of those distrustful, surly artists, would
have noticed the diabolical look on Elie's face and seen the twitching
of the hairs of his beard, the irony of his moustache, and the movement
of his shoulders which betrayed the satisfaction of Walter Scott's Jew
in swindling a Christian.
Fougeres marched along the boulevard in a state of joy which gave to his
honest face an expression of pride. He was like a schoolboy protecting
a woman. He met Joseph Bridau, one of his comrades, and one of those
eccentric geniuses destined to fame and sorrow. Joseph Bridau, who had,
to use his own expression, a few sous in his pocket, took Fougeres to
the Opera. But Fougeres didn't see the ballet, didn't hear the music; he
was imagining pictures, he was painting. He left Joseph in the middle
of the evening, and ran home to make sketches by lamp-light. He invented
thirty pictures, all reminiscence, and felt himself a man of genius. The
next day he bought colors, and canvases of various dimensions; he piled
up bread and cheese on his table, he filled a water-pot with water,
he laid in a provision of wood for his stove; then, to use a studio
expression, he dug at his pictures. He hired several models and Magus
lent him stuffs.
After two months' seclusion the Breton had finished four pictures. Again
he asked counsel of Schinner, this time adding Bridau to the invitation.
The two painters saw in three of these pictures a servile imitation
of Dutch landscapes and interiors by Metzu, in the fourth a copy of
Rembrandt's "Lesson of Anatomy."
"Still imitating!" said Schinner. "Ah! Fougeres can't manage to be
original."
"You ought to do something else than painting," said Bridau.
"What?" asked Fougeres.
"Fling yourself into literature."
Fougeres lowered his head like a sheep when it rains. Then he asked and
obtained certain useful advice, and retouched his pictures before taking
them to Elie Magus. Elie paid him twenty-five francs apiece. At that
price of course Fougeres earned nothing; neither did he lose, thanks to
his sober living. He made a few excursions to the boulevard to see what
became of his pictures, and there he underwent a singular hallucination.
His neat, clean paintings, hard as tin and shiny as porcelain, were
covered with a sort of mist; they looked like old daubs. Magus was out,
and Pierre could obtain no information on this phenomenon. He fancied
something was wrong with his eyes.
The painter went back to his studio and made more pictures. After seven
years of continued toil Fougeres managed to compose and execute quite
passable work. He did as well as any artist of the second class.
Elie bought and sold all the paintings of the poor Breton, who earned
laboriously about two thousand francs a year while he spent but twelve
hundred.
At the Exhibition of 1829, Leon de Lora, Schinner, and Bridau, who all
three occupied a great position and were, in fact, at the head of the
art movement, were filled with pity for the perseverance and the poverty
of their old friend; and they caused to be admitted into the grand salon
of the Exhibition, a picture by Fougeres. This picture, powerful in
interest but derived from Vigneron as to sentiment and from Dubufe's
first manner as to execution, represented a young man in prison, whose
hair was being cut around the nape of the neck. On one side was
a priest, on the other two women, one old, one young, in tears. A
sheriff's clerk was reading aloud a document. On a wretched table was a
meal, untouched. The light came in through the bars of a window near
the ceiling. It was a picture fit to make the bourgeois shudder, and
the bourgeois shuddered. Fougeres had simply been inspired by the
masterpiece of Gerard Douw; he had turned the group of the "Dropsical
Woman" toward the window, instead of presenting it full front. The
condemned man was substituted for the dying woman--same pallor, same
glance, same appeal to God. Instead of the Dutch doctor, he had painted
the cold, official figure of the sheriff's clerk attired in black; but
he had added an old woman to the young one of Gerard Douw. The cruelly
simple and good-humored face of the executioner completed and dominated
the group. This plagiarism, very cleverly disguised, was not discovered.
The catalogue contained the following:--
510. Grassou de Fougeres (Pierre), rue de Navarin, 2.
Death-toilet of a Chouan, condemned to execution in 1809.
Though wholly second-rate, the picture had immense success, for it
recalled the affair of the "chauffeurs," of Mortagne. A crowd collected
every day before the now fashionable canvas; even Charles X. paused to
look at it. "Madame," being told of the patient life of the poor Breton,
became enthusiastic over him. The Duc d'Orleans asked the price of
the picture. The clergy told Madame la Dauphine that the subject was
suggestive of good thoughts; and there was, in truth, a most satisfying
religious tone about it. Monseigneur the Dauphin admired the dust on
the stone-floor,--a huge blunder, by the way, for Fougeres had painted
greenish tones suggestive of mildew along the base of the walls.
"Madame" finally bought the picture for a thousand francs, and the
Dauphin ordered another like it. Charles X. gave the cross of the Legion
of honor to this son of a peasant who had fought for the royal cause
in 1799. (Joseph Bridau, the great painter, was not yet decorated.) The
minister of the Interior ordered two church pictures of Fougeres.
This Salon of 1829 was to Pierre Grassou his whole fortune, fame,
future, and life. Be original, invent, and you die by inches; copy,
imitate, and you'll live. After this discovery of a gold mine, Grassou
de Fougeres obtained his benefit of the fatal principle to which society
owes the wretched mediocrities to whom are intrusted in these days the
election of leaders in all social classes; who proceed, naturally, to
elect themselves and who wage a bitter war against all true talent. The
principle of election applied indiscriminately is false, and France will
some day abandon it.
Nevertheless the modesty, simplicity, and genuine surprise of the good
and gentle Fougeres silenced all envy and all recriminations. Besides,
he had on his side all of his clan who had succeeded, and all who
expected to succeed. Some persons, touched by the persistent energy of a
man whom nothing had discouraged, talked of Domenichino and said:--
"Perseverance in the arts should be rewarded. Grassou hasn't stolen his
successes; he has delved for ten years, the poor dear man!"
That exclamation of "poor dear man!" counted for half in the support
and the congratulations which the painter received. Pity sets up
mediocrities as envy pulls down great talents, and in equal numbers.
The newspapers, it is true, did not spare criticism, but the chevalier
Fougeres digested them as he had digested the counsel of his friends,
with angelic patience.
Possessing, by this time, fifteen thousand francs, laboriously earned,
he furnished an apartment and studio in the rue de Navarin, and painted
the picture ordered by Monseigneur the Dauphin, also the two church
pictures, and delivered them at the time agreed on, with a punctuality
that was very discomforting to the exchequer of the ministry, accustomed
to a different course of action. But--admire the good fortune of men who
are methodical--if Grassou, belated with his work, had been caught by
the revolution of July he would not have got his money.
By the time he was thirty-seven Fougeres had manufactured for Elie Magus
some two hundred pictures, all of them utterly unknown, by the help of
which he had attained to that satisfying manner, that point of execution
before which the true artist shrugs his shoulders and the bourgeoisie
worships. Fougeres was dear to friends for rectitude of ideas, for
steadiness of sentiment, absolute kindliness, and great loyalty; though
they had no esteem for his palette, they loved the man who held it.
"What a misfortune it is that Fougeres has the vice of painting!" said
his comrades.
But for all this, Grassou gave excellent counsel, like those
feuilletonists incapable of writing a book who know very well where a
book is wanting. There was this difference, however, between literary
critics and Fougeres; he was eminently sensitive to beauties; he felt
them, he acknowledged them, and his advice was instinct with a spirit
of justice that made the justness of his remarks acceptable. After
the revolution of July, Fougeres sent about ten pictures a year to the
Salon, of which the jury admitted four or five. He lived with the most
rigid economy, his household being managed solely by an old charwoman.
For all amusement he visited his friends, he went to see works of art,
he allowed himself a few little trips about France, and he planned to go
to Switzerland in search of inspiration. This detestable artist was an
excellent citizen; he mounted guard duly, went to reviews, and paid his
rent and provision-bills with bourgeois punctuality.
Having lived all his life in toil and poverty, he had never had the time
to love. Poor and a bachelor, until now he did not desire to complicate
his simple life. Incapable of devising any means of increasing his
little fortune, he carried, every three months, to his notary, Cardot,
his quarterly earnings and economies. When the notary had received
about three thousand francs he invested them in some first mortgage, the
interest of which he drew himself and added to the quarterly payments
made to him by Fougeres. The painter was awaiting the fortunate moment
when his property thus laid by would give him the imposing income of two
thousand francs, to allow himself the otium cum dignitate of the
artist and paint pictures; but oh! what pictures! true pictures! each a
finished picture! chouette, Koxnoff, chocnosoff! His future, his dreams
of happiness, the superlative of his hopes--do you know what it was?
To enter the Institute and obtain the grade of officer of the Legion
of honor; to sit down beside Schinner and Leon de Lora, to reach the
Academy before Bridau, to wear a rosette in his buttonhole! What a
dream! It is only commonplace men who think of everything.
Hearing the sound of several steps on the staircase, Fougeres rubbed up
his hair, buttoned his jacket of bottle-green velveteen, and was not a
little amazed to see, entering his doorway, a simpleton face vulgarly
called in studio slang a "melon." This fruit surmounted a pumpkin,
clothed in blue cloth adorned with a bunch of tintinnabulating baubles.
The melon puffed like a walrus; the pumpkin advanced on turnips,
improperly called legs. A true painter would have turned the little
bottle-vendor off at once, assuring him that he didn't paint vegetables.
This painter looked at his client without a smile, for Monsieur Vervelle
wore a three-thousand-franc diamond in the bosom of his shirt.
Fougeres glanced at Magus and said: "There's fat in it!" using a slang
term then much in vogue in the studios.
Hearing those words Monsieur Vervelle frowned. The worthy bourgeois drew
after him another complication of vegetables in the persons of his wife
and daughter. The wife had a fine veneer of mahogany on her face, and
in figure she resembled a cocoa-nut, surmounted by a head and tied in
around the waist. She pivoted on her legs, which were tap-rooted,
and her gown was yellow with black stripes. She proudly exhibited
unutterable mittens on a puffy pair of hands; the plumes of a
first-class funeral floated on an over-flowing bonnet; laces adorned
her shoulders, as round behind as they were before; consequently, the
spherical form of the cocoa-nut was perfect. Her feet, of a kind that
painters call abatis, rose above the varnished leather of the shoes in a
swelling that was some inches high. How the feet were ever got into the
shoes, no one knows.
Following these vegetable parents was a young asparagus, who presented
a tiny head with smoothly banded hair of the yellow-carroty tone that a
Roman adores, long, stringy arms, a fairly white skin with reddish spots
upon it, large innocent eyes, and white lashes, scarcely any brows, a
leghorn bonnet bound with white satin and adorned with two honest bows
of the same satin, hands virtuously red, and the feet of her mother. The
faces of these three beings wore, as they looked round the studio, an
air of happiness which bespoke in them a respectable enthusiasm for Art.
"So it is you, monsieur, who are going to take our likenesses?" said the
father, assuming a jaunty air.
"Yes, monsieur," replied Grassou.
"Vervelle, he has the cross!" whispered the wife to the husband while
the painter's back was turned.
"Should I be likely to have our portraits painted by an artist who
wasn't decorated?" returned the former bottle-dealer.
Elie Magus here bowed to the Vervelle family and went away. Grassou
accompanied him to the landing.
"There's no one but you who would fish up such whales."
"One hundred thousand francs of 'dot'!"
"Yes, but what a family!"
"Three hundred thousand francs of expectations, a house in the rue
Boucherat, and a country-house at Ville d'Avray!"
"Bottles and corks! bottles and corks!" said the painter; "they set my
teeth on edge."
"Safe from want for the rest of your days," said Elie Magus as he
departed.
That idea entered the head of Pierre Grassou as the daylight had burst
into his garret that morning.
While he posed the father of the young person, he thought the
bottle-dealer had a good countenance, and he admired the face full
of violent tones. The mother and daughter hovered about the easel,
marvelling at all his preparations; they evidently thought him a
demigod. This visible admiration pleased Fougeres. The golden calf threw
upon the family its fantastic reflections.
"You must earn lots of money; but of course you don't spend it as you
get it," said the mother.
"No, madame," replied the painter; "I don't spend it; I have not the
means to amuse myself. My notary invests my money; he knows what I have;
as soon as I have taken him the money I never think of it again."
"I've always been told," cried old Vervelle, "that artists were baskets
with holes in them."
"Who is your notary--if it is not indiscreet to ask?" said Madame
Vervelle.
"A good fellow, all round," replied Grassou. "His name is Cardot."
"Well, well! if that isn't a joke!" exclaimed Vervelle. "Cardot is our
notary too."
"Take care! don't move," said the painter.
"Do pray hold still, Antenor," said the wife. "If you move about you'll
make monsieur miss; you should just see him working, and then you'd
understand."
"Oh! why didn't you have me taught the arts?" said Mademoiselle Vervelle
to her parents.
"Virginie," said her mother, "a young person ought not to learn certain
things. When you are married--well, till then, keep quiet."
During this first sitting the Vervelle family became almost intimate
with the worthy artist. They were to come again two days later. As they
went away the father told Virginie to walk in front; but in spite of
this separation, she overheard the following words, which naturally
awakened her curiosity.
"Decorated--thirty-seven years old--an artist who gets orders--puts his
money with our notary. We'll consult Cardot. Hein! Madame de Fougeres!
not a bad name--doesn't look like a bad man either! One might prefer a
merchant; but before a merchant retires from business one can never know
what one's daughter may come to; whereas an economical artist--and then
you know we love Art--Well, we'll see!"
While the Vervelle family discussed Pierre Grassou, Pierre Grassou
discussed in his own mind the Vervelle family. He found it impossible to
stay peacefully in his studio, so he took a walk on the boulevard, and
looked at all the red-haired women who passed him. He made a series of
the oddest reasonings to himself: gold was the handsomest of metals; a
tawny yellow represented gold; the Romans were fond of red-haired women,
and he turned Roman, etc. After two years of marriage what man would
ever care about the color of his wife's hair? Beauty fades,--but
ugliness remains! Money is one-half of all happiness. That night when he
went to bed the painter had come to think Virginie Vervelle charming.
When the three Vervelles arrived on the day of the second sitting the
artist received them with smiles. The rascal had shaved and put on clean
linen; he had also arranged his hair in a pleasing manner, and chosen
a very becoming pair of trousers and red leather slippers with pointed
toes. The family replied with smiles as flattering as those of the
artist. Virginie became the color of her hair, lowered her eyes, and
turned aside her head to look at the sketches. Pierre Grassou thought
these little affectations charming, Virginie had such grace; happily she
didn't look like her father or her mother; but whom did she look like?
During this sitting there were little skirmishes between the family
and the painter, who had the audacity to call pere Vervelle witty. This
flattery brought the family on the double-quick to the heart of the
artist; he gave a drawing to the daughter, and a sketch to the mother.
"What! for nothing?" they said.
Pierre Grassou could not help smiling.
"You shouldn't give away your pictures in that way; they are money,"
said old Vervelle.
At the third sitting pere Vervelle mentioned a fine gallery of pictures
which he had in his country-house at Ville d'Avray--Rubens, Gerard Douw,
Mieris, Terburg, Rembrandt, Titian, Paul Potter, etc.
"Monsieur Vervelle has been very extravagant," said Madame Vervelle,
ostentatiously. "He has over one hundred thousand francs' worth of
pictures."
"I love Art," said the former bottle-dealer.
When Madame Vervelle's portrait was begun that of her husband was nearly
finished, and the enthusiasm of the family knew no bounds. The notary
had spoken in the highest praise of the painter. Pierre Grassou was, he
said, one of the most honest fellows on earth; he had laid by thirty-six
thousand francs; his days of poverty were over; he now saved about ten
thousand francs a year and capitalized the interest; in short, he was
incapable of making a woman unhappy. This last remark had enormous
weight in the scales. Vervelle's friends now heard of nothing but the
celebrated painter Fougeres.
The day on which Fougeres began the portrait of Mademoiselle Virginie,
he was virtually son-in-law to the Vervelle family. The three Vervelles
bloomed out in this studio, which they were now accustomed to consider
as one of their residences; there was to them an inexplicable attraction
in this clean, neat, pretty, and artistic abode. Abyssus abyssum, the
commonplace attracts the commonplace. Toward the end of the sitting the
stairway shook, the door was violently thrust open by Joseph Bridau; he
came like a whirlwind, his hair flying. He showed his grand haggard face
as he looked about him, casting everywhere the lightning of his glance;
then he walked round the whole studio, and returned abruptly to Grassou,
pulling his coat together over the gastric region, and endeavouring, but
in vain, to button it, the button mould having escaped from its capsule
of cloth.
"Wood is dear," he said to Grassou.
"Ah!"
"The British are after me" (slang term for creditors) "Gracious! do you
paint such things as that?"
"Hold your tongue!"
"Ah! to be sure, yes."
The Vervelle family, extremely shocked by this extraordinary apparition,
passed from its ordinary red to a cherry-red, two shades deeper.
"Brings in, hey?" continued Joseph. "Any shot in your locker?"
"How much do you want?"
"Five hundred. I've got one of those bull-dog dealers after me, and if
the fellow once gets his teeth in he won't let go while there's a bit of
me left. What a crew!"
"I'll write you a line for my notary."
"Have you got a notary?"
"Yes."
"That explains to me why you still make cheeks with pink tones like a
perfumer's sign."
Grassou could not help coloring, for Virginie was sitting.
"Take Nature as you find her," said the great painter, going on with his
lecture. "Mademoiselle is red-haired. Well, is that a sin? All things
are magnificent in painting. Put some vermillion on your palette, and
warm up those cheeks; touch in those little brown spots; come, butter it
well in. Do you pretend to have more sense than Nature?"
"Look here," said Fougeres, "take my place while I go and write that
note."
Vervelle rolled to the table and whispered in Grassou's ear:--
"Won't that country lout spoilt it?"
"If he would only paint the portrait of your Virginie it would be worth
a thousand times more than mine," replied Fougeres, vehemently.
Hearing that reply the bourgeois beat a quiet retreat to his wife, who
was stupefied by the invasion of this ferocious animal, and very uneasy
at his co-operation in her daughter's portrait.
"Here, follow these indications," said Bridau, returning the palette,
and taking the note. "I won't thank you. I can go back now to d'Arthez'
chateau, where I am doing a dining-room, and Leon de Lora the tops of
the doors--masterpieces! Come and see us."
And off he went without taking leave, having had enough of looking at
Virginie.
"Who is that man?" asked Madame Vervelle.
"A great artist," answered Grassou.
There was silence for a moment.
"Are you quite sure," said Virginie, "that he has done no harm to my
portrait? He frightened me."
"He has only done it good," replied Grassou.
"Well, if he is a great artist, I prefer a great artist like you," said
Madame Vervelle.
The ways of genius had ruffled up these orderly bourgeois.
The phase of autumn so pleasantly named "Saint Martin's summer" was
just beginning. With the timidity of a neophyte in presence of a man of
genius, Vervelle risked giving Fougeres an invitation to come out to
his country-house on the following Sunday. He knew, he said, how little
attraction a plain bourgeois family could offer to an artist.
"You artists," he continued, "want emotions, great scenes, and witty
talk; but you'll find good wines, and I rely on my collection of
pictures to compensate an artist like you for the bore of dining with
mere merchants."
This form of idolatry, which stroked his innocent self-love, was
charming to our poor Pierre Grassou, so little accustomed to such
compliments. The honest artist, that atrocious mediocrity, that heart
of gold, that loyal soul, that stupid draughtsman, that worthy fellow,
decorated by royalty itself with the Legion of honor, put himself under
arms to go out to Ville d'Avray and enjoy the last fine days of the
year. The painter went modestly by public conveyance, and he could not
but admire the beautiful villa of the bottle-dealer, standing in a park
of five acres at the summit of Ville d'Avray, commanding a noble view
of the landscape. Marry Virginie, and have that beautiful villa some day
for his own!
He was received by the Vervelles with an enthusiasm, a joy, a
kindliness, a frank bourgeois absurdity which confounded him. It was
indeed a day of triumph. The prospective son-in-law was marched about
the grounds on the nankeen-colored paths, all raked as they should be
for the steps of so great a man. The trees themselves looked brushed and
combed, and the lawns had just been mown. The pure country air wafted
to the nostrils a most enticing smell of cooking. All things about the
mansion seemed to say:
"We have a great artist among us."
Little old Vervelle himself rolled like an apple through his park, the
daughter meandered like an eel, the mother followed with dignified step.
These three beings never for one moment let go of Pierre Grassou
during seven hours. After dinner, the length of which equalled its
magnificence, Monsieur and Madame Vervelle reached the moment of their
grand theatrical effect,--the opening of the picture gallery illuminated
by lamps, the reflections of which were managed with the utmost care.
Three neighbours, also retired merchants, an old uncle (from whom were
expectations), an elderly Demoiselle Vervelle, and a number of other
guests invited to be present at this ovation to a great artist followed
Grassou into the picture gallery, all curious to hear his opinion of the
famous collection of pere Vervelle, who was fond of oppressing them with
the fabulous value of his paintings. The bottle-merchant seemed to have
the idea of competing with King Louis-Philippe and the galleries of
Versailles.
The pictures, magnificently framed, each bore labels on which was read
in black letters on a gold ground:
Rubens
Dance of fauns and nymphs
Rembrandt
Interior of a dissecting room. The physician van Tromp
instructing his pupils.
In all, there were one hundred and fifty pictures, varnished and dusted.
Some were covered with green baize curtains which were not undrawn in
presence of young ladies.
Pierre Grassou stood with arms pendent, gaping mouth, and no word upon
his lips as he recognized half his own pictures in these works of art.
He was Rubens, he was Rembrandt, Mieris, Metzu, Paul Potter, Gerard
Douw! He was twenty great masters all by himself.
"What is the matter? You've turned pale!"
"Daughter, a glass of water! quick!" cried Madame Vervelle. The painter
took pere Vervelle by the button of his coat and led him to a corner on
pretence of looking at a Murillo. Spanish pictures were then the rage.
"You bought your pictures from Elie Magus?"
"Yes, all originals."
"Between ourselves, tell me what he made you pay for those I shall point
out to you."
Together they walked round the gallery. The guests were amazed at the
gravity with which the artist proceeded, in company with the host, to
examine each picture.
"Three thousand francs," said Vervelle in a whisper, as they reached the
last, "but I tell everybody forty thousand."
"Forty thousand for a Titian!" said the artist, aloud. "Why, it is
nothing at all!"
"Didn't I tell you," said Vervelle, "that I had three hundred thousand
francs' worth of pictures?"
"I painted those pictures," said Pierre Grassou in Vervelle's ear, "and
I sold them one by one to Elie Magus for less than ten thousand francs
the whole lot."
"Prove it to me," said the bottle-dealer, "and I double my daughter's
'dot,' for if it is so, you are Rubens, Rembrandt, Titian, Gerard Douw!"
"And Magus is a famous picture-dealer!" said the painter, who now saw
the meaning of the misty and aged look imparted to his pictures in
Elie's shop, and the utility of the subjects the picture-dealer had
required of him.
Far from losing the esteem of his admiring bottle-merchant, Monsieur
de Fougeres (for so the family persisted in calling Pierre Grassou)
advanced so much that when the portraits were finished he presented them
gratuitously to his father-in-law, his mother-in-law and his wife.
At the present day, Pierre Grassou, who never misses exhibiting at the
Salon, passes in bourgeois regions for a fine portrait-painter. He earns
some twenty thousand francs a year and spoils a thousand francs' worth
of canvas. His wife has six thousand francs a year in dowry, and he
lives with his father-in-law. The Vervelles and the Grassous, who agree
delightfully, keep a carriage, and are the happiest people on earth.
Pierre Grassou never emerges from the bourgeois circle, in which he
is considered one of the greatest artists of the period. Not a family
portrait is painted between the barrier du Trone and the rue du Temple
that is not done by this great painter; none of them costs less than
five hundred francs. The great reason which the bourgeois families have
for employing him is this:--
"Say what you will of him, he lays by twenty thousand francs a year with
his notary."
As Grassou took a creditable part on the occasion of the riots of May
12th he was appointed an officer of the Legion of honor. He is a major
in the National Guard. The Museum of Versailles felt it incumbent to
order a battle-piece of so excellent a citizen, who thereupon walked
about Paris to meet his old comrades and have the happiness of saying to
them:--
"The King has given me an order for the Museum of Versailles."
Madame de Fougeres adores her husband, to whom she has presented two
children. This painter, a good father and a good husband, is unable to
eradicate from his heart a fatal thought, namely, that artists laugh at
his work; that his name is a term of contempt in the studios; and that
the feuilletons take no notice of his pictures. But he still works on;
he aims for the Academy, where, undoubtedly, he will enter. And--oh!
vengeance which dilates his heart!--he buys the pictures of celebrated
artists who are pinched for means, and he substitutes these true works
of art that are not his own for the wretched daubs in the collection at
Ville d'Avray.
There are many mediocrities more aggressive and more mischievous than
that of Pierre Grassou, who is, moreover, anonymously benevolent and
truly obliging.
ADDENDUM
The following personages appear in other stories of the Human Comedy.
Bridau, Joseph
The Purse
A Bachelor's Establishment
A Distinguished Provincial at Paris
A Start in Life
Modeste Mignon
Another Study of Woman
Letters of Two Brides
Cousin Betty
The Member for Arcis
Cardot (Parisian notary)
The Muse of the Department
A Man of Business
Jealousies of a Country Town
The Middle Classes
Cousin Pons
Grassou, Pierre
A Bachelor's Establishment
Cousin Betty
The Middle Classes
Cousin Pons
Lora, Leon de
The Unconscious Humorists
A Bachelor's Establishment
A Start in Life
Honorine
Cousin Betty
Beatrix
Magus, Elie
The Vendetta
A Marriage Settlement
A Bachelor's Establishment
Cousin Pons
Schinner, Hippolyte
The Purse
A Bachelor's Establishment
A Start in Life
Albert Savarus
The Government Clerks
Modeste Mignon
The Imaginary Mistress
The Unconscious Humorists
CRITO
by Plato
Translated by Benjamin Jowett
INTRODUCTION.
The Crito seems intended to exhibit the character of Socrates in one light
only, not as the philosopher, fulfilling a divine mission and trusting in
the will of heaven, but simply as the good citizen, who having been
unjustly condemned is willing to give up his life in obedience to the laws
of the state...
The days of Socrates are drawing to a close; the fatal ship has been seen
off Sunium, as he is informed by his aged friend and contemporary Crito,
who visits him before the dawn has broken; he himself has been warned in a
dream that on the third day he must depart. Time is precious, and Crito
has come early in order to gain his consent to a plan of escape. This can
be easily accomplished by his friends, who will incur no danger in making
the attempt to save him, but will be disgraced for ever if they allow him
to perish. He should think of his duty to his children, and not play into
the hands of his enemies. Money is already provided by Crito as well as by
Simmias and others, and he will have no difficulty in finding friends in
Thessaly and other places.
Socrates is afraid that Crito is but pressing upon him the opinions of the
many: whereas, all his life long he has followed the dictates of reason
only and the opinion of the one wise or skilled man. There was a time when
Crito himself had allowed the propriety of this. And although some one
will say 'the many can kill us,' that makes no difference; but a good life,
in other words, a just and honourable life, is alone to be valued. All
considerations of loss of reputation or injury to his children should be
dismissed: the only question is whether he would be right in attempting to
escape. Crito, who is a disinterested person not having the fear of death
before his eyes, shall answer this for him. Before he was condemned they
had often held discussions, in which they agreed that no man should either
do evil, or return evil for evil, or betray the right. Are these
principles to be altered because the circumstances of Socrates are altered?
Crito admits that they remain the same. Then is his escape consistent with
the maintenance of them? To this Crito is unable or unwilling to reply.
Socrates proceeds:--Suppose the Laws of Athens to come and remonstrate with
him: they will ask 'Why does he seek to overturn them?' and if he replies,
'they have injured him,' will not the Laws answer, 'Yes, but was that the
agreement? Has he any objection to make to them which would justify him in
overturning them? Was he not brought into the world and educated by their
help, and are they not his parents? He might have left Athens and gone
where he pleased, but he has lived there for seventy years more constantly
than any other citizen.' Thus he has clearly shown that he acknowledged
the agreement, which he cannot now break without dishonour to himself and
danger to his friends. Even in the course of the trial he might have
proposed exile as the penalty, but then he declared that he preferred death
to exile. And whither will he direct his footsteps? In any well-ordered
state the Laws will consider him as an enemy. Possibly in a land of
misrule like Thessaly he may be welcomed at first, and the unseemly
narrative of his escape will be regarded by the inhabitants as an amusing
tale. But if he offends them he will have to learn another sort of lesson.
Will he continue to give lectures in virtue? That would hardly be decent.
And how will his children be the gainers if he takes them into Thessaly,
and deprives them of Athenian citizenship? Or if he leaves them behind,
does he expect that they will be better taken care of by his friends
because he is in Thessaly? Will not true friends care for them equally
whether he is alive or dead?
Finally, they exhort him to think of justice first, and of life and
children afterwards. He may now depart in peace and innocence, a sufferer
and not a doer of evil. But if he breaks agreements, and returns evil for
evil, they will be angry with him while he lives; and their brethren the
Laws of the world below will receive him as an enemy. Such is the mystic
voice which is always murmuring in his ears.
That Socrates was not a good citizen was a charge made against him during
his lifetime, which has been often repeated in later ages. The crimes of
Alcibiades, Critias, and Charmides, who had been his pupils, were still
recent in the memory of the now restored democracy. The fact that he had
been neutral in the death-struggle of Athens was not likely to conciliate
popular good-will. Plato, writing probably in the next generation,
undertakes the defence of his friend and master in this particular, not to
the Athenians of his day, but to posterity and the world at large.
Whether such an incident ever really occurred as the visit of Crito and the
proposal of escape is uncertain: Plato could easily have invented far more
than that (Phaedr.); and in the selection of Crito, the aged friend, as the
fittest person to make the proposal to Socrates, we seem to recognize the
hand of the artist. Whether any one who has been subjected by the laws of
his country to an unjust judgment is right in attempting to escape, is a
thesis about which casuists might disagree. Shelley (Prose Works) is of
opinion that Socrates 'did well to die,' but not for the 'sophistical'
reasons which Plato has put into his mouth. And there would be no
difficulty in arguing that Socrates should have lived and preferred to a
glorious death the good which he might still be able to perform. 'A
rhetorician would have had much to say upon that point.' It may be
observed however that Plato never intended to answer the question of
casuistry, but only to exhibit the ideal of patient virtue which refuses to
do the least evil in order to avoid the greatest, and to show his master
maintaining in death the opinions which he had professed in his life. Not
'the world,' but the 'one wise man,' is still the paradox of Socrates in
his last hours. He must be guided by reason, although her conclusions may
be fatal to him. The remarkable sentiment that the wicked can do neither
good nor evil is true, if taken in the sense, which he means, of moral
evil; in his own words, 'they cannot make a man wise or foolish.'
This little dialogue is a perfect piece of dialectic, in which granting the
'common principle,' there is no escaping from the conclusion. It is
anticipated at the beginning by the dream of Socrates and the parody of
Homer. The personification of the Laws, and of their brethren the Laws in
the world below, is one of the noblest and boldest figures of speech which
occur in Plato.
CRITO
by
Plato
Translated by Benjamin Jowett
PERSONS OF THE DIALOGUE: Socrates, Crito.
SCENE: The Prison of Socrates.
SOCRATES: Why have you come at this hour, Crito? it must be quite early.
CRITO: Yes, certainly.
SOCRATES: What is the exact time?
CRITO: The dawn is breaking.
SOCRATES: I wonder that the keeper of the prison would let you in.
CRITO: He knows me because I often come, Socrates; moreover, I have done
him a kindness.
SOCRATES: And are you only just arrived?
CRITO: No, I came some time ago.
SOCRATES: Then why did you sit and say nothing, instead of at once
awakening me?
CRITO: I should not have liked myself, Socrates, to be in such great
trouble and unrest as you are--indeed I should not: I have been watching
with amazement your peaceful slumbers; and for that reason I did not awake
you, because I wished to minimize the pain. I have always thought you to
be of a happy disposition; but never did I see anything like the easy,
tranquil manner in which you bear this calamity.
SOCRATES: Why, Crito, when a man has reached my age he ought not to be
repining at the approach of death.
CRITO: And yet other old men find themselves in similar misfortunes, and
age does not prevent them from repining.
SOCRATES: That is true. But you have not told me why you come at this
early hour.
CRITO: I come to bring you a message which is sad and painful; not, as I
believe, to yourself, but to all of us who are your friends, and saddest of
all to me.
SOCRATES: What? Has the ship come from Delos, on the arrival of which I
am to die?
CRITO: No, the ship has not actually arrived, but she will probably be
here to-day, as persons who have come from Sunium tell me that they have
left her there; and therefore to-morrow, Socrates, will be the last day of
your life.
SOCRATES: Very well, Crito; if such is the will of God, I am willing; but
my belief is that there will be a delay of a day.
CRITO: Why do you think so?
SOCRATES: I will tell you. I am to die on the day after the arrival of
the ship?
CRITO: Yes; that is what the authorities say.
SOCRATES: But I do not think that the ship will be here until to-morrow;
this I infer from a vision which I had last night, or rather only just now,
when you fortunately allowed me to sleep.
CRITO: And what was the nature of the vision?
SOCRATES: There appeared to me the likeness of a woman, fair and comely,
clothed in bright raiment, who called to me and said: O Socrates,
'The third day hence to fertile Phthia shalt thou go.' (Homer, Il.)
CRITO: What a singular dream, Socrates!
SOCRATES: There can be no doubt about the meaning, Crito, I think.
CRITO: Yes; the meaning is only too clear. But, oh! my beloved Socrates,
let me entreat you once more to take my advice and escape. For if you die
I shall not only lose a friend who can never be replaced, but there is
another evil: people who do not know you and me will believe that I might
have saved you if I had been willing to give money, but that I did not
care. Now, can there be a worse disgrace than this--that I should be
thought to value money more than the life of a friend? For the many will
not be persuaded that I wanted you to escape, and that you refused.
SOCRATES: But why, my dear Crito, should we care about the opinion of the
many? Good men, and they are the only persons who are worth considering,
will think of these things truly as they occurred.
CRITO: But you see, Socrates, that the opinion of the many must be
regarded, for what is now happening shows that they can do the greatest
evil to any one who has lost their good opinion.
SOCRATES: I only wish it were so, Crito; and that the many could do the
greatest evil; for then they would also be able to do the greatest good--
and what a fine thing this would be! But in reality they can do neither;
for they cannot make a man either wise or foolish; and whatever they do is
the result of chance.
CRITO: Well, I will not dispute with you; but please to tell me, Socrates,
whether you are not acting out of regard to me and your other friends: are
you not afraid that if you escape from prison we may get into trouble with
the informers for having stolen you away, and lose either the whole or a
great part of our property; or that even a worse evil may happen to us?
Now, if you fear on our account, be at ease; for in order to save you, we
ought surely to run this, or even a greater risk; be persuaded, then, and
do as I say.
SOCRATES: Yes, Crito, that is one fear which you mention, but by no means
the only one.
CRITO: Fear not--there are persons who are willing to get you out of
prison at no great cost; and as for the informers they are far from being
exorbitant in their demands--a little money will satisfy them. My means,
which are certainly ample, are at your service, and if you have a scruple
about spending all mine, here are strangers who will give you the use of
theirs; and one of them, Simmias the Theban, has brought a large sum of
money for this very purpose; and Cebes and many others are prepared to
spend their money in helping you to escape. I say, therefore, do not
hesitate on our account, and do not say, as you did in the court (compare
Apol.), that you will have a difficulty in knowing what to do with yourself
anywhere else. For men will love you in other places to which you may go,
and not in Athens only; there are friends of mine in Thessaly, if you like
to go to them, who will value and protect you, and no Thessalian will give
you any trouble. Nor can I think that you are at all justified, Socrates,
in betraying your own life when you might be saved; in acting thus you are
playing into the hands of your enemies, who are hurrying on your
destruction. And further I should say that you are deserting your own
children; for you might bring them up and educate them; instead of which
you go away and leave them, and they will have to take their chance; and if
they do not meet with the usual fate of orphans, there will be small thanks
to you. No man should bring children into the world who is unwilling to
persevere to the end in their nurture and education. But you appear to be
choosing the easier part, not the better and manlier, which would have been
more becoming in one who professes to care for virtue in all his actions,
like yourself. And indeed, I am ashamed not only of you, but of us who are
your friends, when I reflect that the whole business will be attributed
entirely to our want of courage. The trial need never have come on, or
might have been managed differently; and this last act, or crowning folly,
will seem to have occurred through our negligence and cowardice, who might
have saved you, if we had been good for anything; and you might have saved
yourself, for there was no difficulty at all. See now, Socrates, how sad
and discreditable are the consequences, both to us and you. Make up your
mind then, or rather have your mind already made up, for the time of
deliberation is over, and there is only one thing to be done, which must be
done this very night, and if we delay at all will be no longer practicable
or possible; I beseech you therefore, Socrates, be persuaded by me, and do
as I say.
SOCRATES: Dear Crito, your zeal is invaluable, if a right one; but if
wrong, the greater the zeal the greater the danger; and therefore we ought
to consider whether I shall or shall not do as you say. For I am and
always have been one of those natures who must be guided by reason,
whatever the reason may be which upon reflection appears to me to be the
best; and now that this chance has befallen me, I cannot repudiate my own
words: the principles which I have hitherto honoured and revered I still
honour, and unless we can at once find other and better principles, I am
certain not to agree with you; no, not even if the power of the multitude
could inflict many more imprisonments, confiscations, deaths, frightening
us like children with hobgoblin terrors (compare Apol.). What will be the
fairest way of considering the question? Shall I return to your old
argument about the opinions of men?--we were saying that some of them are
to be regarded, and others not. Now were we right in maintaining this
before I was condemned? And has the argument which was once good now
proved to be talk for the sake of talking--mere childish nonsense? That is
what I want to consider with your help, Crito:--whether, under my present
circumstances, the argument appears to be in any way different or not; and
is to be allowed by me or disallowed. That argument, which, as I believe,
is maintained by many persons of authority, was to the effect, as I was
saying, that the opinions of some men are to be regarded, and of other men
not to be regarded. Now you, Crito, are not going to die to-morrow--at
least, there is no human probability of this, and therefore you are
disinterested and not liable to be deceived by the circumstances in which
you are placed. Tell me then, whether I am right in saying that some
opinions, and the opinions of some men only, are to be valued, and that
other opinions, and the opinions of other men, are not to be valued. I ask
you whether I was right in maintaining this?
CRITO: Certainly.
SOCRATES: The good are to be regarded, and not the bad?
CRITO: Yes.
SOCRATES: And the opinions of the wise are good, and the opinions of the
unwise are evil?
CRITO: Certainly.
SOCRATES: And what was said about another matter? Is the pupil who
devotes himself to the practice of gymnastics supposed to attend to the
praise and blame and opinion of every man, or of one man only--his
physician or trainer, whoever he may be?
CRITO: Of one man only.
SOCRATES: And he ought to fear the censure and welcome the praise of that
one only, and not of the many?
CRITO: Clearly so.
SOCRATES: And he ought to act and train, and eat and drink in the way
which seems good to his single master who has understanding, rather than
according to the opinion of all other men put together?
CRITO: True.
SOCRATES: And if he disobeys and disregards the opinion and approval of
the one, and regards the opinion of the many who have no understanding,
will he not suffer evil?
CRITO: Certainly he will.
SOCRATES: And what will the evil be, whither tending and what affecting,
in the disobedient person?
CRITO: Clearly, affecting the body; that is what is destroyed by the evil.
SOCRATES: Very good; and is not this true, Crito, of other things which we
need not separately enumerate? In questions of just and unjust, fair and
foul, good and evil, which are the subjects of our present consultation,
ought we to follow the opinion of the many and to fear them; or the opinion
of the one man who has understanding? ought we not to fear and reverence
him more than all the rest of the world: and if we desert him shall we not
destroy and injure that principle in us which may be assumed to be improved
by justice and deteriorated by injustice;--there is such a principle?
CRITO: Certainly there is, Socrates.
SOCRATES: Take a parallel instance:--if, acting under the advice of those
who have no understanding, we destroy that which is improved by health and
is deteriorated by disease, would life be worth having? And that which has
been destroyed is--the body?
CRITO: Yes.
SOCRATES: Could we live, having an evil and corrupted body?
CRITO: Certainly not.
SOCRATES: And will life be worth having, if that higher part of man be
destroyed, which is improved by justice and depraved by injustice? Do we
suppose that principle, whatever it may be in man, which has to do with
justice and injustice, to be inferior to the body?
CRITO: Certainly not.
SOCRATES: More honourable than the body?
CRITO: Far more.
SOCRATES: Then, my friend, we must not regard what the many say of us:
but what he, the one man who has understanding of just and unjust, will
say, and what the truth will say. And therefore you begin in error when
you advise that we should regard the opinion of the many about just and
unjust, good and evil, honorable and dishonorable.--'Well,' some one will
say, 'but the many can kill us.'
CRITO: Yes, Socrates; that will clearly be the answer.
SOCRATES: And it is true; but still I find with surprise that the old
argument is unshaken as ever. And I should like to know whether I may say
the same of another proposition--that not life, but a good life, is to be
chiefly valued?
CRITO: Yes, that also remains unshaken.
SOCRATES: And a good life is equivalent to a just and honorable one--that
holds also?
CRITO: Yes, it does.
SOCRATES: From these premisses I proceed to argue the question whether I
ought or ought not to try and escape without the consent of the Athenians:
and if I am clearly right in escaping, then I will make the attempt; but if
not, I will abstain. The other considerations which you mention, of money
and loss of character and the duty of educating one's children, are, I
fear, only the doctrines of the multitude, who would be as ready to restore
people to life, if they were able, as they are to put them to death--and
with as little reason. But now, since the argument has thus far prevailed,
the only question which remains to be considered is, whether we shall do
rightly either in escaping or in suffering others to aid in our escape and
paying them in money and thanks, or whether in reality we shall not do
rightly; and if the latter, then death or any other calamity which may
ensue on my remaining here must not be allowed to enter into the
calculation.
CRITO: I think that you are right, Socrates; how then shall we proceed?
SOCRATES: Let us consider the matter together, and do you either refute me
if you can, and I will be convinced; or else cease, my dear friend, from
repeating to me that I ought to escape against the wishes of the Athenians:
for I highly value your attempts to persuade me to do so, but I may not be
persuaded against my own better judgment. And now please to consider my
first position, and try how you can best answer me.
CRITO: I will.
SOCRATES: Are we to say that we are never intentionally to do wrong, or
that in one way we ought and in another way we ought not to do wrong, or is
doing wrong always evil and dishonorable, as I was just now saying, and as
has been already acknowledged by us? Are all our former admissions which
were made within a few days to be thrown away? And have we, at our age,
been earnestly discoursing with one another all our life long only to
discover that we are no better than children? Or, in spite of the opinion
of the many, and in spite of consequences whether better or worse, shall we
insist on the truth of what was then said, that injustice is always an evil
and dishonour to him who acts unjustly? Shall we say so or not?
CRITO: Yes.
SOCRATES: Then we must do no wrong?
CRITO: Certainly not.
SOCRATES: Nor when injured injure in return, as the many imagine; for we
must injure no one at all? (E.g. compare Rep.)
CRITO: Clearly not.
SOCRATES: Again, Crito, may we do evil?
CRITO: Surely not, Socrates.
SOCRATES: And what of doing evil in return for evil, which is the morality
of the many--is that just or not?
CRITO: Not just.
SOCRATES: For doing evil to another is the same as injuring him?
CRITO: Very true.
SOCRATES: Then we ought not to retaliate or render evil for evil to any
one, whatever evil we may have suffered from him. But I would have you
consider, Crito, whether you really mean what you are saying. For this
opinion has never been held, and never will be held, by any considerable
number of persons; and those who are agreed and those who are not agreed
upon this point have no common ground, and can only despise one another
when they see how widely they differ. Tell me, then, whether you agree
with and assent to my first principle, that neither injury nor retaliation
nor warding off evil by evil is ever right. And shall that be the premiss
of our argument? Or do you decline and dissent from this? For so I have
ever thought, and continue to think; but, if you are of another opinion,
let me hear what you have to say. If, however, you remain of the same mind
as formerly, I will proceed to the next step.
CRITO: You may proceed, for I have not changed my mind.
SOCRATES: Then I will go on to the next point, which may be put in the
form of a question:--Ought a man to do what he admits to be right, or ought
he to betray the right?
CRITO: He ought to do what he thinks right.
SOCRATES: But if this is true, what is the application? In leaving the
prison against the will of the Athenians, do I wrong any? or rather do I
not wrong those whom I ought least to wrong? Do I not desert the
principles which were acknowledged by us to be just--what do you say?
CRITO: I cannot tell, Socrates, for I do not know.
SOCRATES: Then consider the matter in this way:--Imagine that I am about
to play truant (you may call the proceeding by any name which you like),
and the laws and the government come and interrogate me: 'Tell us,
Socrates,' they say; 'what are you about? are you not going by an act of
yours to overturn us--the laws, and the whole state, as far as in you lies?
Do you imagine that a state can subsist and not be overthrown, in which the
decisions of law have no power, but are set aside and trampled upon by
individuals?' What will be our answer, Crito, to these and the like words?
Any one, and especially a rhetorician, will have a good deal to say on
behalf of the law which requires a sentence to be carried out. He will
argue that this law should not be set aside; and shall we reply, 'Yes; but
the state has injured us and given an unjust sentence.' Suppose I say
that?
CRITO: Very good, Socrates.
SOCRATES: 'And was that our agreement with you?' the law would answer; 'or
were you to abide by the sentence of the state?' And if I were to express
my astonishment at their words, the law would probably add: 'Answer,
Socrates, instead of opening your eyes--you are in the habit of asking and
answering questions. Tell us,--What complaint have you to make against us
which justifies you in attempting to destroy us and the state? In the
first place did we not bring you into existence? Your father married your
mother by our aid and begat you. Say whether you have any objection to
urge against those of us who regulate marriage?' None, I should reply.
'Or against those of us who after birth regulate the nurture and education
of children, in which you also were trained? Were not the laws, which have
the charge of education, right in commanding your father to train you in
music and gymnastic?' Right, I should reply. 'Well then, since you were
brought into the world and nurtured and educated by us, can you deny in the
first place that you are our child and slave, as your fathers were before
you? And if this is true you are not on equal terms with us; nor can you
think that you have a right to do to us what we are doing to you. Would
you have any right to strike or revile or do any other evil to your father
or your master, if you had one, because you have been struck or reviled by
him, or received some other evil at his hands?--you would not say this?
And because we think right to destroy you, do you think that you have any
right to destroy us in return, and your country as far as in you lies?
Will you, O professor of true virtue, pretend that you are justified in
this? Has a philosopher like you failed to discover that our country is
more to be valued and higher and holier far than mother or father or any
ancestor, and more to be regarded in the eyes of the gods and of men of
understanding? also to be soothed, and gently and reverently entreated when
angry, even more than a father, and either to be persuaded, or if not
persuaded, to be obeyed? And when we are punished by her, whether with
imprisonment or stripes, the punishment is to be endured in silence; and if
she lead us to wounds or death in battle, thither we follow as is right;
neither may any one yield or retreat or leave his rank, but whether in
battle or in a court of law, or in any other place, he must do what his
city and his country order him; or he must change their view of what is
just: and if he may do no violence to his father or mother, much less may
he do violence to his country.' What answer shall we make to this, Crito?
Do the laws speak truly, or do they not?
CRITO: I think that they do.
SOCRATES: Then the laws will say: 'Consider, Socrates, if we are speaking
truly that in your present attempt you are going to do us an injury. For,
having brought you into the world, and nurtured and educated you, and given
you and every other citizen a share in every good which we had to give, we
further proclaim to any Athenian by the liberty which we allow him, that if
he does not like us when he has become of age and has seen the ways of the
city, and made our acquaintance, he may go where he pleases and take his
goods with him. None of us laws will forbid him or interfere with him.
Any one who does not like us and the city, and who wants to emigrate to a
colony or to any other city, may go where he likes, retaining his property.
But he who has experience of the manner in which we order justice and
administer the state, and still remains, has entered into an implied
contract that he will do as we command him. And he who disobeys us is, as
we maintain, thrice wrong: first, because in disobeying us he is
disobeying his parents; secondly, because we are the authors of his
education; thirdly, because he has made an agreement with us that he will
duly obey our commands; and he neither obeys them nor convinces us that our
commands are unjust; and we do not rudely impose them, but give him the
alternative of obeying or convincing us;--that is what we offer, and he
does neither.
'These are the sort of accusations to which, as we were saying, you,
Socrates, will be exposed if you accomplish your intentions; you, above all
other Athenians.' Suppose now I ask, why I rather than anybody else? they
will justly retort upon me that I above all other men have acknowledged the
agreement. 'There is clear proof,' they will say, 'Socrates, that we and
the city were not displeasing to you. Of all Athenians you have been the
most constant resident in the city, which, as you never leave, you may be
supposed to love (compare Phaedr.). For you never went out of the city
either to see the games, except once when you went to the Isthmus, or to
any other place unless when you were on military service; nor did you
travel as other men do. Nor had you any curiosity to know other states or
their laws: your affections did not go beyond us and our state; we were
your especial favourites, and you acquiesced in our government of you; and
here in this city you begat your children, which is a proof of your
satisfaction. Moreover, you might in the course of the trial, if you had
liked, have fixed the penalty at banishment; the state which refuses to let
you go now would have let you go then. But you pretended that you
preferred death to exile (compare Apol.), and that you were not unwilling
to die. And now you have forgotten these fine sentiments, and pay no
respect to us the laws, of whom you are the destroyer; and are doing what
only a miserable slave would do, running away and turning your back upon
the compacts and agreements which you made as a citizen. And first of all
answer this very question: Are we right in saying that you agreed to be
governed according to us in deed, and not in word only? Is that true or
not?' How shall we answer, Crito? Must we not assent?
CRITO: We cannot help it, Socrates.
SOCRATES: Then will they not say: 'You, Socrates, are breaking the
covenants and agreements which you made with us at your leisure, not in any
haste or under any compulsion or deception, but after you have had seventy
years to think of them, during which time you were at liberty to leave the
city, if we were not to your mind, or if our covenants appeared to you to
be unfair. You had your choice, and might have gone either to Lacedaemon
or Crete, both which states are often praised by you for their good
government, or to some other Hellenic or foreign state. Whereas you, above
all other Athenians, seemed to be so fond of the state, or, in other words,
of us her laws (and who would care about a state which has no laws?), that
you never stirred out of her; the halt, the blind, the maimed, were not
more stationary in her than you were. And now you run away and forsake
your agreements. Not so, Socrates, if you will take our advice; do not
make yourself ridiculous by escaping out of the city.
'For just consider, if you transgress and err in this sort of way, what
good will you do either to yourself or to your friends? That your friends
will be driven into exile and deprived of citizenship, or will lose their
property, is tolerably certain; and you yourself, if you fly to one of the
neighbouring cities, as, for example, Thebes or Megara, both of which are
well governed, will come to them as an enemy, Socrates, and their
government will be against you, and all patriotic citizens will cast an
evil eye upon you as a subverter of the laws, and you will confirm in the
minds of the judges the justice of their own condemnation of you. For he
who is a corrupter of the laws is more than likely to be a corrupter of the
young and foolish portion of mankind. Will you then flee from well-ordered
cities and virtuous men? and is existence worth having on these terms? Or
will you go to them without shame, and talk to them, Socrates? And what
will you say to them? What you say here about virtue and justice and
institutions and laws being the best things among men? Would that be
decent of you? Surely not. But if you go away from well-governed states
to Crito's friends in Thessaly, where there is great disorder and licence,
they will be charmed to hear the tale of your escape from prison, set off
with ludicrous particulars of the manner in which you were wrapped in a
goatskin or some other disguise, and metamorphosed as the manner is of
runaways; but will there be no one to remind you that in your old age you
were not ashamed to violate the most sacred laws from a miserable desire of
a little more life? Perhaps not, if you keep them in a good temper; but if
they are out of temper you will hear many degrading things; you will live,
but how?--as the flatterer of all men, and the servant of all men; and
doing what?--eating and drinking in Thessaly, having gone abroad in order
that you may get a dinner. And where will be your fine sentiments about
justice and virtue? Say that you wish to live for the sake of your
children--you want to bring them up and educate them--will you take them
into Thessaly and deprive them of Athenian citizenship? Is this the
benefit which you will confer upon them? Or are you under the impression
that they will be better cared for and educated here if you are still
alive, although absent from them; for your friends will take care of them?
Do you fancy that if you are an inhabitant of Thessaly they will take care
of them, and if you are an inhabitant of the other world that they will not
take care of them? Nay; but if they who call themselves friends are good
for anything, they will--to be sure they will.
'Listen, then, Socrates, to us who have brought you up. Think not of life
and children first, and of justice afterwards, but of justice first, that
you may be justified before the princes of the world below. For neither
will you nor any that belong to you be happier or holier or juster in this
life, or happier in another, if you do as Crito bids. Now you depart in
innocence, a sufferer and not a doer of evil; a victim, not of the laws,
but of men. But if you go forth, returning evil for evil, and injury for
injury, breaking the covenants and agreements which you have made with us,
and wronging those whom you ought least of all to wrong, that is to say,
yourself, your friends, your country, and us, we shall be angry with you
while you live, and our brethren, the laws in the world below, will receive
you as an enemy; for they will know that you have done your best to destroy
us. Listen, then, to us and not to Crito.'
This, dear Crito, is the voice which I seem to hear murmuring in my ears,
like the sound of the flute in the ears of the mystic; that voice, I say,
is humming in my ears, and prevents me from hearing any other. And I know
that anything more which you may say will be vain. Yet speak, if you have
anything to say.
CRITO: I have nothing to say, Socrates.
SOCRATES: Leave me then, Crito, to fulfil the will of God, and to follow
whither he leads.
Produced by Sue Asscher
The Witch of Atlas
by
Percy Bysshe Shelley
TO MARY
(ON HER OBJECTING TO THE FOLLOWING POEM, UPON THE
SCORE OF ITS CONTAINING NO HUMAN INTEREST).
1.
How, my dear Mary,--are you critic-bitten
(For vipers kill, though dead) by some review,
That you condemn these verses I have written,
Because they tell no story, false or true?
What, though no mice are caught by a young kitten, _5
May it not leap and play as grown cats do,
Till its claws come? Prithee, for this one time,
Content thee with a visionary rhyme.
2.
What hand would crush the silken-winged fly,
The youngest of inconstant April's minions, _10
Because it cannot climb the purest sky,
Where the swan sings, amid the sun's dominions?
Not thine. Thou knowest 'tis its doom to die,
When Day shall hide within her twilight pinions
The lucent eyes, and the eternal smile, _15
Serene as thine, which lent it life awhile.
3.
To thy fair feet a winged Vision came,
Whose date should have been longer than a day,
And o'er thy head did beat its wings for fame,
And in thy sight its fading plumes display; _20
The watery bow burned in the evening flame.
But the shower fell, the swift Sun went his way--
And that is dead.--O, let me not believe
That anything of mine is fit to live!
4.
Wordsworth informs us he was nineteen years _25
Considering and retouching Peter Bell;
Watering his laurels with the killing tears
Of slow, dull care, so that their roots to Hell
Might pierce, and their wide branches blot the spheres
Of Heaven, with dewy leaves and flowers; this well _30
May be, for Heaven and Earth conspire to foil
The over-busy gardener's blundering toil.
5.
My Witch indeed is not so sweet a creature
As Ruth or Lucy, whom his graceful praise
Clothes for our grandsons--but she matches Peter, _35
Though he took nineteen years, and she three days
In dressing. Light the vest of flowing metre
She wears; he, proud as dandy with his stays,
Has hung upon his wiry limbs a dress
Like King Lear's 'looped and windowed raggedness.' _40
6.
If you strip Peter, you will see a fellow
Scorched by Hell's hyperequatorial climate
Into a kind of a sulphureous yellow:
A lean mark, hardly fit to fling a rhyme at;
In shape a Scaramouch, in hue Othello. _45
If you unveil my Witch, no priest nor primate
Can shrive you of that sin,--if sin there be
In love, when it becomes idolatry.
THE WITCH OF ATLAS.
1.
Before those cruel Twins, whom at one birth
Incestuous Change bore to her father Time, _50
Error and Truth, had hunted from the Earth
All those bright natures which adorned its prime,
And left us nothing to believe in, worth
The pains of putting into learned rhyme,
A lady-witch there lived on Atlas' mountain _55
Within a cavern, by a secret fountain.
2.
Her mother was one of the Atlantides:
The all-beholding Sun had ne'er beholden
In his wide voyage o'er continents and seas
So fair a creature, as she lay enfolden _60
In the warm shadow of her loveliness;--
He kissed her with his beams, and made all golden
The chamber of gray rock in which she lay--
She, in that dream of joy, dissolved away.
3.
'Tis said, she first was changed into a vapour, _65
And then into a cloud, such clouds as flit,
Like splendour-winged moths about a taper,
Round the red west when the sun dies in it:
And then into a meteor, such as caper
On hill-tops when the moon is in a fit: _70
Then, into one of those mysterious stars
Which hide themselves between the Earth and Mars.
4.
Ten times the Mother of the Months had bent
Her bow beside the folding-star, and bidden
With that bright sign the billows to indent _75
The sea-deserted sand--like children chidden,
At her command they ever came and went--
Since in that cave a dewy splendour hidden
Took shape and motion: with the living form
Of this embodied Power, the cave grew warm. _80
5.
A lovely lady garmented in light
From her own beauty--deep her eyes, as are
Two openings of unfathomable night
Seen through a Temple's cloven roof--her hair
Dark--the dim brain whirls dizzy with delight, _85
Picturing her form; her soft smiles shone afar,
And her low voice was heard like love, and drew
All living things towards this wonder new.
6.
And first the spotted cameleopard came,
And then the wise and fearless elephant; _90
Then the sly serpent, in the golden flame
Of his own volumes intervolved;--all gaunt
And sanguine beasts her gentle looks made tame.
They drank before her at her sacred fount;
And every beast of beating heart grew bold, _95
Such gentleness and power even to behold.
7.
The brinded lioness led forth her young,
That she might teach them how they should forego
Their inborn thirst of death; the pard unstrung
His sinews at her feet, and sought to know _100
With looks whose motions spoke without a tongue
How he might be as gentle as the doe.
The magic circle of her voice and eyes
All savage natures did imparadise.
8.
And old Silenus, shaking a green stick _105
Of lilies, and the wood-gods in a crew
Came, blithe, as in the olive copses thick
Cicadae are, drunk with the noonday dew:
And Dryope and Faunus followed quick,
Teasing the God to sing them something new; _110
Till in this cave they found the lady lone,
Sitting upon a seat of emerald stone.
9.
And universal Pan, 'tis said, was there,
And though none saw him,--through the adamant
Of the deep mountains, through the trackless air, _115
And through those living spirits, like a want,
He passed out of his everlasting lair
Where the quick heart of the great world doth pant,
And felt that wondrous lady all alone,--
And she felt him, upon her emerald throne. _120
10.
And every nymph of stream and spreading tree,
And every shepherdess of Ocean's flocks,
Who drives her white waves over the green sea,
And Ocean with the brine on his gray locks,
And quaint Priapus with his company, _125
All came, much wondering how the enwombed rocks
Could have brought forth so beautiful a birth;--
Her love subdued their wonder and their mirth.
11.
The herdsmen and the mountain maidens came,
And the rude kings of pastoral Garamant-- _130
Their spirits shook within them, as a flame
Stirred by the air under a cavern gaunt:
Pigmies, and Polyphemes, by many a name,
Centaurs, and Satyrs, and such shapes as haunt
Wet clefts,--and lumps neither alive nor dead, _135
Dog-headed, bosom-eyed, and bird-footed.
12.
For she was beautiful--her beauty made
The bright world dim, and everything beside
Seemed like the fleeting image of a shade:
No thought of living spirit could abide, _140
Which to her looks had ever been betrayed,
On any object in the world so wide,
On any hope within the circling skies,
But on her form, and in her inmost eyes.
13.
Which when the lady knew, she took her spindle _145
And twined three threads of fleecy mist, and three
Long lines of light, such as the dawn may kindle
The clouds and waves and mountains with; and she
As many star-beams, ere their lamps could dwindle
In the belated moon, wound skilfully; _150
And with these threads a subtle veil she wove--
A shadow for the splendour of her love.
14.
The deep recesses of her odorous dwelling
Were stored with magic treasures--sounds of air,
Which had the power all spirits of compelling, _155
Folded in cells of crystal silence there;
Such as we hear in youth, and think the feeling
Will never die--yet ere we are aware,
The feeling and the sound are fled and gone,
And the regret they leave remains alone. _160
15.
And there lay Visions swift, and sweet, and quaint,
Each in its thin sheath, like a chrysalis,
Some eager to burst forth, some weak and faint
With the soft burthen of intensest bliss.
It was its work to bear to many a saint _165
Whose heart adores the shrine which holiest is,
Even Love's:--and others white, green, gray, and black,
And of all shapes--and each was at her beck.
16.
And odours in a kind of aviary
Of ever-blooming Eden-trees she kept, _170
Clipped in a floating net, a love-sick Fairy
Had woven from dew-beams while the moon yet slept;
As bats at the wired window of a dairy,
They beat their vans; and each was an adept,
When loosed and missioned, making wings of winds, _175
To stir sweet thoughts or sad, in destined minds.
17.
And liquors clear and sweet, whose healthful might
Could medicine the sick soul to happy sleep,
And change eternal death into a night
Of glorious dreams--or if eyes needs must weep, _180
Could make their tears all wonder and delight,
She in her crystal vials did closely keep:
If men could drink of those clear vials, 'tis said
The living were not envied of the dead.
18.
Her cave was stored with scrolls of strange device, _185
The works of some Saturnian Archimage,
Which taught the expiations at whose price
Men from the Gods might win that happy age
Too lightly lost, redeeming native vice;
And which might quench the Earth-consuming rage _190
Of gold and blood--till men should live and move
Harmonious as the sacred stars above;
19.
And how all things that seem untameable,
Not to be checked and not to be confined,
Obey the spells of Wisdom's wizard skill; _195
Time, earth, and fire--the ocean and the wind,
And all their shapes--and man's imperial will;
And other scrolls whose writings did unbind
The inmost lore of Love--let the profane
Tremble to ask what secrets they contain. _200
20.
And wondrous works of substances unknown,
To which the enchantment of her father's power
Had changed those ragged blocks of savage stone,
Were heaped in the recesses of her bower;
Carved lamps and chalices, and vials which shone _205
In their own golden beams--each like a flower,
Out of whose depth a fire-fly shakes his light
Under a cypress in a starless night.
21.
At first she lived alone in this wild home,
And her own thoughts were each a minister, _210
Clothing themselves, or with the ocean foam,
Or with the wind, or with the speed of fire,
To work whatever purposes might come
Into her mind; such power her mighty Sire
Had girt them with, whether to fly or run, _215
Through all the regions which he shines upon.
22.
The Ocean-nymphs and Hamadryades,
Oreads and Naiads, with long weedy locks,
Offered to do her bidding through the seas,
Under the earth, and in the hollow rocks, _220
And far beneath the matted roots of trees,
And in the gnarled heart of stubborn oaks,
So they might live for ever in the light
Of her sweet presence--each a satellite.
23.
'This may not be,' the wizard maid replied; _225
'The fountains where the Naiades bedew
Their shining hair, at length are drained and dried;
The solid oaks forget their strength, and strew
Their latest leaf upon the mountains wide;
The boundless ocean like a drop of dew _230
Will be consumed--the stubborn centre must
Be scattered, like a cloud of summer dust.
24.
'And ye with them will perish, one by one;--
If I must sigh to think that this shall be,
If I must weep when the surviving Sun _235
Shall smile on your decay--oh, ask not me
To love you till your little race is run;
I cannot die as ye must--over me
Your leaves shall glance--the streams in which ye dwell
Shall be my paths henceforth, and so--farewell!'-- _240
25.
She spoke and wept:--the dark and azure well
Sparkled beneath the shower of her bright tears,
And every little circlet where they fell
Flung to the cavern-roof inconstant spheres
And intertangled lines of light:--a knell _245
Of sobbing voices came upon her ears
From those departing Forms, o'er the serene
Of the white streams and of the forest green.
26.
All day the wizard lady sate aloof,
Spelling out scrolls of dread antiquity, _250
Under the cavern's fountain-lighted roof;
Or broidering the pictured poesy
Of some high tale upon her growing woof,
Which the sweet splendour of her smiles could dye
In hues outshining heaven--and ever she _255
Added some grace to the wrought poesy.
27.
While on her hearth lay blazing many a piece
Of sandal wood, rare gums, and cinnamon;
Men scarcely know how beautiful fire is--
Each flame of it is as a precious stone _260
Dissolved in ever-moving light, and this
Belongs to each and all who gaze upon.
The Witch beheld it not, for in her hand
She held a woof that dimmed the burning brand.
28.
This lady never slept, but lay in trance _265
All night within the fountain--as in sleep.
Its emerald crags glowed in her beauty's glance;
Through the green splendour of the water deep
She saw the constellations reel and dance
Like fire-flies--and withal did ever keep _270
The tenour of her contemplations calm,
With open eyes, closed feet, and folded palm.
29.
And when the whirlwinds and the clouds descended
From the white pinnacles of that cold hill,
She passed at dewfall to a space extended, _275
Where in a lawn of flowering asphodel
Amid a wood of pines and cedars blended,
There yawned an inextinguishable well
Of crimson fire--full even to the brim,
And overflowing all the margin trim. _280
30.
Within the which she lay when the fierce war
Of wintry winds shook that innocuous liquor
In many a mimic moon and bearded star
O'er woods and lawns;--the serpent heard it flicker
In sleep, and dreaming still, he crept afar-- _285
And when the windless snow descended thicker
Than autumn leaves, she watched it as it came
Melt on the surface of the level flame.
31.
She had a boat, which some say Vulcan wrought
For Venus, as the chariot of her star; _290
But it was found too feeble to be fraught
With all the ardours in that sphere which are,
And so she sold it, and Apollo bought
And gave it to this daughter: from a car
Changed to the fairest and the lightest boat _295
Which ever upon mortal stream did float.
32.
And others say, that, when but three hours old,
The first-born Love out of his cradle lept,
And clove dun Chaos with his wings of gold,
And like a horticultural adept, _300
Stole a strange seed, and wrapped it up in mould,
And sowed it in his mother's star, and kept
Watering it all the summer with sweet dew,
And with his wings fanning it as it grew.
33.
The plant grew strong and green, the snowy flower _305
Fell, and the long and gourd-like fruit began
To turn the light and dew by inward power
To its own substance; woven tracery ran
Of light firm texture, ribbed and branching, o'er
The solid rind, like a leaf's veined fan-- _310
Of which Love scooped this boat--and with soft motion
Piloted it round the circumfluous ocean.
34.
This boat she moored upon her fount, and lit
A living spirit within all its frame,
Breathing the soul of swiftness into it. _315
Couched on the fountain like a panther tame,
One of the twain at Evan's feet that sit--
Or as on Vesta's sceptre a swift flame--
Or on blind Homer's heart a winged thought,--
In joyous expectation lay the boat. _320
35.
Then by strange art she kneaded fire and snow
Together, tempering the repugnant mass
With liquid love--all things together grow
Through which the harmony of love can pass;
And a fair Shape out of her hands did flow-- _325
A living Image, which did far surpass
In beauty that bright shape of vital stone
Which drew the heart out of Pygmalion.
36.
A sexless thing it was, and in its growth
It seemed to have developed no defect _330
Of either sex, yet all the grace of both,--
In gentleness and strength its limbs were decked;
The bosom swelled lightly with its full youth,
The countenance was such as might select
Some artist that his skill should never die, _335
Imaging forth such perfect purity.
37.
From its smooth shoulders hung two rapid wings,
Fit to have borne it to the seventh sphere,
Tipped with the speed of liquid lightenings,
Dyed in the ardours of the atmosphere: _340
She led her creature to the boiling springs
Where the light boat was moored, and said: 'Sit here!'
And pointed to the prow, and took her seat
Beside the rudder, with opposing feet.
38.
And down the streams which clove those mountains vast, _345
Around their inland islets, and amid
The panther-peopled forests whose shade cast
Darkness and odours, and a pleasure hid
In melancholy gloom, the pinnace passed;
By many a star-surrounded pyramid _350
Of icy crag cleaving the purple sky,
And caverns yawning round unfathomably.
39.
The silver noon into that winding dell,
With slanted gleam athwart the forest tops,
Tempered like golden evening, feebly fell; _355
A green and glowing light, like that which drops
From folded lilies in which glow-worms dwell,
When Earth over her face Night's mantle wraps;
Between the severed mountains lay on high,
Over the stream, a narrow rift of sky. _360
40.
And ever as she went, the Image lay
With folded wings and unawakened eyes;
And o'er its gentle countenance did play
The busy dreams, as thick as summer flies,
Chasing the rapid smiles that would not stay, _365
And drinking the warm tears, and the sweet sighs
Inhaling, which, with busy murmur vain,
They had aroused from that full heart and brain.
41.
And ever down the prone vale, like a cloud
Upon a stream of wind, the pinnace went: _370
Now lingering on the pools, in which abode
The calm and darkness of the deep content
In which they paused; now o'er the shallow road
Of white and dancing waters, all besprent
With sand and polished pebbles:--mortal boat _375
In such a shallow rapid could not float.
42.
And down the earthquaking cataracts which shiver
Their snow-like waters into golden air,
Or under chasms unfathomable ever
Sepulchre them, till in their rage they tear _380
A subterranean portal for the river,
It fled--the circling sunbows did upbear
Its fall down the hoar precipice of spray,
Lighting it far upon its lampless way.
43.
And when the wizard lady would ascend _385
The labyrinths of some many-winding vale,
Which to the inmost mountain upward tend--
She called 'Hermaphroditus!'--and the pale
And heavy hue which slumber could extend
Over its lips and eyes, as on the gale _390
A rapid shadow from a slope of grass,
Into the darkness of the stream did pass.
44.
And it unfurled its heaven-coloured pinions,
With stars of fire spotting the stream below;
And from above into the Sun's dominions _395
Flinging a glory, like the golden glow
In which Spring clothes her emerald-winged minions,
All interwoven with fine feathery snow
And moonlight splendour of intensest rime,
With which frost paints the pines in winter time. _400
45.
And then it winnowed the Elysian air
Which ever hung about that lady bright,
With its aethereal vans--and speeding there,
Like a star up the torrent of the night,
Or a swift eagle in the morning glare _405
Breasting the whirlwind with impetuous flight,
The pinnace, oared by those enchanted wings,
Clove the fierce streams towards their upper springs.
46.
The water flashed, like sunlight by the prow
Of a noon-wandering meteor flung to Heaven; _410
The still air seemed as if its waves did flow
In tempest down the mountains; loosely driven
The lady's radiant hair streamed to and fro:
Beneath, the billows having vainly striven
Indignant and impetuous, roared to feel _415
The swift and steady motion of the keel.
47.
Or, when the weary moon was in the wane,
Or in the noon of interlunar night,
The lady-witch in visions could not chain
Her spirit; but sailed forth under the light _420
Of shooting stars, and bade extend amain
Its storm-outspeeding wings, the Hermaphrodite;
She to the Austral waters took her way,
Beyond the fabulous Thamondocana,--
48.
Where, like a meadow which no scythe has shaven, _425
Which rain could never bend, or whirl-blast shake,
With the Antarctic constellations paven,
Canopus and his crew, lay the Austral lake--
There she would build herself a windless haven
Out of the clouds whose moving turrets make _430
The bastions of the storm, when through the sky
The spirits of the tempest thundered by:
49.
A haven beneath whose translucent floor
The tremulous stars sparkled unfathomably,
And around which the solid vapours hoar, _435
Based on the level waters, to the sky
Lifted their dreadful crags, and like a shore
Of wintry mountains, inaccessibly
Hemmed in with rifts and precipices gray,
And hanging crags, many a cove and bay. _440
50.
And whilst the outer lake beneath the lash
Of the wind's scourge, foamed like a wounded thing,
And the incessant hail with stony clash
Ploughed up the waters, and the flagging wing
Of the roused cormorant in the lightning flash _445
Looked like the wreck of some wind-wandering
Fragment of inky thunder-smoke--this haven
Was as a gem to copy Heaven engraven,--
51.
On which that lady played her many pranks,
Circling the image of a shooting star, _450
Even as a tiger on Hydaspes' banks
Outspeeds the antelopes which speediest are,
In her light boat; and many quips and cranks
She played upon the water, till the car
Of the late moon, like a sick matron wan, _455
To journey from the misty east began.
52.
And then she called out of the hollow turrets
Of those high clouds, white, golden and vermilion,
The armies of her ministering spirits--
In mighty legions, million after million, _460
They came, each troop emblazoning its merits
On meteor flags; and many a proud pavilion
Of the intertexture of the atmosphere
They pitched upon the plain of the calm mere.
53.
They framed the imperial tent of their great Queen _465
Of woven exhalations, underlaid
With lambent lightning-fire, as may be seen
A dome of thin and open ivory inlaid
With crimson silk--cressets from the serene
Hung there, and on the water for her tread _470
A tapestry of fleece-like mist was strewn,
Dyed in the beams of the ascending moon.
54.
And on a throne o'erlaid with starlight, caught
Upon those wandering isles of aery dew,
Which highest shoals of mountain shipwreck not, _475
She sate, and heard all that had happened new
Between the earth and moon, since they had brought
The last intelligence--and now she grew
Pale as that moon, lost in the watery night--
And now she wept, and now she laughed outright. _480
55.
These were tame pleasures; she would often climb
The steepest ladder of the crudded rack
Up to some beaked cape of cloud sublime,
And like Arion on the dolphin's back
Ride singing through the shoreless air;--oft-time _485
Following the serpent lightning's winding track,
She ran upon the platforms of the wind,
And laughed to hear the fire-balls roar behind.
56.
And sometimes to those streams of upper air
Which whirl the earth in its diurnal round, _490
She would ascend, and win the spirits there
To let her join their chorus. Mortals found
That on those days the sky was calm and fair,
And mystic snatches of harmonious sound
Wandered upon the earth where'er she passed, _495
And happy thoughts of hope, too sweet to last.
57.
But her choice sport was, in the hours of sleep,
To glide adown old Nilus, where he threads
Egypt and Aethiopia, from the steep
Of utmost Axume, until he spreads, _500
Like a calm flock of silver-fleeced sheep,
His waters on the plain: and crested heads
Of cities and proud temples gleam amid,
And many a vapour-belted pyramid.
58.
By Moeris and the Mareotid lakes, _505
Strewn with faint blooms like bridal chamber floors,
Where naked boys bridling tame water-snakes,
Or charioteering ghastly alligators,
Had left on the sweet waters mighty wakes
Of those huge forms--within the brazen doors _510
Of the great Labyrinth slept both boy and beast,
Tired with the pomp of their Osirian feast.
59.
And where within the surface of the river
The shadows of the massy temples lie,
And never are erased--but tremble ever _515
Like things which every cloud can doom to die,
Through lotus-paven canals, and wheresoever
The works of man pierced that serenest sky
With tombs, and towers, and fanes, 'twas her delight
To wander in the shadow of the night. _520
60.
With motion like the spirit of that wind
Whose soft step deepens slumber, her light feet
Passed through the peopled haunts of humankind.
Scattering sweet visions from her presence sweet,
Through fane, and palace-court, and labyrinth mined _525
With many a dark and subterranean street
Under the Nile, through chambers high and deep
She passed, observing mortals in their sleep.
61.
A pleasure sweet doubtless it was to see
Mortals subdued in all the shapes of sleep. _530
Here lay two sister twins in infancy;
There, a lone youth who in his dreams did weep;
Within, two lovers linked innocently
In their loose locks which over both did creep
Like ivy from one stem;--and there lay calm _535
Old age with snow-bright hair and folded palm.
62.
But other troubled forms of sleep she saw,
Not to be mirrored in a holy song--
Distortions foul of supernatural awe,
And pale imaginings of visioned wrong; _540
And all the code of Custom's lawless law
Written upon the brows of old and young:
'This,' said the wizard maiden, 'is the strife
Which stirs the liquid surface of man's life.'
63.
And little did the sight disturb her soul.-- _545
We, the weak mariners of that wide lake
Where'er its shores extend or billows roll,
Our course unpiloted and starless make
O'er its wild surface to an unknown goal:--
But she in the calm depths her way could take, _550
Where in bright bowers immortal forms abide
Beneath the weltering of the restless tide.
64.
And she saw princes couched under the glow
Of sunlike gems; and round each temple-court
In dormitories ranged, row after row, _555
She saw the priests asleep--all of one sort--
For all were educated to be so.--
The peasants in their huts, and in the port
The sailors she saw cradled on the waves,
And the dead lulled within their dreamless graves. _560
65.
And all the forms in which those spirits lay
Were to her sight like the diaphanous
Veils, in which those sweet ladies oft array
Their delicate limbs, who would conceal from us
Only their scorn of all concealment: they _565
Move in the light of their own beauty thus.
But these and all now lay with sleep upon them,
And little thought a Witch was looking on them.
66.
She, all those human figures breathing there,
Beheld as living spirits--to her eyes _570
The naked beauty of the soul lay bare,
And often through a rude and worn disguise
She saw the inner form most bright and fair--
And then she had a charm of strange device,
Which, murmured on mute lips with tender tone, _575
Could make that spirit mingle with her own.
67.
Alas! Aurora, what wouldst thou have given
For such a charm when Tithon became gray?
Or how much, Venus, of thy silver heaven
Wouldst thou have yielded, ere Proserpina _580
Had half (oh! why not all?) the debt forgiven
Which dear Adonis had been doomed to pay,
To any witch who would have taught you it?
The Heliad doth not know its value yet.
68.
'Tis said in after times her spirit free _585
Knew what love was, and felt itself alone--
But holy Dian could not chaster be
Before she stooped to kiss Endymion,
Than now this lady--like a sexless bee
Tasting all blossoms, and confined to none, _590
Among those mortal forms, the wizard-maiden
Passed with an eye serene and heart unladen.
69.
To those she saw most beautiful, she gave
Strange panacea in a crystal bowl:--
They drank in their deep sleep of that sweet wave, _595
And lived thenceforward as if some control,
Mightier than life, were in them; and the grave
Of such, when death oppressed the weary soul,
Was as a green and overarching bower
Lit by the gems of many a starry flower. _600
70.
For on the night when they were buried, she
Restored the embalmers' ruining, and shook
The light out of the funeral lamps, to be
A mimic day within that deathy nook;
And she unwound the woven imagery _605
Of second childhood's swaddling bands, and took
The coffin, its last cradle, from its niche,
And threw it with contempt into a ditch.
71.
And there the body lay, age after age.
Mute, breathing, beating, warm, and undecaying, _610
Like one asleep in a green hermitage,
With gentle smiles about its eyelids playing,
And living in its dreams beyond the rage
Of death or life; while they were still arraying
In liveries ever new, the rapid, blind _615
And fleeting generations of mankind.
72.
And she would write strange dreams upon the brain
Of those who were less beautiful, and make
All harsh and crooked purposes more vain
Than in the desert is the serpent's wake _620
Which the sand covers--all his evil gain
The miser in such dreams would rise and shake
Into a beggar's lap;--the lying scribe
Would his own lies betray without a bribe.
73.
The priests would write an explanation full, _625
Translating hieroglyphics into Greek,
How the God Apis really was a bull,
And nothing more; and bid the herald stick
The same against the temple doors, and pull
The old cant down; they licensed all to speak _630
Whate'er they thought of hawks, and cats, and geese,
By pastoral letters to each diocese.
74.
The king would dress an ape up in his crown
And robes, and seat him on his glorious seat,
And on the right hand of the sunlike throne _635
Would place a gaudy mock-bird to repeat
The chatterings of the monkey.--Every one
Of the prone courtiers crawled to kiss the feet
Of their great Emperor, when the morning came,
And kissed--alas, how many kiss the same! _640
75.
The soldiers dreamed that they were blacksmiths, and
Walked out of quarters in somnambulism;
Round the red anvils you might see them stand
Like Cyclopses in Vulcan's sooty abysm,
Beating their swords to ploughshares;--in a band _645
The gaolers sent those of the liberal schism
Free through the streets of Memphis, much, I wis,
To the annoyance of king Amasis.
76.
And timid lovers who had been so coy,
They hardly knew whether they loved or not, _650
Would rise out of their rest, and take sweet joy,
To the fulfilment of their inmost thought;
And when next day the maiden and the boy
Met one another, both, like sinners caught,
Blushed at the thing which each believed was done _655
Only in fancy--till the tenth moon shone;
77.
And then the Witch would let them take no ill:
Of many thousand schemes which lovers find,
The Witch found one,--and so they took their fill
Of happiness in marriage warm and kind. _660
Friends who, by practice of some envious skill,
Were torn apart--a wide wound, mind from mind!--
She did unite again with visions clear
Of deep affection and of truth sincere.
80.
These were the pranks she played among the cities _665
Of mortal men, and what she did to Sprites
And Gods, entangling them in her sweet ditties
To do her will, and show their subtle sleights,
I will declare another time; for it is
A tale more fit for the weird winter nights _670
Than for these garish summer days, when we
Scarcely believe much more than we can see.
End of Project Gutenberg's The Witch of Atlas, by Percy Bysshe Shelley
Produced by John Bickers, and Dagny
LA GRANDE BRETECHE
(Sequel to "Another Study of Woman.")
By Honore De Balzac
Translated by Ellen Marriage and Clara Bell
LA GRANDE BRETECHE
"Ah! madame," replied the doctor, "I have some appalling stories in my
collection. But each one has its proper hour in a conversation--you know
the pretty jest recorded by Chamfort, and said to the Duc de Fronsac:
'Between your sally and the present moment lie ten bottles of
champagne.'"
"But it is two in the morning, and the story of Rosina has prepared us,"
said the mistress of the house.
"Tell us, Monsieur Bianchon!" was the cry on every side.
The obliging doctor bowed, and silence reigned.
"At about a hundred paces from Vendome, on the banks of the Loir," said
he, "stands an old brown house, crowned with very high roofs, and so
completely isolated that there is nothing near it, not even a fetid
tannery or a squalid tavern, such as are commonly seen outside small
towns. In front of this house is a garden down to the river, where the
box shrubs, formerly clipped close to edge the walks, now straggle
at their own will. A few willows, rooted in the stream, have grown
up quickly like an enclosing fence, and half hide the house. The
wild plants we call weeds have clothed the bank with their beautiful
luxuriance. The fruit-trees, neglected for these ten years past,
no longer bear a crop, and their suckers have formed a thicket. The
espaliers are like a copse. The paths, once graveled, are overgrown with
purslane; but, to be accurate, there is no trace of a path.
"Looking down from the hilltop, to which cling the ruins of the old
castle of the Dukes of Vendome, the only spot whence the eye can
see into this enclosure, we think that at a time, difficult now to
determine, this spot of earth must have been the joy of some country
gentleman devoted to roses and tulips, in a word, to horticulture, but
above all a lover of choice fruit. An arbor is visible, or rather
the wreck of an arbor, and under it a table still stands not entirely
destroyed by time. At the aspect of this garden that is no more, the
negative joys of the peaceful life of the provinces may be divined as we
divine the history of a worthy tradesman when we read the epitaph on his
tomb. To complete the mournful and tender impressions which seize the
soul, on one of the walls there is a sundial graced with this homely
Christian motto, '_Ultimam cogita_.'
"The roof of this house is dreadfully dilapidated; the outside shutters
are always closed; the balconies are hung with swallows' nests; the
doors are for ever shut. Straggling grasses have outlined the flagstones
of the steps with green; the ironwork is rusty. Moon and sun, winter,
summer, and snow have eaten into the wood, warped the boards, peeled
off the paint. The dreary silence is broken only by birds and cats,
polecats, rats, and mice, free to scamper round, and fight, and eat each
other. An invisible hand has written over it all: 'Mystery.'
"If, prompted by curiosity, you go to look at this house from the
street, you will see a large gate, with a round-arched top; the children
have made many holes in it. I learned later that this door had been
blocked for ten years. Through these irregular breaches you will see
that the side towards the courtyard is in perfect harmony with the side
towards the garden. The same ruin prevails. Tufts of weeds outline
the paving-stones; the walls are scored by enormous cracks, and the
blackened coping is laced with a thousand festoons of pellitory. The
stone steps are disjointed; the bell-cord is rotten; the gutter-spouts
broken. What fire from heaven could have fallen there? By what decree
has salt been sown on this dwelling? Has God been mocked here? Or was
France betrayed? These are the questions we ask ourselves. Reptiles
crawl over it, but give no reply. This empty and deserted house is a
vast enigma of which the answer is known to none.
"It was formerly a little domain, held in fief, and is known as La
Grande Breteche. During my stay at Vendome, where Despleins had left me
in charge of a rich patient, the sight of this strange dwelling became
one of my keenest pleasures. Was it not far better than a ruin? Certain
memories of indisputable authenticity attach themselves to a ruin; but
this house, still standing, though being slowly destroyed by an avenging
hand, contained a secret, an unrevealed thought. At the very least,
it testified to a caprice. More than once in the evening I boarded the
hedge, run wild, which surrounded the enclosure. I braved scratches, I
got into this ownerless garden, this plot which was no longer public or
private; I lingered there for hours gazing at the disorder. I would not,
as the price of the story to which this strange scene no doubt was due,
have asked a single question of any gossiping native. On that spot I
wove delightful romances, and abandoned myself to little debauches of
melancholy which enchanted me. If I had known the reason--perhaps quite
commonplace--of this neglect, I should have lost the unwritten poetry
which intoxicated me. To me this refuge represented the most various
phases of human life, shadowed by misfortune; sometimes the peace of the
graveyard without the dead, who speak in the language of epitaphs; one
day I saw in it the home of lepers; another, the house of the Atridae;
but, above all, I found there provincial life, with its contemplative
ideas, its hour-glass existence. I often wept there, I never laughed.
"More than once I felt involuntary terrors as I heard overhead the dull
hum of the wings of some hurrying wood-pigeon. The earth is dank; you
must be on the watch for lizards, vipers, and frogs, wandering about
with the wild freedom of nature; above all, you must have no fear
of cold, for in a few moments you feel an icy cloak settle on your
shoulders, like the Commendatore's hand on Don Giovanni's neck.
"One evening I felt a shudder; the wind had turned an old rusty
weathercock, and the creaking sounded like a cry from the house, at
the very moment when I was finishing a gloomy drama to account for
this monumental embodiment of woe. I returned to my inn, lost in gloomy
thoughts. When I had supped, the hostess came into my room with an air
of mystery, and said, 'Monsieur, here is Monsieur Regnault.'
"'Who is Monsieur Regnault?'
"'What, sir, do you not know Monsieur Regnault?--Well, that's odd,' said
she, leaving the room.
"On a sudden I saw a man appear, tall, slim, dressed in black, hat
in hand, who came in like a ram ready to butt his opponent, showing a
receding forehead, a small pointed head, and a colorless face of the hue
of a glass of dirty water. You would have taken him for an usher. The
stranger wore an old coat, much worn at the seams; but he had a diamond
in his shirt frill, and gold rings in his ears.
"'Monsieur,' said I, 'whom have I the honor of addressing?'--He took a
chair, placed himself in front of my fire, put his hat on my table,
and answered while he rubbed his hands: 'Dear me, it is very
cold.--Monsieur, I am Monsieur Regnault.'
"I was encouraging myself by saying to myself, '_Il bondo cani!_ Seek!'
"'I am,' he went on, 'notary at Vendome.'
"'I am delighted to hear it, monsieur,' I exclaimed. 'But I am not in a
position to make a will for reasons best known to myself.'
"'One moment!' said he, holding up his hand as though to gain silence.
'Allow me, monsieur, allow me! I am informed that you sometimes go to
walk in the garden of la Grande Breteche.'
"'Yes, monsieur.'
"'One moment!' said he, repeating his gesture. 'That constitutes a
misdemeanor. Monsieur, as executor under the will of the late Comtesse
de Merret, I come in her name to beg you to discontinue the practice.
One moment! I am not a Turk, and do not wish to make a crime of it. And
besides, you are free to be ignorant of the circumstances which
compel me to leave the finest mansion in Vendome to fall into ruin.
Nevertheless, monsieur, you must be a man of education, and you should
know that the laws forbid, under heavy penalties, any trespass on
enclosed property. A hedge is the same as a wall. But, the state in
which the place is left may be an excuse for your curiosity. For my
part, I should be quite content to make you free to come and go in the
house; but being bound to respect the will of the testatrix, I have
the honor, monsieur, to beg that you will go into the garden no more.
I myself, monsieur, since the will was read, have never set foot in the
house, which, as I had the honor of informing you, is part of the estate
of the late Madame de Merret. We have done nothing there but verify the
number of doors and windows to assess the taxes I have to pay annually
out of the funds left for that purpose by the late Madame de Merret. Ah!
my dear sir, her will made a great commotion in the town.'
"The good man paused to blow his nose. I respected his volubility,
perfectly understanding that the administration of Madame de Merret's
estate had been the most important event of his life, his reputation,
his glory, his Restoration. As I was forced to bid farewell to my
beautiful reveries and romances, I was to reject learning the truth on
official authority.
"'Monsieur,' said I, 'would it be indiscreet if I were to ask you the
reasons for such eccentricity?'
"At these words an expression, which revealed all the pleasure which
men feel who are accustomed to ride a hobby, overspread the lawyer's
countenance. He pulled up the collar of his shirt with an air, took out
his snuffbox, opened it, and offered me a pinch; on my refusing, he took
a large one. He was happy! A man who has no hobby does not know all
the good to be got out of life. A hobby is the happy medium between a
passion and a monomania. At this moment I understood the whole bearing
of Sterne's charming passion, and had a perfect idea of the delight with
which my uncle Toby, encouraged by Trim, bestrode his hobby-horse.
"'Monsieur,' said Monsieur Regnault, 'I was head-clerk in Monsieur
Roguin's office, in Paris. A first-rate house, which you may have heard
mentioned? No! An unfortunate bankruptcy made it famous.--Not having
money enough to purchase a practice in Paris at the price to which they
were run up in 1816, I came here and bought my predecessor's business.
I had relations in Vendome; among others, a wealthy aunt, who allowed
me to marry her daughter.--Monsieur,' he went on after a little pause,
'three months after being licensed by the Keeper of the Seals, one
evening, as I was going to bed--it was before my marriage--I was sent
for by Madame la Comtesse de Merret, to her Chateau of Merret. Her maid,
a good girl, who is now a servant in this inn, was waiting at my door
with the Countess' own carriage. Ah! one moment! I ought to tell you
that Monsieur le Comte de Merret had gone to Paris to die two months
before I came here. He came to a miserable end, flinging himself into
every kind of dissipation. You understand?
"'On the day when he left, Madame la Comtesse had quitted la Grand
Breteche, having dismantled it. Some people even say that she had
burnt all the furniture, the hangings--in short, all the chattels and
furniture whatever used in furnishing the premises now let by the
said M.--(Dear, what am I saying? I beg your pardon, I thought I was
dictating a lease.)--In short, that she burnt everything in the meadow
at Merret. Have you been to Merret, monsieur?--No,' said he, answering
himself, 'Ah, it is a very fine place.'
"'For about three months previously,' he went on, with a jerk of his
head, 'the Count and Countess had lived in a very eccentric way; they
admitted no visitors; Madame lived on the ground-floor, and Monsieur on
the first floor. When the Countess was left alone, she was never seen
excepting at church. Subsequently, at home, at the chateau, she refused
to see the friends, whether gentlemen or ladies, who went to call on
her. She was already very much altered when she left la Grande Breteche
to go to Merret. That dear lady--I say dear lady, for it was she who
gave me this diamond, but indeed I saw her but once--that kind lady was
very ill; she had, no doubt, given up all hope, for she died without
choosing to send for a doctor; indeed, many of our ladies fancied she
was not quite right in her head. Well, sir, my curiosity was strangely
excited by hearing that Madame de Merret had need of my services. Nor
was I the only person who took an interest in the affair. That very
night, though it was already late, all the town knew that I was going to
Merret.
"'The waiting-woman replied but vaguely to the questions I asked her on
the way; nevertheless, she told me that her mistress had received the
Sacrament in the course of the day at the hands of the Cure of Merret,
and seemed unlikely to live through the night. It was about eleven when
I reached the chateau. I went up the great staircase. After crossing
some large, lofty, dark rooms, diabolically cold and damp, I reached the
state bedroom where the Countess lay. From the rumors that were current
concerning this lady (monsieur, I should never end if I were to repeat
all the tales that were told about her), I had imagined her a coquette.
Imagine, then, that I had great difficulty in seeing her in the great
bed where she was lying. To be sure, to light this enormous room, with
old-fashioned heavy cornices, and so thick with dust that merely to see
it was enough to make you sneeze, she had only an old Argand lamp. Ah!
but you have not been to Merret. Well, the bed is one of those old world
beds, with a high tester hung with flowered chintz. A small table stood
by the bed, on which I saw an "Imitation of Christ," which, by the
way, I bought for my wife, as well as the lamp. There were also a deep
armchair for her confidential maid, and two small chairs. There was no
fire. That was all the furniture, not enough to fill ten lines in an
inventory.
"'My dear sir, if you had seen, as I then saw, that vast room, papered
and hung with brown, you would have felt yourself transported into a
scene of a romance. It was icy, nay more, funereal,' and he lifted his
hand with a theatrical gesture and paused.
"'By dint of seeking, as I approached the bed, at last I saw Madame de
Merret, under the glimmer of the lamp, which fell on the pillows.
Her face was as yellow as wax, and as narrow as two folded hands. The
Countess had a lace cap showing her abundant hair, but as white as linen
thread. She was sitting up in bed, and seemed to keep upright with
great difficulty. Her large black eyes, dimmed by fever, no doubt,
and half-dead already, hardly moved under the bony arch of her
eyebrows.--There,' he added, pointing to his own brow. 'Her forehead was
clammy; her fleshless hands were like bones covered with soft skin;
the veins and muscles were perfectly visible. She must have been very
handsome; but at this moment I was startled into an indescribable
emotion at the sight. Never, said those who wrapped her in her shroud,
had any living creature been so emaciated and lived. In short, it was
awful to behold! Sickness so consumed that woman, that she was no more
than a phantom. Her lips, which were pale violet, seemed to me not to
move when she spoke to me.
"'Though my profession has familiarized me with such spectacles, by
calling me not infrequently to the bedside of the dying to record their
last wishes, I confess that families in tears and the agonies I have
seen were as nothing in comparison with this lonely and silent woman in
her vast chateau. I heard not the least sound, I did not perceive the
movement which the sufferer's breathing ought to have given to the
sheets that covered her, and I stood motionless, absorbed in looking at
her in a sort of stupor. In fancy I am there still. At last her large
eyes moved; she tried to raise her right hand, but it fell back on the
bed, and she uttered these words, which came like a breath, for her
voice was no longer a voice: "I have waited for you with the greatest
impatience." A bright flush rose to her cheeks. It was a great effort to
her to speak.
"'"Madame," I began. She signed to me to be silent. At that moment
the old housekeeper rose and said in my ear, "Do not speak; Madame la
Comtesse is not in a state to bear the slightest noise, and what you say
might agitate her."
"'I sat down. A few instants after, Madame de Merret collected all her
remaining strength to move her right hand, and slipped it, not without
infinite difficulty, under the bolster; she then paused a moment. With
a last effort she withdrew her hand; and when she brought out a sealed
paper, drops of perspiration rolled from her brow. "I place my will in
your hands--Oh! God! Oh!" and that was all. She clutched a crucifix that
lay on the bed, lifted it hastily to her lips, and died.
"'The expression of her eyes still makes me shudder as I think of it.
She must have suffered much! There was joy in her last glance, and it
remained stamped on her dead eyes.
"'I brought away the will, and when it was opened I found that Madame de
Merret had appointed me her executor. She left the whole of her property
to the hospital at Vendome excepting a few legacies. But these were her
instructions as relating to la Grande Breteche: She ordered me to leave
the place, for fifty years counting from the day of her death, in the
state in which it might be at the time of her death, forbidding any one,
whoever he might be, to enter the apartments, prohibiting any repairs
whatever, and even settling a salary to pay watchmen if it were needful
to secure the absolute fulfilment of her intentions. At the expiration
of that term, if the will of the testatrix has been duly carried out,
the house is to become the property of my heirs, for, as you know, a
notary cannot take a bequest. Otherwise la Grande Breteche reverts to
the heirs-at-law, but on condition of fulfilling certain conditions
set forth in a codicil to the will, which is not to be opened till
the expiration of the said term of fifty years. The will has not been
disputed, so----' And without finishing his sentence, the lanky notary
looked at me with an air of triumph; I made him quite happy by offering
him my congratulations.
"'Monsieur,' I said in conclusion, 'you have so vividly impressed
me that I fancy I see the dying woman whiter than her sheets; her
glittering eyes frighten me; I shall dream of her to-night.--But you
must have formed some idea as to the instructions contained in that
extraordinary will.'
"'Monsieur,' said he, with comical reticence, 'I never allow myself
to criticise the conduct of a person who honors me with the gift of a
diamond.'
"However, I soon loosened the tongue of the discreet notary of Vendome,
who communicated to me, not without long digressions, the opinions of
the deep politicians of both sexes whose judgments are law in Vendome.
But these opinions were so contradictory, so diffuse, that I was
near falling asleep in spite of the interest I felt in this authentic
history. The notary's ponderous voice and monotonous accent, accustomed
no doubt to listen to himself and to make himself listened to by his
clients or fellow-townsmen, were too much for my curiosity. Happily, he
soon went away.
"'Ah, ha, monsieur,' said he on the stairs, 'a good many persons would
be glad to live five-and-forty years longer; but--one moment!' and he
laid the first finger of his right hand to his nostril with a cunning
look, as much as to say, 'Mark my words!--To last as long as that--as
long as that,' said he, 'you must not be past sixty now.'
"I closed my door, having been roused from my apathy by this last
speech, which the notary thought very funny; then I sat down in my
armchair, with my feet on the fire-dogs. I had lost myself in a romance
_a la_ Radcliffe, constructed on the juridical base given me by Monsieur
Regnault, when the door, opened by a woman's cautious hand, turned on
the hinges. I saw my landlady come in, a buxom, florid dame, always
good-humored, who had missed her calling in life. She was a Fleming, who
ought to have seen the light in a picture by Teniers.
"'Well, monsieur,' said she, 'Monsieur Regnault has no doubt been giving
you his history of la Grande Breteche?'
"'Yes, Madame Lepas.'
"'And what did he tell you?'
"I repeated in a few words the creepy and sinister story of Madame de
Merret. At each sentence my hostess put her head forward, looking at
me with an innkeeper's keen scrutiny, a happy compromise between the
instinct of a police constable, the astuteness of a spy, and the cunning
of a dealer.
"'My good Madame Lepas,' said I as I ended, 'you seem to know more about
it. Heh? If not, why have you come up to me?'
"'On my word, as an honest woman----'
"'Do not swear; your eyes are big with a secret. You knew Monsieur de
Merret; what sort of man was he?'
"'Monsieur de Merret--well, you see he was a man you never could see
the top of, he was so tall! A very good gentleman, from Picardy, and who
had, as we say, his head close to his cap. He paid for everything down,
so as never to have difficulties with any one. He was hot-tempered, you
see! All our ladies liked him very much.'
"'Because he was hot-tempered?' I asked her.
"'Well, may be,' said she; 'and you may suppose, sir, that a man had to
have something to show for a figurehead before he could marry Madame de
Merret, who, without any reflection on others, was the handsomest and
richest heiress in our parts. She had about twenty thousand francs
a year. All the town was at the wedding; the bride was pretty and
sweet-looking, quite a gem of a woman. Oh, they were a handsome couple
in their day!'
"'And were they happy together?'
"'Hm, hm! so-so--so far as can be guessed, for, as you may suppose, we
of the common sort were not hail-fellow-well-met with them.--Madame de
Merret was a kind woman and very pleasant, who had no doubt sometimes to
put up with her husband's tantrums. But though he was rather haughty, we
were fond of him. After all, it was his place to behave so. When a man
is a born nobleman, you see----'
"'Still, there must have been some catastrophe for Monsieur and Madame
de Merret to part so violently?'
"'I did not say there was any catastrophe, sir. I know nothing about
it.'
"'Indeed. Well, now, I am sure you know everything.'
"'Well, sir, I will tell you the whole story.--When I saw Monsieur
Regnault go up to see you, it struck me that he would speak to you about
Madame de Merret as having to do with la Grande Breteche. That put it
into my head to ask your advice, sir, seeming to me that you are a
man of good judgment and incapable of playing a poor woman like me
false--for I never did any one a wrong, and yet I am tormented by my
conscience. Up to now I have never dared to say a word to the people of
these parts; they are all chatter-mags, with tongues like knives. And
never till now, sir, have I had any traveler here who stayed so long in
the inn as you have, and to whom I could tell the history of the fifteen
thousand francs----'
"'My dear Madame Lepas, if there is anything in your story of a nature
to compromise me,' I said, interrupting the flow of her words, 'I would
not hear it for all the world.'
"'You need have no fears,' said she; 'you will see.'
"Her eagerness made me suspect that I was not the only person to whom
my worthy landlady had communicated the secret of which I was to be the
sole possessor, but I listened.
"'Monsieur,' said she, 'when the Emperor sent the Spaniards here,
prisoners of war and others, I was required to lodge at the charge
of the Government a young Spaniard sent to Vendome on parole.
Notwithstanding his parole, he had to show himself every day to the
sub-prefect. He was a Spanish grandee--neither more nor less. He had
a name in _os_ and _dia_, something like Bagos de Feredia. I wrote his
name down in my books, and you may see it if you like. Ah! he was a
handsome young fellow for a Spaniard, who are all ugly they say. He was
not more than five feet two or three in height, but so well made; and he
had little hands that he kept so beautifully! Ah! you should have
seen them. He had as many brushes for his hands as a woman has for her
toilet. He had thick, black hair, a flame in his eye, a somewhat coppery
complexion, but which I admired all the same. He wore the finest linen
I have ever seen, though I have had princesses to lodge here, and, among
others, General Bertrand, the Duc and Duchesse d'Abrantes, Monsieur
Descazes, and the King of Spain. He did not eat much, but he had such
polite and amiable ways that it was impossible to owe him a grudge for
that. Oh! I was very fond of him, though he did not say four words to me
in a day, and it was impossible to have the least bit of talk with him;
if he was spoken to, he did not answer; it is a way, a mania they all
have, it would seem.
"'He read his breviary like a priest, and went to mass and all the
services quite regularly. And where did he post himself?--we found this
out later.--Within two yards of Madame de Merret's chapel. As he took
that place the very first time he entered the church, no one imagined
that there was any purpose in it. Besides, he never raised his nose
above his book, poor young man! And then, monsieur, of an evening he
went for a walk on the hill among the ruins of the old castle. It was
his only amusement, poor man; it reminded him of his native land. They
say that Spain is all hills!
"'One evening, a few days after he was sent here, he was out very late.
I was rather uneasy when he did not come in till just on the stroke of
midnight; but we all got used to his whims; he took the key of the door,
and we never sat up for him. He lived in a house belonging to us in the
Rue des Casernes. Well, then, one of our stable-boys told us one evening
that, going down to wash the horses in the river, he fancied he had seen
the Spanish Grandee swimming some little way off, just like a fish. When
he came in, I told him to be careful of the weeds, and he seemed put out
at having been seen in the water.
"'At last, monsieur, one day, or rather one morning, we did not find
him in his room; he had not come back. By hunting through his things, I
found a written paper in the drawer of his table, with fifty pieces of
Spanish gold of the kind they call doubloons, worth about five thousand
francs; and in a little sealed box ten thousand francs worth of
diamonds. The paper said that in case he should not return, he left us
this money and these diamonds in trust to found masses to thank God for
his escape and for his salvation.
"'At that time I still had my husband, who ran off in search of him.
And this is the queer part of the story: he brought back the Spaniard's
clothes, which he had found under a big stone on a sort of breakwater
along the river bank, nearly opposite la Grande Breteche. My husband
went so early that no one saw him. After reading the letter, he burnt
the clothes, and, in obedience to Count Feredia's wish, we announced
that he had escaped.
"'The sub-prefect set all the constabulary at his heels; but, pshaw! he
was never caught. Lepas believed that the Spaniard had drowned himself.
I, sir, have never thought so; I believe, on the contrary, that he had
something to do with the business about Madame de Merret, seeing that
Rosalie told me that the crucifix her mistress was so fond of that she
had it buried with her, was made of ebony and silver; now in the early
days of his stay here, Monsieur Feredia had one of ebony and silver
which I never saw later.--And now, monsieur, do not you say that I need
have no remorse about the Spaniard's fifteen thousand francs? Are they
not really and truly mine?'
"'Certainly.--But have you never tried to question Rosalie?' said I.
"'Oh, to be sure I have, sir. But what is to be done? That girl is like
a wall. She knows something, but it is impossible to make her talk.'
"After chatting with me for a few minutes, my hostess left me a prey
to vague and sinister thoughts, to romantic curiosity, and a religious
dread, not unlike the deep emotion which comes upon us when we go into a
dark church at night and discern a feeble light glimmering under a lofty
vault--a dim figure glides across--the sweep of a gown or of a priest's
cassock is audible--and we shiver! La Grande Breteche, with its rank
grasses, its shuttered windows, its rusty iron-work, its locked doors,
its deserted rooms, suddenly rose before me in fantastic vividness. I
tried to get into the mysterious dwelling to search out the heart of
this solemn story, this drama which had killed three persons.
"Rosalie became in my eyes the most interesting being in Vendome. As
I studied her, I detected signs of an inmost thought, in spite of the
blooming health that glowed in her dimpled face. There was in her soul
some element of ruth or of hope; her manner suggested a secret, like
the expression of devout souls who pray in excess, or of a girl who has
killed her child and for ever hears its last cry. Nevertheless, she was
simple and clumsy in her ways; her vacant smile had nothing criminal
in it, and you would have pronounced her innocent only from seeing the
large red and blue checked kerchief that covered her stalwart bust,
tucked into the tight-laced bodice of a lilac- and white-striped gown.
'No,' said I to myself, 'I will not quit Vendome without knowing the
whole history of la Grande Breteche. To achieve this end, I will make
love to Rosalie if it proves necessary.'
"'Rosalie!' said I one evening.
"'Your servant, sir?'
"'You are not married?' She started a little.
"'Oh! there is no lack of men if ever I take a fancy to be miserable!'
she replied, laughing. She got over her agitation at once; for every
woman, from the highest lady to the inn-servant inclusive, has a native
presence of mind.
"'Yes; you are fresh and good-looking enough never to lack lovers! But
tell me, Rosalie, why did you become an inn-servant on leaving Madame de
Merret? Did she not leave you some little annuity?'
"'Oh yes, sir. But my place here is the best in all the town of
Vendome.'
"This reply was such an one as judges and attorneys call evasive.
Rosalie, as it seemed to me, held in this romantic affair the place of
the middle square of the chess-board: she was at the very centre of the
interest and of the truth; she appeared to me to be tied into the knot
of it. It was not a case for ordinary love-making; this girl contained
the last chapter of a romance, and from that moment all my attentions
were devoted to Rosalie. By dint of studying the girl, I observed in
her, as in every woman whom we make our ruling thought, a variety of
good qualities; she was clean and neat; she was handsome, I need not
say; she soon was possessed of every charm that desire can lend to a
woman in whatever rank of life. A fortnight after the notary's visit,
one evening, or rather one morning, in the small hours, I said to
Rosalie:
"'Come, tell me all you know about Madame de Merret.'
"'Oh!' she said, 'I will tell you; but keep the secret carefully.'
"'All right, my child; I will keep all your secrets with a thief's
honor, which is the most loyal known.'
"'If it is all the same to you,' said she, 'I would rather it should be
with your own.'
"Thereupon she set her head-kerchief straight, and settled herself to
tell the tale; for there is no doubt a particular attitude of confidence
and security is necessary to the telling of a narrative. The best tales
are told at a certain hour--just as we are all here at table. No one
ever told a story well standing up, or fasting.
"If I were to reproduce exactly Rosalie's diffuse eloquence, a whole
volume would scarcely contain it. Now, as the event of which she gave me
a confused account stands exactly midway between the notary's gossip and
that of Madame Lepas, as precisely as the middle term of a rule-of-three
sum stands between the first and third, I have only to relate it in as
few words as may be. I shall therefore be brief.
"The room at la Grande Breteche in which Madame de Merret slept was on
the ground floor; a little cupboard in the wall, about four feet deep,
served her to hang her dresses in. Three months before the evening of
which I have to relate the events, Madame de Merret had been seriously
ailing, so much so that her husband had left her to herself, and had his
own bedroom on the first floor. By one of those accidents which it is
impossible to foresee, he came in that evening two hours later than
usual from the club, where he went to read the papers and talk politics
with the residents in the neighborhood. His wife supposed him to have
come in, to be in bed and asleep. But the invasion of France had been
the subject of a very animated discussion; the game of billiards had
waxed vehement; he had lost forty francs, an enormous sum at Vendome,
where everybody is thrifty, and where social habits are restrained
within the bounds of a simplicity worthy of all praise, and the
foundation perhaps of a form of true happiness which no Parisian would
care for.
"For some time past Monsieur de Merret had been satisfied to ask Rosalie
whether his wife was in bed; on the girl's replying always in the
affirmative, he at once went to his own room, with the good faith that
comes of habit and confidence. But this evening, on coming in, he took
it into his head to go to see Madame de Merret, to tell her of his
ill-luck, and perhaps to find consolation. During dinner he had observed
that his wife was very becomingly dressed; he reflected as he came
home from the club that his wife was certainly much better, that
convalescence had improved her beauty, discovering it, as husbands
discover everything, a little too late. Instead of calling Rosalie,
who was in the kitchen at the moment watching the cook and the coachman
playing a puzzling hand at cards, Monsieur de Merret made his way to his
wife's room by the light of his lantern, which he set down at the lowest
step of the stairs. His step, easy to recognize, rang under the vaulted
passage.
"At the instant when the gentleman turned the key to enter his wife's
room, he fancied he heard the door shut of the closet of which I have
spoken; but when he went in, Madame de Merret was alone, standing in
front of the fireplace. The unsuspecting husband fancied that Rosalie
was in the cupboard; nevertheless, a doubt, ringing in his ears like a
peal of bells, put him on his guard; he looked at his wife, and read in
her eyes an indescribably anxious and haunted expression.
"'You are very late,' said she.--Her voice, usually so clear and sweet,
struck him as being slightly husky.
"Monsieur de Merret made no reply, for at this moment Rosalie came in.
This was like a thunder-clap. He walked up and down the room, going from
one window to another at a regular pace, his arms folded.
"'Have you had bad news, or are you ill?' his wife asked him timidly,
while Rosalie helped her to undress. He made no reply.
"'You can go, Rosalie,' said Madame de Merret to her maid; 'I can put in
my curl-papers myself.'--She scented disaster at the mere aspect of her
husband's face, and wished to be alone with him. As soon as Rosalie
was gone, or supposed to be gone, for she lingered a few minutes in the
passage, Monsieur de Merret came and stood facing his wife, and said
coldly, 'Madame, there is some one in your cupboard!' She looked at her
husband calmly, and replied quite simply, 'No, monsieur.'
"This 'No' wrung Monsieur de Merret's heart; he did not believe it; and
yet his wife had never appeared purer or more saintly than she seemed
to be at this moment. He rose to go and open the closet door. Madame de
Merret took his hand, stopped him, looked at him sadly, and said in a
voice of strange emotion, 'Remember, if you should find no one there,
everything must be at an end between you and me.'
"The extraordinary dignity of his wife's attitude filled him with deep
esteem for her, and inspired him with one of those resolves which need
only a grander stage to become immortal.
"'No, Josephine,' he said, 'I will not open it. In either event we
should be parted for ever. Listen; I know all the purity of your soul, I
know you lead a saintly life, and would not commit a deadly sin to save
your life.'--At these words Madame de Merret looked at her husband with
a haggard stare.--'See, here is your crucifix,' he went on. 'Swear to
me before God that there is no one in there; I will believe you--I will
never open that door.'
"Madame de Merret took up the crucifix and said, 'I swear it.'
"'Louder,' said her husband; 'and repeat: "I swear before God that there
is nobody in that closet."' She repeated the words without flinching.
"'That will do,' said Monsieur de Merret coldly. After a moment's
silence: 'You have there a fine piece of work which I never saw before,'
said he, examining the crucifix of ebony and silver, very artistically
wrought.
"'I found it at Duvivier's; last year when that troop of Spanish
prisoners came through Vendome, he bought it of a Spanish monk.'
"'Indeed,' said Monsieur de Merret, hanging the crucifix on its nail;
and he rang the bell.
"He had to wait for Rosalie. Monsieur de Merret went forward quickly
to meet her, led her into the bay of the window that looked on to the
garden, and said to her in an undertone:
"'I know that Gorenflot wants to marry you, that poverty alone prevents
your setting up house, and that you told him you would not be his wife
till he found means to become a master mason.--Well, go and fetch him;
tell him to come here with his trowel and tools. Contrive to wake no one
in his house but himself. His reward will be beyond your wishes. Above
all, go out without saying a word--or else!' and he frowned.
"Rosalie was going, and he called her back. 'Here, take my latch-key,'
said he.
"'Jean!' Monsieur de Merret called in a voice of thunder down the
passage. Jean, who was both coachman and confidential servant, left his
cards and came.
"'Go to bed, all of you,' said his master, beckoning him to come close;
and the gentleman added in a whisper, 'When they are all asleep--mind,
_asleep_--you understand?--come down and tell me.'
"Monsieur de Merret, who had never lost sight of his wife while giving
his orders, quietly came back to her at the fireside, and began to tell
her the details of the game of billiards and the discussion at the club.
When Rosalie returned she found Monsieur and Madame de Merret conversing
amiably.
"Not long before this Monsieur de Merret had had new ceilings made to
all the reception-rooms on the ground floor. Plaster is very scarce at
Vendome; the price is enhanced by the cost of carriage; the gentleman
had therefore had a considerable quantity delivered to him, knowing
that he could always find purchasers for what might be left. It was this
circumstance which suggested the plan he carried out.
"'Gorenflot is here, sir,' said Rosalie in a whisper.
"'Tell him to come in,' said her master aloud.
"Madame de Merret turned paler when she saw the mason.
"'Gorenflot,' said her husband, 'go and fetch some bricks from the
coach-house; bring enough to wall up the door of this cupboard; you can
use the plaster that is left for cement.' Then, dragging Rosalie and the
workman close to him--'Listen, Gorenflot,' said he, in a low voice,
'you are to sleep here to-night; but to-morrow morning you shall have a
passport to take you abroad to a place I will tell you of. I will give
you six thousand francs for your journey. You must live in that town for
ten years; if you find you do not like it, you may settle in another,
but it must be in the same country. Go through Paris and wait there till
I join you. I will there give you an agreement for six thousand francs
more, to be paid to you on your return, provided you have carried out
the conditions of the bargain. For that price you are to keep perfect
silence as to what you have to do this night. To you, Rosalie, I will
secure ten thousand francs, which will not be paid to you till your
wedding day, and on condition of your marrying Gorenflot; but, to get
married, you must hold your tongue. If not, no wedding gift!'
"'Rosalie,' said Madame de Merret, 'come and brush my hair.'
"Her husband quietly walked up and down the room, keeping an eye on the
door, on the mason, and on his wife, but without any insulting display
of suspicion. Gorenflot could not help making some noise. Madame de
Merret seized a moment when he was unloading some bricks, and when her
husband was at the other end of the room to say to Rosalie: 'My dear
child, I will give you a thousand francs a year if only you will tell
Gorenflot to leave a crack at the bottom.' Then she added aloud quite
coolly: 'You had better help him.'
"Monsieur and Madame de Merret were silent all the time while Gorenflot
was walling up the door. This silence was intentional on the husband's
part; he did not wish to give his wife the opportunity of saying
anything with a double meaning. On Madame de Merret's side it was pride
or prudence. When the wall was half built up the cunning mason took
advantage of his master's back being turned to break one of the two
panes in the top of the door with a blow of his pick. By this Madame de
Merret understood that Rosalie had spoken to Gorenflot. They all three
then saw the face of a dark, gloomy-looking man, with black hair and
flaming eyes.
"Before her husband turned round again the poor woman had nodded to the
stranger, to whom the signal was meant to convey, 'Hope.'
"At four o'clock, as the day was dawning, for it was the month of
September, the work was done. The mason was placed in charge of Jean,
and Monsieur de Merret slept in his wife's room.
"Next morning when he got up he said with apparent carelessness, 'Oh,
by the way, I must go to the Maire for the passport.' He put on his hat,
took two or three steps towards the door, paused, and took the crucifix.
His wife was trembling with joy.
"'He will go to Duvivier's,' thought she.
"As soon as he had left, Madame de Merret rang for Rosalie, and then in
a terrible voice she cried: 'The pick! Bring the pick! and set to work.
I saw how Gorenflot did it yesterday; we shall have time to make a gap
and build it up again.'
"In an instant Rosalie had brought her mistress a sort of cleaver; she,
with a vehemence of which no words can give an idea, set to work to
demolish the wall. She had already got out a few bricks, when, turning
to deal a stronger blow than before, she saw behind her Monsieur de
Merret. She fainted away.
"'Lay madame on her bed,' said he coldly.
"Foreseeing what would certainly happen in his absence, he had laid
this trap for his wife; he had merely written to the Maire and sent for
Duvivier. The jeweler arrived just as the disorder in the room had been
repaired.
"'Duvivier,' asked Monsieur de Merret, 'did not you buy some crucifixes
of the Spaniards who passed through the town?'
"'No, monsieur.'
"'Very good; thank you,' said he, flashing a tiger's glare at his wife.
'Jean,' he added, turning to his confidential valet, 'you can serve my
meals here in Madame de Merret's room. She is ill, and I shall not leave
her till she recovers.'
"The cruel man remained in his wife's room for twenty days. During
the earlier time, when there was some little noise in the closet,
and Josephine wanted to intercede for the dying man, he said, without
allowing her to utter a word, 'You swore on the Cross that there was no
one there.'"
After this story all the ladies rose from table, and thus the spell
under which Bianchon had held them was broken. But there were some among
them who had almost shivered at the last words.
ADDENDUM
The following personage appears in other stories of the Human Comedy.
Bianchon, Horace
Father Goriot
The Atheist's Mass
Cesar Birotteau
The Commission in Lunacy
Lost Illusions
A Distinguished Provincial at Paris
A Bachelor's Establishment
The Secrets of a Princess
The Government Clerks
Pierrette
A Study of Woman
Scenes from a Courtesan's Life
Honorine
The Seamy Side of History
The Magic Skin
A Second Home
A Prince of Bohemia
Letters of Two Brides
The Muse of the Department
The Imaginary Mistress
The Middle Classes
Cousin Betty
The Country Parson
In addition, M. Bianchon narrated the following:
Another Study of Woman
End of the Project Gutenberg EBook of La Grande Breteche, by Honore de Balzac
Of glorious dreams--or if eyes needs must weep, _180
Could make their tears all wonder and delight,
She in her crystal vials did closely keep:
If men could drink of those clear vials, 'tis said
The living were not envied of the dead.
18.
Her cave was stored with scrolls of strange device, _185
The works of some Saturnian Archimage,
Which taught the expiations at whose price
Men from the Gods might win that happy age
Too lightly lost, redeeming native vice;
And which might quench the Earth-consuming rage _190
Of gold and blood--till men should live and move
Harmonious as the sacred stars above;
19.
And how all things that seem untameable,
Not to be checked and not to be confined,
Obey the spells of Wisdom's wizard skill; _195
Time, earth, and fire--the ocean and the wind,
And all their shapes--and man's imperial will;
And other scrolls whose writings did unbind
The inmost lore of Love--let the profane
Tremble to ask what secrets they contain. _200
20.
And wondrous works of substances unknown,
To which the enchantment of her father's power
Had changed those ragged blocks of savage stone,
Were heaped in the recesses of her bower;
Carved lamps and chalices, and vials which shone _205
In their own golden beams--each like a flower,
Out of whose depth a fire-fly shakes his light
Under a cypress in a starless night.
21.
At first she lived alone in this wild home,
And her own thoughts were each a minister, _210
Clothing themselves, or with the ocean foam,
Or with the wind, or with the speed of fire,
To work whatever purposes might come
Into her mind; such power her mighty Sire
Had girt them with, whether to fly or run, _215
Through all the regions which he shines upon.
22.
The Ocean-nymphs and Hamadryades,
Oreads and Naiads, with long weedy locks,
Offered to do her bidding through the seas,
Under the earth, and in the hollow rocks, _220
And far beneath the matted roots of trees,
And in the gnarled heart of stubborn oaks,
So they might live for ever in the light
Of her sweet presence--each a satellite.
23.
'This may not be,' the wizard maid replied; _225
'The fountains where the Naiades bedew
Their shining hair, at length are drained and dried;
The solid oaks forget their strength, and strew
Their latest leaf upon the mountains wide;
The boundless ocean like a drop of dew _230
Will be consumed--the stubborn centre must
Be scattered, like a cloud of summer dust.
24.
'And ye with them will perish, one by one;--
If I must sigh to think that this shall be,
If I must weep when the surviving Sun _235
Shall smile on your decay--oh, ask not me
To love you till your little race is run;
I cannot die as ye must--over me
Your leaves shall glance--the streams in which ye dwell
Shall be my paths henceforth, and so--farewell!'-- _240
25.
She spoke and wept:--the dark and azure well
Sparkled beneath the shower of her bright tears,
And every little circlet where they fell
Flung to the cavern-roof inconstant spheres
And intertangled lines of light:--a knell _245
Of sobbing voices came upon her ears
From those departing Forms, o'er the serene
Of the white streams and of the forest green.
26.
All day the wizard lady sate aloof,
Spelling out scrolls of dread antiquity, _250
Under the cavern's fountain-lighted roof;
Or broidering the pictured poesy
Of some high tale upon her growing woof,
Which the sweet splendour of her smiles could dye
In hues outshining heaven--and ever she _255
Added some grace to the wrought poesy.
27.
While on her hearth lay blazing many a piece
Of sandal wood, rare gums, and cinnamon;
Men scarcely know how beautiful fire is--
Each flame of it is as a precious stone _260
Dissolved in ever-moving light, and this
Belongs to each and all who gaze upon.
The Witch beheld it not, for in her hand
She held a woof that dimmed the burning brand.
28.
This lady never slept, but lay in trance _265
All night within the fountain--as in sleep.
Its emerald crags glowed in her beauty's glance;
Through the green splendour of the water deep
She saw the constellations reel and dance
Like fire-flies--and withal did ever keep _270
The tenour of her contemplations calm,
With open eyes, closed feet, and folded palm.
29.
And when the whirlwinds and the clouds descended
From the white pinnacles of that cold hill,
She passed at dewfall to a space extended, _275
Where in a lawn of flowering asphodel
Amid a wood of pines and cedars blended,
There yawned an inextinguishable well
Of crimson fire--full even to the brim,
And overflowing all the margin trim. _280
30.
Within the which she lay when the fierce war
Of wintry winds shook that innocuous liquor
In many a mimic moon and bearded star
O'er woods and lawns;--the serpent heard it flicker
In sleep, and dreaming still, he crept afar-- _285
And when the windless snow descended thicker
Than autumn leaves, she watched it as it came
Melt on the surface of the level flame.
31.
She had a boat, which some say Vulcan wrought
For Venus, as the chariot of her star; _290
But it was found too feeble to be fraught
With all the ardours in that sphere which are,
And so she sold it, and Apollo bought
And gave it to this daughter: from a car
Changed to the fairest and the lightest boat _295
Which ever upon mortal stream did float.
32.
And others say, that, when but three hours old,
The first-born Love out of his cradle lept,
And clove dun Chaos with his wings of gold,
And like a horticultural adept, _300
Stole a strange seed, and wrapped it up in mould,
And sowed it in his mother's star, and kept
Watering it all the summer with sweet dew,
And with his wings fanning it as it grew.
33.
The plant grew strong and green, the snowy flower _305
Fell, and the long and gourd-like fruit began
To turn the light and dew by inward power
To its own substance; woven tracery ran
Of light firm texture, ribbed and branching, o'er
The solid rind, like a leaf's veined fan-- _310
Of which Love scooped this boat--and with soft motion
Piloted it round the circumfluous ocean.
34.
This boat she moored upon her fount, and lit
A living spirit within all its frame,
Breathing the soul of swiftness into it. _315
Couched on the fountain like a panther tame,
One of the twain at Evan's feet that sit--
Or as on Vesta's sceptre a swift flame--
Or on blind Homer's heart a winged thought,--
In joyous expectation lay the boat. _320
35.
Then by strange art she kneaded fire and snow
Together, tempering the repugnant mass
With liquid love--all things together grow
Through which the harmony of love can pass;
And a fair Shape out of her hands did flow-- _325
A living Image, which did far surpass
In beauty that bright shape of vital stone
Which drew the heart out of Pygmalion.
36.
A sexless thing it was, and in its growth
It seemed to have developed no defect _330
Of either sex, yet all the grace of both,--
In gentleness and strength its limbs were decked;
The bosom swelled lightly with its full youth,
The countenance was such as might select
Some artist that his skill should never die, _335
Imaging forth such perfect purity.
37.
From its smooth shoulders hung two rapid wings,
Fit to have borne it to the seventh sphere,
Tipped with the speed of liquid lightenings,
Dyed in the ardours of the atmosphere: _340
She led her creature to the boiling springs
Where the light boat was moored, and said: 'Sit here!'
And pointed to the prow, and took her seat
Beside the rudder, with opposing feet.
38.
And down the streams which clove those mountains vast, _345
Around their inland islets, and amid
The panther-peopled forests whose shade cast
Darkness and odours, and a pleasure hid
In melancholy gloom, the pinnace passed;
By many a star-surrounded pyramid _350
Of icy crag cleaving the purple sky,
And caverns yawning round unfathomably.
39.
The silver noon into that winding dell,
With slanted gleam athwart the forest tops,
Tempered like golden evening, feebly fell; _355
A green and glowing light, like that which drops
From folded lilies in which glow-worms dwell,
When Earth over her face Night's mantle wraps;
Between the severed mountains lay on high,
Over the stream, a narrow rift of sky. _360
40.
And ever as she went, the Image lay
With folded wings and unawakened eyes;
And o'er its gentle countenance did play
The busy dreams, as thick as summer flies,
Chasing the rapid smiles that would not stay, _365
And drinking the warm tears, and the sweet sighs
Inhaling, which, with busy murmur vain,
They had aroused from that full heart and brain.
41.
And ever down the prone vale, like a cloud
Upon a stream of wind, the pinnace went: _370
Now lingering on the pools, in which abode
The calm and darkness of the deep content
In which they paused; now o'er the shallow road
Of white and dancing waters, all besprent
With sand and polished pebbles:--mortal boat _375
In such a shallow rapid could not float.
42.
And down the earthquaking cataracts which shiver
Their snow-like waters into golden air,
Or under chasms unfathomable ever
Sepulchre them, till in their rage they tear _380
A subterranean portal for the river,
It fled--the circling sunbows did upbear
Its fall down the hoar precipice of spray,
Lighting it far upon its lampless way.
43.
And when the wizard lady would ascend _385
The labyrinths of some many-winding vale,
Which to the inmost mountain upward tend--
She called 'Hermaphroditus!'--and the pale
And heavy hue which slumber could extend
Over its lips and eyes, as on the gale _390
A rapid shadow from a slope of grass,
Into the darkness of the stream did pass.
44.
And it unfurled its heaven-coloured pinions,
With stars of fire spotting the stream below;
And from above into the Sun's dominions _395
Flinging a glory, like the golden glow
In which Spring clothes her emerald-winged minions,
All interwoven with fine feathery snow
And moonlight splendour of intensest rime,
With which frost paints the pines in winter time. _400
45.
And then it winnowed the Elysian air
Which ever hung about that lady bright,
With its aethereal vans--and speeding there,
Like a star up the torrent of the night,
Or a swift eagle in the morning glare _405
Breasting the whirlwind with impetuous flight,
The pinnace, oared by those enchanted wings,
Clove the fierce streams towards their upper springs.
46.
The water flashed, like sunlight by the prow
Of a noon-wandering meteor flung to Heaven; _410
The still air seemed as if its waves did flow
In tempest down the mountains; loosely driven
The lady's radiant hair streamed to and fro:
Beneath, the billows having vainly striven
Indignant and impetuous, roared to feel _415
The swift and steady motion of the keel.
47.
Or, when the weary moon was in the wane,
Or in the noon of interlunar night,
The lady-witch in visions could not chain
Her spirit; but sailed forth under the light _420
Of shooting stars, and bade extend amain
Its storm-outspeeding wings, the Hermaphrodite;
She to the Austral waters took her way,
Beyond the fabulous Thamondocana,--
48.
Where, like a meadow which no scythe has shaven, _425
Which rain could never bend, or whirl-blast shake,
With the Antarctic constellations paven,
Canopus and his crew, lay the Austral lake--
There she would build herself a windless haven
Out of the clouds whose moving turrets make _430
The bastions of the storm, when through the sky
The spirits of the tempest thundered by:
49.
A haven beneath whose translucent floor
The tremulous stars sparkled unfathomably,
And around which the solid vapours hoar, _435
Based on the level waters, to the sky
Lifted their dreadful crags, and like a shore
Of wintry mountains, inaccessibly
Hemmed in with rifts and precipices gray,
And hanging crags, many a cove and bay. _440
50.
And whilst the outer lake beneath the lash
Of the wind's scourge, foamed like a wounded thing,
And the incessant hail with stony clash
Ploughed up the waters, and the flagging wing
Of the roused cormorant in the lightning flash _445
Looked like the wreck of some wind-wandering
Fragment of inky thunder-smoke--this haven
Was as a gem to copy Heaven engraven,--
51.
On which that lady played her many pranks,
Circling the image of a shooting star, _450
Even as a tiger on Hydaspes' banks
Outspeeds the antelopes which speediest are,
In her light boat; and many quips and cranks
She played upon the water, till the car
Of the late moon, like a sick matron wan, _455
To journey from the misty east began.
52.
And then she called out of the hollow turrets
Of those high clouds, white, golden and vermilion,
The armies of her ministering spirits--
In mighty legions, million after million, _460
They came, each troop emblazoning its merits
On meteor flags; and many a proud pavilion
Of the intertexture of the atmosphere
They pitched upon the plain of the calm mere.
53.
They framed the imperial tent of their great Queen _465
Of woven exhalations, underlaid
With lambent lightning-fire, as may be seen
A dome of thin and open ivory inlaid
With crimson silk--cressets from the serene
Hung there, and on the water for her tread _470
A tapestry of fleece-like mist was strewn,
Dyed in the beams of the ascending moon.
54.
And on a throne o'erlaid with starlight, caught
Upon those wandering isles of aery dew,
Which highest shoals of mountain shipwreck not, _475
She sate, and heard all that had happened new
Between the earth and moon, since they had brought
The last intelligence--and now she grew
Pale as that moon, lost in the watery night--
And now she wept, and now she laughed outright. _480
55.
These were tame pleasures; she would often climb
The steepest ladder of the crudded rack
Up to some beaked cape of cloud sublime,
And like Arion on the dolphin's back
Ride singing through the shoreless air;--oft-time _485
Following the serpent lightning's winding track,
She ran upon the platforms of the wind,
And laughed to hear the fire-balls roar behind.
56.
And sometimes to those streams of upper air
Which whirl the earth in its diurnal round, _490
She would ascend, and win the spirits there
To let her join their chorus. Mortals found
That on those days the sky was calm and fair,
And mystic snatches of harmonious sound
Wandered upon the earth where'er she passed, _495
And happy thoughts of hope, too sweet to last.
57.
But her choice sport was, in the hours of sleep,
To glide adown old Nilus, where he threads
Egypt and Aethiopia, from the steep
Of utmost Axume, until he spreads, _500
Like a calm flock of silver-fleeced sheep,
His waters on the plain: and crested heads
Of cities and proud temples gleam amid,
And many a vapour-belted pyramid.
58.
By Moeris and the Mareotid lakes, _505
Strewn with faint blooms like bridal chamber floors,
Where naked boys bridling tame water-snakes,
Or charioteering ghastly alligators,
Had left on the sweet waters mighty wakes
Of those huge forms--within the brazen doors _510
Of the great Labyrinth slept both boy and beast,
Tired with the pomp of their Osirian feast.
59.
And where within the surface of the river
The shadows of the massy temples lie,
And never are erased--but tremble ever _515
Like things which every cloud can doom to die,
Through lotus-paven canals, and wheresoever
The works of man pierced that serenest sky
With tombs, and towers, and fanes, 'twas her delight
To wander in the shadow of the night. _520
60.
With motion like the spirit of that wind
Whose soft step deepens slumber, her light feet
Passed through the peopled haunts of humankind.
Scattering sweet visions from her presence sweet,
Through fane, and palace-court, and labyrinth mined _525
With many a dark and subterranean street
Under the Nile, through chambers high and deep
She passed, observing mortals in their sleep.
61.
A pleasure sweet doubtless it was to see
Mortals subdued in all the shapes of sleep. _530
Here lay two sister twins in infancy;
There, a lone youth who in his dreams did weep;
Within, two lovers linked innocently
In their loose locks which over both did creep
Like ivy from one stem;--and there lay calm _535
Old age with snow-bright hair and folded palm.
62.
But other troubled forms of sleep she saw,
Not to be mirrored in a holy song--
Distortions foul of supernatural awe,
And pale imaginings of visioned wrong; _540
And all the code of Custom's lawless law
Written upon the brows of old and young:
'This,' said the wizard maiden, 'is the strife
Which stirs the liquid surface of man's life.'
63.
And little did the sight disturb her soul.-- _545
We, the weak mariners of that wide lake
Where'er its shores extend or billows roll,
Our course unpiloted and starless make
O'er its wild surface to an unknown goal:--
But she in the calm depths her way could take, _550
Where in bright bowers immortal forms abide
Beneath the weltering of the restless tide.
64.
And she saw princes couched under the glow
Of sunlike gems; and round each temple-court
In dormitories ranged, row after row, _555
She saw the priests asleep--all of one sort--
For all were educated to be so.--
The peasants in their huts, and in the port
The sailors she saw cradled on the waves,
And the dead lulled within their dreamless graves. _560
65.
And all the forms in which those spirits lay
Were to her sight like the diaphanous
Veils, in which those sweet ladies oft array
Their delicate limbs, who would conceal from us
Only their scorn of all concealment: they _565
Move in the light of their own beauty thus.
But these and all now lay with sleep upon them,
And little thought a Witch was looking on them.
66.
She, all those human figures breathing there,
Beheld as living spirits--to her eyes _570
The naked beauty of the soul lay bare,
And often through a rude and worn disguise
She saw the inner form most bright and fair--
And then she had a charm of strange device,
Which, murmured on mute lips with tender tone, _575
Could make that spirit mingle with her own.
67.
Alas! Aurora, what wouldst thou have given
For such a charm when Tithon became gray?
Or how much, Venus, of thy silver heaven
Wouldst thou have yielded, ere Proserpina _580
Had half (oh! why not all?) the debt forgiven
Which dear Adonis had been doomed to pay,
To any witch who would have taught you it?
The Heliad doth not know its value yet.
68.
'Tis said in after times her spirit free _585
Knew what love was, and felt itself alone--
But holy Dian could not chaster be
Before she stooped to kiss Endymion,
Than now this lady--like a sexless bee
Tasting all blossoms, and confined to none, _590
Among those mortal forms, the wizard-maiden
Passed with an eye serene and heart unladen.
69.
To those she saw most beautiful, she gave
Strange panacea in a crystal bowl:--
They drank in their deep sleep of that sweet wave, _595
And lived thenceforward as if some control,
Mightier than life, were in them; and the grave
Of such, when death oppressed the weary soul,
Was as a green and overarching bower
Lit by the gems of many a starry flower. _600
70.
For on the night when they were buried, she
Restored the embalmers' ruining, and shook
The light out of the funeral lamps, to be
A mimic day within that deathy nook;
And she unwound the woven imagery _605
Of second childhood's swaddling bands, and took
The coffin, its last cradle, from its niche,
And threw it with contempt into a ditch.
71.
And there the body lay, age after age,
Mute, breathing, beating, warm, and undecaying, _610
Like one asleep in a green hermitage,
With gentle smiles about its eyelids playing,
And living in its dreams beyond the rage
Of death or life; while they were still arraying
In liveries ever new, the rapid, blind _615
And fleeting generations of mankind.
72.
And she would write strange dreams upon the brain
Of those who were less beautiful, and make
All harsh and crooked purposes more vain
Than in the desert is the serpent's wake _620
Which the sand covers--all his evil gain
The miser in such dreams would rise and shake
Into a beggar's lap;--the lying scribe
Would his own lies betray without a bribe.
73.
The priests would write an explanation full, _625
Translating hieroglyphics into Greek,
How the God Apis really was a bull,
And nothing more; and bid the herald stick
The same against the temple doors, and pull
The old cant down; they licensed all to speak _630
Whate'er they thought of hawks, and cats, and geese,
By pastoral letters to each diocese.
74.
The king would dress an ape up in his crown
And robes, and seat him on his glorious seat,
And on the right hand of the sunlike throne _635
Would place a gaudy mock-bird to repeat
The chatterings of the monkey.--Every one
Of the prone courtiers crawled to kiss the feet
Of their great Emperor, when the morning came,
And kissed--alas, how many kiss the same! _640
75.
The soldiers dreamed that they were blacksmiths, and
Walked out of quarters in somnambulism;
Round the red anvils you might see them stand
Like Cyclopses in Vulcan's sooty abysm,
Beating their swords to ploughshares;--in a band _645
The gaolers sent those of the liberal schism
Free through the streets of Memphis, much, I wis,
To the annoyance of king Amasis.
76.
And timid lovers who had been so coy,
They hardly knew whether they loved or not, _650
Would rise out of their rest, and take sweet joy,
To the fulfilment of their inmost thought;
And when next day the maiden and the boy
Met one another, both, like sinners caught,
Blushed at the thing which each believed was done _655
Only in fancy--till the tenth moon shone;
77.
And then the Witch would let them take no ill:
Of many thousand schemes which lovers find,
The Witch found one,--and so they took their fill
Of happiness in marriage warm and kind. _660
Friends who, by practice of some envious skill,
Were torn apart--a wide wound, mind from mind!--
She did unite again with visions clear
Of deep affection and of truth sincere.
78.
These were the pranks she played among the cities _665
Of mortal men, and what she did to Sprites
And Gods, entangling them in her sweet ditties
To do her will, and show their subtle sleights,
I will declare another time; for it is
A tale more fit for the weird winter nights _670
Than for these garish summer days, when we
Scarcely believe much more than we can see.
End of Project Gutenberg's The Witch of Atlas, by Percy Bysshe Shelley
Produced by John Bickers and Dagny
PIERRE GRASSOU
By Honore De Balzac
Translated by Katharine Prescott Wormeley
Dedication
To The Lieutenant-Colonel of Artillery, Periollas, As a Testimony of the
Affectionate Esteem of the Author,
De Balzac
PIERRE GRASSOU
Whenever you have gone to take a serious look at the exhibition of works
of sculpture and painting, such as it has been since the revolution
of 1830, have you not been seized by a sense of uneasiness, weariness,
sadness, at the sight of those long and over-crowded galleries? Since
1830, the true Salon no longer exists. The Louvre has again been taken
by assault,--this time by a populace of artists who have maintained
themselves in it.
In other days, when the Salon presented only the choicest works of art,
it conferred the highest honor on the creations there exhibited. Among
the two hundred selected paintings, the public could still choose: a
crown was awarded to the masterpiece by hands unseen. Eager, impassioned
discussions arose about some picture. The abuse showered on Delacroix,
on Ingres, contributed no less to their fame than the praises and
fanaticism of their adherents. To-day, neither the crowd nor the
criticism grows impassioned about the products of that bazaar. Forced to
make the selection for itself, which in former days the examining
jury made for it, the attention of the public is soon wearied and the
exhibition closes. Before the year 1817 the pictures admitted never went
beyond the first two columns of the long gallery of the old masters; but
in that year, to the great astonishment of the public, they filled the
whole space. Historical, high-art, genre paintings, easel pictures,
landscapes, flowers, animals, and water-colors,--these eight specialties
could surely not offer more than twenty pictures in one year worthy of
the eyes of the public, which, indeed, cannot give its attention to a
greater number of such works. The more the number of artists increases,
the more careful and exacting the jury of admission ought to be.
The true character of the Salon was lost as soon as it spread along
the galleries. The Salon should have remained within fixed limits of
inflexible proportions, where each distinct specialty could show its
masterpieces only. An experience of ten years has shown the excellence
of the former institution. Now, instead of a tournament, we have a mob;
instead of a noble exhibition, we have a tumultuous bazaar; instead of
a choice selection we have a chaotic mass. What is the result? A great
artist is swamped. Decamps' "Turkish Cafe," "Children at a Fountain,"
"Joseph," and "The Torture," would have redounded far more to his credit
if the four pictures had been exhibited in the great Salon with the
hundred good pictures of that year, than his twenty pictures could,
among three thousand others, jumbled together in six galleries.
By some strange contradiction, ever since the doors are open to every
one there has been much talk of unknown and unrecognized genius. When,
twelve years earlier, Ingres' "Courtesan," and that of Sigalon, the
"Medusa" of Gericault, the "Massacre of Scio" by Delacroix, the "Baptism
of Henri IV." by Eugene Deveria, admitted by celebrated artists accused
of jealousy, showed the world, in spite of the denials of criticism,
that young and vigorous palettes existed, no such complaint was made.
Now, when the veriest dauber of canvas can send in his work, the whole
talk is of genius neglected! Where judgment no longer exists, there is
no longer anything judged. But whatever artists may be doing now, they
will come back in time to the examination and selection which presents
their works to the admiration of the crowd for whom they work. Without
selection by the Academy there will be no Salon, and without the Salon
art may perish.
Ever since the catalogue has grown into a book, many names have appeared
in it which still remain in their native obscurity, in spite of the ten
or a dozen pictures attached to them. Among these names perhaps the most
unknown to fame is that of an artist named Pierre Grassou, coming from
Fougeres, and called simply "Fougeres" among his brother-artists, who,
at the present moment holds a place, as the saying is, "in the sun," and
who suggested the rather bitter reflections by which this sketch of
his life is introduced,--reflections that are applicable to many other
individuals of the tribe of artists.
In 1832, Fougeres lived in the rue de Navarin, on the fourth floor of
one of those tall, narrow houses which resemble the obelisk of Luxor,
and possess an alley, a dark little stairway with dangerous turnings,
three windows only on each floor, and, within the building, a courtyard,
or, to speak more correctly, a square pit or well. Above the three or
four rooms occupied by Grassou of Fougeres was his studio, looking over
to Montmartre. This studio was painted in brick-color, for a background;
the floor was tinted brown and well frotted; each chair was furnished
with a bit of carpet bound round the edges; the sofa, simple enough, was
clean as that in the bedroom of some worthy bourgeoise. All these things
denoted the tidy ways of a small mind and the thrift of a poor man. A
bureau was there, in which to put away the studio implements, a table
for breakfast, a sideboard, a secretary; in short, all the articles
necessary to a painter, neatly arranged and very clean. The stove
participated in this Dutch cleanliness, which was all the more visible
because the pure and little changing light from the north flooded with
its cold clear beams the vast apartment. Fougeres, being merely a genre
painter, does not need the immense machinery and outfit which ruin
historical painters; he has never recognized within himself sufficient
faculty to attempt high-art, and he therefore clings to easel painting.
At the beginning of the month of December of that year, a season at
which the bourgeois of Paris conceive, periodically, the burlesque idea
of perpetuating their forms and figures already too bulky in themselves,
Pierre Grassou, who had risen early, prepared his palette, and lighted
his stove, was eating a roll steeped in milk, and waiting till the frost
on his windows had melted sufficiently to let the full light in. The
weather was fine and dry. At this moment the artist, who ate his bread
with that patient, resigned air that tells so much, heard and recognized
the step of a man who had upon his life the influence such men have
on the lives of nearly all artists,--the step of Elie Magus, a
picture-dealer, a usurer in canvas. The next moment Elie Magus entered
and found the painter in the act of beginning his work in the tidy
studio.
"How are you, old rascal?" said the painter.
Fougeres had the cross of the Legion of honor, and Elie Magus bought his
pictures at two and three hundred francs apiece, so he gave himself the
airs of a fine artist.
"Business is very bad," replied Elie. "You artists have such
pretensions! You talk of two hundred francs when you haven't put six
sous' worth of color on a canvas. However, you are a good fellow, I'll
say that. You are steady; and I've come to put a good bit of business in
your way."
"Timeo Danaos et dona ferentes," said Fougeres. "Do you know Latin?"
"No."
"Well, it means that the Greeks never proposed a good bit of business
to the Trojans without getting their fair share of it. In the olden time
they used to say, 'Take my horse.' Now we say, 'Take my bear.' Well,
what do you want, Ulysses-Lagingeole-Elie Magus?"
These words will give an idea of the mildness and wit with which
Fougeres employed what painters call studio fun.
"Well, I don't deny that you are to paint me two pictures for nothing."
"Oh! oh!"
"I'll leave you to do it, or not; I don't ask it. But you're an honest
man."
"Come, out with it!"
"Well, I'm prepared to bring you a father, mother, and only daughter."
"All for me?"
"Yes--they want their portraits taken. These bourgeois--they are crazy
about art--have never dared to enter a studio. The girl has a 'dot' of a
hundred thousand francs. You can paint all three,--perhaps they'll turn
out family portraits."
And with that the old Dutch log of wood who passed for a man and who was
called Elie Magus, interrupted himself to laugh an uncanny laugh which
frightened the painter. He fancied he heard Mephistopheles talking
marriage.
"Portraits bring five hundred francs apiece," went on Elie; "so you can
very well afford to paint me three pictures."
"True for you!" cried Fougeres, gleefully.
"And if you marry the girl, you won't forget me."
"Marry! I?" cried Pierre Grassou,--"I, who have a habit of sleeping
alone; and get up at cock-crow, and all my life arranged--"
"One hundred thousand francs," said Magus, "and a quiet girl, full of
golden tones, as you call 'em, like a Titian."
"What class of people are they?"
"Retired merchants; just now in love with art; have a country-house at
Ville d'Avray, and ten or twelve thousand francs a year."
"What business did they do?"
"Bottles."
"Now don't say that word; it makes me think of corks and sets my teeth
on edge."
"Am I to bring them?"
"Three portraits--I could put them in the Salon; I might go in for
portrait-painting. Well, yes!"
Old Elie descended the staircase to go in search of the Vervelle family.
To know to what extent this proposition would act upon the painter, and
what effect would be produced upon him by the Sieur and Dame Vervelle,
adorned by their only daughter, it is necessary to cast an eye on the
anterior life of Pierre Grassou of Fougeres.
When a pupil, Fougeres had studied drawing with Servin, who was
thought a great draughtsman in academic circles. After that he went to
Schinner's, to learn the secrets of the powerful and magnificent color
which distinguishes that master. Master and scholars were all discreet;
at any rate Pierre discovered none of their secrets. From there he went
to Sommervieux' atelier, to acquire that portion of the art of painting
which is called composition, but composition was shy and distant to him.
Then he tried to snatch from Decamps and Granet the mystery of their
interior effects. The two masters were not robbed. Finally Fougeres
ended his education with Duval-Lecamus. During these studies and
these different transformations Fougeres' habits and ways of life were
tranquil and moral to a degree that furnished matter of jesting to the
various ateliers where he sojourned; but everywhere he disarmed his
comrades by his modesty and by the patience and gentleness of a lamblike
nature. The masters, however, had no sympathy for the good lad; masters
prefer bright fellows, eccentric spirits, droll or fiery, or else gloomy
and deeply reflective, which argue future talent. Everything about
Pierre Grassou smacked of mediocrity. His nickname "Fougeres" (that
of the painter in the play of "The Eglantine") was the source of much
teasing; but, by force of circumstances, he accepted the name of the
town in which he had first seen light.
Grassou of Fougeres resembled his name. Plump and of medium height, he
had a dull complexion, brown eyes, black hair, a turned-up nose, rather
wide mouth, and long ears. His gentle, passive, and resigned air gave a
certain relief to these leading features of a physiognomy that was full
of health, but wanting in action. This young man, born to be a virtuous
bourgeois, having left his native place and come to Paris to be clerk
with a color-merchant (formerly of Mayenne and a distant connection of
the Orgemonts) made himself a painter simply by the fact of an obstinacy
which constitutes the Breton character. What he suffered, the manner in
which he lived during those years of study, God only knows. He suffered
as much as great men suffer when they are hounded by poverty and hunted
like wild beasts by the pack of commonplace minds and by troops of
vanities athirst for vengeance.
As soon as he thought himself able to fly on his own wings, Fougeres
took a studio in the upper part of the rue des Martyrs, where he began
to delve his way. He made his first appearance in 1819. The first
picture he presented to the jury of the Exhibition at the Louvre
represented a village wedding rather laboriously copied from Greuze's
picture. It was rejected. When Fougeres heard of the fatal decision,
he did not fall into one of those fits of epileptic self-love to which
strong natures give themselves up, and which sometimes end in challenges
sent to the director or the secretary of the Museum, or even by threats
of assassination. Fougeres quietly fetched his canvas, wrapped it in
a handkerchief, and brought it home, vowing in his heart that he would
still make himself a great painter. He placed his picture on the easel,
and went to one of his former masters, a man of immense talent,--to
Schinner, a kind and patient artist, whose triumph at that year's Salon
was complete. Fougeres asked him to come and criticise the rejected
work. The great painter left everything and went at once. When poor
Fougeres had placed the work before him Schinner, after a glance,
pressed Fougeres' hand.
"You are a fine fellow," he said; "you've a heart of gold, and I must
not deceive you. Listen; you are fulfilling all the promises you made in
the studios. When you find such things as that at the tip of your brush,
my good Fougeres, you had better leave colors with Brullon, and not take
the canvas of others. Go home early, put on your cotton night-cap, and
be in bed by nine o'clock. The next morning early go to some government
office, ask for a place, and give up art."
"My dear friend," said Fougeres, "my picture is already condemned; it is
not a verdict that I want of you, but the cause of that verdict."
"Well--you paint gray and sombre; you see nature being a crape veil;
your drawing is heavy, pasty; your composition is a medley of Greuze,
who only redeemed his defects by the qualities which you lack."
While detailing these faults of the picture Schinner saw on Fougeres'
face so deep an expression of sadness that he carried him off to dinner
and tried to console him. The next morning at seven o'clock Fougeres was
at his easel working over the rejected picture; he warmed the colors; he
made the corrections suggested by Schinner, he touched up his figures.
Then, disgusted with such patching, he carried the picture to Elie
Magus. Elie Magus, a sort of Dutch-Flemish-Belgian, had three reasons
for being what he became,--rich and avaricious. Coming last from
Bordeaux, he was just starting in Paris, selling old pictures and living
on the boulevard Bonne-Nouvelle. Fougeres, who relied on his palette
to go to the baker's, bravely ate bread and nuts, or bread and milk, or
bread and cherries, or bread and cheese, according to the seasons. Elie
Magus, to whom Pierre offered his first picture, eyed it for some time
and then gave him fifteen francs.
"With fifteen francs a year coming in, and a thousand francs for
expenses," said Fougeres, smiling, "a man will go fast and far."
Elie Magus made a gesture; he bit his thumbs, thinking that he might
have had that picture for five francs.
For several days Pierre walked down from the rue des Martyrs and
stationed himself at the corner of the boulevard opposite to Elie's
shop, whence his eye could rest upon his picture, which did not obtain
any notice from the eyes of the passers along the street. At the end of
a week the picture disappeared; Fougeres walked slowly up and approached
the dealer's shop in a lounging manner. The Jew was at his door.
"Well, I see you have sold my picture."
"No, here it is," said Magus; "I've framed it, to show it to some one
who fancies he knows about painting."
Fougeres had not the heart to return to the boulevard. He set about
another picture, and spent two months upon it,--eating mouse's meals and
working like a galley-slave.
One evening he went to the boulevard, his feet leading him fatefully to
the dealer's shop. His picture was not to be seen.
"I've sold your picture," said Elie Magus, seeing him.
"For how much?"
"I got back what I gave and a small interest. Make me some Flemish
interiors, a lesson of anatomy, landscapes, and such like, and I'll buy
them of you," said Elie.
Fougeres would fain have taken old Magus in his arms; he regarded him as
a father. He went home with joy in his heart; the great painter Schinner
was mistaken after all! In that immense city of Paris there were some
hearts that beat in unison with Pierre's; his talent was understood and
appreciated. The poor fellow of twenty-seven had the innocence of a lad
of sixteen. Another man, one of those distrustful, surly artists, would
have noticed the diabolical look on Elie's face and seen the twitching
of the hairs of his beard, the irony of his moustache, and the movement
of his shoulders which betrayed the satisfaction of Walter Scott's Jew
in swindling a Christian.
Fougeres marched along the boulevard in a state of joy which gave to his
honest face an expression of pride. He was like a schoolboy protecting
a woman. He met Joseph Bridau, one of his comrades, and one of those
eccentric geniuses destined to fame and sorrow. Joseph Bridau, who had,
to use his own expression, a few sous in his pocket, took Fougeres to
the Opera. But Fougeres didn't see the ballet, didn't hear the music; he
was imagining pictures, he was painting. He left Joseph in the middle
of the evening, and ran home to make sketches by lamp-light. He invented
thirty pictures, all reminiscence, and felt himself a man of genius. The
next day he bought colors, and canvases of various dimensions; he piled
up bread and cheese on his table, he filled a water-pot with water,
he laid in a provision of wood for his stove; then, to use a studio
expression, he dug at his pictures. He hired several models and Magus
lent him stuffs.
After two months' seclusion the Breton had finished four pictures. Again
he asked counsel of Schinner, this time adding Bridau to the invitation.
The two painters saw in three of these pictures a servile imitation
of Dutch landscapes and interiors by Metzu, in the fourth a copy of
Rembrandt's "Lesson of Anatomy."
"Still imitating!" said Schinner. "Ah! Fougeres can't manage to be
original."
"You ought to do something else than painting," said Bridau.
"What?" asked Fougeres.
"Fling yourself into literature."
Fougeres lowered his head like a sheep when it rains. Then he asked and
obtained certain useful advice, and retouched his pictures before taking
them to Elie Magus. Elie paid him twenty-five francs apiece. At that
price of course Fougeres earned nothing; neither did he lose, thanks to
his sober living. He made a few excursions to the boulevard to see what
became of his pictures, and there he underwent a singular hallucination.
His neat, clean paintings, hard as tin and shiny as porcelain, were
covered with a sort of mist; they looked like old daubs. Magus was out,
and Pierre could obtain no information on this phenomenon. He fancied
something was wrong with his eyes.
The painter went back to his studio and made more pictures. After seven
years of continued toil Fougeres managed to compose and execute quite
passable work. He did as well as any artist of the second class.
Elie bought and sold all the paintings of the poor Breton, who earned
laboriously about two thousand francs a year while he spent but twelve
hundred.
At the Exhibition of 1829, Leon de Lora, Schinner, and Bridau, who all
three occupied a great position and were, in fact, at the head of the
art movement, were filled with pity for the perseverance and the poverty
of their old friend; and they caused to be admitted into the grand salon
of the Exhibition, a picture by Fougeres. This picture, powerful in
interest but derived from Vigneron as to sentiment and from Dubufe's
first manner as to execution, represented a young man in prison, whose
hair was being cut around the nape of the neck. On one side was
a priest, on the other two women, one old, one young, in tears. A
sheriff's clerk was reading aloud a document. On a wretched table was a
meal, untouched. The light came in through the bars of a window near
the ceiling. It was a picture fit to make the bourgeois shudder, and
the bourgeois shuddered. Fougeres had simply been inspired by the
masterpiece of Gerard Douw; he had turned the group of the "Dropsical
Woman" toward the window, instead of presenting it full front. The
condemned man was substituted for the dying woman--same pallor, same
glance, same appeal to God. Instead of the Dutch doctor, he had painted
the cold, official figure of the sheriff's clerk attired in black; but
he had added an old woman to the young one of Gerard Douw. The cruelly
simple and good-humored face of the executioner completed and dominated
the group. This plagiarism, very cleverly disguised, was not discovered.
The catalogue contained the following:--
510. Grassou de Fougeres (Pierre), rue de Navarin, 2.
Death-toilet of a Chouan, condemned to execution in 1809.
Though wholly second-rate, the picture had immense success, for it
recalled the affair of the "chauffeurs," of Mortagne. A crowd collected
every day before the now fashionable canvas; even Charles X. paused to
look at it. "Madame," being told of the patient life of the poor Breton,
became enthusiastic over him. The Duc d'Orleans asked the price of
the picture. The clergy told Madame la Dauphine that the subject was
suggestive of good thoughts; and there was, in truth, a most satisfying
religious tone about it. Monseigneur the Dauphin admired the dust on
the stone-floor,--a huge blunder, by the way, for Fougeres had painted
greenish tones suggestive of mildew along the base of the walls.
"Madame" finally bought the picture for a thousand francs, and the
Dauphin ordered another like it. Charles X. gave the cross of the Legion
of honor to this son of a peasant who had fought for the royal cause
in 1799. (Joseph Bridau, the great painter, was not yet decorated.) The
minister of the Interior ordered two church pictures of Fougeres.
This Salon of 1829 was to Pierre Grassou his whole fortune, fame,
future, and life. Be original, invent, and you die by inches; copy,
imitate, and you'll live. After this discovery of a gold mine, Grassou
de Fougeres obtained his benefit of the fatal principle to which society
owes the wretched mediocrities to whom are intrusted in these days the
election of leaders in all social classes; who proceed, naturally, to
elect themselves and who wage a bitter war against all true talent. The
principle of election applied indiscriminately is false, and France will
some day abandon it.
Nevertheless the modesty, simplicity, and genuine surprise of the good
and gentle Fougeres silenced all envy and all recriminations. Besides,
he had on his side all of his clan who had succeeded, and all who
expected to succeed. Some persons, touched by the persistent energy of a
man whom nothing had discouraged, talked of Domenichino and said:--
"Perseverance in the arts should be rewarded. Grassou hasn't stolen his
successes; he has delved for ten years, the poor dear man!"
That exclamation of "poor dear man!" counted for half in the support
and the congratulations which the painter received. Pity sets up
mediocrities as envy pulls down great talents, and in equal numbers.
The newspapers, it is true, did not spare criticism, but the chevalier
Fougeres digested them as he had digested the counsel of his friends,
with angelic patience.
Possessing, by this time, fifteen thousand francs, laboriously earned,
he furnished an apartment and studio in the rue de Navarin, and painted
the picture ordered by Monseigneur the Dauphin, also the two church
pictures, and delivered them at the time agreed on, with a punctuality
that was very discomforting to the exchequer of the ministry, accustomed
to a different course of action. But--admire the good fortune of men who
are methodical--if Grassou, belated with his work, had been caught by
the revolution of July he would not have got his money.
By the time he was thirty-seven Fougeres had manufactured for Elie Magus
some two hundred pictures, all of them utterly unknown, by the help of
which he had attained to that satisfying manner, that point of execution
before which the true artist shrugs his shoulders and the bourgeoisie
worships. Fougeres was dear to friends for rectitude of ideas, for
steadiness of sentiment, absolute kindliness, and great loyalty; though
they had no esteem for his palette, they loved the man who held it.
"What a misfortune it is that Fougeres has the vice of painting!" said
his comrades.
But for all this, Grassou gave excellent counsel, like those
feuilletonists incapable of writing a book who know very well where a
book is wanting. There was this difference, however, between literary
critics and Fougeres; he was eminently sensitive to beauties; he felt
them, he acknowledged them, and his advice was instinct with a spirit
of justice that made the justness of his remarks acceptable. After
the revolution of July, Fougeres sent about ten pictures a year to the
Salon, of which the jury admitted four or five. He lived with the most
rigid economy, his household being managed solely by an old charwoman.
For all amusement he visited his friends, he went to see works of art,
he allowed himself a few little trips about France, and he planned to go
to Switzerland in search of inspiration. This detestable artist was an
excellent citizen; he mounted guard duly, went to reviews, and paid his
rent and provision-bills with bourgeois punctuality.
Having lived all his life in toil and poverty, he had never had the time
to love. Poor and a bachelor, until now he did not desire to complicate
his simple life. Incapable of devising any means of increasing his
little fortune, he carried, every three months, to his notary, Cardot,
his quarterly earnings and economies. When the notary had received
about three thousand francs he invested them in some first mortgage, the
interest of which he drew himself and added to the quarterly payments
made to him by Fougeres. The painter was awaiting the fortunate moment
when his property thus laid by would give him the imposing income of two
thousand francs, to allow himself the otium cum dignitate of the
artist and paint pictures; but oh! what pictures! true pictures! each a
finished picture! chouette, Koxnoff, chocnosoff! His future, his dreams
of happiness, the superlative of his hopes--do you know what it was?
To enter the Institute and obtain the grade of officer of the Legion
of honor; to sit down beside Schinner and Leon de Lora, to reach the
Academy before Bridau, to wear a rosette in his buttonhole! What a
dream! It is only commonplace men who think of everything.
Hearing the sound of several steps on the staircase, Fougeres rubbed up
his hair, buttoned his jacket of bottle-green velveteen, and was not a
little amazed to see, entering his doorway, a simpleton face vulgarly
called in studio slang a "melon." This fruit surmounted a pumpkin,
clothed in blue cloth adorned with a bunch of tintinnabulating baubles.
The melon puffed like a walrus; the pumpkin advanced on turnips,
improperly called legs. A true painter would have turned the little
bottle-vendor off at once, assuring him that he didn't paint vegetables.
This painter looked at his client without a smile, for Monsieur Vervelle
wore a three-thousand-franc diamond in the bosom of his shirt.
Fougeres glanced at Magus and said: "There's fat in it!" using a slang
term then much in vogue in the studios.
Hearing those words Monsieur Vervelle frowned. The worthy bourgeois drew
after him another complication of vegetables in the persons of his wife
and daughter. The wife had a fine veneer of mahogany on her face, and
in figure she resembled a cocoa-nut, surmounted by a head and tied in
around the waist. She pivoted on her legs, which were tap-rooted,
and her gown was yellow with black stripes. She proudly exhibited
unutterable mittens on a puffy pair of hands; the plumes of a
first-class funeral floated on an over-flowing bonnet; laces adorned
her shoulders, as round behind as they were before; consequently, the
spherical form of the cocoa-nut was perfect. Her feet, of a kind that
painters call abatis, rose above the varnished leather of the shoes in a
swelling that was some inches high. How the feet were ever got into the
shoes, no one knows.
Following these vegetable parents was a young asparagus, who presented
a tiny head with smoothly banded hair of the yellow-carroty tone that a
Roman adores, long, stringy arms, a fairly white skin with reddish spots
upon it, large innocent eyes, and white lashes, scarcely any brows, a
leghorn bonnet bound with white satin and adorned with two honest bows
of the same satin, hands virtuously red, and the feet of her mother. The
faces of these three beings wore, as they looked round the studio, an
air of happiness which bespoke in them a respectable enthusiasm for Art.
"So it is you, monsieur, who are going to take our likenesses?" said the
father, assuming a jaunty air.
"Yes, monsieur," replied Grassou.
"Vervelle, he has the cross!" whispered the wife to the husband while
the painter's back was turned.
"Should I be likely to have our portraits painted by an artist who
wasn't decorated?" returned the former bottle-dealer.
Elie Magus here bowed to the Vervelle family and went away. Grassou
accompanied him to the landing.
"There's no one but you who would fish up such whales."
"One hundred thousand francs of 'dot'!"
"Yes, but what a family!"
"Three hundred thousand francs of expectations, a house in the rue
Boucherat, and a country-house at Ville d'Avray!"
"Bottles and corks! bottles and corks!" said the painter; "they set my
teeth on edge."
"Safe from want for the rest of your days," said Elie Magus as he
departed.
That idea entered the head of Pierre Grassou as the daylight had burst
into his garret that morning.
While he posed the father of the young person, he thought the
bottle-dealer had a good countenance, and he admired the face full
of violent tones. The mother and daughter hovered about the easel,
marvelling at all his preparations; they evidently thought him a
demigod. This visible admiration pleased Fougeres. The golden calf threw
upon the family its fantastic reflections.
"You must earn lots of money; but of course you don't spend it as you
get it," said the mother.
"No, madame," replied the painter; "I don't spend it; I have not the
means to amuse myself. My notary invests my money; he knows what I have;
as soon as I have taken him the money I never think of it again."
"I've always been told," cried old Vervelle, "that artists were baskets
with holes in them."
"Who is your notary--if it is not indiscreet to ask?" said Madame
Vervelle.
"A good fellow, all round," replied Grassou. "His name is Cardot."
"Well, well! if that isn't a joke!" exclaimed Vervelle. "Cardot is our
notary too."
"Take care! don't move," said the painter.
"Do pray hold still, Antenor," said the wife. "If you move about you'll
make monsieur miss; you should just see him working, and then you'd
understand."
"Oh! why didn't you have me taught the arts?" said Mademoiselle Vervelle
to her parents.
"Virginie," said her mother, "a young person ought not to learn certain
things. When you are married--well, till then, keep quiet."
During this first sitting the Vervelle family became almost intimate
with the worthy artist. They were to come again two days later. As they
went away the father told Virginie to walk in front; but in spite of
this separation, she overheard the following words, which naturally
awakened her curiosity.
"Decorated--thirty-seven years old--an artist who gets orders--puts his
money with our notary. We'll consult Cardot. Hein! Madame de Fougeres!
not a bad name--doesn't look like a bad man either! One might prefer a
merchant; but before a merchant retires from business one can never know
what one's daughter may come to; whereas an economical artist--and then
you know we love Art--Well, we'll see!"
While the Vervelle family discussed Pierre Grassou, Pierre Grassou
discussed in his own mind the Vervelle family. He found it impossible to
stay peacefully in his studio, so he took a walk on the boulevard, and
looked at all the red-haired women who passed him. He made a series of
the oddest reasonings to himself: gold was the handsomest of metals; a
tawny yellow represented gold; the Romans were fond of red-haired women,
and he turned Roman, etc. After two years of marriage what man would
ever care about the color of his wife's hair? Beauty fades,--but
ugliness remains! Money is one-half of all happiness. That night when he
went to bed the painter had come to think Virginie Vervelle charming.
When the three Vervelles arrived on the day of the second sitting the
artist received them with smiles. The rascal had shaved and put on clean
linen; he had also arranged his hair in a pleasing manner, and chosen
a very becoming pair of trousers and red leather slippers with pointed
toes. The family replied with smiles as flattering as those of the
artist. Virginie became the color of her hair, lowered her eyes, and
turned aside her head to look at the sketches. Pierre Grassou thought
these little affectations charming; Virginie had such grace; happily she
didn't look like her father or her mother; but whom did she look like?
During this sitting there were little skirmishes between the family
and the painter, who had the audacity to call pere Vervelle witty. This
flattery brought the family on the double-quick to the heart of the
artist; he gave a drawing to the daughter, and a sketch to the mother.
"What! for nothing?" they said.
Pierre Grassou could not help smiling.
"You shouldn't give away your pictures in that way; they are money,"
said old Vervelle.
At the third sitting pere Vervelle mentioned a fine gallery of pictures
which he had in his country-house at Ville d'Avray--Rubens, Gerard Douw,
Mieris, Terburg, Rembrandt, Titian, Paul Potter, etc.
"Monsieur Vervelle has been very extravagant," said Madame Vervelle,
ostentatiously. "He has over one hundred thousand francs' worth of
pictures."
"I love Art," said the former bottle-dealer.
When Madame Vervelle's portrait was begun that of her husband was nearly
finished, and the enthusiasm of the family knew no bounds. The notary
had spoken in the highest praise of the painter. Pierre Grassou was, he
said, one of the most honest fellows on earth; he had laid by thirty-six
thousand francs; his days of poverty were over; he now saved about ten
thousand francs a year and capitalized the interest; in short, he was
incapable of making a woman unhappy. This last remark had enormous
weight in the scales. Vervelle's friends now heard of nothing but the
celebrated painter Fougeres.
The day on which Fougeres began the portrait of Mademoiselle Virginie,
he was virtually son-in-law to the Vervelle family. The three Vervelles
bloomed out in this studio, which they were now accustomed to consider
as one of their residences; there was to them an inexplicable attraction
in this clean, neat, pretty, and artistic abode. Abyssus abyssum, the
commonplace attracts the commonplace. Toward the end of the sitting the
stairway shook, the door was violently thrust open by Joseph Bridau; he
came like a whirlwind, his hair flying. He showed his grand haggard face
as he looked about him, casting everywhere the lightning of his glance;
then he walked round the whole studio, and returned abruptly to Grassou,
pulling his coat together over the gastric region, and endeavouring, but
in vain, to button it, the button mould having escaped from its capsule
of cloth.
"Wood is dear," he said to Grassou.
"Ah!"
"The British are after me" (slang term for creditors) "Gracious! do you
paint such things as that?"
"Hold your tongue!"
"Ah! to be sure, yes."
The Vervelle family, extremely shocked by this extraordinary apparition,
passed from its ordinary red to a cherry-red, two shades deeper.
"Brings in, hey?" continued Joseph. "Any shot in your locker?"
"How much do you want?"
"Five hundred. I've got one of those bull-dog dealers after me, and if
the fellow once gets his teeth in he won't let go while there's a bit of
me left. What a crew!"
"I'll write you a line for my notary."
"Have you got a notary?"
"Yes."
"That explains to me why you still make cheeks with pink tones like a
perfumer's sign."
Grassou could not help coloring, for Virginie was sitting.
"Take Nature as you find her," said the great painter, going on with his
lecture. "Mademoiselle is red-haired. Well, is that a sin? All things
are magnificent in painting. Put some vermillion on your palette, and
warm up those cheeks; touch in those little brown spots; come, butter it
well in. Do you pretend to have more sense than Nature?"
"Look here," said Fougeres, "take my place while I go and write that
note."
Vervelle rolled to the table and whispered in Grassou's ear:--
"Won't that country lout spoilt it?"
"If he would only paint the portrait of your Virginie it would be worth
a thousand times more than mine," replied Fougeres, vehemently.
Hearing that reply the bourgeois beat a quiet retreat to his wife, who
was stupefied by the invasion of this ferocious animal, and very uneasy
at his co-operation in her daughter's portrait.
"Here, follow these indications," said Bridau, returning the palette,
and taking the note. "I won't thank you. I can go back now to d'Arthez'
chateau, where I am doing a dining-room, and Leon de Lora the tops of
the doors--masterpieces! Come and see us."
And off he went without taking leave, having had enough of looking at
Virginie.
"Who is that man?" asked Madame Vervelle.
"A great artist," answered Grassou.
There was silence for a moment.
"Are you quite sure," said Virginie, "that he has done no harm to my
portrait? He frightened me."
"He has only done it good," replied Grassou.
"Well, if he is a great artist, I prefer a great artist like you," said
Madame Vervelle.
The ways of genius had ruffled up these orderly bourgeois.
The phase of autumn so pleasantly named "Saint Martin's summer" was
just beginning. With the timidity of a neophyte in presence of a man of
genius, Vervelle risked giving Fougeres an invitation to come out to
his country-house on the following Sunday. He knew, he said, how little
attraction a plain bourgeois family could offer to an artist.
"You artists," he continued, "want emotions, great scenes, and witty
talk; but you'll find good wines, and I rely on my collection of
pictures to compensate an artist like you for the bore of dining with
mere merchants."
This form of idolatry, which stroked his innocent self-love, was
charming to our poor Pierre Grassou, so little accustomed to such
compliments. The honest artist, that atrocious mediocrity, that heart
of gold, that loyal soul, that stupid draughtsman, that worthy fellow,
decorated by royalty itself with the Legion of honor, put himself under
arms to go out to Ville d'Avray and enjoy the last fine days of the
year. The painter went modestly by public conveyance, and he could not
but admire the beautiful villa of the bottle-dealer, standing in a park
of five acres at the summit of Ville d'Avray, commanding a noble view
of the landscape. Marry Virginie, and have that beautiful villa some day
for his own!
He was received by the Vervelles with an enthusiasm, a joy, a
kindliness, a frank bourgeois absurdity which confounded him. It was
indeed a day of triumph. The prospective son-in-law was marched about
the grounds on the nankeen-colored paths, all raked as they should be
for the steps of so great a man. The trees themselves looked brushed and
combed, and the lawns had just been mown. The pure country air wafted
to the nostrils a most enticing smell of cooking. All things about the
mansion seemed to say:
"We have a great artist among us."
Little old Vervelle himself rolled like an apple through his park, the
daughter meandered like an eel, the mother followed with dignified step.
These three beings never let go of Pierre Grassou for one moment in seven
hours. After dinner, the length of which equalled its
magnificence, Monsieur and Madame Vervelle reached the moment of their
grand theatrical effect,--the opening of the picture gallery illuminated
by lamps, the reflections of which were managed with the utmost care.
Three neighbours, also retired merchants, an old uncle (from whom were
expectations), an elderly Demoiselle Vervelle, and a number of other
guests invited to be present at this ovation to a great artist followed
Grassou into the picture gallery, all curious to hear his opinion of the
famous collection of pere Vervelle, who was fond of oppressing them with
the fabulous value of his paintings. The bottle-merchant seemed to have
the idea of competing with King Louis-Philippe and the galleries of
Versailles.
The pictures, magnificently framed, each bore labels on which was read
in black letters on a gold ground:
Rubens
Dance of fauns and nymphs
Rembrandt
Interior of a dissecting room. The physician van Tromp
instructing his pupils.
In all, there were one hundred and fifty pictures, varnished and dusted.
Some were covered with green baize curtains which were not undrawn in
presence of young ladies.
Pierre Grassou stood with arms pendent, gaping mouth, and no word upon
his lips as he recognized half his own pictures in these works of art.
He was Rubens, he was Rembrandt, Mieris, Metzu, Paul Potter, Gerard
Douw! He was twenty great masters all by himself.
"What is the matter? You've turned pale!"
"Daughter, a glass of water! quick!" cried Madame Vervelle. The painter
took pere Vervelle by the button of his coat and led him to a corner on
pretence of looking at a Murillo. Spanish pictures were then the rage.
"You bought your pictures from Elie Magus?"
"Yes, all originals."
"Between ourselves, tell me what he made you pay for those I shall point
out to you."
Together they walked round the gallery. The guests were amazed at the
gravity with which the artist proceeded, in company with the host, to
examine each picture.
"Three thousand francs," said Vervelle in a whisper, as they reached the
last, "but I tell everybody forty thousand."
"Forty thousand for a Titian!" said the artist, aloud. "Why, it is
nothing at all!"
"Didn't I tell you," said Vervelle, "that I had three hundred thousand
francs' worth of pictures?"
"I painted those pictures," said Pierre Grassou in Vervelle's ear, "and
I sold them one by one to Elie Magus for less than ten thousand francs
the whole lot."
"Prove it to me," said the bottle-dealer, "and I double my daughter's
'dot,' for if it is so, you are Rubens, Rembrandt, Titian, Gerard Douw!"
"And Magus is a famous picture-dealer!" said the painter, who now saw
the meaning of the misty and aged look imparted to his pictures in
Elie's shop, and the utility of the subjects the picture-dealer had
required of him.
Far from losing the esteem of his admiring bottle-merchant, Monsieur
de Fougeres (for so the family persisted in calling Pierre Grassou)
advanced so much that when the portraits were finished he presented them
gratuitously to his father-in-law, his mother-in-law and his wife.
At the present day, Pierre Grassou, who never misses exhibiting at the
Salon, passes in bourgeois regions for a fine portrait-painter. He earns
some twenty thousand francs a year and spoils a thousand francs' worth
of canvas. His wife has six thousand francs a year in dowry, and he
lives with his father-in-law. The Vervelles and the Grassous, who agree
delightfully, keep a carriage, and are the happiest people on earth.
Pierre Grassou never emerges from the bourgeois circle, in which he
is considered one of the greatest artists of the period. Not a family
portrait is painted between the barriere du Trone and the rue du Temple
that is not done by this great painter; none of them costs less than
five hundred francs. The great reason which the bourgeois families have
for employing him is this:--
"Say what you will of him, he lays by twenty thousand francs a year with
his notary."
As Grassou took a creditable part on the occasion of the riots of May
12th, he was appointed an officer of the Legion of honor. He is a major
in the National Guard. The Museum of Versailles felt it incumbent to
order a battle-piece of so excellent a citizen, who thereupon walked
about Paris to meet his old comrades and have the happiness of saying to
them:--
"The King has given me an order for the Museum of Versailles."
Madame de Fougeres adores her husband, to whom she has presented two
children. This painter, a good father and a good husband, is unable to
eradicate from his heart a fatal thought, namely, that artists laugh at
his work; that his name is a term of contempt in the studios; and that
the feuilletons take no notice of his pictures. But he still works on;
he aims for the Academy, which, undoubtedly, he will enter. And--oh!
vengeance which dilates his heart!--he buys the pictures of celebrated
artists who are pinched for means, and he substitutes these true works
of art that are not his own for the wretched daubs in the collection at
Ville d'Avray.
There are many mediocrities more aggressive and more mischievous than
that of Pierre Grassou, who is, moreover, anonymously benevolent and
truly obliging.
ADDENDUM
The following personages appear in other stories of the Human Comedy.
Bridau, Joseph
The Purse
A Bachelor's Establishment
A Distinguished Provincial at Paris
A Start in Life
Modeste Mignon
Another Study of Woman
Letters of Two Brides
Cousin Betty
The Member for Arcis
Cardot (Parisian notary)
The Muse of the Department
A Man of Business
Jealousies of a Country Town
The Middle Classes
Cousin Pons
Grassou, Pierre
A Bachelor's Establishment
Cousin Betty
The Middle Classes
Cousin Pons
Lora, Leon de
The Unconscious Humorists
A Bachelor's Establishment
A Start in Life
Honorine
Cousin Betty
Beatrix
Magus, Elie
The Vendetta
A Marriage Settlement
A Bachelor's Establishment
Cousin Pons
Schinner, Hippolyte
The Purse
A Bachelor's Establishment
A Start in Life
Albert Savarus
The Government Clerks
Modeste Mignon
The Imaginary Mistress
The Unconscious Humorists
End of the Project Gutenberg EBook of Pierre Grassou, by Honore de Balzac
and daughter. The wife had a fine veneer of mahogany on her face, and
in figure she resembled a cocoa-nut, surmounted by a head and tied in
around the waist. She pivoted on her legs, which were tap-rooted,
and her gown was yellow with black stripes. She proudly exhibited
unutterable mittens on a puffy pair of hands; the plumes of a
first-class funeral floated on an over-flowing bonnet; laces adorned
her shoulders, as round behind as they were before; consequently, the
spherical form of the cocoa-nut was perfect. Her feet, of a kind that
painters call abatis, rose above the varnished leather of the shoes in a
swelling that was some inches high. How the feet were ever got into the
shoes, no one knows.
Following these vegetable parents was a young asparagus, who presented
a tiny head with smoothly banded hair of the yellow-carroty tone that a
Roman adores, long, stringy arms, a fairly white skin with reddish spots
upon it, large innocent eyes, and white lashes, scarcely any brows, a
leghorn bonnet bound with white satin and adorned with two honest bows
of the same satin, hands virtuously red, and the feet of her mother. The
faces of these three beings wore, as they looked round the studio, an
air of happiness which bespoke in them a respectable enthusiasm for Art.
"So it is you, monsieur, who are going to take our likenesses?" said the
father, assuming a jaunty air.
"Yes, monsieur," replied Grassou.
"Vervelle, he has the cross!" whispered the wife to the husband while
the painter's back was turned.
"Should I be likely to have our portraits painted by an artist who
wasn't decorated?" returned the former bottle-dealer.
Elie Magus here bowed to the Vervelle family and went away. Grassou
accompanied him to the landing.
"There's no one but you who would fish up such whales."
"One hundred thousand francs of 'dot'!"
"Yes, but what a family!"
"Three hundred thousand francs of expectations, a house in the rue
Boucherat, and a country-house at Ville d'Avray!"
"Bottles and corks! bottles and corks!" said the painter; "they set my
teeth on edge."
"Safe from want for the rest of your days," said Elie Magus as he
departed.
That idea entered the head of Pierre Grassou as the daylight had burst
into his garret that morning.
While he posed the father of the young person, he thought the
bottle-dealer had a good countenance, and he admired the face full
of violent tones. The mother and daughter hovered about the easel,
marvelling at all his preparations; they evidently thought him a
demigod. This visible admiration pleased Fougeres. The golden calf threw
upon the family its fantastic reflections.
"You must earn lots of money; but of course you don't spend it as you
get it," said the mother.
"No, madame," replied the painter; "I don't spend it; I have not the
means to amuse myself. My notary invests my money; he knows what I have;
as soon as I have taken him the money I never think of it again."
"I've always been told," cried old Vervelle, "that artists were baskets
with holes in them."
"Who is your notary--if it is not indiscreet to ask?" said Madame
Vervelle.
"A good fellow, all round," replied Grassou. "His name is Cardot."
"Well, well! if that isn't a joke!" exclaimed Vervelle. "Cardot is our
notary too."
"Take care! don't move," said the painter.
"Do pray hold still, Antenor," said the wife. "If you move about you'll
make monsieur miss; you should just see him working, and then you'd
understand."
"Oh! why didn't you have me taught the arts?" said Mademoiselle Vervelle
to her parents.
"Virginie," said her mother, "a young person ought not to learn certain
things. When you are married--well, till then, keep quiet."
During this first sitting the Vervelle family became almost intimate
with the worthy artist. They were to come again two days later. As they
went away the father told Virginie to walk in front; but in spite of
this separation, she overheard the following words, which naturally
awakened her curiosity.
"Decorated--thirty-seven years old--an artist who gets orders--puts his
money with our notary. We'll consult Cardot. Hein! Madame de Fougeres!
not a bad name--doesn't look like a bad man either! One might prefer a
merchant; but before a merchant retires from business one can never know
what one's daughter may come to; whereas an economical artist--and then
you know we love Art--Well, we'll see!"
While the Vervelle family discussed Pierre Grassou, Pierre Grassou
discussed in his own mind the Vervelle family. He found it impossible to
stay peacefully in his studio, so he took a walk on the boulevard, and
looked at all the red-haired women who passed him. He made a series of
the oddest reasonings to himself: gold was the handsomest of metals; a
tawny yellow represented gold; the Romans were fond of red-haired women,
and he turned Roman, etc. After two years of marriage what man would
ever care about the color of his wife's hair? Beauty fades,--but
ugliness remains! Money is one-half of all happiness. That night when he
went to bed the painter had come to think Virginie Vervelle charming.
When the three Vervelles arrived on the day of the second sitting the
artist received them with smiles. The rascal had shaved and put on clean
linen; he had also arranged his hair in a pleasing manner, and chosen
a very becoming pair of trousers and red leather slippers with pointed
toes. The family replied with smiles as flattering as those of the
artist. Virginie became the color of her hair, lowered her eyes, and
turned aside her head to look at the sketches. Pierre Grassou thought
these little affectations charming, Virginie had such grace; happily she
didn't look like her father or her mother; but whom did she look like?
During this sitting there were little skirmishes between the family
and the painter, who had the audacity to call pere Vervelle witty. This
flattery brought the family on the double-quick to the heart of the
artist; he gave a drawing to the daughter, and a sketch to the mother.
"What! for nothing?" they said.
Pierre Grassou could not help smiling.
"You shouldn't give away your pictures in that way; they are money,"
said old Vervelle.
At the third sitting pere Vervelle mentioned a fine gallery of pictures
which he had in his country-house at Ville d'Avray--Rubens, Gerard Douw,
Mieris, Terburg, Rembrandt, Titian, Paul Potter, etc.
"Monsieur Vervelle has been very extravagant," said Madame Vervelle,
ostentatiously. "He has over one hundred thousand francs' worth of
pictures."
"I love Art," said the former bottle-dealer.
When Madame Vervelle's portrait was begun that of her husband was nearly
finished, and the enthusiasm of the family knew no bounds. The notary
had spoken in the highest praise of the painter. Pierre Grassou was, he
said, one of the most honest fellows on earth; he had laid by thirty-six
thousand francs; his days of poverty were over; he now saved about ten
thousand francs a year and capitalized the interest; in short, he was
incapable of making a woman unhappy. This last remark had enormous
weight in the scales. Vervelle's friends now heard of nothing but the
celebrated painter Fougeres.
The day on which Fougeres began the portrait of Mademoiselle Virginie,
he was virtually son-in-law to the Vervelle family. The three Vervelles
bloomed out in this studio, which they were now accustomed to consider
as one of their residences; there was to them an inexplicable attraction
in this clean, neat, pretty, and artistic abode. Abyssus abyssum, the
commonplace attracts the commonplace. Toward the end of the sitting the
stairway shook, the door was violently thrust open by Joseph Bridau; he
came like a whirlwind, his hair flying. He showed his grand haggard face
as he looked about him, casting everywhere the lightning of his glance;
then he walked round the whole studio, and returned abruptly to Grassou,
pulling his coat together over the gastric region, and endeavouring, but
in vain, to button it, the button mould having escaped from its capsule
of cloth.
"Wood is dear," he said to Grassou.
"Ah!"
"The British are after me" (slang term for creditors) "Gracious! do you
paint such things as that?"
"Hold your tongue!"
"Ah! to be sure, yes."
The Vervelle family, extremely shocked by this extraordinary apparition,
passed from its ordinary red to a cherry-red, two shades deeper.
"Brings in, hey?" continued Joseph. "Any shot in your locker?"
"How much do you want?"
"Five hundred. I've got one of those bull-dog dealers after me, and if
the fellow once gets his teeth in he won't let go while there's a bit of
me left. What a crew!"
"I'll write you a line for my notary."
"Have you got a notary?"
"Yes."
"That explains to me why you still make cheeks with pink tones like a
perfumer's sign."
Grassou could not help coloring, for Virginie was sitting.
"Take Nature as you find her," said the great painter, going on with his
lecture. "Mademoiselle is red-haired. Well, is that a sin? All things
are magnificent in painting. Put some vermillion on your palette, and
warm up those cheeks; touch in those little brown spots; come, butter it
well in. Do you pretend to have more sense than Nature?"
"Look here," said Fougeres, "take my place while I go and write that
note."
Vervelle rolled to the table and whispered in Grassou's ear:--
"Won't that country lout spoilt it?"
"If he would only paint the portrait of your Virginie it would be worth
a thousand times more than mine," replied Fougeres, vehemently.
Hearing that reply the bourgeois beat a quiet retreat to his wife, who
was stupefied by the invasion of this ferocious animal, and very uneasy
at his co-operation in her daughter's portrait.
"Here, follow these indications," said Bridau, returning the palette,
and taking the note. "I won't thank you. I can go back now to d'Arthez'
chateau, where I am doing a dining-room, and Leon de Lora the tops of
the doors--masterpieces! Come and see us."
And off he went without taking leave, having had enough of looking at
Virginie.
"Who is that man?" asked Madame Vervelle.
"A great artist," answered Grassou.
There was silence for a moment.
"Are you quite sure," said Virginie, "that he has done no harm to my
portrait? He frightened me."
"He has only done it good," replied Grassou.
"Well, if he is a great artist, I prefer a great artist like you," said
Madame Vervelle.
The ways of genius had ruffled up these orderly bourgeois.
The phase of autumn so pleasantly named "Saint Martin's summer" was
just beginning. With the timidity of a neophyte in presence of a man of
genius, Vervelle risked giving Fougeres an invitation to come out to
his country-house on the following Sunday. He knew, he said, how little
attraction a plain bourgeois family could offer to an artist.
"You artists," he continued, "want emotions, great scenes, and witty
talk; but you'll find good wines, and I rely on my collection of
pictures to compensate an artist like you for the bore of dining with
mere merchants."
This form of idolatry, which stroked his innocent self-love, was
charming to our poor Pierre Grassou, so little accustomed to such
compliments. The honest artist, that atrocious mediocrity, that heart
of gold, that loyal soul, that stupid draughtsman, that worthy fellow,
decorated by royalty itself with the Legion of honor, put himself under
arms to go out to Ville d'Avray and enjoy the last fine days of the
year. The painter went modestly by public conveyance, and he could not
but admire the beautiful villa of the bottle-dealer, standing in a park
of five acres at the summit of Ville d'Avray, commanding a noble view
of the landscape. Marry Virginie, and have that beautiful villa some day
for his own!
He was received by the Vervelles with an enthusiasm, a joy, a
kindliness, a frank bourgeois absurdity which confounded him. It was
indeed a day of triumph. The prospective son-in-law was marched about
the grounds on the nankeen-colored paths, all raked as they should be
for the steps of so great a man. The trees themselves looked brushed and
combed, and the lawns had just been mown. The pure country air wafted
to the nostrils a most enticing smell of cooking. All things about the
mansion seemed to say:
"We have a great artist among us."
Little old Vervelle himself rolled like an apple through his park, the
daughter meandered like an eel, the mother followed with dignified step.
These three beings never let go for one moment of Pierre Grassou
for seven hours. After dinner, the length of which equalled its
magnificence, Monsieur and Madame Vervelle reached the moment of their
grand theatrical effect,--the opening of the picture gallery illuminated
by lamps, the reflections of which were managed with the utmost care.
Three neighbours, also retired merchants, an old uncle (from whom were
expectations), an elderly Demoiselle Vervelle, and a number of other
guests invited to be present at this ovation to a great artist followed
Grassou into the picture gallery, all curious to hear his opinion of the
famous collection of pere Vervelle, who was fond of oppressing them with
the fabulous value of his paintings. The bottle-merchant seemed to have
the idea of competing with King Louis-Philippe and the galleries of
Versailles.
The pictures, magnificently framed, each bore labels on which was read
in black letters on a gold ground:
Rubens
Dance of fauns and nymphs
Rembrandt
Interior of a dissecting room. The physician van Tromp
instructing his pupils.
In all, there were one hundred and fifty pictures, varnished and dusted.
Some were covered with green baize curtains which were not undrawn in
presence of young ladies.
Pierre Grassou stood with arms pendent, gaping mouth, and no word upon
his lips as he recognized half his own pictures in these works of art.
He was Rubens, he was Rembrandt, Mieris, Metzu, Paul Potter, Gerard
Douw! He was twenty great masters all by himself.
"What is the matter? You've turned pale!"
"Daughter, a glass of water! quick!" cried Madame Vervelle. The painter
took pere Vervelle by the button of his coat and led him to a corner on
pretence of looking at a Murillo. Spanish pictures were then the rage.
"You bought your pictures from Elie Magus?"
"Yes, all originals."
"Between ourselves, tell me what he made you pay for those I shall point
out to you."
Together they walked round the gallery. The guests were amazed at the
gravity with which the artist proceeded, in company with the host, to
examine each picture.
"Three thousand francs," said Vervelle in a whisper, as they reached the
last, "but I tell everybody forty thousand."
"Forty thousand for a Titian!" said the artist, aloud. "Why, it is
nothing at all!"
"Didn't I tell you," said Vervelle, "that I had three hundred thousand
francs' worth of pictures?"
"I painted those pictures," said Pierre Grassou in Vervelle's ear, "and
I sold them one by one to Elie Magus for less than ten thousand francs
the whole lot."
"Prove it to me," said the bottle-dealer, "and I double my daughter's
'dot,' for if it is so, you are Rubens, Rembrandt, Titian, Gerard Douw!"
"And Magus is a famous picture-dealer!" said the painter, who now saw
the meaning of the misty and aged look imparted to his pictures in
Elie's shop, and the utility of the subjects the picture-dealer had
required of him.
Far from losing the esteem of his admiring bottle-merchant, Monsieur
de Fougeres (for so the family persisted in calling Pierre Grassou)
rose so high in it that when the portraits were finished he presented them
gratuitously to his father-in-law, his mother-in-law and his wife.
At the present day, Pierre Grassou, who never misses exhibiting at the
Salon, passes in bourgeois regions for a fine portrait-painter. He earns
some twenty thousand francs a year and spoils a thousand francs' worth
of canvas. His wife has six thousand francs a year in dowry, and he
lives with his father-in-law. The Vervelles and the Grassous, who agree
delightfully, keep a carriage, and are the happiest people on earth.
Pierre Grassou never emerges from the bourgeois circle, in which he
is considered one of the greatest artists of the period. Not a family
portrait is painted between the barrier du Trone and the rue du Temple
that is not done by this great painter; none of them costs less than
five hundred francs. The great reason which the bourgeois families have
for employing him is this:--
"Say what you will of him, he lays by twenty thousand francs a year with
his notary."
As Grassou took a creditable part on the occasion of the riots of May
12th, he was appointed an officer of the Legion of honor. He is a major
in the National Guard. The Museum of Versailles felt it incumbent upon
itself to order a battle-piece from so excellent a citizen, who thereupon walked
about Paris to meet his old comrades and have the happiness of saying to
them:--
"The King has given me an order for the Museum of Versailles."
Madame de Fougeres adores her husband, to whom she has presented two
children. This painter, a good father and a good husband, is unable to
eradicate from his heart a fatal thought, namely, that artists laugh at
his work; that his name is a term of contempt in the studios; and that
the feuilletons take no notice of his pictures. But he still works on;
he aims for the Academy, which, undoubtedly, he will enter. And--oh!
vengeance which dilates his heart!--he buys the pictures of celebrated
artists who are pinched for means, and he substitutes these true works
of art that are not his own for the wretched daubs in the collection at
Ville d'Avray.
There are many mediocrities more aggressive and more mischievous than
that of Pierre Grassou, who is, moreover, anonymously benevolent and
truly obliging.
ADDENDUM
The following personages appear in other stories of the Human Comedy.
Bridau, Joseph
The Purse
A Bachelor's Establishment
A Distinguished Provincial at Paris
A Start in Life
Modeste Mignon
Another Study of Woman
Letters of Two Brides
Cousin Betty
The Member for Arcis
Cardot (Parisian notary)
The Muse of the Department
A Man of Business
Jealousies of a Country Town
The Middle Classes
Cousin Pons
Grassou, Pierre
A Bachelor's Establishment
Cousin Betty
The Middle Classes
Cousin Pons
Lora, Leon de
The Unconscious Humorists
A Bachelor's Establishment
A Start in Life
Honorine
Cousin Betty
Beatrix
Magus, Elie
The Vendetta
A Marriage Settlement
A Bachelor's Establishment
Cousin Pons
Schinner, Hippolyte
The Purse
A Bachelor's Establishment
A Start in Life
Albert Savarus
The Government Clerks
Modeste Mignon
The Imaginary Mistress
The Unconscious Humorists
End of the Project Gutenberg EBook of Pierre Grassou, by Honore de Balzac
Produced by John Bickers, and Dagny
LA GRANDE BRETECHE
(Sequel to "Another Study of Woman.")
By Honore De Balzac
Translated by Ellen Marriage and Clara Bell
LA GRANDE BRETECHE
"Ah! madame," replied the doctor, "I have some appalling stories in my
collection. But each one has its proper hour in a conversation--you know
the pretty jest recorded by Chamfort, and said to the Duc de Fronsac:
'Between your sally and the present moment lie ten bottles of
champagne.'"
"But it is two in the morning, and the story of Rosina has prepared us,"
said the mistress of the house.
"Tell us, Monsieur Bianchon!" was the cry on every side.
The obliging doctor bowed, and silence reigned.
"At about a hundred paces from Vendome, on the banks of the Loir," said
he, "stands an old brown house, crowned with very high roofs, and so
completely isolated that there is nothing near it, not even a fetid
tannery or a squalid tavern, such as are commonly seen outside small
towns. In front of this house is a garden down to the river, where the
box shrubs, formerly clipped close to edge the walks, now straggle
at their own will. A few willows, rooted in the stream, have grown
up quickly like an enclosing fence, and half hide the house. The
wild plants we call weeds have clothed the bank with their beautiful
luxuriance. The fruit-trees, neglected for these ten years past,
no longer bear a crop, and their suckers have formed a thicket. The
espaliers are like a copse. The paths, once graveled, are overgrown with
purslane; but, to be accurate, there is no trace of a path.
"Looking down from the hilltop, to which cling the ruins of the old
castle of the Dukes of Vendome, the only spot whence the eye can
see into this enclosure, we think that at a time, difficult now to
determine, this spot of earth must have been the joy of some country
gentleman devoted to roses and tulips, in a word, to horticulture, but
above all a lover of choice fruit. An arbor is visible, or rather
the wreck of an arbor, and under it a table still stands not entirely
destroyed by time. At the aspect of this garden that is no more, the
negative joys of the peaceful life of the provinces may be divined as we
divine the history of a worthy tradesman when we read the epitaph on his
tomb. To complete the mournful and tender impressions which seize the
soul, on one of the walls there is a sundial graced with this homely
Christian motto, '_Ultimam cogita_.'
"The roof of this house is dreadfully dilapidated; the outside shutters
are always closed; the balconies are hung with swallows' nests; the
doors are for ever shut. Straggling grasses have outlined the flagstones
of the steps with green; the ironwork is rusty. Moon and sun, winter,
summer, and snow have eaten into the wood, warped the boards, peeled
off the paint. The dreary silence is broken only by birds and cats,
polecats, rats, and mice, free to scamper round, and fight, and eat each
other. An invisible hand has written over it all: 'Mystery.'
"If, prompted by curiosity, you go to look at this house from the
street, you will see a large gate, with a round-arched top; the children
have made many holes in it. I learned later that this door had been
blocked for ten years. Through these irregular breaches you will see
that the side towards the courtyard is in perfect harmony with the side
towards the garden. The same ruin prevails. Tufts of weeds outline
the paving-stones; the walls are scored by enormous cracks, and the
blackened coping is laced with a thousand festoons of pellitory. The
stone steps are disjointed; the bell-cord is rotten; the gutter-spouts
broken. What fire from heaven could have fallen there? By what decree
has salt been sown on this dwelling? Has God been mocked here? Or was
France betrayed? These are the questions we ask ourselves. Reptiles
crawl over it, but give no reply. This empty and deserted house is a
vast enigma of which the answer is known to none.
"It was formerly a little domain, held in fief, and is known as La
Grande Breteche. During my stay at Vendome, where Despleins had left me
in charge of a rich patient, the sight of this strange dwelling became
one of my keenest pleasures. Was it not far better than a ruin? Certain
memories of indisputable authenticity attach themselves to a ruin; but
this house, still standing, though being slowly destroyed by an avenging
hand, contained a secret, an unrevealed thought. At the very least,
it testified to a caprice. More than once in the evening I boarded the
hedge, run wild, which surrounded the enclosure. I braved scratches, I
got into this ownerless garden, this plot which was no longer public or
private; I lingered there for hours gazing at the disorder. I would not,
as the price of the story to which this strange scene no doubt was due,
have asked a single question of any gossiping native. On that spot I
wove delightful romances, and abandoned myself to little debauches of
melancholy which enchanted me. If I had known the reason--perhaps quite
commonplace--of this neglect, I should have lost the unwritten poetry
which intoxicated me. To me this refuge represented the most various
phases of human life, shadowed by misfortune; sometimes the peace of the
graveyard without the dead, who speak in the language of epitaphs; one
day I saw in it the home of lepers; another, the house of the Atridae;
but, above all, I found there provincial life, with its contemplative
ideas, its hour-glass existence. I often wept there, I never laughed.
"More than once I felt involuntary terrors as I heard overhead the dull
hum of the wings of some hurrying wood-pigeon. The earth is dank; you
must be on the watch for lizards, vipers, and frogs, wandering about
with the wild freedom of nature; above all, you must have no fear
of cold, for in a few moments you feel an icy cloak settle on your
shoulders, like the Commendatore's hand on Don Giovanni's neck.
"One evening I felt a shudder; the wind had turned an old rusty
weathercock, and the creaking sounded like a cry from the house, at
the very moment when I was finishing a gloomy drama to account for
this monumental embodiment of woe. I returned to my inn, lost in gloomy
thoughts. When I had supped, the hostess came into my room with an air
of mystery, and said, 'Monsieur, here is Monsieur Regnault.'
"'Who is Monsieur Regnault?'
"'What, sir, do you not know Monsieur Regnault?--Well, that's odd,' said
she, leaving the room.
"On a sudden I saw a man appear, tall, slim, dressed in black, hat
in hand, who came in like a ram ready to butt his opponent, showing a
receding forehead, a small pointed head, and a colorless face of the hue
of a glass of dirty water. You would have taken him for an usher. The
stranger wore an old coat, much worn at the seams; but he had a diamond
in his shirt frill, and gold rings in his ears.
"'Monsieur,' said I, 'whom have I the honor of addressing?'--He took a
chair, placed himself in front of my fire, put his hat on my table,
and answered while he rubbed his hands: 'Dear me, it is very
cold.--Monsieur, I am Monsieur Regnault.'
"I was encouraging myself by saying to myself, '_Il bondo cani!_ Seek!'
"'I am,' he went on, 'notary at Vendome.'
"'I am delighted to hear it, monsieur,' I exclaimed. 'But I am not in a
position to make a will for reasons best known to myself.'
"'One moment!' said he, holding up his hand as though to gain silence.
'Allow me, monsieur, allow me! I am informed that you sometimes go to
walk in the garden of la Grande Breteche.'
"'Yes, monsieur.'
"'One moment!' said he, repeating his gesture. 'That constitutes a
misdemeanor. Monsieur, as executor under the will of the late Comtesse
de Merret, I come in her name to beg you to discontinue the practice.
One moment! I am not a Turk, and do not wish to make a crime of it. And
besides, you are free to be ignorant of the circumstances which
compel me to leave the finest mansion in Vendome to fall into ruin.
Nevertheless, monsieur, you must be a man of education, and you should
know that the laws forbid, under heavy penalties, any trespass on
enclosed property. A hedge is the same as a wall. But, the state in
which the place is left may be an excuse for your curiosity. For my
part, I should be quite content to make you free to come and go in the
house; but being bound to respect the will of the testatrix, I have
the honor, monsieur, to beg that you will go into the garden no more.
I myself, monsieur, since the will was read, have never set foot in the
house, which, as I had the honor of informing you, is part of the estate
of the late Madame de Merret. We have done nothing there but verify the
number of doors and windows to assess the taxes I have to pay annually
out of the funds left for that purpose by the late Madame de Merret. Ah!
my dear sir, her will made a great commotion in the town.'
"The good man paused to blow his nose. I respected his volubility,
perfectly understanding that the administration of Madame de Merret's
estate had been the most important event of his life, his reputation,
his glory, his Restoration. As I was forced to bid farewell to my
beautiful reveries and romances, I was in no mood to reject learning the truth on
official authority.
"'Monsieur,' said I, 'would it be indiscreet if I were to ask you the
reasons for such eccentricity?'
"At these words an expression, which revealed all the pleasure which
men feel who are accustomed to ride a hobby, overspread the lawyer's
countenance. He pulled up the collar of his shirt with an air, took out
his snuffbox, opened it, and offered me a pinch; on my refusing, he took
a large one. He was happy! A man who has no hobby does not know all
the good to be got out of life. A hobby is the happy medium between a
passion and a monomania. At this moment I understood the whole bearing
of Sterne's charming passion, and had a perfect idea of the delight with
which my uncle Toby, encouraged by Trim, bestrode his hobby-horse.
"'Monsieur,' said Monsieur Regnault, 'I was head-clerk in Monsieur
Roguin's office, in Paris. A first-rate house, which you may have heard
mentioned? No! An unfortunate bankruptcy made it famous.--Not having
money enough to purchase a practice in Paris at the price to which they
were run up in 1816, I came here and bought my predecessor's business.
I had relations in Vendome; among others, a wealthy aunt, who allowed
me to marry her daughter.--Monsieur,' he went on after a little pause,
'three months after being licensed by the Keeper of the Seals, one
evening, as I was going to bed--it was before my marriage--I was sent
for by Madame la Comtesse de Merret, to her Chateau of Merret. Her maid,
a good girl, who is now a servant in this inn, was waiting at my door
with the Countess' own carriage. Ah! one moment! I ought to tell you
that Monsieur le Comte de Merret had gone to Paris to die two months
before I came here. He came to a miserable end, flinging himself into
every kind of dissipation. You understand?
"'On the day when he left, Madame la Comtesse had quitted la Grand
Breteche, having dismantled it. Some people even say that she had
burnt all the furniture, the hangings--in short, all the chattels and
furniture whatever used in furnishing the premises now let by the
said M.--(Dear, what am I saying? I beg your pardon, I thought I was
dictating a lease.)--In short, that she burnt everything in the meadow
at Merret. Have you been to Merret, monsieur?--No,' said he, answering
himself, 'Ah, it is a very fine place.'
"'For about three months previously,' he went on, with a jerk of his
head, 'the Count and Countess had lived in a very eccentric way; they
admitted no visitors; Madame lived on the ground-floor, and Monsieur on
the first floor. When the Countess was left alone, she was never seen
excepting at church. Subsequently, at home, at the chateau, she refused
to see the friends, whether gentlemen or ladies, who went to call on
her. She was already very much altered when she left la Grande Breteche
to go to Merret. That dear lady--I say dear lady, for it was she who
gave me this diamond, but indeed I saw her but once--that kind lady was
very ill; she had, no doubt, given up all hope, for she died without
choosing to send for a doctor; indeed, many of our ladies fancied she
was not quite right in her head. Well, sir, my curiosity was strangely
excited by hearing that Madame de Merret had need of my services. Nor
was I the only person who took an interest in the affair. That very
night, though it was already late, all the town knew that I was going to
Merret.
"'The waiting-woman replied but vaguely to the questions I asked her on
the way; nevertheless, she told me that her mistress had received the
Sacrament in the course of the day at the hands of the Cure of Merret,
and seemed unlikely to live through the night. It was about eleven when
I reached the chateau. I went up the great staircase. After crossing
some large, lofty, dark rooms, diabolically cold and damp, I reached the
state bedroom where the Countess lay. From the rumors that were current
concerning this lady (monsieur, I should never end if I were to repeat
all the tales that were told about her), I had imagined her a coquette.
Imagine, then, that I had great difficulty in seeing her in the great
bed where she was lying. To be sure, to light this enormous room, with
old-fashioned heavy cornices, and so thick with dust that merely to see
it was enough to make you sneeze, she had only an old Argand lamp. Ah!
but you have not been to Merret. Well, the bed is one of those old world
beds, with a high tester hung with flowered chintz. A small table stood
by the bed, on which I saw an "Imitation of Christ," which, by the
way, I bought for my wife, as well as the lamp. There were also a deep
armchair for her confidential maid, and two small chairs. There was no
fire. That was all the furniture, not enough to fill ten lines in an
inventory.
"'My dear sir, if you had seen, as I then saw, that vast room, papered
and hung with brown, you would have felt yourself transported into a
scene of a romance. It was icy, nay more, funereal,' and he lifted his
hand with a theatrical gesture and paused.
"'By dint of seeking, as I approached the bed, at last I saw Madame de
Merret, under the glimmer of the lamp, which fell on the pillows.
Her face was as yellow as wax, and as narrow as two folded hands. The
Countess had a lace cap showing her abundant hair, but as white as linen
thread. She was sitting up in bed, and seemed to keep upright with
great difficulty. Her large black eyes, dimmed by fever, no doubt,
and half-dead already, hardly moved under the bony arch of her
eyebrows.--There,' he added, pointing to his own brow. 'Her forehead was
clammy; her fleshless hands were like bones covered with soft skin;
the veins and muscles were perfectly visible. She must have been very
handsome; but at this moment I was startled into an indescribable
emotion at the sight. Never, said those who wrapped her in her shroud,
had any living creature been so emaciated and lived. In short, it was
awful to behold! Sickness so consumed that woman, that she was no more
than a phantom. Her lips, which were pale violet, seemed to me not to
move when she spoke to me.
"'Though my profession has familiarized me with such spectacles, by
calling me not infrequently to the bedside of the dying to record their
last wishes, I confess that families in tears and the agonies I have
seen were as nothing in comparison with this lonely and silent woman in
her vast chateau. I heard not the least sound, I did not perceive the
movement which the sufferer's breathing ought to have given to the
sheets that covered her, and I stood motionless, absorbed in looking at
her in a sort of stupor. In fancy I am there still. At last her large
eyes moved; she tried to raise her right hand, but it fell back on the
bed, and she uttered these words, which came like a breath, for her
voice was no longer a voice: "I have waited for you with the greatest
impatience." A bright flush rose to her cheeks. It was a great effort to
her to speak.
"'"Madame," I began. She signed to me to be silent. At that moment
the old housekeeper rose and said in my ear, "Do not speak; Madame la
Comtesse is not in a state to bear the slightest noise, and what you say
might agitate her."
"'I sat down. A few instants after, Madame de Merret collected all her
remaining strength to move her right hand, and slipped it, not without
infinite difficulty, under the bolster; she then paused a moment. With
a last effort she withdrew her hand; and when she brought out a sealed
paper, drops of perspiration rolled from her brow. "I place my will in
your hands--Oh! God! Oh!" and that was all. She clutched a crucifix that
lay on the bed, lifted it hastily to her lips, and died.
"'The expression of her eyes still makes me shudder as I think of it.
She must have suffered much! There was joy in her last glance, and it
remained stamped on her dead eyes.
"'I brought away the will, and when it was opened I found that Madame de
Merret had appointed me her executor. She left the whole of her property
to the hospital at Vendome excepting a few legacies. But these were her
instructions as relating to la Grande Breteche: She ordered me to leave
the place, for fifty years counting from the day of her death, in the
state in which it might be at the time of her death, forbidding any one,
whoever he might be, to enter the apartments, prohibiting any repairs
whatever, and even settling a salary to pay watchmen if it were needful
to secure the absolute fulfilment of her intentions. At the expiration
of that term, if the will of the testatrix has been duly carried out,
the house is to become the property of my heirs, for, as you know, a
notary cannot take a bequest. Otherwise la Grande Breteche reverts to
the heirs-at-law, but on condition of fulfilling certain conditions
set forth in a codicil to the will, which is not to be opened till
the expiration of the said term of fifty years. The will has not been
disputed, so----' And without finishing his sentence, the lanky notary
looked at me with an air of triumph; I made him quite happy by offering
him my congratulations.
"'Monsieur,' I said in conclusion, 'you have so vividly impressed
me that I fancy I see the dying woman whiter than her sheets; her
glittering eyes frighten me; I shall dream of her to-night.--But you
must have formed some idea as to the instructions contained in that
extraordinary will.'
"'Monsieur,' said he, with comical reticence, 'I never allow myself
to criticise the conduct of a person who honors me with the gift of a
diamond.'
"However, I soon loosened the tongue of the discreet notary of Vendome,
who communicated to me, not without long digressions, the opinions of
the deep politicians of both sexes whose judgments are law in Vendome.
But these opinions were so contradictory, so diffuse, that I was
near falling asleep in spite of the interest I felt in this authentic
history. The notary's ponderous voice and monotonous accent, accustomed
no doubt to listen to himself and to make himself listened to by his
clients or fellow-townsmen, were too much for my curiosity. Happily, he
soon went away.
"'Ah, ha, monsieur,' said he on the stairs, 'a good many persons would
be glad to live five-and-forty years longer; but--one moment!' and he
laid the first finger of his right hand to his nostril with a cunning
look, as much as to say, 'Mark my words!--To last as long as that--as
long as that,' said he, 'you must not be past sixty now.'
"I closed my door, having been roused from my apathy by this last
speech, which the notary thought very funny; then I sat down in my
armchair, with my feet on the fire-dogs. I had lost myself in a romance
_a la_ Radcliffe, constructed on the juridical base given me by Monsieur
Regnault, when the door, opened by a woman's cautious hand, turned on
the hinges. I saw my landlady come in, a buxom, florid dame, always
good-humored, who had missed her calling in life. She was a Fleming, who
ought to have seen the light in a picture by Teniers.
"'Well, monsieur,' said she, 'Monsieur Regnault has no doubt been giving
you his history of la Grande Breteche?'
"'Yes, Madame Lepas.'
"'And what did he tell you?'
"I repeated in a few words the creepy and sinister story of Madame de
Merret. At each sentence my hostess put her head forward, looking at
me with an innkeeper's keen scrutiny, a happy compromise between the
instinct of a police constable, the astuteness of a spy, and the cunning
of a dealer.
"'My good Madame Lepas,' said I as I ended, 'you seem to know more about
it. Heh? If not, why have you come up to me?'
"'On my word, as an honest woman----'
"'Do not swear; your eyes are big with a secret. You knew Monsieur de
Merret; what sort of man was he?'
"'Monsieur de Merret--well, you see he was a man you never could see
the top of, he was so tall! A very good gentleman, from Picardy, and who
had, as we say, his head close to his cap. He paid for everything down,
so as never to have difficulties with any one. He was hot-tempered, you
see! All our ladies liked him very much.'
"'Because he was hot-tempered?' I asked her.
"'Well, may be,' said she; 'and you may suppose, sir, that a man had to
have something to show for a figurehead before he could marry Madame de
Merret, who, without any reflection on others, was the handsomest and
richest heiress in our parts. She had about twenty thousand francs
a year. All the town was at the wedding; the bride was pretty and
sweet-looking, quite a gem of a woman. Oh, they were a handsome couple
in their day!'
"'And were they happy together?'
"'Hm, hm! so-so--so far as can be guessed, for, as you may suppose, we
of the common sort were not hail-fellow-well-met with them.--Madame de
Merret was a kind woman and very pleasant, who had no doubt sometimes to
put up with her husband's tantrums. But though he was rather haughty, we
were fond of him. After all, it was his place to behave so. When a man
is a born nobleman, you see----'
"'Still, there must have been some catastrophe for Monsieur and Madame
de Merret to part so violently?'
"'I did not say there was any catastrophe, sir. I know nothing about
it.'
"'Indeed. Well, now, I am sure you know everything.'
"'Well, sir, I will tell you the whole story.--When I saw Monsieur
Regnault go up to see you, it struck me that he would speak to you about
Madame de Merret as having to do with la Grande Breteche. That put it
into my head to ask your advice, sir, seeming to me that you are a
man of good judgment and incapable of playing a poor woman like me
false--for I never did any one a wrong, and yet I am tormented by my
conscience. Up to now I have never dared to say a word to the people of
these parts; they are all chatter-mags, with tongues like knives. And
never till now, sir, have I had any traveler here who stayed so long in
the inn as you have, and to whom I could tell the history of the fifteen
thousand francs----'
"'My dear Madame Lepas, if there is anything in your story of a nature
to compromise me,' I said, interrupting the flow of her words, 'I would
not hear it for all the world.'
"'You need have no fears,' said she; 'you will see.'
"Her eagerness made me suspect that I was not the only person to whom
my worthy landlady had communicated the secret of which I was to be the
sole possessor, but I listened.
"'Monsieur,' said she, 'when the Emperor sent the Spaniards here,
prisoners of war and others, I was required to lodge at the charge
of the Government a young Spaniard sent to Vendome on parole.
Notwithstanding his parole, he had to show himself every day to the
sub-prefect. He was a Spanish grandee--neither more nor less. He had
a name in _os_ and _dia_, something like Bagos de Feredia. I wrote his
name down in my books, and you may see it if you like. Ah! he was a
handsome young fellow for a Spaniard, who are all ugly they say. He was
not more than five feet two or three in height, but so well made; and he
had little hands that he kept so beautifully! Ah! you should have
seen them. He had as many brushes for his hands as a woman has for her
toilet. He had thick, black hair, a flame in his eye, a somewhat coppery
complexion, but which I admired all the same. He wore the finest linen
I have ever seen, though I have had princesses to lodge here, and, among
others, General Bertrand, the Duc and Duchesse d'Abrantes, Monsieur
Descazes, and the King of Spain. He did not eat much, but he had such
polite and amiable ways that it was impossible to owe him a grudge for
that. Oh! I was very fond of him, though he did not say four words to me
in a day, and it was impossible to have the least bit of talk with him;
if he was spoken to, he did not answer; it is a way, a mania they all
have, it would seem.
"'He read his breviary like a priest, and went to mass and all the
services quite regularly. And where did he post himself?--we found this
out later.--Within two yards of Madame de Merret's chapel. As he took
that place the very first time he entered the church, no one imagined
that there was any purpose in it. Besides, he never raised his nose
above his book, poor young man! And then, monsieur, of an evening he
went for a walk on the hill among the ruins of the old castle. It was
his only amusement, poor man; it reminded him of his native land. They
say that Spain is all hills!
"'One evening, a few days after he was sent here, he was out very late.
I was rather uneasy when he did not come in till just on the stroke of
midnight; but we all got used to his whims; he took the key of the door,
and we never sat up for him. He lived in a house belonging to us in the
Rue des Casernes. Well, then, one of our stable-boys told us one evening
that, going down to wash the horses in the river, he fancied he had seen
the Spanish Grandee swimming some little way off, just like a fish. When
he came in, I told him to be careful of the weeds, and he seemed put out
at having been seen in the water.
"'At last, monsieur, one day, or rather one morning, we did not find
him in his room; he had not come back. By hunting through his things, I
found a written paper in the drawer of his table, with fifty pieces of
Spanish gold of the kind they call doubloons, worth about five thousand
francs; and in a little sealed box ten thousand francs worth of
diamonds. The paper said that in case he should not return, he left us
this money and these diamonds in trust to found masses to thank God for
his escape and for his salvation.
"'At that time I still had my husband, who ran off in search of him.
And this is the queer part of the story: he brought back the Spaniard's
clothes, which he had found under a big stone on a sort of breakwater
along the river bank, nearly opposite la Grande Breteche. My husband
went so early that no one saw him. After reading the letter, he burnt
the clothes, and, in obedience to Count Feredia's wish, we announced
that he had escaped.
"'The sub-prefect set all the constabulary at his heels; but, pshaw! he
was never caught. Lepas believed that the Spaniard had drowned himself.
I, sir, have never thought so; I believe, on the contrary, that he had
something to do with the business about Madame de Merret, seeing that
Rosalie told me that the crucifix her mistress was so fond of that she
had it buried with her, was made of ebony and silver; now in the early
days of his stay here, Monsieur Feredia had one of ebony and silver
which I never saw later.--And now, monsieur, do not you say that I need
have no remorse about the Spaniard's fifteen thousand francs? Are they
not really and truly mine?'
"'Certainly.--But have you never tried to question Rosalie?' said I.
"'Oh, to be sure I have, sir. But what is to be done? That girl is like
a wall. She knows something, but it is impossible to make her talk.'
"After chatting with me for a few minutes, my hostess left me a prey
to vague and sinister thoughts, to romantic curiosity, and a religious
dread, not unlike the deep emotion which comes upon us when we go into a
dark church at night and discern a feeble light glimmering under a lofty
vault--a dim figure glides across--the sweep of a gown or of a priest's
cassock is audible--and we shiver! La Grande Breteche, with its rank
grasses, its shuttered windows, its rusty iron-work, its locked doors,
its deserted rooms, suddenly rose before me in fantastic vividness. I
tried to get into the mysterious dwelling to search out the heart of
this solemn story, this drama which had killed three persons.
"Rosalie became in my eyes the most interesting being in Vendome. As
I studied her, I detected signs of an inmost thought, in spite of the
blooming health that glowed in her dimpled face. There was in her soul
some element of ruth or of hope; her manner suggested a secret, like
the expression of devout souls who pray in excess, or of a girl who has
killed her child and for ever hears its last cry. Nevertheless, she was
simple and clumsy in her ways; her vacant smile had nothing criminal
in it, and you would have pronounced her innocent only from seeing the
large red and blue checked kerchief that covered her stalwart bust,
tucked into the tight-laced bodice of a lilac- and white-striped gown.
'No,' said I to myself, 'I will not quit Vendome without knowing the
whole history of la Grande Breteche. To achieve this end, I will make
love to Rosalie if it proves necessary.'
"'Rosalie!' said I one evening.
"'Your servant, sir?'
"'You are not married?' She started a little.
"'Oh! there is no lack of men if ever I take a fancy to be miserable!'
she replied, laughing. She got over her agitation at once; for every
woman, from the highest lady to the inn-servant inclusive, has a native
presence of mind.
"'Yes; you are fresh and good-looking enough never to lack lovers! But
tell me, Rosalie, why did you become an inn-servant on leaving Madame de
Merret? Did she not leave you some little annuity?'
"'Oh yes, sir. But my place here is the best in all the town of
Vendome.'
"This reply was such an one as judges and attorneys call evasive.
Rosalie, as it seemed to me, held in this romantic affair the place of
the middle square of the chess-board: she was at the very centre of the
interest and of the truth; she appeared to me to be tied into the knot
of it. It was not a case for ordinary love-making; this girl contained
the last chapter of a romance, and from that moment all my attentions
were devoted to Rosalie. By dint of studying the girl, I observed in
her, as in every woman whom we make our ruling thought, a variety of
good qualities; she was clean and neat; she was handsome, I need not
say; she soon was possessed of every charm that desire can lend to a
woman in whatever rank of life. A fortnight after the notary's visit,
one evening, or rather one morning, in the small hours, I said to
Rosalie:
"'Come, tell me all you know about Madame de Merret.'
"'Oh!' she said, 'I will tell you; but keep the secret carefully.'
"'All right, my child; I will keep all your secrets with a thief's
honor, which is the most loyal known.'
"'If it is all the same to you,' said she, 'I would rather it should be
with your own.'
"Thereupon she set her head-kerchief straight, and settled herself to
tell the tale; for there is no doubt a particular attitude of confidence
and security is necessary to the telling of a narrative. The best tales
are told at a certain hour--just as we are all here at table. No one
ever told a story well standing up, or fasting.
"If I were to reproduce exactly Rosalie's diffuse eloquence, a whole
volume would scarcely contain it. Now, as the event of which she gave me
a confused account stands exactly midway between the notary's gossip and
that of Madame Lepas, as precisely as the middle term of a rule-of-three
sum stands between the first and third, I have only to relate it in as
few words as may be. I shall therefore be brief.
"The room at la Grande Breteche in which Madame de Merret slept was on
the ground floor; a little cupboard in the wall, about four feet deep,
served her to hang her dresses in. Three months before the evening of
which I have to relate the events, Madame de Merret had been seriously
ailing, so much so that her husband had left her to herself, and had his
own bedroom on the first floor. By one of those accidents which it is
impossible to foresee, he came in that evening two hours later than
usual from the club, where he went to read the papers and talk politics
with the residents in the neighborhood. His wife supposed him to have
come in, to be in bed and asleep. But the invasion of France had been
the subject of a very animated discussion; the game of billiards had
waxed vehement; he had lost forty francs, an enormous sum at Vendome,
where everybody is thrifty, and where social habits are restrained
within the bounds of a simplicity worthy of all praise, and the
foundation perhaps of a form of true happiness which no Parisian would
care for.
"For some time past Monsieur de Merret had been satisfied to ask Rosalie
whether his wife was in bed; on the girl's replying always in the
affirmative, he at once went to his own room, with the good faith that
comes of habit and confidence. But this evening, on coming in, he took
it into his head to go to see Madame de Merret, to tell her of his
ill-luck, and perhaps to find consolation. During dinner he had observed
that his wife was very becomingly dressed; he reflected as he came
home from the club that his wife was certainly much better, that
convalescence had improved her beauty, discovering it, as husbands
discover everything, a little too late. Instead of calling Rosalie,
who was in the kitchen at the moment watching the cook and the coachman
playing a puzzling hand at cards, Monsieur de Merret made his way to his
wife's room by the light of his lantern, which he set down at the lowest
step of the stairs. His step, easy to recognize, rang under the vaulted
passage.
"At the instant when the gentleman turned the key to enter his wife's
room, he fancied he heard the door shut of the closet of which I have
spoken; but when he went in, Madame de Merret was alone, standing in
front of the fireplace. The unsuspecting husband fancied that Rosalie
was in the cupboard; nevertheless, a doubt, ringing in his ears like a
peal of bells, put him on his guard; he looked at his wife, and read in
her eyes an indescribably anxious and haunted expression.
"'You are very late,' said she.--Her voice, usually so clear and sweet,
struck him as being slightly husky.
"Monsieur de Merret made no reply, for at this moment Rosalie came in.
This was like a thunder-clap. He walked up and down the room, going from
one window to another at a regular pace, his arms folded.
"'Have you had bad news, or are you ill?' his wife asked him timidly,
while Rosalie helped her to undress. He made no reply.
"'You can go, Rosalie,' said Madame de Merret to her maid; 'I can put in
my curl-papers myself.'--She scented disaster at the mere aspect of her
husband's face, and wished to be alone with him. As soon as Rosalie
was gone, or supposed to be gone, for she lingered a few minutes in the
passage, Monsieur de Merret came and stood facing his wife, and said
coldly, 'Madame, there is some one in your cupboard!' She looked at her
husband calmly, and replied quite simply, 'No, monsieur.'
"This 'No' wrung Monsieur de Merret's heart; he did not believe it; and
yet his wife had never appeared purer or more saintly than she seemed
to be at this moment. He rose to go and open the closet door. Madame de
Merret took his hand, stopped him, looked at him sadly, and said in a
voice of strange emotion, 'Remember, if you should find no one there,
everything must be at an end between you and me.'
"The extraordinary dignity of his wife's attitude filled him with deep
esteem for her, and inspired him with one of those resolves which need
only a grander stage to become immortal.
"'No, Josephine,' he said, 'I will not open it. In either event we
should be parted for ever. Listen; I know all the purity of your soul, I
know you lead a saintly life, and would not commit a deadly sin to save
your life.'--At these words Madame de Merret looked at her husband with
a haggard stare.--'See, here is your crucifix,' he went on. 'Swear to
me before God that there is no one in there; I will believe you--I will
never open that door.'
"Madame de Merret took up the crucifix and said, 'I swear it.'
"'Louder,' said her husband; 'and repeat: "I swear before God that there
is nobody in that closet."' She repeated the words without flinching.
"'That will do,' said Monsieur de Merret coldly. After a moment's
silence: 'You have there a fine piece of work which I never saw before,'
said he, examining the crucifix of ebony and silver, very artistically
wrought.
"'I found it at Duvivier's; last year when that troop of Spanish
prisoners came through Vendome, he bought it of a Spanish monk.'
"'Indeed,' said Monsieur de Merret, hanging the crucifix on its nail;
and he rang the bell.
"He had to wait for Rosalie. Monsieur de Merret went forward quickly
to meet her, led her into the bay of the window that looked on to the
garden, and said to her in an undertone:
"'I know that Gorenflot wants to marry you, that poverty alone prevents
your setting up house, and that you told him you would not be his wife
till he found means to become a master mason.--Well, go and fetch him;
tell him to come here with his trowel and tools. Contrive to wake no one
in his house but himself. His reward will be beyond your wishes. Above
all, go out without saying a word--or else!' and he frowned.
"Rosalie was going, and he called her back. 'Here, take my latch-key,'
said he.
"'Jean!' Monsieur de Merret called in a voice of thunder down the
passage. Jean, who was both coachman and confidential servant, left his
cards and came.
"'Go to bed, all of you,' said his master, beckoning him to come close;
and the gentleman added in a whisper, 'When they are all asleep--mind,
_asleep_--you understand?--come down and tell me.'
"Monsieur de Merret, who had never lost sight of his wife while giving
his orders, quietly came back to her at the fireside, and began to tell
her the details of the game of billiards and the discussion at the club.
When Rosalie returned she found Monsieur and Madame de Merret conversing
amiably.
"Not long before this Monsieur de Merret had had new ceilings made to
all the reception-rooms on the ground floor. Plaster is very scarce at
Vendome; the price is enhanced by the cost of carriage; the gentleman
had therefore had a considerable quantity delivered to him, knowing
that he could always find purchasers for what might be left. It was this
circumstance which suggested the plan he carried out.
"'Gorenflot is here, sir,' said Rosalie in a whisper.
"'Tell him to come in,' said her master aloud.
"Madame de Merret turned paler when she saw the mason.
"'Gorenflot,' said her husband, 'go and fetch some bricks from the
coach-house; bring enough to wall up the door of this cupboard; you can
use the plaster that is left for cement.' Then, dragging Rosalie and the
workman close to him--'Listen, Gorenflot,' said he, in a low voice,
'you are to sleep here to-night; but to-morrow morning you shall have a
passport to take you abroad to a place I will tell you of. I will give
you six thousand francs for your journey. You must live in that town for
ten years; if you find you do not like it, you may settle in another,
but it must be in the same country. Go through Paris and wait there till
I join you. I will there give you an agreement for six thousand francs
more, to be paid to you on your return, provided you have carried out
the conditions of the bargain. For that price you are to keep perfect
silence as to what you have to do this night. To you, Rosalie, I will
secure ten thousand francs, which will not be paid to you till your
wedding day, and on condition of your marrying Gorenflot; but, to get
married, you must hold your tongue. If not, no wedding gift!'
"'Rosalie,' said Madame de Merret, 'come and brush my hair.'
"Her husband quietly walked up and down the room, keeping an eye on the
door, on the mason, and on his wife, but without any insulting display
of suspicion. Gorenflot could not help making some noise. Madame de
Merret seized a moment when he was unloading some bricks, and when her
husband was at the other end of the room to say to Rosalie: 'My dear
child, I will give you a thousand francs a year if only you will tell
Gorenflot to leave a crack at the bottom.' Then she added aloud quite
coolly: 'You had better help him.'
"Monsieur and Madame de Merret were silent all the time while Gorenflot
was walling up the door. This silence was intentional on the husband's
part; he did not wish to give his wife the opportunity of saying
anything with a double meaning. On Madame de Merret's side it was pride
or prudence. When the wall was half built up the cunning mason took
advantage of his master's back being turned to break one of the two
panes in the top of the door with a blow of his pick. By this Madame de
Merret understood that Rosalie had spoken to Gorenflot. They all three
then saw the face of a dark, gloomy-looking man, with black hair and
flaming eyes.
"Before her husband turned round again the poor woman had nodded to the
stranger, to whom the signal was meant to convey, 'Hope.'
"At four o'clock, as the day was dawning, for it was the month of
September, the work was done. The mason was placed in charge of Jean,
and Monsieur de Merret slept in his wife's room.
"Next morning when he got up he said with apparent carelessness, 'Oh,
by the way, I must go to the Maire for the passport.' He put on his hat,
took two or three steps towards the door, paused, and took the crucifix.
His wife was trembling with joy.
"'He will go to Duvivier's,' thought she.
"As soon as he had left, Madame de Merret rang for Rosalie, and then in
a terrible voice she cried: 'The pick! Bring the pick! and set to work.
I saw how Gorenflot did it yesterday; we shall have time to make a gap
and build it up again.'
"In an instant Rosalie had brought her mistress a sort of cleaver; she,
with a vehemence of which no words can give an idea, set to work to
demolish the wall. She had already got out a few bricks, when, turning
to deal a stronger blow than before, she saw behind her Monsieur de
Merret. She fainted away.
"'Lay madame on her bed,' said he coldly.
"Foreseeing what would certainly happen in his absence, he had laid
this trap for his wife; he had merely written to the Maire and sent for
Duvivier. The jeweler arrived just as the disorder in the room had been
repaired.
"'Duvivier,' asked Monsieur de Merret, 'did not you buy some crucifixes
of the Spaniards who passed through the town?'
"'No, monsieur.'
"'Very good; thank you,' said he, flashing a tiger's glare at his wife.
'Jean,' he added, turning to his confidential valet, 'you can serve my
meals here in Madame de Merret's room. She is ill, and I shall not leave
her till she recovers.'
"The cruel man remained in his wife's room for twenty days. During
the earlier time, when there was some little noise in the closet,
and Josephine wanted to intercede for the dying man, he said, without
allowing her to utter a word, 'You swore on the Cross that there was no
one there.'"
After this story all the ladies rose from table, and thus the spell
under which Bianchon had held them was broken. But there were some among
them who had almost shivered at the last words.
ADDENDUM
The following personage appears in other stories of the Human Comedy.
Bianchon, Horace
Father Goriot
The Atheist's Mass
Cesar Birotteau
The Commission in Lunacy
Lost Illusions
A Distinguished Provincial at Paris
A Bachelor's Establishment
The Secrets of a Princess
The Government Clerks
Pierrette
A Study of Woman
Scenes from a Courtesan's Life
Honorine
The Seamy Side of History
The Magic Skin
A Second Home
A Prince of Bohemia
Letters of Two Brides
The Muse of the Department
The Imaginary Mistress
The Middle Classes
Cousin Betty
The Country Parson
In addition, M. Bianchon narrated the following:
Another Study of Woman
End of the Project Gutenberg EBook of La Grande Breteche, by Honore de Balzac
|
What is La Grande Breteche?
|
Abandon manor
| 8,139
|
narrativeqa
|
8k
|
Produced by John Bickers, and Dagny
LA GRANDE BRETECHE
(Sequel to "Another Study of Woman.")
By Honore De Balzac
Translated by Ellen Marriage and Clara Bell
LA GRANDE BRETECHE
"Ah! madame," replied the doctor, "I have some appalling stories in my
collection. But each one has its proper hour in a conversation--you know
the pretty jest recorded by Chamfort, and said to the Duc de Fronsac:
'Between your sally and the present moment lie ten bottles of
champagne.'"
"But it is two in the morning, and the story of Rosina has prepared us,"
said the mistress of the house.
"Tell us, Monsieur Bianchon!" was the cry on every side.
The obliging doctor bowed, and silence reigned.
"At about a hundred paces from Vendome, on the banks of the Loir," said
he, "stands an old brown house, crowned with very high roofs, and so
completely isolated that there is nothing near it, not even a fetid
tannery or a squalid tavern, such as are commonly seen outside small
towns. In front of this house is a garden down to the river, where the
box shrubs, formerly clipped close to edge the walks, now straggle
at their own will. A few willows, rooted in the stream, have grown
up quickly like an enclosing fence, and half hide the house. The
wild plants we call weeds have clothed the bank with their beautiful
luxuriance. The fruit-trees, neglected for these ten years past,
no longer bear a crop, and their suckers have formed a thicket. The
espaliers are like a copse. The paths, once graveled, are overgrown with
purslane; but, to be accurate there is no trace of a path.
"Looking down from the hilltop, to which cling the ruins of the old
castle of the Dukes of Vendome, the only spot whence the eye can
see into this enclosure, we think that at a time, difficult now to
determine, this spot of earth must have been the joy of some country
gentleman devoted to roses and tulips, in a word, to horticulture, but
above all a lover of choice fruit. An arbor is visible, or rather
the wreck of an arbor, and under it a table still stands not entirely
destroyed by time. At the aspect of this garden that is no more, the
negative joys of the peaceful life of the provinces may be divined as we
divine the history of a worthy tradesman when we read the epitaph on his
tomb. To complete the mournful and tender impressions which seize the
soul, on one of the walls there is a sundial graced with this homely
Christian motto, '_Ultimam cogita_.'
"The roof of this house is dreadfully dilapidated; the outside shutters
are always closed; the balconies are hung with swallows' nests; the
doors are for ever shut. Straggling grasses have outlined the flagstones
of the steps with green; the ironwork is rusty. Moon and sun, winter,
summer, and snow have eaten into the wood, warped the boards, peeled
off the paint. The dreary silence is broken only by birds and cats,
polecats, rats, and mice, free to scamper round, and fight, and eat each
other. An invisible hand has written over it all: 'Mystery.'
"If, prompted by curiosity, you go to look at this house from the
street, you will see a large gate, with a round-arched top; the children
have made many holes in it. I learned later that this door had been
blocked for ten years. Through these irregular breaches you will see
that the side towards the courtyard is in perfect harmony with the side
towards the garden. The same ruin prevails. Tufts of weeds outline
the paving-stones; the walls are scored by enormous cracks, and the
blackened coping is laced with a thousand festoons of pellitory. The
stone steps are disjointed; the bell-cord is rotten; the gutter-spouts
broken. What fire from heaven could have fallen there? By what decree
has salt been sown on this dwelling? Has God been mocked here? Or was
France betrayed? These are the questions we ask ourselves. Reptiles
crawl over it, but give no reply. This empty and deserted house is a
vast enigma of which the answer is known to none.
"It was formerly a little domain, held in fief, and is known as La
Grande Breteche. During my stay at Vendome, where Despleins had left me
in charge of a rich patient, the sight of this strange dwelling became
one of my keenest pleasures. Was it not far better than a ruin? Certain
memories of indisputable authenticity attach themselves to a ruin; but
this house, still standing, though being slowly destroyed by an avenging
hand, contained a secret, an unrevealed thought. At the very least,
it testified to a caprice. More than once in the evening I boarded the
hedge, run wild, which surrounded the enclosure. I braved scratches, I
got into this ownerless garden, this plot which was no longer public or
private; I lingered there for hours gazing at the disorder. I would not,
as the price of the story to which this strange scene no doubt was due,
have asked a single question of any gossiping native. On that spot I
wove delightful romances, and abandoned myself to little debauches of
melancholy which enchanted me. If I had known the reason--perhaps quite
commonplace--of this neglect, I should have lost the unwritten poetry
which intoxicated me. To me this refuge represented the most various
phases of human life, shadowed by misfortune; sometimes the peace of the
graveyard without the dead, who speak in the language of epitaphs; one
day I saw in it the home of lepers; another, the house of the Atridae;
but, above all, I found there provincial life, with its contemplative
ideas, its hour-glass existence. I often wept there, I never laughed.
"More than once I felt involuntary terrors as I heard overhead the dull
hum of the wings of some hurrying wood-pigeon. The earth is dank; you
must be on the watch for lizards, vipers, and frogs, wandering about
with the wild freedom of nature; above all, you must have no fear
of cold, for in a few moments you feel an icy cloak settle on your
shoulders, like the Commendatore's hand on Don Giovanni's neck.
"One evening I felt a shudder; the wind had turned an old rusty
weathercock, and the creaking sounded like a cry from the house, at
the very moment when I was finishing a gloomy drama to account for
this monumental embodiment of woe. I returned to my inn, lost in gloomy
thoughts. When I had supped, the hostess came into my room with an air
of mystery, and said, 'Monsieur, here is Monsieur Regnault.'
"'Who is Monsieur Regnault?'
"'What, sir, do you not know Monsieur Regnault?--Well, that's odd,' said
she, leaving the room.
"On a sudden I saw a man appear, tall, slim, dressed in black, hat
in hand, who came in like a ram ready to butt his opponent, showing a
receding forehead, a small pointed head, and a colorless face of the hue
of a glass of dirty water. You would have taken him for an usher. The
stranger wore an old coat, much worn at the seams; but he had a diamond
in his shirt frill, and gold rings in his ears.
"'Monsieur,' said I, 'whom have I the honor of addressing?'--He took a
chair, placed himself in front of my fire, put his hat on my table,
and answered while he rubbed his hands: 'Dear me, it is very
cold.--Monsieur, I am Monsieur Regnault.'
"I was encouraging myself by saying to myself, '_Il bondo cani!_ Seek!'
"'I am,' he went on, 'notary at Vendome.'
"'I am delighted to hear it, monsieur,' I exclaimed. 'But I am not in a
position to make a will for reasons best known to myself.'
"'One moment!' said he, holding up his hand as though to gain silence.
'Allow me, monsieur, allow me! I am informed that you sometimes go to
walk in the garden of la Grande Breteche.'
"'Yes, monsieur.'
"'One moment!' said he, repeating his gesture. 'That constitutes a
misdemeanor. Monsieur, as executor under the will of the late Comtesse
de Merret, I come in her name to beg you to discontinue the practice.
One moment! I am not a Turk, and do not wish to make a crime of it. And
besides, you are free to be ignorant of the circumstances which
compel me to leave the finest mansion in Vendome to fall into ruin.
Nevertheless, monsieur, you must be a man of education, and you should
know that the laws forbid, under heavy penalties, any trespass on
enclosed property. A hedge is the same as a wall. But, the state in
which the place is left may be an excuse for your curiosity. For my
part, I should be quite content to make you free to come and go in the
house; but being bound to respect the will of the testatrix, I have
the honor, monsieur, to beg that you will go into the garden no more.
I myself, monsieur, since the will was read, have never set foot in the
house, which, as I had the honor of informing you, is part of the estate
of the late Madame de Merret. We have done nothing there but verify the
number of doors and windows to assess the taxes I have to pay annually
out of the funds left for that purpose by the late Madame de Merret. Ah!
my dear sir, her will made a great commotion in the town.'
"The good man paused to blow his nose. I respected his volubility,
perfectly understanding that the administration of Madame de Merret's
estate had been the most important event of his life, his reputation,
his glory, his Restoration. As I was forced to bid farewell to my
beautiful reveries and romances, I was not to reject learning the truth on
official authority.
"'Monsieur,' said I, 'would it be indiscreet if I were to ask you the
reasons for such eccentricity?'
"At these words an expression, which revealed all the pleasure which
men feel who are accustomed to ride a hobby, overspread the lawyer's
countenance. He pulled up the collar of his shirt with an air, took out
his snuffbox, opened it, and offered me a pinch; on my refusing, he took
a large one. He was happy! A man who has no hobby does not know all
the good to be got out of life. A hobby is the happy medium between a
passion and a monomania. At this moment I understood the whole bearing
of Sterne's charming passion, and had a perfect idea of the delight with
which my uncle Toby, encouraged by Trim, bestrode his hobby-horse.
"'Monsieur,' said Monsieur Regnault, 'I was head-clerk in Monsieur
Roguin's office, in Paris. A first-rate house, which you may have heard
mentioned? No! An unfortunate bankruptcy made it famous.--Not having
money enough to purchase a practice in Paris at the price to which they
were run up in 1816, I came here and bought my predecessor's business.
I had relations in Vendome; among others, a wealthy aunt, who allowed
me to marry her daughter.--Monsieur,' he went on after a little pause,
'three months after being licensed by the Keeper of the Seals, one
evening, as I was going to bed--it was before my marriage--I was sent
for by Madame la Comtesse de Merret, to her Chateau of Merret. Her maid,
a good girl, who is now a servant in this inn, was waiting at my door
with the Countess' own carriage. Ah! one moment! I ought to tell you
that Monsieur le Comte de Merret had gone to Paris to die two months
before I came here. He came to a miserable end, flinging himself into
every kind of dissipation. You understand?
"'On the day when he left, Madame la Comtesse had quitted la Grand
Breteche, having dismantled it. Some people even say that she had
burnt all the furniture, the hangings--in short, all the chattels and
furniture whatever used in furnishing the premises now let by the
said M.--(Dear, what am I saying? I beg your pardon, I thought I was
dictating a lease.)--In short, that she burnt everything in the meadow
at Merret. Have you been to Merret, monsieur?--No,' said he, answering
himself, 'Ah, it is a very fine place.'
"'For about three months previously,' he went on, with a jerk of his
head, 'the Count and Countess had lived in a very eccentric way; they
admitted no visitors; Madame lived on the ground-floor, and Monsieur on
the first floor. When the Countess was left alone, she was never seen
excepting at church. Subsequently, at home, at the chateau, she refused
to see the friends, whether gentlemen or ladies, who went to call on
her. She was already very much altered when she left la Grande Breteche
to go to Merret. That dear lady--I say dear lady, for it was she who
gave me this diamond, but indeed I saw her but once--that kind lady was
very ill; she had, no doubt, given up all hope, for she died without
choosing to send for a doctor; indeed, many of our ladies fancied she
was not quite right in her head. Well, sir, my curiosity was strangely
excited by hearing that Madame de Merret had need of my services. Nor
was I the only person who took an interest in the affair. That very
night, though it was already late, all the town knew that I was going to
Merret.
"'The waiting-woman replied but vaguely to the questions I asked her on
the way; nevertheless, she told me that her mistress had received the
Sacrament in the course of the day at the hands of the Cure of Merret,
and seemed unlikely to live through the night. It was about eleven when
I reached the chateau. I went up the great staircase. After crossing
some large, lofty, dark rooms, diabolically cold and damp, I reached the
state bedroom where the Countess lay. From the rumors that were current
concerning this lady (monsieur, I should never end if I were to repeat
all the tales that were told about her), I had imagined her a coquette.
Imagine, then, that I had great difficulty in seeing her in the great
bed where she was lying. To be sure, to light this enormous room, with
old-fashioned heavy cornices, and so thick with dust that merely to see
it was enough to make you sneeze, she had only an old Argand lamp. Ah!
but you have not been to Merret. Well, the bed is one of those old world
beds, with a high tester hung with flowered chintz. A small table stood
by the bed, on which I saw an "Imitation of Christ," which, by the
way, I bought for my wife, as well as the lamp. There were also a deep
armchair for her confidential maid, and two small chairs. There was no
fire. That was all the furniture, not enough to fill ten lines in an
inventory.
"'My dear sir, if you had seen, as I then saw, that vast room, papered
and hung with brown, you would have felt yourself transported into a
scene of a romance. It was icy, nay more, funereal,' and he lifted his
hand with a theatrical gesture and paused.
"'By dint of seeking, as I approached the bed, at last I saw Madame de
Merret, under the glimmer of the lamp, which fell on the pillows.
Her face was as yellow as wax, and as narrow as two folded hands. The
Countess had a lace cap showing her abundant hair, but as white as linen
thread. She was sitting up in bed, and seemed to keep upright with
great difficulty. Her large black eyes, dimmed by fever, no doubt,
and half-dead already, hardly moved under the bony arch of her
eyebrows.--There,' he added, pointing to his own brow. 'Her forehead was
clammy; her fleshless hands were like bones covered with soft skin;
the veins and muscles were perfectly visible. She must have been very
handsome; but at this moment I was startled into an indescribable
emotion at the sight. Never, said those who wrapped her in her shroud,
had any living creature been so emaciated and lived. In short, it was
awful to behold! Sickness so consumed that woman, that she was no more
than a phantom. Her lips, which were pale violet, seemed to me not to
move when she spoke to me.
"'Though my profession has familiarized me with such spectacles, by
calling me not infrequently to the bedside of the dying to record their
last wishes, I confess that families in tears and the agonies I have
seen were as nothing in comparison with this lonely and silent woman in
her vast chateau. I heard not the least sound, I did not perceive the
movement which the sufferer's breathing ought to have given to the
sheets that covered her, and I stood motionless, absorbed in looking at
her in a sort of stupor. In fancy I am there still. At last her large
eyes moved; she tried to raise her right hand, but it fell back on the
bed, and she uttered these words, which came like a breath, for her
voice was no longer a voice: "I have waited for you with the greatest
impatience." A bright flush rose to her cheeks. It was a great effort to
her to speak.
"'"Madame," I began. She signed to me to be silent. At that moment
the old housekeeper rose and said in my ear, "Do not speak; Madame la
Comtesse is not in a state to bear the slightest noise, and what you say
might agitate her."
"'I sat down. A few instants after, Madame de Merret collected all her
remaining strength to move her right hand, and slipped it, not without
infinite difficulty, under the bolster; she then paused a moment. With
a last effort she withdrew her hand; and when she brought out a sealed
paper, drops of perspiration rolled from her brow. "I place my will in
your hands--Oh! God! Oh!" and that was all. She clutched a crucifix that
lay on the bed, lifted it hastily to her lips, and died.
"'The expression of her eyes still makes me shudder as I think of it.
She must have suffered much! There was joy in her last glance, and it
remained stamped on her dead eyes.
"'I brought away the will, and when it was opened I found that Madame de
Merret had appointed me her executor. She left the whole of her property
to the hospital at Vendome excepting a few legacies. But these were her
instructions as relating to la Grande Breteche: She ordered me to leave
the place, for fifty years counting from the day of her death, in the
state in which it might be at the time of her death, forbidding any one,
whoever he might be, to enter the apartments, prohibiting any repairs
whatever, and even settling a salary to pay watchmen if it were needful
to secure the absolute fulfilment of her intentions. At the expiration
of that term, if the will of the testatrix has been duly carried out,
the house is to become the property of my heirs, for, as you know, a
notary cannot take a bequest. Otherwise la Grande Breteche reverts to
the heirs-at-law, but on condition of fulfilling certain conditions
set forth in a codicil to the will, which is not to be opened till
the expiration of the said term of fifty years. The will has not been
disputed, so----' And without finishing his sentence, the lanky notary
looked at me with an air of triumph; I made him quite happy by offering
him my congratulations.
"'Monsieur,' I said in conclusion, 'you have so vividly impressed
me that I fancy I see the dying woman whiter than her sheets; her
glittering eyes frighten me; I shall dream of her to-night.--But you
must have formed some idea as to the instructions contained in that
extraordinary will.'
"'Monsieur,' said he, with comical reticence, 'I never allow myself
to criticise the conduct of a person who honors me with the gift of a
diamond.'
"However, I soon loosened the tongue of the discreet notary of Vendome,
who communicated to me, not without long digressions, the opinions of
the deep politicians of both sexes whose judgments are law in Vendome.
But these opinions were so contradictory, so diffuse, that I was
near falling asleep in spite of the interest I felt in this authentic
history. The notary's ponderous voice and monotonous accent, accustomed
no doubt to listen to himself and to make himself listened to by his
clients or fellow-townsmen, were too much for my curiosity. Happily, he
soon went away.
"'Ah, ha, monsieur,' said he on the stairs, 'a good many persons would
be glad to live five-and-forty years longer; but--one moment!' and he
laid the first finger of his right hand to his nostril with a cunning
look, as much as to say, 'Mark my words!--To last as long as that--as
long as that,' said he, 'you must not be past sixty now.'
"I closed my door, having been roused from my apathy by this last
speech, which the notary thought very funny; then I sat down in my
armchair, with my feet on the fire-dogs. I had lost myself in a romance
_a la_ Radcliffe, constructed on the juridical base given me by Monsieur
Regnault, when the door, opened by a woman's cautious hand, turned on
the hinges. I saw my landlady come in, a buxom, florid dame, always
good-humored, who had missed her calling in life. She was a Fleming, who
ought to have seen the light in a picture by Teniers.
"'Well, monsieur,' said she, 'Monsieur Regnault has no doubt been giving
you his history of la Grande Breteche?'
"'Yes, Madame Lepas.'
"'And what did he tell you?'
"I repeated in a few words the creepy and sinister story of Madame de
Merret. At each sentence my hostess put her head forward, looking at
me with an innkeeper's keen scrutiny, a happy compromise between the
instinct of a police constable, the astuteness of a spy, and the cunning
of a dealer.
"'My good Madame Lepas,' said I as I ended, 'you seem to know more about
it. Heh? If not, why have you come up to me?'
"'On my word, as an honest woman----'
"'Do not swear; your eyes are big with a secret. You knew Monsieur de
Merret; what sort of man was he?'
"'Monsieur de Merret--well, you see he was a man you never could see
the top of, he was so tall! A very good gentleman, from Picardy, and who
had, as we say, his head close to his cap. He paid for everything down,
so as never to have difficulties with any one. He was hot-tempered, you
see! All our ladies liked him very much.'
"'Because he was hot-tempered?' I asked her.
"'Well, may be,' said she; 'and you may suppose, sir, that a man had to
have something to show for a figurehead before he could marry Madame de
Merret, who, without any reflection on others, was the handsomest and
richest heiress in our parts. She had about twenty thousand francs
a year. All the town was at the wedding; the bride was pretty and
sweet-looking, quite a gem of a woman. Oh, they were a handsome couple
in their day!'
"'And were they happy together?'
"'Hm, hm! so-so--so far as can be guessed, for, as you may suppose, we
of the common sort were not hail-fellow-well-met with them.--Madame de
Merret was a kind woman and very pleasant, who had no doubt sometimes to
put up with her husband's tantrums. But though he was rather haughty, we
were fond of him. After all, it was his place to behave so. When a man
is a born nobleman, you see----'
"'Still, there must have been some catastrophe for Monsieur and Madame
de Merret to part so violently?'
"'I did not say there was any catastrophe, sir. I know nothing about
it.'
"'Indeed. Well, now, I am sure you know everything.'
"'Well, sir, I will tell you the whole story.--When I saw Monsieur
Regnault go up to see you, it struck me that he would speak to you about
Madame de Merret as having to do with la Grande Breteche. That put it
into my head to ask your advice, sir, seeming to me that you are a
man of good judgment and incapable of playing a poor woman like me
false--for I never did any one a wrong, and yet I am tormented by my
conscience. Up to now I have never dared to say a word to the people of
these parts; they are all chatter-mags, with tongues like knives. And
never till now, sir, have I had any traveler here who stayed so long in
the inn as you have, and to whom I could tell the history of the fifteen
thousand francs----'
"'My dear Madame Lepas, if there is anything in your story of a nature
to compromise me,' I said, interrupting the flow of her words, 'I would
not hear it for all the world.'
"'You need have no fears,' said she; 'you will see.'
"Her eagerness made me suspect that I was not the only person to whom
my worthy landlady had communicated the secret of which I was to be the
sole possessor, but I listened.
"'Monsieur,' said she, 'when the Emperor sent the Spaniards here,
prisoners of war and others, I was required to lodge at the charge
of the Government a young Spaniard sent to Vendome on parole.
Notwithstanding his parole, he had to show himself every day to the
sub-prefect. He was a Spanish grandee--neither more nor less. He had
a name in _os_ and _dia_, something like Bagos de Feredia. I wrote his
name down in my books, and you may see it if you like. Ah! he was a
handsome young fellow for a Spaniard, who are all ugly they say. He was
not more than five feet two or three in height, but so well made; and he
had little hands that he kept so beautifully! Ah! you should have
seen them. He had as many brushes for his hands as a woman has for her
toilet. He had thick, black hair, a flame in his eye, a somewhat coppery
complexion, but which I admired all the same. He wore the finest linen
I have ever seen, though I have had princesses to lodge here, and, among
others, General Bertrand, the Duc and Duchesse d'Abrantes, Monsieur
Descazes, and the King of Spain. He did not eat much, but he had such
polite and amiable ways that it was impossible to owe him a grudge for
that. Oh! I was very fond of him, though he did not say four words to me
in a day, and it was impossible to have the least bit of talk with him;
if he was spoken to, he did not answer; it is a way, a mania they all
have, it would seem.
"'He read his breviary like a priest, and went to mass and all the
services quite regularly. And where did he post himself?--we found this
out later.--Within two yards of Madame de Merret's chapel. As he took
that place the very first time he entered the church, no one imagined
that there was any purpose in it. Besides, he never raised his nose
above his book, poor young man! And then, monsieur, of an evening he
went for a walk on the hill among the ruins of the old castle. It was
his only amusement, poor man; it reminded him of his native land. They
say that Spain is all hills!
"'One evening, a few days after he was sent here, he was out very late.
I was rather uneasy when he did not come in till just on the stroke of
midnight; but we all got used to his whims; he took the key of the door,
and we never sat up for him. He lived in a house belonging to us in the
Rue des Casernes. Well, then, one of our stable-boys told us one evening
that, going down to wash the horses in the river, he fancied he had seen
the Spanish Grandee swimming some little way off, just like a fish. When
he came in, I told him to be careful of the weeds, and he seemed put out
at having been seen in the water.
"'At last, monsieur, one day, or rather one morning, we did not find
him in his room; he had not come back. By hunting through his things, I
found a written paper in the drawer of his table, with fifty pieces of
Spanish gold of the kind they call doubloons, worth about five thousand
francs; and in a little sealed box ten thousand francs worth of
diamonds. The paper said that in case he should not return, he left us
this money and these diamonds in trust to found masses to thank God for
his escape and for his salvation.
"'At that time I still had my husband, who ran off in search of him.
And this is the queer part of the story: he brought back the Spaniard's
clothes, which he had found under a big stone on a sort of breakwater
along the river bank, nearly opposite la Grande Breteche. My husband
went so early that no one saw him. After reading the letter, he burnt
the clothes, and, in obedience to Count Feredia's wish, we announced
that he had escaped.
"'The sub-prefect set all the constabulary at his heels; but, pshaw! he
was never caught. Lepas believed that the Spaniard had drowned himself.
I, sir, have never thought so; I believe, on the contrary, that he had
something to do with the business about Madame de Merret, seeing that
Rosalie told me that the crucifix her mistress was so fond of that she
had it buried with her, was made of ebony and silver; now in the early
days of his stay here, Monsieur Feredia had one of ebony and silver
which I never saw later.--And now, monsieur, do not you say that I need
have no remorse about the Spaniard's fifteen thousand francs? Are they
not really and truly mine?'
"'Certainly.--But have you never tried to question Rosalie?' said I.
"'Oh, to be sure I have, sir. But what is to be done? That girl is like
a wall. She knows something, but it is impossible to make her talk.'
"After chatting with me for a few minutes, my hostess left me a prey
to vague and sinister thoughts, to romantic curiosity, and a religious
dread, not unlike the deep emotion which comes upon us when we go into a
dark church at night and discern a feeble light glimmering under a lofty
vault--a dim figure glides across--the sweep of a gown or of a priest's
cassock is audible--and we shiver! La Grande Breteche, with its rank
grasses, its shuttered windows, its rusty iron-work, its locked doors,
its deserted rooms, suddenly rose before me in fantastic vividness. I
tried to get into the mysterious dwelling to search out the heart of
this solemn story, this drama which had killed three persons.
"Rosalie became in my eyes the most interesting being in Vendome. As
I studied her, I detected signs of an inmost thought, in spite of the
blooming health that glowed in her dimpled face. There was in her soul
some element of ruth or of hope; her manner suggested a secret, like
the expression of devout souls who pray in excess, or of a girl who has
killed her child and for ever hears its last cry. Nevertheless, she was
simple and clumsy in her ways; her vacant smile had nothing criminal
in it, and you would have pronounced her innocent only from seeing the
large red and blue checked kerchief that covered her stalwart bust,
tucked into the tight-laced bodice of a lilac- and white-striped gown.
'No,' said I to myself, 'I will not quit Vendome without knowing the
whole history of la Grande Breteche. To achieve this end, I will make
love to Rosalie if it proves necessary.'
"'Rosalie!' said I one evening.
"'Your servant, sir?'
"'You are not married?' She started a little.
"'Oh! there is no lack of men if ever I take a fancy to be miserable!'
she replied, laughing. She got over her agitation at once; for every
woman, from the highest lady to the inn-servant inclusive, has a native
presence of mind.
"'Yes; you are fresh and good-looking enough never to lack lovers! But
tell me, Rosalie, why did you become an inn-servant on leaving Madame de
Merret? Did she not leave you some little annuity?'
"'Oh yes, sir. But my place here is the best in all the town of
Vendome.'
"This reply was such an one as judges and attorneys call evasive.
Rosalie, as it seemed to me, held in this romantic affair the place of
the middle square of the chess-board: she was at the very centre of the
interest and of the truth; she appeared to me to be tied into the knot
of it. It was not a case for ordinary love-making; this girl contained
the last chapter of a romance, and from that moment all my attentions
were devoted to Rosalie. By dint of studying the girl, I observed in
her, as in every woman whom we make our ruling thought, a variety of
good qualities; she was clean and neat; she was handsome, I need not
say; she soon was possessed of every charm that desire can lend to a
woman in whatever rank of life. A fortnight after the notary's visit,
one evening, or rather one morning, in the small hours, I said to
Rosalie:
"'Come, tell me all you know about Madame de Merret.'
"'Oh!' she said, 'I will tell you; but keep the secret carefully.'
"'All right, my child; I will keep all your secrets with a thief's
honor, which is the most loyal known.'
"'If it is all the same to you,' said she, 'I would rather it should be
with your own.'
"Thereupon she set her head-kerchief straight, and settled herself to
tell the tale; for there is no doubt a particular attitude of confidence
and security is necessary to the telling of a narrative. The best tales
are told at a certain hour--just as we are all here at table. No one
ever told a story well standing up, or fasting.
"If I were to reproduce exactly Rosalie's diffuse eloquence, a whole
volume would scarcely contain it. Now, as the event of which she gave me
a confused account stands exactly midway between the notary's gossip and
that of Madame Lepas, as precisely as the middle term of a rule-of-three
sum stands between the first and third, I have only to relate it in as
few words as may be. I shall therefore be brief.
"The room at la Grande Breteche in which Madame de Merret slept was on
the ground floor; a little cupboard in the wall, about four feet deep,
served her to hang her dresses in. Three months before the evening of
which I have to relate the events, Madame de Merret had been seriously
ailing, so much so that her husband had left her to herself, and had his
own bedroom on the first floor. By one of those accidents which it is
impossible to foresee, he came in that evening two hours later than
usual from the club, where he went to read the papers and talk politics
with the residents in the neighborhood. His wife supposed him to have
come in, to be in bed and asleep. But the invasion of France had been
the subject of a very animated discussion; the game of billiards had
waxed vehement; he had lost forty francs, an enormous sum at Vendome,
where everybody is thrifty, and where social habits are restrained
within the bounds of a simplicity worthy of all praise, and the
foundation perhaps of a form of true happiness which no Parisian would
care for.
"For some time past Monsieur de Merret had been satisfied to ask Rosalie
whether his wife was in bed; on the girl's replying always in the
affirmative, he at once went to his own room, with the good faith that
comes of habit and confidence. But this evening, on coming in, he took
it into his head to go to see Madame de Merret, to tell her of his
ill-luck, and perhaps to find consolation. During dinner he had observed
that his wife was very becomingly dressed; he reflected as he came
home from the club that his wife was certainly much better, that
convalescence had improved her beauty, discovering it, as husbands
discover everything, a little too late. Instead of calling Rosalie,
who was in the kitchen at the moment watching the cook and the coachman
playing a puzzling hand at cards, Monsieur de Merret made his way to his
wife's room by the light of his lantern, which he set down at the lowest
step of the stairs. His step, easy to recognize, rang under the vaulted
passage.
"At the instant when the gentleman turned the key to enter his wife's
room, he fancied he heard the door shut of the closet of which I have
spoken; but when he went in, Madame de Merret was alone, standing in
front of the fireplace. The unsuspecting husband fancied that Rosalie
was in the cupboard; nevertheless, a doubt, ringing in his ears like a
peal of bells, put him on his guard; he looked at his wife, and read in
her eyes an indescribably anxious and haunted expression.
"'You are very late,' said she.--Her voice, usually so clear and sweet,
struck him as being slightly husky.
"Monsieur de Merret made no reply, for at this moment Rosalie came in.
This was like a thunder-clap. He walked up and down the room, going from
one window to another at a regular pace, his arms folded.
"'Have you had bad news, or are you ill?' his wife asked him timidly,
while Rosalie helped her to undress. He made no reply.
"'You can go, Rosalie,' said Madame de Merret to her maid; 'I can put in
my curl-papers myself.'--She scented disaster at the mere aspect of her
husband's face, and wished to be alone with him. As soon as Rosalie
was gone, or supposed to be gone, for she lingered a few minutes in the
passage, Monsieur de Merret came and stood facing his wife, and said
coldly, 'Madame, there is some one in your cupboard!' She looked at her
husband calmly, and replied quite simply, 'No, monsieur.'
"This 'No' wrung Monsieur de Merret's heart; he did not believe it; and
yet his wife had never appeared purer or more saintly than she seemed
to be at this moment. He rose to go and open the closet door. Madame de
Merret took his hand, stopped him, looked at him sadly, and said in a
voice of strange emotion, 'Remember, if you should find no one there,
everything must be at an end between you and me.'
"The extraordinary dignity of his wife's attitude filled him with deep
esteem for her, and inspired him with one of those resolves which need
only a grander stage to become immortal.
"'No, Josephine,' he said, 'I will not open it. In either event we
should be parted for ever. Listen; I know all the purity of your soul, I
know you lead a saintly life, and would not commit a deadly sin to save
your life.'--At these words Madame de Merret looked at her husband with
a haggard stare.--'See, here is your crucifix,' he went on. 'Swear to
me before God that there is no one in there; I will believe you--I will
never open that door.'
"Madame de Merret took up the crucifix and said, 'I swear it.'
"'Louder,' said her husband; 'and repeat: "I swear before God that there
is nobody in that closet."' She repeated the words without flinching.
"'That will do,' said Monsieur de Merret coldly. After a moment's
silence: 'You have there a fine piece of work which I never saw before,'
said he, examining the crucifix of ebony and silver, very artistically
wrought.
"'I found it at Duvivier's; last year when that troop of Spanish
prisoners came through Vendome, he bought it of a Spanish monk.'
"'Indeed,' said Monsieur de Merret, hanging the crucifix on its nail;
and he rang the bell.
"He had to wait for Rosalie. Monsieur de Merret went forward quickly
to meet her, led her into the bay of the window that looked on to the
garden, and said to her in an undertone:
"'I know that Gorenflot wants to marry you, that poverty alone prevents
your setting up house, and that you told him you would not be his wife
till he found means to become a master mason.--Well, go and fetch him;
tell him to come here with his trowel and tools. Contrive to wake no one
in his house but himself. His reward will be beyond your wishes. Above
all, go out without saying a word--or else!' and he frowned.
"Rosalie was going, and he called her back. 'Here, take my latch-key,'
said he.
"'Jean!' Monsieur de Merret called in a voice of thunder down the
passage. Jean, who was both coachman and confidential servant, left his
cards and came.
"'Go to bed, all of you,' said his master, beckoning him to come close;
and the gentleman added in a whisper, 'When they are all asleep--mind,
_asleep_--you understand?--come down and tell me.'
"Monsieur de Merret, who had never lost sight of his wife while giving
his orders, quietly came back to her at the fireside, and began to tell
her the details of the game of billiards and the discussion at the club.
When Rosalie returned she found Monsieur and Madame de Merret conversing
amiably.
"Not long before this Monsieur de Merret had had new ceilings made to
all the reception-rooms on the ground floor. Plaster is very scarce at
Vendome; the price is enhanced by the cost of carriage; the gentleman
had therefore had a considerable quantity delivered to him, knowing
that he could always find purchasers for what might be left. It was this
circumstance which suggested the plan he carried out.
"'Gorenflot is here, sir,' said Rosalie in a whisper.
"'Tell him to come in,' said her master aloud.
"Madame de Merret turned paler when she saw the mason.
"'Gorenflot,' said her husband, 'go and fetch some bricks from the
coach-house; bring enough to wall up the door of this cupboard; you can
use the plaster that is left for cement.' Then, dragging Rosalie and the
workman close to him--'Listen, Gorenflot,' said he, in a low voice,
'you are to sleep here to-night; but to-morrow morning you shall have a
passport to take you abroad to a place I will tell you of. I will give
you six thousand francs for your journey. You must live in that town for
ten years; if you find you do not like it, you may settle in another,
but it must be in the same country. Go through Paris and wait there till
I join you. I will there give you an agreement for six thousand francs
more, to be paid to you on your return, provided you have carried out
the conditions of the bargain. For that price you are to keep perfect
silence as to what you have to do this night. To you, Rosalie, I will
secure ten thousand francs, which will not be paid to you till your
wedding day, and on condition of your marrying Gorenflot; but, to get
married, you must hold your tongue. If not, no wedding gift!'
"'Rosalie,' said Madame de Merret, 'come and brush my hair.'
"Her husband quietly walked up and down the room, keeping an eye on the
door, on the mason, and on his wife, but without any insulting display
of suspicion. Gorenflot could not help making some noise. Madame de
Merret seized a moment when he was unloading some bricks, and when her
husband was at the other end of the room to say to Rosalie: 'My dear
child, I will give you a thousand francs a year if only you will tell
Gorenflot to leave a crack at the bottom.' Then she added aloud quite
coolly: 'You had better help him.'
"Monsieur and Madame de Merret were silent all the time while Gorenflot
was walling up the door. This silence was intentional on the husband's
part; he did not wish to give his wife the opportunity of saying
anything with a double meaning. On Madame de Merret's side it was pride
or prudence. When the wall was half built up the cunning mason took
advantage of his master's back being turned to break one of the two
panes in the top of the door with a blow of his pick. By this Madame de
Merret understood that Rosalie had spoken to Gorenflot. They all three
then saw the face of a dark, gloomy-looking man, with black hair and
flaming eyes.
"Before her husband turned round again the poor woman had nodded to the
stranger, to whom the signal was meant to convey, 'Hope.'
"At four o'clock, as the day was dawning, for it was the month of
September, the work was done. The mason was placed in charge of Jean,
and Monsieur de Merret slept in his wife's room.
"Next morning when he got up he said with apparent carelessness, 'Oh,
by the way, I must go to the Maire for the passport.' He put on his hat,
took two or three steps towards the door, paused, and took the crucifix.
His wife was trembling with joy.
"'He will go to Duvivier's,' thought she.
"As soon as he had left, Madame de Merret rang for Rosalie, and then in
a terrible voice she cried: 'The pick! Bring the pick! and set to work.
I saw how Gorenflot did it yesterday; we shall have time to make a gap
and build it up again.'
"In an instant Rosalie had brought her mistress a sort of cleaver; she,
with a vehemence of which no words can give an idea, set to work to
demolish the wall. She had already got out a few bricks, when, turning
to deal a stronger blow than before, she saw behind her Monsieur de
Merret. She fainted away.
"'Lay madame on her bed,' said he coldly.
"Foreseeing what would certainly happen in his absence, he had laid
this trap for his wife; he had merely written to the Maire and sent for
Duvivier. The jeweler arrived just as the disorder in the room had been
repaired.
"'Duvivier,' asked Monsieur de Merret, 'did not you buy some crucifixes
of the Spaniards who passed through the town?'
"'No, monsieur.'
"'Very good; thank you,' said he, flashing a tiger's glare at his wife.
'Jean,' he added, turning to his confidential valet, 'you can serve my
meals here in Madame de Merret's room. She is ill, and I shall not leave
her till she recovers.'
"The cruel man remained in his wife's room for twenty days. During
the earlier time, when there was some little noise in the closet,
and Josephine wanted to intercede for the dying man, he said, without
allowing her to utter a word, 'You swore on the Cross that there was no
one there.'"
After this story all the ladies rose from table, and thus the spell
under which Bianchon had held them was broken. But there were some among
them who had almost shivered at the last words.
ADDENDUM
The following personage appears in other stories of the Human Comedy.
Bianchon, Horace
Father Goriot
The Atheist's Mass
Cesar Birotteau
The Commission in Lunacy
Lost Illusions
A Distinguished Provincial at Paris
A Bachelor's Establishment
The Secrets of a Princess
The Government Clerks
Pierrette
A Study of Woman
Scenes from a Courtesan's Life
Honorine
The Seamy Side of History
The Magic Skin
A Second Home
A Prince of Bohemia
Letters of Two Brides
The Muse of the Department
The Imaginary Mistress
The Middle Classes
Cousin Betty
The Country Parson
In addition, M. Bianchon narrated the following:
Another Study of Woman
End of the Project Gutenberg EBook of La Grande Breteche, by Honore de Balzac
Produced by Sue Asscher
The Witch of Atlas
by
Percy Bysshe Shelley
TO MARY
(ON HER OBJECTING TO THE FOLLOWING POEM, UPON THE
SCORE OF ITS CONTAINING NO HUMAN INTEREST).
1.
How, my dear Mary,--are you critic-bitten
(For vipers kill, though dead) by some review,
That you condemn these verses I have written,
Because they tell no story, false or true?
What, though no mice are caught by a young kitten, _5
May it not leap and play as grown cats do,
Till its claws come? Prithee, for this one time,
Content thee with a visionary rhyme.
2.
What hand would crush the silken-winged fly,
The youngest of inconstant April's minions, _10
Because it cannot climb the purest sky,
Where the swan sings, amid the sun's dominions?
Not thine. Thou knowest 'tis its doom to die,
When Day shall hide within her twilight pinions
The lucent eyes, and the eternal smile, _15
Serene as thine, which lent it life awhile.
3.
To thy fair feet a winged Vision came,
Whose date should have been longer than a day,
And o'er thy head did beat its wings for fame,
And in thy sight its fading plumes display; _20
The watery bow burned in the evening flame.
But the shower fell, the swift Sun went his way--
And that is dead.--O, let me not believe
That anything of mine is fit to live!
4.
Wordsworth informs us he was nineteen years _25
Considering and retouching Peter Bell;
Watering his laurels with the killing tears
Of slow, dull care, so that their roots to Hell
Might pierce, and their wide branches blot the spheres
Of Heaven, with dewy leaves and flowers; this well _30
May be, for Heaven and Earth conspire to foil
The over-busy gardener's blundering toil.
5.
My Witch indeed is not so sweet a creature
As Ruth or Lucy, whom his graceful praise
Clothes for our grandsons--but she matches Peter, _35
Though he took nineteen years, and she three days
In dressing. Light the vest of flowing metre
She wears; he, proud as dandy with his stays,
Has hung upon his wiry limbs a dress
Like King Lear's 'looped and windowed raggedness.' _40
6.
If you strip Peter, you will see a fellow
Scorched by Hell's hyperequatorial climate
Into a kind of a sulphureous yellow:
A lean mark, hardly fit to fling a rhyme at;
In shape a Scaramouch, in hue Othello. _45
If you unveil my Witch, no priest nor primate
Can shrive you of that sin,--if sin there be
In love, when it becomes idolatry.
THE WITCH OF ATLAS.
1.
Before those cruel Twins, whom at one birth
Incestuous Change bore to her father Time, _50
Error and Truth, had hunted from the Earth
All those bright natures which adorned its prime,
And left us nothing to believe in, worth
The pains of putting into learned rhyme,
A lady-witch there lived on Atlas' mountain _55
Within a cavern, by a secret fountain.
2.
Her mother was one of the Atlantides:
The all-beholding Sun had ne'er beholden
In his wide voyage o'er continents and seas
So fair a creature, as she lay enfolden _60
In the warm shadow of her loveliness;--
He kissed her with his beams, and made all golden
The chamber of gray rock in which she lay--
She, in that dream of joy, dissolved away.
3.
'Tis said, she first was changed into a vapour, _65
And then into a cloud, such clouds as flit,
Like splendour-winged moths about a taper,
Round the red west when the sun dies in it:
And then into a meteor, such as caper
On hill-tops when the moon is in a fit: _70
Then, into one of those mysterious stars
Which hide themselves between the Earth and Mars.
4.
Ten times the Mother of the Months had bent
Her bow beside the folding-star, and bidden
With that bright sign the billows to indent _75
The sea-deserted sand--like children chidden,
At her command they ever came and went--
Since in that cave a dewy splendour hidden
Took shape and motion: with the living form
Of this embodied Power, the cave grew warm. _80
5.
A lovely lady garmented in light
From her own beauty--deep her eyes, as are
Two openings of unfathomable night
Seen through a Temple's cloven roof--her hair
Dark--the dim brain whirls dizzy with delight. _85
Picturing her form; her soft smiles shone afar,
And her low voice was heard like love, and drew
All living things towards this wonder new.
6.
And first the spotted cameleopard came,
And then the wise and fearless elephant; _90
Then the sly serpent, in the golden flame
Of his own volumes intervolved;--all gaunt
And sanguine beasts her gentle looks made tame.
They drank before her at her sacred fount;
And every beast of beating heart grew bold, _95
Such gentleness and power even to behold.
7.
The brinded lioness led forth her young,
That she might teach them how they should forego
Their inborn thirst of death; the pard unstrung
His sinews at her feet, and sought to know _100
With looks whose motions spoke without a tongue
How he might be as gentle as the doe.
The magic circle of her voice and eyes
All savage natures did imparadise.
8.
And old Silenus, shaking a green stick _105
Of lilies, and the wood-gods in a crew
Came, blithe, as in the olive copses thick
Cicadae are, drunk with the noonday dew:
And Dryope and Faunus followed quick,
Teasing the God to sing them something new; _110
Till in this cave they found the lady lone,
Sitting upon a seat of emerald stone.
9.
And universal Pan, 'tis said, was there,
And though none saw him,--through the adamant
Of the deep mountains, through the trackless air, _115
And through those living spirits, like a want,
He passed out of his everlasting lair
Where the quick heart of the great world doth pant,
And felt that wondrous lady all alone,--
And she felt him, upon her emerald throne. _120
10.
And every nymph of stream and spreading tree,
And every shepherdess of Ocean's flocks,
Who drives her white waves over the green sea,
And Ocean with the brine on his gray locks,
And quaint Priapus with his company, _125
All came, much wondering how the enwombed rocks
Could have brought forth so beautiful a birth;--
Her love subdued their wonder and their mirth.
11.
The herdsmen and the mountain maidens came,
And the rude kings of pastoral Garamant-- _130
Their spirits shook within them, as a flame
Stirred by the air under a cavern gaunt:
Pigmies, and Polyphemes, by many a name,
Centaurs, and Satyrs, and such shapes as haunt
Wet clefts,--and lumps neither alive nor dead, _135
Dog-headed, bosom-eyed, and bird-footed.
12.
For she was beautiful--her beauty made
The bright world dim, and everything beside
Seemed like the fleeting image of a shade:
No thought of living spirit could abide, _140
Which to her looks had ever been betrayed,
On any object in the world so wide,
On any hope within the circling skies,
But on her form, and in her inmost eyes.
13.
Which when the lady knew, she took her spindle _145
And twined three threads of fleecy mist, and three
Long lines of light, such as the dawn may kindle
The clouds and waves and mountains with; and she
As many star-beams, ere their lamps could dwindle
In the belated moon, wound skilfully; _150
And with these threads a subtle veil she wove--
A shadow for the splendour of her love.
14.
The deep recesses of her odorous dwelling
Were stored with magic treasures--sounds of air,
Which had the power all spirits of compelling, _155
Folded in cells of crystal silence there;
Such as we hear in youth, and think the feeling
Will never die--yet ere we are aware,
The feeling and the sound are fled and gone,
And the regret they leave remains alone. _160
15.
And there lay Visions swift, and sweet, and quaint,
Each in its thin sheath, like a chrysalis,
Some eager to burst forth, some weak and faint
With the soft burthen of intensest bliss.
It was its work to bear to many a saint _165
Whose heart adores the shrine which holiest is,
Even Love's:--and others white, green, gray, and black,
And of all shapes--and each was at her beck.
16.
And odours in a kind of aviary
Of ever-blooming Eden-trees she kept, _170
Clipped in a floating net, a love-sick Fairy
Had woven from dew-beams while the moon yet slept;
As bats at the wired window of a dairy,
They beat their vans; and each was an adept,
When loosed and missioned, making wings of winds, _175
To stir sweet thoughts or sad, in destined minds.
17.
And liquors clear and sweet, whose healthful might
Could medicine the sick soul to happy sleep,
And change eternal death into a night
Of glorious dreams--or if eyes needs must weep, _180
Could make their tears all wonder and delight,
She in her crystal vials did closely keep:
If men could drink of those clear vials, 'tis said
The living were not envied of the dead.
18.
Her cave was stored with scrolls of strange device, _185
The works of some Saturnian Archimage,
Which taught the expiations at whose price
Men from the Gods might win that happy age
Too lightly lost, redeeming native vice;
And which might quench the Earth-consuming rage _190
Of gold and blood--till men should live and move
Harmonious as the sacred stars above;
19.
And how all things that seem untameable,
Not to be checked and not to be confined,
Obey the spells of Wisdom's wizard skill; _195
Time, earth, and fire--the ocean and the wind,
And all their shapes--and man's imperial will;
And other scrolls whose writings did unbind
The inmost lore of Love--let the profane
Tremble to ask what secrets they contain. _200
20.
And wondrous works of substances unknown,
To which the enchantment of her father's power
Had changed those ragged blocks of savage stone,
Were heaped in the recesses of her bower;
Carved lamps and chalices, and vials which shone _205
In their own golden beams--each like a flower,
Out of whose depth a fire-fly shakes his light
Under a cypress in a starless night.
21.
At first she lived alone in this wild home,
And her own thoughts were each a minister, _210
Clothing themselves, or with the ocean foam,
Or with the wind, or with the speed of fire,
To work whatever purposes might come
Into her mind; such power her mighty Sire
Had girt them with, whether to fly or run, _215
Through all the regions which he shines upon.
22.
The Ocean-nymphs and Hamadryades,
Oreads and Naiads, with long weedy locks,
Offered to do her bidding through the seas,
Under the earth, and in the hollow rocks, _220
And far beneath the matted roots of trees,
And in the gnarled heart of stubborn oaks,
So they might live for ever in the light
Of her sweet presence--each a satellite.
23.
'This may not be,' the wizard maid replied; _225
'The fountains where the Naiades bedew
Their shining hair, at length are drained and dried;
The solid oaks forget their strength, and strew
Their latest leaf upon the mountains wide;
The boundless ocean like a drop of dew _230
Will be consumed--the stubborn centre must
Be scattered, like a cloud of summer dust.
24.
'And ye with them will perish, one by one;--
If I must sigh to think that this shall be,
If I must weep when the surviving Sun _235
Shall smile on your decay--oh, ask not me
To love you till your little race is run;
I cannot die as ye must--over me
Your leaves shall glance--the streams in which ye dwell
Shall be my paths henceforth, and so--farewell!'-- _240
25.
She spoke and wept:--the dark and azure well
Sparkled beneath the shower of her bright tears,
And every little circlet where they fell
Flung to the cavern-roof inconstant spheres
And intertangled lines of light:--a knell _245
Of sobbing voices came upon her ears
From those departing Forms, o'er the serene
Of the white streams and of the forest green.
26.
All day the wizard lady sate aloof,
Spelling out scrolls of dread antiquity, _250
Under the cavern's fountain-lighted roof;
Or broidering the pictured poesy
Of some high tale upon her growing woof,
Which the sweet splendour of her smiles could dye
In hues outshining heaven--and ever she _255
Added some grace to the wrought poesy.
27.
While on her hearth lay blazing many a piece
Of sandal wood, rare gums, and cinnamon;
Men scarcely know how beautiful fire is--
Each flame of it is as a precious stone _260
Dissolved in ever-moving light, and this
Belongs to each and all who gaze upon.
The Witch beheld it not, for in her hand
She held a woof that dimmed the burning brand.
28.
This lady never slept, but lay in trance _265
All night within the fountain--as in sleep.
Its emerald crags glowed in her beauty's glance;
Through the green splendour of the water deep
She saw the constellations reel and dance
Like fire-flies--and withal did ever keep _270
The tenour of her contemplations calm,
With open eyes, closed feet, and folded palm.
29.
And when the whirlwinds and the clouds descended
From the white pinnacles of that cold hill,
She passed at dewfall to a space extended, _275
Where in a lawn of flowering asphodel
Amid a wood of pines and cedars blended,
There yawned an inextinguishable well
Of crimson fire--full even to the brim,
And overflowing all the margin trim. _280
30.
Within the which she lay when the fierce war
Of wintry winds shook that innocuous liquor
In many a mimic moon and bearded star
O'er woods and lawns;--the serpent heard it flicker
In sleep, and dreaming still, he crept afar-- _285
And when the windless snow descended thicker
Than autumn leaves, she watched it as it came
Melt on the surface of the level flame.
31.
She had a boat, which some say Vulcan wrought
For Venus, as the chariot of her star; _290
But it was found too feeble to be fraught
With all the ardours in that sphere which are,
And so she sold it, and Apollo bought
And gave it to this daughter: from a car
Changed to the fairest and the lightest boat _295
Which ever upon mortal stream did float.
32.
And others say, that, when but three hours old,
The first-born Love out of his cradle lept,
And clove dun Chaos with his wings of gold,
And like a horticultural adept, _300
Stole a strange seed, and wrapped it up in mould,
And sowed it in his mother's star, and kept
Watering it all the summer with sweet dew,
And with his wings fanning it as it grew.
33.
The plant grew strong and green, the snowy flower _305
Fell, and the long and gourd-like fruit began
To turn the light and dew by inward power
To its own substance; woven tracery ran
Of light firm texture, ribbed and branching, o'er
The solid rind, like a leaf's veined fan-- _310
Of which Love scooped this boat--and with soft motion
Piloted it round the circumfluous ocean.
34.
This boat she moored upon her fount, and lit
A living spirit within all its frame,
Breathing the soul of swiftness into it. _315
Couched on the fountain like a panther tame,
One of the twain at Evan's feet that sit--
Or as on Vesta's sceptre a swift flame--
Or on blind Homer's heart a winged thought,--
In joyous expectation lay the boat. _320
35.
Then by strange art she kneaded fire and snow
Together, tempering the repugnant mass
With liquid love--all things together grow
Through which the harmony of love can pass;
And a fair Shape out of her hands did flow-- _325
A living Image, which did far surpass
In beauty that bright shape of vital stone
Which drew the heart out of Pygmalion.
36.
A sexless thing it was, and in its growth
It seemed to have developed no defect _330
Of either sex, yet all the grace of both,--
In gentleness and strength its limbs were decked;
The bosom swelled lightly with its full youth,
The countenance was such as might select
Some artist that his skill should never die, _335
Imaging forth such perfect purity.
37.
From its smooth shoulders hung two rapid wings,
Fit to have borne it to the seventh sphere,
Tipped with the speed of liquid lightenings,
Dyed in the ardours of the atmosphere: _340
She led her creature to the boiling springs
Where the light boat was moored, and said: 'Sit here!'
And pointed to the prow, and took her seat
Beside the rudder, with opposing feet.
38.
And down the streams which clove those mountains vast, _345
Around their inland islets, and amid
The panther-peopled forests whose shade cast
Darkness and odours, and a pleasure hid
In melancholy gloom, the pinnace passed;
By many a star-surrounded pyramid _350
Of icy crag cleaving the purple sky,
And caverns yawning round unfathomably.
39.
The silver noon into that winding dell,
With slanted gleam athwart the forest tops,
Tempered like golden evening, feebly fell; _355
A green and glowing light, like that which drops
From folded lilies in which glow-worms dwell,
When Earth over her face Night's mantle wraps;
Between the severed mountains lay on high,
Over the stream, a narrow rift of sky. _360
40.
And ever as she went, the Image lay
With folded wings and unawakened eyes;
And o'er its gentle countenance did play
The busy dreams, as thick as summer flies,
Chasing the rapid smiles that would not stay, _365
And drinking the warm tears, and the sweet sighs
Inhaling, which, with busy murmur vain,
They had aroused from that full heart and brain.
41.
And ever down the prone vale, like a cloud
Upon a stream of wind, the pinnace went: _370
Now lingering on the pools, in which abode
The calm and darkness of the deep content
In which they paused; now o'er the shallow road
Of white and dancing waters, all besprent
With sand and polished pebbles:--mortal boat _375
In such a shallow rapid could not float.
42.
And down the earthquaking cataracts which shiver
Their snow-like waters into golden air,
Or under chasms unfathomable ever
Sepulchre them, till in their rage they tear _380
A subterranean portal for the river,
It fled--the circling sunbows did upbear
Its fall down the hoar precipice of spray,
Lighting it far upon its lampless way.
43.
And when the wizard lady would ascend _385
The labyrinths of some many-winding vale,
Which to the inmost mountain upward tend--
She called 'Hermaphroditus!'--and the pale
And heavy hue which slumber could extend
Over its lips and eyes, as on the gale _390
A rapid shadow from a slope of grass,
Into the darkness of the stream did pass.
44.
And it unfurled its heaven-coloured pinions,
With stars of fire spotting the stream below;
And from above into the Sun's dominions _395
Flinging a glory, like the golden glow
In which Spring clothes her emerald-winged minions,
All interwoven with fine feathery snow
And moonlight splendour of intensest rime,
With which frost paints the pines in winter time. _400
45.
And then it winnowed the Elysian air
Which ever hung about that lady bright,
With its aethereal vans--and speeding there,
Like a star up the torrent of the night,
Or a swift eagle in the morning glare _405
Breasting the whirlwind with impetuous flight,
The pinnace, oared by those enchanted wings,
Clove the fierce streams towards their upper springs.
46.
The water flashed, like sunlight by the prow
Of a noon-wandering meteor flung to Heaven; _410
The still air seemed as if its waves did flow
In tempest down the mountains; loosely driven
The lady's radiant hair streamed to and fro:
Beneath, the billows having vainly striven
Indignant and impetuous, roared to feel _415
The swift and steady motion of the keel.
47.
Or, when the weary moon was in the wane,
Or in the noon of interlunar night,
The lady-witch in visions could not chain
Her spirit; but sailed forth under the light _420
Of shooting stars, and bade extend amain
Its storm-outspeeding wings, the Hermaphrodite;
She to the Austral waters took her way,
Beyond the fabulous Thamondocana,--
48.
Where, like a meadow which no scythe has shaven, _425
Which rain could never bend, or whirl-blast shake,
With the Antarctic constellations paven,
Canopus and his crew, lay the Austral lake--
There she would build herself a windless haven
Out of the clouds whose moving turrets make _430
The bastions of the storm, when through the sky
The spirits of the tempest thundered by:
49.
A haven beneath whose translucent floor
The tremulous stars sparkled unfathomably,
And around which the solid vapours hoar, _435
Based on the level waters, to the sky
Lifted their dreadful crags, and like a shore
Of wintry mountains, inaccessibly
Hemmed in with rifts and precipices gray,
And hanging crags, many a cove and bay. _440
50.
And whilst the outer lake beneath the lash
Of the wind's scourge, foamed like a wounded thing,
And the incessant hail with stony clash
Ploughed up the waters, and the flagging wing
Of the roused cormorant in the lightning flash _445
Looked like the wreck of some wind-wandering
Fragment of inky thunder-smoke--this haven
Was as a gem to copy Heaven engraven,--
51.
On which that lady played her many pranks,
Circling the image of a shooting star, _450
Even as a tiger on Hydaspes' banks
Outspeeds the antelopes which speediest are,
In her light boat; and many quips and cranks
She played upon the water, till the car
Of the late moon, like a sick matron wan, _455
To journey from the misty east began.
52.
And then she called out of the hollow turrets
Of those high clouds, white, golden and vermilion,
The armies of her ministering spirits--
In mighty legions, million after million, _460
They came, each troop emblazoning its merits
On meteor flags; and many a proud pavilion
Of the intertexture of the atmosphere
They pitched upon the plain of the calm mere.
53.
They framed the imperial tent of their great Queen _465
Of woven exhalations, underlaid
With lambent lightning-fire, as may be seen
A dome of thin and open ivory inlaid
With crimson silk--cressets from the serene
Hung there, and on the water for her tread _470
A tapestry of fleece-like mist was strewn,
Dyed in the beams of the ascending moon.
54.
And on a throne o'erlaid with starlight, caught
Upon those wandering isles of aery dew,
Which highest shoals of mountain shipwreck not, _475
She sate, and heard all that had happened new
Between the earth and moon, since they had brought
The last intelligence--and now she grew
Pale as that moon, lost in the watery night--
And now she wept, and now she laughed outright. _480
55.
These were tame pleasures; she would often climb
The steepest ladder of the crudded rack
Up to some beaked cape of cloud sublime,
And like Arion on the dolphin's back
Ride singing through the shoreless air;--oft-time _485
Following the serpent lightning's winding track,
She ran upon the platforms of the wind,
And laughed to hear the fire-balls roar behind.
56.
And sometimes to those streams of upper air
Which whirl the earth in its diurnal round, _490
She would ascend, and win the spirits there
To let her join their chorus. Mortals found
That on those days the sky was calm and fair,
And mystic snatches of harmonious sound
Wandered upon the earth where'er she passed, _495
And happy thoughts of hope, too sweet to last.
57.
But her choice sport was, in the hours of sleep,
To glide adown old Nilus, where he threads
Egypt and Aethiopia, from the steep
Of utmost Axume, until he spreads, _500
Like a calm flock of silver-fleeced sheep,
His waters on the plain: and crested heads
Of cities and proud temples gleam amid,
And many a vapour-belted pyramid.
58.
By Moeris and the Mareotid lakes, _505
Strewn with faint blooms like bridal chamber floors,
Where naked boys bridling tame water-snakes,
Or charioteering ghastly alligators,
Had left on the sweet waters mighty wakes
Of those huge forms--within the brazen doors _510
Of the great Labyrinth slept both boy and beast,
Tired with the pomp of their Osirian feast.
59.
And where within the surface of the river
The shadows of the massy temples lie,
And never are erased--but tremble ever _515
Like things which every cloud can doom to die,
Through lotus-paven canals, and wheresoever
The works of man pierced that serenest sky
With tombs, and towers, and fanes, 'twas her delight
To wander in the shadow of the night. _520
60.
With motion like the spirit of that wind
Whose soft step deepens slumber, her light feet
Passed through the peopled haunts of humankind.
Scattering sweet visions from her presence sweet,
Through fane, and palace-court, and labyrinth mined _525
With many a dark and subterranean street
Under the Nile, through chambers high and deep
She passed, observing mortals in their sleep.
61.
A pleasure sweet doubtless it was to see
Mortals subdued in all the shapes of sleep. _530
Here lay two sister twins in infancy;
There, a lone youth who in his dreams did weep;
Within, two lovers linked innocently
In their loose locks which over both did creep
Like ivy from one stem;--and there lay calm _535
Old age with snow-bright hair and folded palm.
62.
But other troubled forms of sleep she saw,
Not to be mirrored in a holy song--
Distortions foul of supernatural awe,
And pale imaginings of visioned wrong; _540
And all the code of Custom's lawless law
Written upon the brows of old and young:
'This,' said the wizard maiden, 'is the strife
Which stirs the liquid surface of man's life.'
63.
And little did the sight disturb her soul.-- _545
We, the weak mariners of that wide lake
Where'er its shores extend or billows roll,
Our course unpiloted and starless make
O'er its wild surface to an unknown goal:--
But she in the calm depths her way could take, _550
Where in bright bowers immortal forms abide
Beneath the weltering of the restless tide.
64.
And she saw princes couched under the glow
Of sunlike gems; and round each temple-court
In dormitories ranged, row after row, _555
She saw the priests asleep--all of one sort--
For all were educated to be so.--
The peasants in their huts, and in the port
The sailors she saw cradled on the waves,
And the dead lulled within their dreamless graves. _560
65.
And all the forms in which those spirits lay
Were to her sight like the diaphanous
Veils, in which those sweet ladies oft array
Their delicate limbs, who would conceal from us
Only their scorn of all concealment: they _565
Move in the light of their own beauty thus.
But these and all now lay with sleep upon them,
And little thought a Witch was looking on them.
66.
She, all those human figures breathing there,
Beheld as living spirits--to her eyes _570
The naked beauty of the soul lay bare,
And often through a rude and worn disguise
She saw the inner form most bright and fair--
And then she had a charm of strange device,
Which, murmured on mute lips with tender tone, _575
Could make that spirit mingle with her own.
67.
Alas! Aurora, what wouldst thou have given
For such a charm when Tithon became gray?
Or how much, Venus, of thy silver heaven
Wouldst thou have yielded, ere Proserpina _580
Had half (oh! why not all?) the debt forgiven
Which dear Adonis had been doomed to pay,
To any witch who would have taught you it?
The Heliad doth not know its value yet.
68.
'Tis said in after times her spirit free _585
Knew what love was, and felt itself alone--
But holy Dian could not chaster be
Before she stooped to kiss Endymion,
Than now this lady--like a sexless bee
Tasting all blossoms, and confined to none, _590
Among those mortal forms, the wizard-maiden
Passed with an eye serene and heart unladen.
69.
To those she saw most beautiful, she gave
Strange panacea in a crystal bowl:--
They drank in their deep sleep of that sweet wave, _595
And lived thenceforward as if some control,
Mightier than life, were in them; and the grave
Of such, when death oppressed the weary soul,
Was as a green and overarching bower
Lit by the gems of many a starry flower. _600
70.
For on the night when they were buried, she
Restored the embalmers' ruining, and shook
The light out of the funeral lamps, to be
A mimic day within that deathy nook;
And she unwound the woven imagery _605
Of second childhood's swaddling bands, and took
The coffin, its last cradle, from its niche,
And threw it with contempt into a ditch.
71.
And there the body lay, age after age.
Mute, breathing, beating, warm, and undecaying, _610
Like one asleep in a green hermitage,
With gentle smiles about its eyelids playing,
And living in its dreams beyond the rage
Of death or life; while they were still arraying
In liveries ever new, the rapid, blind _615
And fleeting generations of mankind.
72.
And she would write strange dreams upon the brain
Of those who were less beautiful, and make
All harsh and crooked purposes more vain
Than in the desert is the serpent's wake _620
Which the sand covers--all his evil gain
The miser in such dreams would rise and shake
Into a beggar's lap;--the lying scribe
Would his own lies betray without a bribe.
73.
The priests would write an explanation full, _625
Translating hieroglyphics into Greek,
How the God Apis really was a bull,
And nothing more; and bid the herald stick
The same against the temple doors, and pull
The old cant down; they licensed all to speak _630
Whate'er they thought of hawks, and cats, and geese,
By pastoral letters to each diocese.
74.
The king would dress an ape up in his crown
And robes, and seat him on his glorious seat,
And on the right hand of the sunlike throne _635
Would place a gaudy mock-bird to repeat
The chatterings of the monkey.--Every one
Of the prone courtiers crawled to kiss the feet
Of their great Emperor, when the morning came,
And kissed--alas, how many kiss the same! _640
75.
The soldiers dreamed that they were blacksmiths, and
Walked out of quarters in somnambulism;
Round the red anvils you might see them stand
Like Cyclopses in Vulcan's sooty abysm,
Beating their swords to ploughshares;--in a band _645
The gaolers sent those of the liberal schism
Free through the streets of Memphis, much, I wis,
To the annoyance of king Amasis.
76.
And timid lovers who had been so coy,
They hardly knew whether they loved or not, _650
Would rise out of their rest, and take sweet joy,
To the fulfilment of their inmost thought;
And when next day the maiden and the boy
Met one another, both, like sinners caught,
Blushed at the thing which each believed was done _655
Only in fancy--till the tenth moon shone;
77.
And then the Witch would let them take no ill:
Of many thousand schemes which lovers find,
The Witch found one,--and so they took their fill
Of happiness in marriage warm and kind. _660
Friends who, by practice of some envious skill,
Were torn apart--a wide wound, mind from mind!--
She did unite again with visions clear
Of deep affection and of truth sincere.
78.
These were the pranks she played among the cities _665
Of mortal men, and what she did to Sprites
And Gods, entangling them in her sweet ditties
To do her will, and show their subtle sleights,
I will declare another time; for it is
A tale more fit for the weird winter nights _670
Than for these garish summer days, when we
Scarcely believe much more than we can see.
End of Project Gutenberg's The Witch of Atlas, by Percy Bysshe Shelley
Produced by John Bickers and Dagny
PIERRE GRASSOU
By Honore De Balzac
Translated by Katharine Prescott Wormeley
Dedication
To The Lieutenant-Colonel of Artillery, Periollas, As a Testimony of the
Affectionate Esteem of the Author,
De Balzac
PIERRE GRASSOU
Whenever you have gone to take a serious look at the exhibition of works
of sculpture and painting, such as it has been since the revolution
of 1830, have you not been seized by a sense of uneasiness, weariness,
sadness, at the sight of those long and over-crowded galleries? Since
1830, the true Salon no longer exists. The Louvre has again been taken
by assault,--this time by a populace of artists who have maintained
themselves in it.
In other days, when the Salon presented only the choicest works of art,
it conferred the highest honor on the creations there exhibited. Among
the two hundred selected paintings, the public could still choose: a
crown was awarded to the masterpiece by hands unseen. Eager, impassioned
discussions arose about some picture. The abuse showered on Delacroix,
on Ingres, contributed no less to their fame than the praises and
fanaticism of their adherents. To-day, neither the crowd nor the
criticism grows impassioned about the products of that bazaar. Forced to
make the selection for itself, which in former days the examining
jury made for it, the attention of the public is soon wearied and the
exhibition closes. Before the year 1817 the pictures admitted never went
beyond the first two columns of the long gallery of the old masters; but
in that year, to the great astonishment of the public, they filled the
whole space. Historical, high-art, genre paintings, easel pictures,
landscapes, flowers, animals, and water-colors,--these eight specialties
could surely not offer more than twenty pictures in one year worthy of
the eyes of the public, which, indeed, cannot give its attention to a
greater number of such works. The more the number of artists increases,
the more careful and exacting the jury of admission ought to be.
The true character of the Salon was lost as soon as it spread along
the galleries. The Salon should have remained within fixed limits of
inflexible proportions, where each distinct specialty could show its
masterpieces only. An experience of ten years has shown the excellence
of the former institution. Now, instead of a tournament, we have a mob;
instead of a noble exhibition, we have a tumultuous bazaar; instead of
a choice selection we have a chaotic mass. What is the result? A great
artist is swamped. Decamps' "Turkish Cafe," "Children at a Fountain,"
"Joseph," and "The Torture," would have redounded far more to his credit
if the four pictures had been exhibited in the great Salon with the
hundred good pictures of that year, than his twenty pictures could,
among three thousand others, jumbled together in six galleries.
By some strange contradiction, ever since the doors have been open to every
one there has been much talk of unknown and unrecognized genius. When,
twelve years earlier, Ingres' "Courtesan," and that of Sigalon, the
"Medusa" of Gericault, the "Massacre of Scio" by Delacroix, the "Baptism
of Henri IV." by Eugene Deveria, admitted by celebrated artists accused
of jealousy, showed the world, in spite of the denials of criticism,
that young and vigorous palettes existed, no such complaint was made.
Now, when the veriest dauber of canvas can send in his work, the whole
talk is of genius neglected! Where judgment no longer exists, there is
no longer anything judged. But whatever artists may be doing now, they
will come back in time to the examination and selection which presents
their works to the admiration of the crowd for whom they work. Without
selection by the Academy there will be no Salon, and without the Salon
art may perish.
Ever since the catalogue has grown into a book, many names have appeared
in it which still remain in their native obscurity, in spite of the ten
or a dozen pictures attached to them. Among these names perhaps the most
unknown to fame is that of an artist named Pierre Grassou, coming from
Fougeres, and called simply "Fougeres" among his brother-artists, who,
at the present moment holds a place, as the saying is, "in the sun," and
who suggested the rather bitter reflections by which this sketch of
his life is introduced,--reflections that are applicable to many other
individuals of the tribe of artists.
In 1832, Fougeres lived in the rue de Navarin, on the fourth floor of
one of those tall, narrow houses which resemble the obelisk of Luxor,
and possess an alley, a dark little stairway with dangerous turnings,
three windows only on each floor, and, within the building, a courtyard,
or, to speak more correctly, a square pit or well. Above the three or
four rooms occupied by Grassou of Fougeres was his studio, looking over
to Montmartre. This studio was painted in brick-color, for a background;
the floor was tinted brown and well frotted; each chair was furnished
with a bit of carpet bound round the edges; the sofa, simple enough, was
clean as that in the bedroom of some worthy bourgeoise. All these things
denoted the tidy ways of a small mind and the thrift of a poor man. A
bureau was there, in which to put away the studio implements, a table
for breakfast, a sideboard, a secretary; in short, all the articles
necessary to a painter, neatly arranged and very clean. The stove
participated in this Dutch cleanliness, which was all the more visible
because the pure and little changing light from the north flooded with
its cold clear beams the vast apartment. Fougeres, being merely a genre
painter, does not need the immense machinery and outfit which ruin
historical painters; he has never recognized within himself sufficient
faculty to attempt high-art, and he therefore clings to easel painting.
At the beginning of the month of December of that year, a season at
which the bourgeois of Paris conceive, periodically, the burlesque idea
of perpetuating their forms and figures already too bulky in themselves,
Pierre Grassou, who had risen early, prepared his palette, and lighted
his stove, was eating a roll steeped in milk, and waiting till the frost
on his windows had melted sufficiently to let the full light in. The
weather was fine and dry. At this moment the artist, who ate his bread
with that patient, resigned air that tells so much, heard and recognized
the step of a man who had upon his life the influence such men have
on the lives of nearly all artists,--the step of Elie Magus, a
picture-dealer, a usurer in canvas. The next moment Elie Magus entered
and found the painter in the act of beginning his work in the tidy
studio.
"How are you, old rascal?" said the painter.
Fougeres had the cross of the Legion of honor, and Elie Magus bought his
pictures at two and three hundred francs apiece, so he gave himself the
airs of a fine artist.
"Business is very bad," replied Elie. "You artists have such
pretensions! You talk of two hundred francs when you haven't put six
sous' worth of color on a canvas. However, you are a good fellow, I'll
say that. You are steady; and I've come to put a good bit of business in
your way."
"Timeo Danaos et dona ferentes," said Fougeres. "Do you know Latin?"
"No."
"Well, it means that the Greeks never proposed a good bit of business
to the Trojans without getting their fair share of it. In the olden time
they used to say, 'Take my horse.' Now we say, 'Take my bear.' Well,
what do you want, Ulysses-Lagingeole-Elie Magus?"
These words will give an idea of the mildness and wit with which
Fougeres employed what painters call studio fun.
"Well, I don't deny that you are to paint me two pictures for nothing."
"Oh! oh!"
"I'll leave you to do it, or not; I don't ask it. But you're an honest
man."
"Come, out with it!"
"Well, I'm prepared to bring you a father, mother, and only daughter."
"All for me?"
"Yes--they want their portraits taken. These bourgeois--they are crazy
about art--have never dared to enter a studio. The girl has a 'dot' of a
hundred thousand francs. You can paint all three,--perhaps they'll turn
out family portraits."
And with that the old Dutch log of wood who passed for a man and who was
called Elie Magus, interrupted himself to laugh an uncanny laugh which
frightened the painter. He fancied he heard Mephistopheles talking
marriage.
"Portraits bring five hundred francs apiece," went on Elie; "so you can
very well afford to paint me three pictures."
"True for you!" cried Fougeres, gleefully.
"And if you marry the girl, you won't forget me."
"Marry! I?" cried Pierre Grassou,--"I, who have a habit of sleeping
alone; and get up at cock-crow, and all my life arranged--"
"One hundred thousand francs," said Magus, "and a quiet girl, full of
golden tones, as you call 'em, like a Titian."
"What class of people are they?"
"Retired merchants; just now in love with art; have a country-house at
Ville d'Avray, and ten or twelve thousand francs a year."
"What business did they do?"
"Bottles."
"Now don't say that word; it makes me think of corks and sets my teeth
on edge."
"Am I to bring them?"
"Three portraits--I could put them in the Salon; I might go in for
portrait-painting. Well, yes!"
Old Elie descended the staircase to go in search of the Vervelle family.
To know to what extent this proposition would act upon the painter, and
what effect would be produced upon him by the Sieur and Dame Vervelle,
adorned by their only daughter, it is necessary to cast an eye on the
anterior life of Pierre Grassou of Fougeres.
When a pupil, Fougeres had studied drawing with Servin, who was
thought a great draughtsman in academic circles. After that he went to
Schinner's, to learn the secrets of the powerful and magnificent color
which distinguishes that master. Master and scholars were all discreet;
at any rate Pierre discovered none of their secrets. From there he went
to Sommervieux' atelier, to acquire that portion of the art of painting
which is called composition, but composition was shy and distant to him.
Then he tried to snatch from Decamps and Granet the mystery of their
interior effects. The two masters were not robbed. Finally Fougeres
ended his education with Duval-Lecamus. During these studies and
these different transformations Fougeres' habits and ways of life were
tranquil and moral to a degree that furnished matter of jesting to the
various ateliers where he sojourned; but everywhere he disarmed his
comrades by his modesty and by the patience and gentleness of a lamblike
nature. The masters, however, had no sympathy for the good lad; masters
prefer bright fellows, eccentric spirits, droll or fiery, or else gloomy
and deeply reflective, which argue future talent. Everything about
Pierre Grassou smacked of mediocrity. His nickname "Fougeres" (that
of the painter in the play of "The Eglantine") was the source of much
teasing; but, by force of circumstances, he accepted the name of the
town in which he had first seen light.
Grassou of Fougeres resembled his name. Plump and of medium height, he
had a dull complexion, brown eyes, black hair, a turned-up nose, rather
wide mouth, and long ears. His gentle, passive, and resigned air gave a
certain relief to these leading features of a physiognomy that was full
of health, but wanting in action. This young man, born to be a virtuous
bourgeois, having left his native place and come to Paris to be clerk
with a color-merchant (formerly of Mayenne and a distant connection of
the Orgemonts) made himself a painter simply by the fact of an obstinacy
which constitutes the Breton character. What he suffered, the manner in
which he lived during those years of study, God only knows. He suffered
as much as great men suffer when they are hounded by poverty and hunted
like wild beasts by the pack of commonplace minds and by troops of
vanities athirst for vengeance.
As soon as he thought himself able to fly on his own wings, Fougeres
took a studio in the upper part of the rue des Martyrs, where he began
to delve his way. He made his first appearance in 1819. The first
picture he presented to the jury of the Exhibition at the Louvre
represented a village wedding rather laboriously copied from Greuze's
picture. It was rejected. When Fougeres heard of the fatal decision,
he did not fall into one of those fits of epileptic self-love to which
strong natures give themselves up, and which sometimes end in challenges
sent to the director or the secretary of the Museum, or even by threats
of assassination. Fougeres quietly fetched his canvas, wrapped it in
a handkerchief, and brought it home, vowing in his heart that he would
still make himself a great painter. He placed his picture on the easel,
and went to one of his former masters, a man of immense talent,--to
Schinner, a kind and patient artist, whose triumph at that year's Salon
was complete. Fougeres asked him to come and criticise the rejected
work. The great painter left everything and went at once. When poor
Fougeres had placed the work before him Schinner, after a glance,
pressed Fougeres' hand.
"You are a fine fellow," he said; "you've a heart of gold, and I must
not deceive you. Listen; you are fulfilling all the promises you made in
the studios. When you find such things as that at the tip of your brush,
my good Fougeres, you had better leave colors with Brullon, and not take
the canvas of others. Go home early, put on your cotton night-cap, and
be in bed by nine o'clock. The next morning early go to some government
office, ask for a place, and give up art."
"My dear friend," said Fougeres, "my picture is already condemned; it is
not a verdict that I want of you, but the cause of that verdict."
"Well--you paint gray and sombre; you see nature being a crape veil;
your drawing is heavy, pasty; your composition is a medley of Greuze,
who only redeemed his defects by the qualities which you lack."
While detailing these faults of the picture Schinner saw on Fougeres'
face so deep an expression of sadness that he carried him off to dinner
and tried to console him. The next morning at seven o'clock Fougeres was
at his easel working over the rejected picture; he warmed the colors; he
made the corrections suggested by Schinner, he touched up his figures.
Then, disgusted with such patching, he carried the picture to Elie
Magus. Elie Magus, a sort of Dutch-Flemish-Belgian, had three reasons
for being what he became,--rich and avaricious. Coming last from
Bordeaux, he was just starting in Paris, selling old pictures and living
on the boulevard Bonne-Nouvelle. Fougeres, who relied on his palette
to go to the baker's, bravely ate bread and nuts, or bread and milk, or
bread and cherries, or bread and cheese, according to the seasons. Elie
Magus, to whom Pierre offered his first picture, eyed it for some time
and then gave him fifteen francs.
"With fifteen francs a year coming in, and a thousand francs for
expenses," said Fougeres, smiling, "a man will go fast and far."
Elie Magus made a gesture; he bit his thumbs, thinking that he might
have had that picture for five francs.
For several days Pierre walked down from the rue des Martyrs and
stationed himself at the corner of the boulevard opposite to Elie's
shop, whence his eye could rest upon his picture, which did not obtain
any notice from the eyes of the passers along the street. At the end of
a week the picture disappeared; Fougeres walked slowly up and approached
the dealer's shop in a lounging manner. The Jew was at his door.
"Well, I see you have sold my picture."
"No, here it is," said Magus; "I've framed it, to show it to some one
who fancies he knows about painting."
Fougeres had not the heart to return to the boulevard. He set about
another picture, and spent two months upon it,--eating mouse's meals and
working like a galley-slave.
One evening he went to the boulevard, his feet leading him fatefully to
the dealer's shop. His picture was not to be seen.
"I've sold your picture," said Elie Magus, seeing him.
"For how much?"
"I got back what I gave and a small interest. Make me some Flemish
interiors, a lesson of anatomy, landscapes, and such like, and I'll buy
them of you," said Elie.
Fougeres would fain have taken old Magus in his arms; he regarded him as
a father. He went home with joy in his heart; the great painter Schinner
was mistaken after all! In that immense city of Paris there were some
hearts that beat in unison with Pierre's; his talent was understood and
appreciated. The poor fellow of twenty-seven had the innocence of a lad
of sixteen. Another man, one of those distrustful, surly artists, would
have noticed the diabolical look on Elie's face and seen the twitching
of the hairs of his beard, the irony of his moustache, and the movement
of his shoulders which betrayed the satisfaction of Walter Scott's Jew
in swindling a Christian.
Fougeres marched along the boulevard in a state of joy which gave to his
honest face an expression of pride. He was like a schoolboy protecting
a woman. He met Joseph Bridau, one of his comrades, and one of those
eccentric geniuses destined to fame and sorrow. Joseph Bridau, who had,
to use his own expression, a few sous in his pocket, took Fougeres to
the Opera. But Fougeres didn't see the ballet, didn't hear the music; he
was imagining pictures, he was painting. He left Joseph in the middle
of the evening, and ran home to make sketches by lamp-light. He invented
thirty pictures, all reminiscence, and felt himself a man of genius. The
next day he bought colors, and canvases of various dimensions; he piled
up bread and cheese on his table, he filled a water-pot with water,
he laid in a provision of wood for his stove; then, to use a studio
expression, he dug at his pictures. He hired several models and Magus
lent him stuffs.
After two months' seclusion the Breton had finished four pictures. Again
he asked counsel of Schinner, this time adding Bridau to the invitation.
The two painters saw in three of these pictures a servile imitation
of Dutch landscapes and interiors by Metzu, in the fourth a copy of
Rembrandt's "Lesson of Anatomy."
"Still imitating!" said Schinner. "Ah! Fougeres can't manage to be
original."
"You ought to do something else than painting," said Bridau.
"What?" asked Fougeres.
"Fling yourself into literature."
Fougeres lowered his head like a sheep when it rains. Then he asked and
obtained certain useful advice, and retouched his pictures before taking
them to Elie Magus. Elie paid him twenty-five francs apiece. At that
price of course Fougeres earned nothing; neither did he lose, thanks to
his sober living. He made a few excursions to the boulevard to see what
became of his pictures, and there he underwent a singular hallucination.
His neat, clean paintings, hard as tin and shiny as porcelain, were
covered with a sort of mist; they looked like old daubs. Magus was out,
and Pierre could obtain no information on this phenomenon. He fancied
something was wrong with his eyes.
The painter went back to his studio and made more pictures. After seven
years of continued toil Fougeres managed to compose and execute quite
passable work. He did as well as any artist of the second class.
Elie bought and sold all the paintings of the poor Breton, who earned
laboriously about two thousand francs a year while he spent but twelve
hundred.
At the Exhibition of 1829, Leon de Lora, Schinner, and Bridau, who all
three occupied a great position and were, in fact, at the head of the
art movement, were filled with pity for the perseverance and the poverty
of their old friend; and they caused to be admitted into the grand salon
of the Exhibition, a picture by Fougeres. This picture, powerful in
interest but derived from Vigneron as to sentiment and from Dubufe's
first manner as to execution, represented a young man in prison, whose
hair was being cut around the nape of the neck. On one side was
a priest, on the other two women, one old, one young, in tears. A
sheriff's clerk was reading aloud a document. On a wretched table was a
meal, untouched. The light came in through the bars of a window near
the ceiling. It was a picture fit to make the bourgeois shudder, and
the bourgeois shuddered. Fougeres had simply been inspired by the
masterpiece of Gerard Douw; he had turned the group of the "Dropsical
Woman" toward the window, instead of presenting it full front. The
condemned man was substituted for the dying woman--same pallor, same
glance, same appeal to God. Instead of the Dutch doctor, he had painted
the cold, official figure of the sheriff's clerk attired in black; but
he had added an old woman to the young one of Gerard Douw. The cruelly
simple and good-humored face of the executioner completed and dominated
the group. This plagiarism, very cleverly disguised, was not discovered.
The catalogue contained the following:--
510. Grassou de Fougeres (Pierre), rue de Navarin, 2.
Death-toilet of a Chouan, condemned to execution in 1809.
Though wholly second-rate, the picture had immense success, for it
recalled the affair of the "chauffeurs," of Mortagne. A crowd collected
every day before the now fashionable canvas; even Charles X. paused to
look at it. "Madame," being told of the patient life of the poor Breton,
became enthusiastic over him. The Duc d'Orleans asked the price of
the picture. The clergy told Madame la Dauphine that the subject was
suggestive of good thoughts; and there was, in truth, a most satisfying
religious tone about it. Monseigneur the Dauphin admired the dust on
the stone-floor,--a huge blunder, by the way, for Fougeres had painted
greenish tones suggestive of mildew along the base of the walls.
"Madame" finally bought the picture for a thousand francs, and the
Dauphin ordered another like it. Charles X. gave the cross of the Legion
of honor to this son of a peasant who had fought for the royal cause
in 1799. (Joseph Bridau, the great painter, was not yet decorated.) The
minister of the Interior ordered two church pictures of Fougeres.
This Salon of 1829 was to Pierre Grassou his whole fortune, fame,
future, and life. Be original, invent, and you die by inches; copy,
imitate, and you'll live. After this discovery of a gold mine, Grassou
de Fougeres obtained his benefit of the fatal principle to which society
owes the wretched mediocrities to whom are intrusted in these days the
election of leaders in all social classes; who proceed, naturally, to
elect themselves and who wage a bitter war against all true talent. The
principle of election applied indiscriminately is false, and France will
some day abandon it.
Nevertheless the modesty, simplicity, and genuine surprise of the good
and gentle Fougeres silenced all envy and all recriminations. Besides,
he had on his side all of his clan who had succeeded, and all who
expected to succeed. Some persons, touched by the persistent energy of a
man whom nothing had discouraged, talked of Domenichino and said:--
"Perseverance in the arts should be rewarded. Grassou hasn't stolen his
successes; he has delved for ten years, the poor dear man!"
That exclamation of "poor dear man!" counted for half in the support
and the congratulations which the painter received. Pity sets up
mediocrities as envy pulls down great talents, and in equal numbers.
The newspapers, it is true, did not spare criticism, but the chevalier
Fougeres digested them as he had digested the counsel of his friends,
with angelic patience.
Possessing, by this time, fifteen thousand francs, laboriously earned,
he furnished an apartment and studio in the rue de Navarin, and painted
the picture ordered by Monseigneur the Dauphin, also the two church
pictures, and delivered them at the time agreed on, with a punctuality
that was very discomforting to the exchequer of the ministry, accustomed
to a different course of action. But--admire the good fortune of men who
are methodical--if Grassou, belated with his work, had been caught by
the revolution of July he would not have got his money.
By the time he was thirty-seven Fougeres had manufactured for Elie Magus
some two hundred pictures, all of them utterly unknown, by the help of
which he had attained to that satisfying manner, that point of execution
before which the true artist shrugs his shoulders and the bourgeoisie
worships. Fougeres was dear to friends for rectitude of ideas, for
steadiness of sentiment, absolute kindliness, and great loyalty; though
they had no esteem for his palette, they loved the man who held it.
"What a misfortune it is that Fougeres has the vice of painting!" said
his comrades.
But for all this, Grassou gave excellent counsel, like those
feuilletonists incapable of writing a book who know very well where a
book is wanting. There was this difference, however, between literary
critics and Fougeres; he was eminently sensitive to beauties; he felt
them, he acknowledged them, and his advice was instinct with a spirit
of justice that made the justness of his remarks acceptable. After
the revolution of July, Fougeres sent about ten pictures a year to the
Salon, of which the jury admitted four or five. He lived with the most
rigid economy, his household being managed solely by an old charwoman.
For all amusement he visited his friends, he went to see works of art,
he allowed himself a few little trips about France, and he planned to go
to Switzerland in search of inspiration. This detestable artist was an
excellent citizen; he mounted guard duly, went to reviews, and paid his
rent and provision-bills with bourgeois punctuality.
Having lived all his life in toil and poverty, he had never had the time
to love. Poor and a bachelor, until now he did not desire to complicate
his simple life. Incapable of devising any means of increasing his
little fortune, he carried, every three months, to his notary, Cardot,
his quarterly earnings and economies. When the notary had received
about three thousand francs he invested them in some first mortgage, the
interest of which he drew himself and added to the quarterly payments
made to him by Fougeres. The painter was awaiting the fortunate moment
when his property thus laid by would give him the imposing income of two
thousand francs, to allow himself the otium cum dignitate of the
artist and paint pictures; but oh! what pictures! true pictures! each a
finished picture! chouette, Koxnoff, chocnosoff! His future, his dreams
of happiness, the superlative of his hopes--do you know what it was?
To enter the Institute and obtain the grade of officer of the Legion
of honor; to sit down beside Schinner and Leon de Lora, to reach the
Academy before Bridau, to wear a rosette in his buttonhole! What a
dream! It is only commonplace men who think of everything.
Hearing the sound of several steps on the staircase, Fougeres rubbed up
his hair, buttoned his jacket of bottle-green velveteen, and was not a
little amazed to see, entering his doorway, a simpleton face vulgarly
called in studio slang a "melon." This fruit surmounted a pumpkin,
clothed in blue cloth adorned with a bunch of tintinnabulating baubles.
The melon puffed like a walrus; the pumpkin advanced on turnips,
improperly called legs. A true painter would have turned the little
bottle-vendor off at once, assuring him that he didn't paint vegetables.
This painter looked at his client without a smile, for Monsieur Vervelle
wore a three-thousand-franc diamond in the bosom of his shirt.
Fougeres glanced at Magus and said: "There's fat in it!" using a slang
term then much in vogue in the studios.
Hearing those words Monsieur Vervelle frowned. The worthy bourgeois drew
after him another complication of vegetables in the persons of his wife
and daughter. The wife had a fine veneer of mahogany on her face, and
in figure she resembled a cocoa-nut, surmounted by a head and tied in
around the waist. She pivoted on her legs, which were tap-rooted,
and her gown was yellow with black stripes. She proudly exhibited
unutterable mittens on a puffy pair of hands; the plumes of a
first-class funeral floated on an over-flowing bonnet; laces adorned
her shoulders, as round behind as they were before; consequently, the
spherical form of the cocoa-nut was perfect. Her feet, of a kind that
painters call abatis, rose above the varnished leather of the shoes in a
swelling that was some inches high. How the feet were ever got into the
shoes, no one knows.
Following these vegetable parents was a young asparagus, who presented
a tiny head with smoothly banded hair of the yellow-carroty tone that a
Roman adores, long, stringy arms, a fairly white skin with reddish spots
upon it, large innocent eyes, and white lashes, scarcely any brows, a
leghorn bonnet bound with white satin and adorned with two honest bows
of the same satin, hands virtuously red, and the feet of her mother. The
faces of these three beings wore, as they looked round the studio, an
air of happiness which bespoke in them a respectable enthusiasm for Art.
"So it is you, monsieur, who are going to take our likenesses?" said the
father, assuming a jaunty air.
"Yes, monsieur," replied Grassou.
"Vervelle, he has the cross!" whispered the wife to the husband while
the painter's back was turned.
"Should I be likely to have our portraits painted by an artist who
wasn't decorated?" returned the former bottle-dealer.
Elie Magus here bowed to the Vervelle family and went away. Grassou
accompanied him to the landing.
"There's no one but you who would fish up such whales."
"One hundred thousand francs of 'dot'!"
"Yes, but what a family!"
"Three hundred thousand francs of expectations, a house in the rue
Boucherat, and a country-house at Ville d'Avray!"
"Bottles and corks! bottles and corks!" said the painter; "they set my
teeth on edge."
"Safe from want for the rest of your days," said Elie Magus as he
departed.
That idea entered the head of Pierre Grassou as the daylight had burst
into his garret that morning.
While he posed the father of the young person, he thought the
bottle-dealer had a good countenance, and he admired the face full
of violent tones. The mother and daughter hovered about the easel,
marvelling at all his preparations; they evidently thought him a
demigod. This visible admiration pleased Fougeres. The golden calf threw
upon the family its fantastic reflections.
"You must earn lots of money; but of course you don't spend it as you
get it," said the mother.
"No, madame," replied the painter; "I don't spend it; I have not the
means to amuse myself. My notary invests my money; he knows what I have;
as soon as I have taken him the money I never think of it again."
"I've always been told," cried old Vervelle, "that artists were baskets
with holes in them."
"Who is your notary--if it is not indiscreet to ask?" said Madame
Vervelle.
"A good fellow, all round," replied Grassou. "His name is Cardot."
"Well, well! if that isn't a joke!" exclaimed Vervelle. "Cardot is our
notary too."
"Take care! don't move," said the painter.
"Do pray hold still, Antenor," said the wife. "If you move about you'll
make monsieur miss; you should just see him working, and then you'd
understand."
"Oh! why didn't you have me taught the arts?" said Mademoiselle Vervelle
to her parents.
"Virginie," said her mother, "a young person ought not to learn certain
things. When you are married--well, till then, keep quiet."
During this first sitting the Vervelle family became almost intimate
with the worthy artist. They were to come again two days later. As they
went away the father told Virginie to walk in front; but in spite of
this separation, she overheard the following words, which naturally
awakened her curiosity.
"Decorated--thirty-seven years old--an artist who gets orders--puts his
money with our notary. We'll consult Cardot. Hein! Madame de Fougeres!
not a bad name--doesn't look like a bad man either! One might prefer a
merchant; but before a merchant retires from business one can never know
what one's daughter may come to; whereas an economical artist--and then
you know we love Art--Well, we'll see!"
While the Vervelle family discussed Pierre Grassou, Pierre Grassou
discussed in his own mind the Vervelle family. He found it impossible to
stay peacefully in his studio, so he took a walk on the boulevard, and
looked at all the red-haired women who passed him. He made a series of
the oddest reasonings to himself: gold was the handsomest of metals; a
tawny yellow represented gold; the Romans were fond of red-haired women,
and he turned Roman, etc. After two years of marriage what man would
ever care about the color of his wife's hair? Beauty fades,--but
ugliness remains! Money is one-half of all happiness. That night when he
went to bed the painter had come to think Virginie Vervelle charming.
When the three Vervelles arrived on the day of the second sitting the
artist received them with smiles. The rascal had shaved and put on clean
linen; he had also arranged his hair in a pleasing manner, and chosen
a very becoming pair of trousers and red leather slippers with pointed
toes. The family replied with smiles as flattering as those of the
artist. Virginie became the color of her hair, lowered her eyes, and
turned aside her head to look at the sketches. Pierre Grassou thought
these little affectations charming, Virginie had such grace; happily she
didn't look like her father or her mother; but whom did she look like?
During this sitting there were little skirmishes between the family
and the painter, who had the audacity to call pere Vervelle witty. This
flattery brought the family on the double-quick to the heart of the
artist; he gave a drawing to the daughter, and a sketch to the mother.
"What! for nothing?" they said.
Pierre Grassou could not help smiling.
"You shouldn't give away your pictures in that way; they are money,"
said old Vervelle.
At the third sitting pere Vervelle mentioned a fine gallery of pictures
which he had in his country-house at Ville d'Avray--Rubens, Gerard Douw,
Mieris, Terburg, Rembrandt, Titian, Paul Potter, etc.
"Monsieur Vervelle has been very extravagant," said Madame Vervelle,
ostentatiously. "He has over one hundred thousand francs' worth of
pictures."
"I love Art," said the former bottle-dealer.
When Madame Vervelle's portrait was begun that of her husband was nearly
finished, and the enthusiasm of the family knew no bounds. The notary
had spoken in the highest praise of the painter. Pierre Grassou was, he
said, one of the most honest fellows on earth; he had laid by thirty-six
thousand francs; his days of poverty were over; he now saved about ten
thousand francs a year and capitalized the interest; in short, he was
incapable of making a woman unhappy. This last remark had enormous
weight in the scales. Vervelle's friends now heard of nothing but the
celebrated painter Fougeres.
The day on which Fougeres began the portrait of Mademoiselle Virginie,
he was virtually son-in-law to the Vervelle family. The three Vervelles
bloomed out in this studio, which they were now accustomed to consider
as one of their residences; there was to them an inexplicable attraction
in this clean, neat, pretty, and artistic abode. Abyssus abyssum, the
commonplace attracts the commonplace. Toward the end of the sitting the
stairway shook, the door was violently thrust open by Joseph Bridau; he
came like a whirlwind, his hair flying. He showed his grand haggard face
as he looked about him, casting everywhere the lightning of his glance;
then he walked round the whole studio, and returned abruptly to Grassou,
pulling his coat together over the gastric region, and endeavouring, but
in vain, to button it, the button mould having escaped from its capsule
of cloth.
"Wood is dear," he said to Grassou.
"Ah!"
"The British are after me" (slang term for creditors) "Gracious! do you
paint such things as that?"
"Hold your tongue!"
"Ah! to be sure, yes."
The Vervelle family, extremely shocked by this extraordinary apparition,
passed from its ordinary red to a cherry-red, two shades deeper.
"Brings in, hey?" continued Joseph. "Any shot in your locker?"
"How much do you want?"
"Five hundred. I've got one of those bull-dog dealers after me, and if
the fellow once gets his teeth in he won't let go while there's a bit of
me left. What a crew!"
"I'll write you a line for my notary."
"Have you got a notary?"
"Yes."
"That explains to me why you still make cheeks with pink tones like a
perfumer's sign."
Grassou could not help coloring, for Virginie was sitting.
"Take Nature as you find her," said the great painter, going on with his
lecture. "Mademoiselle is red-haired. Well, is that a sin? All things
are magnificent in painting. Put some vermillion on your palette, and
warm up those cheeks; touch in those little brown spots; come, butter it
well in. Do you pretend to have more sense than Nature?"
"Look here," said Fougeres, "take my place while I go and write that
note."
Vervelle rolled to the table and whispered in Grassou's ear:--
"Won't that country lout spoilt it?"
"If he would only paint the portrait of your Virginie it would be worth
a thousand times more than mine," replied Fougeres, vehemently.
Hearing that reply the bourgeois beat a quiet retreat to his wife, who
was stupefied by the invasion of this ferocious animal, and very uneasy
at his co-operation in her daughter's portrait.
"Here, follow these indications," said Bridau, returning the palette,
and taking the note. "I won't thank you. I can go back now to d'Arthez'
chateau, where I am doing a dining-room, and Leon de Lora the tops of
the doors--masterpieces! Come and see us."
And off he went without taking leave, having had enough of looking at
Virginie.
"Who is that man?" asked Madame Vervelle.
"A great artist," answered Grassou.
There was silence for a moment.
"Are you quite sure," said Virginie, "that he has done no harm to my
portrait? He frightened me."
"He has only done it good," replied Grassou.
"Well, if he is a great artist, I prefer a great artist like you," said
Madame Vervelle.
The ways of genius had ruffled up these orderly bourgeois.
The phase of autumn so pleasantly named "Saint Martin's summer" was
just beginning. With the timidity of a neophyte in presence of a man of
genius, Vervelle risked giving Fougeres an invitation to come out to
his country-house on the following Sunday. He knew, he said, how little
attraction a plain bourgeois family could offer to an artist.
"You artists," he continued, "want emotions, great scenes, and witty
talk; but you'll find good wines, and I rely on my collection of
pictures to compensate an artist like you for the bore of dining with
mere merchants."
This form of idolatry, which stroked his innocent self-love, was
charming to our poor Pierre Grassou, so little accustomed to such
compliments. The honest artist, that atrocious mediocrity, that heart
of gold, that loyal soul, that stupid draughtsman, that worthy fellow,
decorated by royalty itself with the Legion of honor, put himself under
arms to go out to Ville d'Avray and enjoy the last fine days of the
year. The painter went modestly by public conveyance, and he could not
but admire the beautiful villa of the bottle-dealer, standing in a park
of five acres at the summit of Ville d'Avray, commanding a noble view
of the landscape. Marry Virginie, and have that beautiful villa some day
for his own!
He was received by the Vervelles with an enthusiasm, a joy, a
kindliness, a frank bourgeois absurdity which confounded him. It was
indeed a day of triumph. The prospective son-in-law was marched about
the grounds on the nankeen-colored paths, all raked as they should be
for the steps of so great a man. The trees themselves looked brushed and
combed, and the lawns had just been mown. The pure country air wafted
to the nostrils a most enticing smell of cooking. All things about the
mansion seemed to say:
"We have a great artist among us."
Little old Vervelle himself rolled like an apple through his park, the
daughter meandered like an eel, the mother followed with dignified step.
These three beings never let go for one moment of Pierre Grassou
for seven hours. After dinner, the length of which equalled its
magnificence, Monsieur and Madame Vervelle reached the moment of their
grand theatrical effect,--the opening of the picture gallery illuminated
by lamps, the reflections of which were managed with the utmost care.
Three neighbours, also retired merchants, an old uncle (from whom were
expectations), an elderly Demoiselle Vervelle, and a number of other
guests invited to be present at this ovation to a great artist followed
Grassou into the picture gallery, all curious to hear his opinion of the
famous collection of pere Vervelle, who was fond of oppressing them with
the fabulous value of his paintings. The bottle-merchant seemed to have
the idea of competing with King Louis-Philippe and the galleries of
Versailles.
The pictures, magnificently framed, each bore labels on which was read
in black letters on a gold ground:
Rubens
Dance of fauns and nymphs
Rembrandt
Interior of a dissecting room. The physician van Tromp
instructing his pupils.
In all, there were one hundred and fifty pictures, varnished and dusted.
Some were covered with green baize curtains which were not undrawn in
presence of young ladies.
Pierre Grassou stood with arms pendent, gaping mouth, and no word upon
his lips as he recognized half his own pictures in these works of art.
He was Rubens, he was Rembrandt, Mieris, Metzu, Paul Potter, Gerard
Douw! He was twenty great masters all by himself.
"What is the matter? You've turned pale!"
"Daughter, a glass of water! quick!" cried Madame Vervelle. The painter
took pere Vervelle by the button of his coat and led him to a corner on
pretence of looking at a Murillo. Spanish pictures were then the rage.
"You bought your pictures from Elie Magus?"
"Yes, all originals."
"Between ourselves, tell me what he made you pay for those I shall point
out to you."
Together they walked round the gallery. The guests were amazed at the
gravity with which the artist proceeded, in company with the host, to
examine each picture.
"Three thousand francs," said Vervelle in a whisper, as they reached the
last, "but I tell everybody forty thousand."
"Forty thousand for a Titian!" said the artist, aloud. "Why, it is
nothing at all!"
"Didn't I tell you," said Vervelle, "that I had three hundred thousand
francs' worth of pictures?"
"I painted those pictures," said Pierre Grassou in Vervelle's ear, "and
I sold them one by one to Elie Magus for less than ten thousand francs
the whole lot."
"Prove it to me," said the bottle-dealer, "and I double my daughter's
'dot,' for if it is so, you are Rubens, Rembrandt, Titian, Gerard Douw!"
"And Magus is a famous picture-dealer!" said the painter, who now saw
the meaning of the misty and aged look imparted to his pictures in
Elie's shop, and the utility of the subjects the picture-dealer had
required of him.
Far from losing the esteem of his admiring bottle-merchant, Monsieur
de Fougeres (for so the family persisted in calling Pierre Grassou)
advanced so much that when the portraits were finished he presented them
gratuitously to his father-in-law, his mother-in-law and his wife.
At the present day, Pierre Grassou, who never misses exhibiting at the
Salon, passes in bourgeois regions for a fine portrait-painter. He earns
some twenty thousand francs a year and spoils a thousand francs' worth
of canvas. His wife has six thousand francs a year in dowry, and he
lives with his father-in-law. The Vervelles and the Grassous, who agree
delightfully, keep a carriage, and are the happiest people on earth.
Pierre Grassou never emerges from the bourgeois circle, in which he
is considered one of the greatest artists of the period. Not a family
portrait is painted between the barrier du Trone and the rue du Temple
that is not done by this great painter; none of them costs less than
five hundred francs. The great reason which the bourgeois families have
for employing him is this:--
"Say what you will of him, he lays by twenty thousand francs a year with
his notary."
As Grassou took a creditable part on the occasion of the riots of May
12th he was appointed an officer of the Legion of honor. He is a major
in the National Guard. The Museum of Versailles felt it incumbent to
order a battle-piece of so excellent a citizen, who thereupon walked
about Paris to meet his old comrades and have the happiness of saying to
them:--
"The King has given me an order for the Museum of Versailles."
Madame de Fougeres adores her husband, to whom she has presented two
children. This painter, a good father and a good husband, is unable to
eradicate from his heart a fatal thought, namely, that artists laugh at
his work; that his name is a term of contempt in the studios; and that
the feuilletons take no notice of his pictures. But he still works on;
he aims for the Academy, where, undoubtedly, he will enter. And--oh!
vengeance which dilates his heart!--he buys the pictures of celebrated
artists who are pinched for means, and he substitutes these true works
of art that are not his own for the wretched daubs in the collection at
Ville d'Avray.
There are many mediocrities more aggressive and more mischievous than
that of Pierre Grassou, who is, moreover, anonymously benevolent and
truly obliging.
ADDENDUM
The following personages appear in other stories of the Human Comedy.
Bridau, Joseph
The Purse
A Bachelor's Establishment
A Distinguished Provincial at Paris
A Start in Life
Modeste Mignon
Another Study of Woman
Letters of Two Brides
Cousin Betty
The Member for Arcis
Cardot (Parisian notary)
The Muse of the Department
A Man of Business
Jealousies of a Country Town
The Middle Classes
Cousin Pons
Grassou, Pierre
A Bachelor's Establishment
Cousin Betty
The Middle Classes
Cousin Pons
Lora, Leon de
The Unconscious Humorists
A Bachelor's Establishment
A Start in Life
Honorine
Cousin Betty
Beatrix
Magus, Elie
The Vendetta
A Marriage Settlement
A Bachelor's Establishment
Cousin Pons
Schinner, Hippolyte
The Purse
A Bachelor's Establishment
A Start in Life
Albert Savarus
The Government Clerks
Modeste Mignon
The Imaginary Mistress
The Unconscious Humorists
End of the Project Gutenberg EBook of Pierre Grassou, by Honore de Balzac
Produced by John Bickers, and Dagny
LA GRANDE BRETECHE
(Sequel to "Another Study of Woman.")
By Honore De Balzac
Translated by Ellen Marriage and Clara Bell
LA GRANDE BRETECHE
"Ah! madame," replied the doctor, "I have some appalling stories in my
collection. But each one has its proper hour in a conversation--you know
the pretty jest recorded by Chamfort, and said to the Duc de Fronsac:
'Between your sally and the present moment lie ten bottles of
champagne.'"
"But it is two in the morning, and the story of Rosina has prepared us,"
said the mistress of the house.
"Tell us, Monsieur Bianchon!" was the cry on every side.
The obliging doctor bowed, and silence reigned.
"At about a hundred paces from Vendome, on the banks of the Loir," said
he, "stands an old brown house, crowned with very high roofs, and so
completely isolated that there is nothing near it, not even a fetid
tannery or a squalid tavern, such as are commonly seen outside small
towns. In front of this house is a garden down to the river, where the
box shrubs, formerly clipped close to edge the walks, now straggle
at their own will. A few willows, rooted in the stream, have grown
up quickly like an enclosing fence, and half hide the house. The
wild plants we call weeds have clothed the bank with their beautiful
luxuriance. The fruit-trees, neglected for these ten years past,
no longer bear a crop, and their suckers have formed a thicket. The
espaliers are like a copse. The paths, once graveled, are overgrown with
purslane; but, to be accurate, there is no trace of a path.
"Looking down from the hilltop, to which cling the ruins of the old
castle of the Dukes of Vendome, the only spot whence the eye can
see into this enclosure, we think that at a time, difficult now to
determine, this spot of earth must have been the joy of some country
gentleman devoted to roses and tulips, in a word, to horticulture, but
above all a lover of choice fruit. An arbor is visible, or rather
the wreck of an arbor, and under it a table still stands not entirely
destroyed by time. At the aspect of this garden that is no more, the
negative joys of the peaceful life of the provinces may be divined as we
divine the history of a worthy tradesman when we read the epitaph on his
tomb. To complete the mournful and tender impressions which seize the
soul, on one of the walls there is a sundial graced with this homely
Christian motto, '_Ultimam cogita_.'
"The roof of this house is dreadfully dilapidated; the outside shutters
are always closed; the balconies are hung with swallows' nests; the
doors are for ever shut. Straggling grasses have outlined the flagstones
of the steps with green; the ironwork is rusty. Moon and sun, winter,
summer, and snow have eaten into the wood, warped the boards, peeled
off the paint. The dreary silence is broken only by birds and cats,
polecats, rats, and mice, free to scamper round, and fight, and eat each
other. An invisible hand has written over it all: 'Mystery.'
"If, prompted by curiosity, you go to look at this house from the
street, you will see a large gate, with a round-arched top; the children
have made many holes in it. I learned later that this door had been
blocked for ten years. Through these irregular breaches you will see
that the side towards the courtyard is in perfect harmony with the side
towards the garden. The same ruin prevails. Tufts of weeds outline
the paving-stones; the walls are scored by enormous cracks, and the
blackened coping is laced with a thousand festoons of pellitory. The
stone steps are disjointed; the bell-cord is rotten; the gutter-spouts
broken. What fire from heaven could have fallen there? By what decree
has salt been sown on this dwelling? Has God been mocked here? Or was
France betrayed? These are the questions we ask ourselves. Reptiles
crawl over it, but give no reply. This empty and deserted house is a
vast enigma of which the answer is known to none.
"It was formerly a little domain, held in fief, and is known as La
Grande Breteche. During my stay at Vendome, where Despleins had left me
in charge of a rich patient, the sight of this strange dwelling became
one of my keenest pleasures. Was it not far better than a ruin? Certain
memories of indisputable authenticity attach themselves to a ruin; but
this house, still standing, though being slowly destroyed by an avenging
hand, contained a secret, an unrevealed thought. At the very least,
it testified to a caprice. More than once in the evening I boarded the
hedge, run wild, which surrounded the enclosure. I braved scratches, I
got into this ownerless garden, this plot which was no longer public or
private; I lingered there for hours gazing at the disorder. I would not,
as the price of the story to which this strange scene no doubt was due,
have asked a single question of any gossiping native. On that spot I
wove delightful romances, and abandoned myself to little debauches of
melancholy which enchanted me. If I had known the reason--perhaps quite
commonplace--of this neglect, I should have lost the unwritten poetry
which intoxicated me. To me this refuge represented the most various
phases of human life, shadowed by misfortune; sometimes the peace of the
graveyard without the dead, who speak in the language of epitaphs; one
day I saw in it the home of lepers; another, the house of the Atridae;
but, above all, I found there provincial life, with its contemplative
ideas, its hour-glass existence. I often wept there, I never laughed.
"More than once I felt involuntary terrors as I heard overhead the dull
hum of the wings of some hurrying wood-pigeon. The earth is dank; you
must be on the watch for lizards, vipers, and frogs, wandering about
with the wild freedom of nature; above all, you must have no fear
of cold, for in a few moments you feel an icy cloak settle on your
shoulders, like the Commendatore's hand on Don Giovanni's neck.
"One evening I felt a shudder; the wind had turned an old rusty
weathercock, and the creaking sounded like a cry from the house, at
the very moment when I was finishing a gloomy drama to account for
this monumental embodiment of woe. I returned to my inn, lost in gloomy
thoughts. When I had supped, the hostess came into my room with an air
of mystery, and said, 'Monsieur, here is Monsieur Regnault.'
"'Who is Monsieur Regnault?'
"'What, sir, do you not know Monsieur Regnault?--Well, that's odd,' said
she, leaving the room.
"On a sudden I saw a man appear, tall, slim, dressed in black, hat
in hand, who came in like a ram ready to butt his opponent, showing a
receding forehead, a small pointed head, and a colorless face of the hue
of a glass of dirty water. You would have taken him for an usher. The
stranger wore an old coat, much worn at the seams; but he had a diamond
in his shirt frill, and gold rings in his ears.
"'Monsieur,' said I, 'whom have I the honor of addressing?'--He took a
chair, placed himself in front of my fire, put his hat on my table,
and answered while he rubbed his hands: 'Dear me, it is very
cold.--Monsieur, I am Monsieur Regnault.'
"I was encouraging myself by saying to myself, '_Il bondo cani!_ Seek!'
"'I am,' he went on, 'notary at Vendome.'
"'I am delighted to hear it, monsieur,' I exclaimed. 'But I am not in a
position to make a will for reasons best known to myself.'
"'One moment!' said he, holding up his hand as though to gain silence.
'Allow me, monsieur, allow me! I am informed that you sometimes go to
walk in the garden of la Grande Breteche.'
"'Yes, monsieur.'
"'One moment!' said he, repeating his gesture. 'That constitutes a
misdemeanor. Monsieur, as executor under the will of the late Comtesse
de Merret, I come in her name to beg you to discontinue the practice.
One moment! I am not a Turk, and do not wish to make a crime of it. And
besides, you are free to be ignorant of the circumstances which
compel me to leave the finest mansion in Vendome to fall into ruin.
Nevertheless, monsieur, you must be a man of education, and you should
know that the laws forbid, under heavy penalties, any trespass on
enclosed property. A hedge is the same as a wall. But, the state in
which the place is left may be an excuse for your curiosity. For my
part, I should be quite content to make you free to come and go in the
house; but being bound to respect the will of the testatrix, I have
the honor, monsieur, to beg that you will go into the garden no more.
I myself, monsieur, since the will was read, have never set foot in the
house, which, as I had the honor of informing you, is part of the estate
of the late Madame de Merret. We have done nothing there but verify the
number of doors and windows to assess the taxes I have to pay annually
out of the funds left for that purpose by the late Madame de Merret. Ah!
my dear sir, her will made a great commotion in the town.'
"The good man paused to blow his nose. I respected his volubility,
perfectly understanding that the administration of Madame de Merret's
estate had been the most important event of his life, his reputation,
his glory, his Restoration. As I was forced to bid farewell to my
beautiful reveries and romances, I was not to reject learning the truth on
official authority.
"'Monsieur,' said I, 'would it be indiscreet if I were to ask you the
reasons for such eccentricity?'
"At these words an expression, which revealed all the pleasure which
men feel who are accustomed to ride a hobby, overspread the lawyer's
countenance. He pulled up the collar of his shirt with an air, took out
his snuffbox, opened it, and offered me a pinch; on my refusing, he took
a large one. He was happy! A man who has no hobby does not know all
the good to be got out of life. A hobby is the happy medium between a
passion and a monomania. At this moment I understood the whole bearing
of Sterne's charming passion, and had a perfect idea of the delight with
which my uncle Toby, encouraged by Trim, bestrode his hobby-horse.
"'Monsieur,' said Monsieur Regnault, 'I was head-clerk in Monsieur
Roguin's office, in Paris. A first-rate house, which you may have heard
mentioned? No! An unfortunate bankruptcy made it famous.--Not having
money enough to purchase a practice in Paris at the price to which they
were run up in 1816, I came here and bought my predecessor's business.
I had relations in Vendome; among others, a wealthy aunt, who allowed
me to marry her daughter.--Monsieur,' he went on after a little pause,
'three months after being licensed by the Keeper of the Seals, one
evening, as I was going to bed--it was before my marriage--I was sent
for by Madame la Comtesse de Merret, to her Chateau of Merret. Her maid,
a good girl, who is now a servant in this inn, was waiting at my door
with the Countess' own carriage. Ah! one moment! I ought to tell you
that Monsieur le Comte de Merret had gone to Paris to die two months
before I came here. He came to a miserable end, flinging himself into
every kind of dissipation. You understand?
"'On the day when he left, Madame la Comtesse had quitted la Grand
Breteche, having dismantled it. Some people even say that she had
burnt all the furniture, the hangings--in short, all the chattels and
furniture whatever used in furnishing the premises now let by the
said M.--(Dear, what am I saying? I beg your pardon, I thought I was
dictating a lease.)--In short, that she burnt everything in the meadow
at Merret. Have you been to Merret, monsieur?--No,' said he, answering
himself, 'Ah, it is a very fine place.'
"'For about three months previously,' he went on, with a jerk of his
head, 'the Count and Countess had lived in a very eccentric way; they
admitted no visitors; Madame lived on the ground-floor, and Monsieur on
the first floor. When the Countess was left alone, she was never seen
excepting at church. Subsequently, at home, at the chateau, she refused
to see the friends, whether gentlemen or ladies, who went to call on
her. She was already very much altered when she left la Grande Breteche
to go to Merret. That dear lady--I say dear lady, for it was she who
gave me this diamond, but indeed I saw her but once--that kind lady was
very ill; she had, no doubt, given up all hope, for she died without
choosing to send for a doctor; indeed, many of our ladies fancied she
was not quite right in her head. Well, sir, my curiosity was strangely
excited by hearing that Madame de Merret had need of my services. Nor
was I the only person who took an interest in the affair. That very
night, though it was already late, all the town knew that I was going to
Merret.
"'The waiting-woman replied but vaguely to the questions I asked her on
the way; nevertheless, she told me that her mistress had received the
Sacrament in the course of the day at the hands of the Cure of Merret,
and seemed unlikely to live through the night. It was about eleven when
I reached the chateau. I went up the great staircase. After crossing
some large, lofty, dark rooms, diabolically cold and damp, I reached the
state bedroom where the Countess lay. From the rumors that were current
concerning this lady (monsieur, I should never end if I were to repeat
all the tales that were told about her), I had imagined her a coquette.
Imagine, then, that I had great difficulty in seeing her in the great
bed where she was lying. To be sure, to light this enormous room, with
old-fashioned heavy cornices, and so thick with dust that merely to see
it was enough to make you sneeze, she had only an old Argand lamp. Ah!
but you have not been to Merret. Well, the bed is one of those old world
beds, with a high tester hung with flowered chintz. A small table stood
by the bed, on which I saw an "Imitation of Christ," which, by the
way, I bought for my wife, as well as the lamp. There were also a deep
armchair for her confidential maid, and two small chairs. There was no
fire. That was all the furniture, not enough to fill ten lines in an
inventory.
"'My dear sir, if you had seen, as I then saw, that vast room, papered
and hung with brown, you would have felt yourself transported into a
scene of a romance. It was icy, nay more, funereal,' and he lifted his
hand with a theatrical gesture and paused.
"'By dint of seeking, as I approached the bed, at last I saw Madame de
Merret, under the glimmer of the lamp, which fell on the pillows.
Her face was as yellow as wax, and as narrow as two folded hands. The
Countess had a lace cap showing her abundant hair, but as white as linen
thread. She was sitting up in bed, and seemed to keep upright with
great difficulty. Her large black eyes, dimmed by fever, no doubt,
and half-dead already, hardly moved under the bony arch of her
eyebrows.--There,' he added, pointing to his own brow. 'Her forehead was
clammy; her fleshless hands were like bones covered with soft skin;
the veins and muscles were perfectly visible. She must have been very
handsome; but at this moment I was startled into an indescribable
emotion at the sight. Never, said those who wrapped her in her shroud,
had any living creature been so emaciated and lived. In short, it was
awful to behold! Sickness so consumed that woman, that she was no more
than a phantom. Her lips, which were pale violet, seemed to me not to
move when she spoke to me.
"'Though my profession has familiarized me with such spectacles, by
calling me not infrequently to the bedside of the dying to record their
last wishes, I confess that families in tears and the agonies I have
seen were as nothing in comparison with this lonely and silent woman in
her vast chateau. I heard not the least sound, I did not perceive the
movement which the sufferer's breathing ought to have given to the
sheets that covered her, and I stood motionless, absorbed in looking at
her in a sort of stupor. In fancy I am there still. At last her large
eyes moved; she tried to raise her right hand, but it fell back on the
bed, and she uttered these words, which came like a breath, for her
voice was no longer a voice: "I have waited for you with the greatest
impatience." A bright flush rose to her cheeks. It was a great effort to
her to speak.
"'"Madame," I began. She signed to me to be silent. At that moment
the old housekeeper rose and said in my ear, "Do not speak; Madame la
Comtesse is not in a state to bear the slightest noise, and what you say
might agitate her."
"'I sat down. A few instants after, Madame de Merret collected all her
remaining strength to move her right hand, and slipped it, not without
infinite difficulty, under the bolster; she then paused a moment. With
a last effort she withdrew her hand; and when she brought out a sealed
paper, drops of perspiration rolled from her brow. "I place my will in
your hands--Oh! God! Oh!" and that was all. She clutched a crucifix that
lay on the bed, lifted it hastily to her lips, and died.
"'The expression of her eyes still makes me shudder as I think of it.
She must have suffered much! There was joy in her last glance, and it
remained stamped on her dead eyes.
"'I brought away the will, and when it was opened I found that Madame de
Merret had appointed me her executor. She left the whole of her property
to the hospital at Vendome excepting a few legacies. But these were her
instructions as relating to la Grande Breteche: She ordered me to leave
the place, for fifty years counting from the day of her death, in the
state in which it might be at the time of her death, forbidding any one,
whoever he might be, to enter the apartments, prohibiting any repairs
whatever, and even settling a salary to pay watchmen if it were needful
to secure the absolute fulfilment of her intentions. At the expiration
of that term, if the will of the testatrix has been duly carried out,
the house is to become the property of my heirs, for, as you know, a
notary cannot take a bequest. Otherwise la Grande Breteche reverts to
the heirs-at-law, but on condition of fulfilling certain conditions
set forth in a codicil to the will, which is not to be opened till
the expiration of the said term of fifty years. The will has not been
disputed, so----' And without finishing his sentence, the lanky notary
looked at me with an air of triumph; I made him quite happy by offering
him my congratulations.
"'Monsieur,' I said in conclusion, 'you have so vividly impressed
me that I fancy I see the dying woman whiter than her sheets; her
glittering eyes frighten me; I shall dream of her to-night.--But you
must have formed some idea as to the instructions contained in that
extraordinary will.'
"'Monsieur,' said he, with comical reticence, 'I never allow myself
to criticise the conduct of a person who honors me with the gift of a
diamond.'
"However, I soon loosened the tongue of the discreet notary of Vendome,
who communicated to me, not without long digressions, the opinions of
the deep politicians of both sexes whose judgments are law in Vendome.
But these opinions were so contradictory, so diffuse, that I was
near falling asleep in spite of the interest I felt in this authentic
history. The notary's ponderous voice and monotonous accent, accustomed
no doubt to listen to himself and to make himself listened to by his
clients or fellow-townsmen, were too much for my curiosity. Happily, he
soon went away.
"'Ah, ha, monsieur,' said he on the stairs, 'a good many persons would
be glad to live five-and-forty years longer; but--one moment!' and he
laid the first finger of his right hand to his nostril with a cunning
look, as much as to say, 'Mark my words!--To last as long as that--as
long as that,' said he, 'you must not be past sixty now.'
"I closed my door, having been roused from my apathy by this last
speech, which the notary thought very funny; then I sat down in my
armchair, with my feet on the fire-dogs. I had lost myself in a romance
_a la_ Radcliffe, constructed on the juridical base given me by Monsieur
Regnault, when the door, opened by a woman's cautious hand, turned on
the hinges. I saw my landlady come in, a buxom, florid dame, always
good-humored, who had missed her calling in life. She was a Fleming, who
ought to have seen the light in a picture by Teniers.
"'Well, monsieur,' said she, 'Monsieur Regnault has no doubt been giving
you his history of la Grande Breteche?'
"'Yes, Madame Lepas.'
"'And what did he tell you?'
"I repeated in a few words the creepy and sinister story of Madame de
Merret. At each sentence my hostess put her head forward, looking at
me with an innkeeper's keen scrutiny, a happy compromise between the
instinct of a police constable, the astuteness of a spy, and the cunning
of a dealer.
"'My good Madame Lepas,' said I as I ended, 'you seem to know more about
it. Heh? If not, why have you come up to me?'
"'On my word, as an honest woman----'
"'Do not swear; your eyes are big with a secret. You knew Monsieur de
Merret; what sort of man was he?'
"'Monsieur de Merret--well, you see he was a man you never could see
the top of, he was so tall! A very good gentleman, from Picardy, and who
had, as we say, his head close to his cap. He paid for everything down,
so as never to have difficulties with any one. He was hot-tempered, you
see! All our ladies liked him very much.'
"'Because he was hot-tempered?' I asked her.
"'Well, may be,' said she; 'and you may suppose, sir, that a man had to
have something to show for a figurehead before he could marry Madame de
Merret, who, without any reflection on others, was the handsomest and
richest heiress in our parts. She had about twenty thousand francs
a year. All the town was at the wedding; the bride was pretty and
sweet-looking, quite a gem of a woman. Oh, they were a handsome couple
in their day!'
"'And were they happy together?'
"'Hm, hm! so-so--so far as can be guessed, for, as you may suppose, we
of the common sort were not hail-fellow-well-met with them.--Madame de
Merret was a kind woman and very pleasant, who had no doubt sometimes to
put up with her husband's tantrums. But though he was rather haughty, we
were fond of him. After all, it was his place to behave so. When a man
is a born nobleman, you see----'
"'Still, there must have been some catastrophe for Monsieur and Madame
de Merret to part so violently?'
"'I did not say there was any catastrophe, sir. I know nothing about
it.'
"'Indeed. Well, now, I am sure you know everything.'
"'Well, sir, I will tell you the whole story.--When I saw Monsieur
Regnault go up to see you, it struck me that he would speak to you about
Madame de Merret as having to do with la Grande Breteche. That put it
into my head to ask your advice, sir, seeming to me that you are a
man of good judgment and incapable of playing a poor woman like me
false--for I never did any one a wrong, and yet I am tormented by my
conscience. Up to now I have never dared to say a word to the people of
these parts; they are all chatter-mags, with tongues like knives. And
never till now, sir, have I had any traveler here who stayed so long in
the inn as you have, and to whom I could tell the history of the fifteen
thousand francs----'
"'My dear Madame Lepas, if there is anything in your story of a nature
to compromise me,' I said, interrupting the flow of her words, 'I would
not hear it for all the world.'
"'You need have no fears,' said she; 'you will see.'
"Her eagerness made me suspect that I was not the only person to whom
my worthy landlady had communicated the secret of which I was to be the
sole possessor, but I listened.
"'Monsieur,' said she, 'when the Emperor sent the Spaniards here,
prisoners of war and others, I was required to lodge at the charge
of the Government a young Spaniard sent to Vendome on parole.
Notwithstanding his parole, he had to show himself every day to the
sub-prefect. He was a Spanish grandee--neither more nor less. He had
a name in _os_ and _dia_, something like Bagos de Feredia. I wrote his
name down in my books, and you may see it if you like. Ah! he was a
handsome young fellow for a Spaniard, who are all ugly they say. He was
not more than five feet two or three in height, but so well made; and he
had little hands that he kept so beautifully! Ah! you should have
seen them. He had as many brushes for his hands as a woman has for her
toilet. He had thick, black hair, a flame in his eye, a somewhat coppery
complexion, but which I admired all the same. He wore the finest linen
I have ever seen, though I have had princesses to lodge here, and, among
others, General Bertrand, the Duc and Duchesse d'Abrantes, Monsieur
Descazes, and the King of Spain. He did not eat much, but he had such
polite and amiable ways that it was impossible to owe him a grudge for
that. Oh! I was very fond of him, though he did not say four words to me
in a day, and it was impossible to have the least bit of talk with him;
if he was spoken to, he did not answer; it is a way, a mania they all
have, it would seem.
"'He read his breviary like a priest, and went to mass and all the
services quite regularly. And where did he post himself?--we found this
out later.--Within two yards of Madame de Merret's chapel. As he took
that place the very first time he entered the church, no one imagined
that there was any purpose in it. Besides, he never raised his nose
above his book, poor young man! And then, monsieur, of an evening he
went for a walk on the hill among the ruins of the old castle. It was
his only amusement, poor man; it reminded him of his native land. They
say that Spain is all hills!
"'One evening, a few days after he was sent here, he was out very late.
I was rather uneasy when he did not come in till just on the stroke of
midnight; but we all got used to his whims; he took the key of the door,
and we never sat up for him. He lived in a house belonging to us in the
Rue des Casernes. Well, then, one of our stable-boys told us one evening
that, going down to wash the horses in the river, he fancied he had seen
the Spanish Grandee swimming some little way off, just like a fish. When
he came in, I told him to be careful of the weeds, and he seemed put out
at having been seen in the water.
"'At last, monsieur, one day, or rather one morning, we did not find
him in his room; he had not come back. By hunting through his things, I
found a written paper in the drawer of his table, with fifty pieces of
Spanish gold of the kind they call doubloons, worth about five thousand
francs; and in a little sealed box ten thousand francs worth of
diamonds. The paper said that in case he should not return, he left us
this money and these diamonds in trust to found masses to thank God for
his escape and for his salvation.
"'At that time I still had my husband, who ran off in search of him.
And this is the queer part of the story: he brought back the Spaniard's
clothes, which he had found under a big stone on a sort of breakwater
along the river bank, nearly opposite la Grande Breteche. My husband
went so early that no one saw him. After reading the letter, he burnt
the clothes, and, in obedience to Count Feredia's wish, we announced
that he had escaped.
"'The sub-prefect set all the constabulary at his heels; but, pshaw! he
was never caught. Lepas believed that the Spaniard had drowned himself.
I, sir, have never thought so; I believe, on the contrary, that he had
something to do with the business about Madame de Merret, seeing that
Rosalie told me that the crucifix her mistress was so fond of that she
had it buried with her, was made of ebony and silver; now in the early
days of his stay here, Monsieur Feredia had one of ebony and silver
which I never saw later.--And now, monsieur, do not you say that I need
have no remorse about the Spaniard's fifteen thousand francs? Are they
not really and truly mine?'
"'Certainly.--But have you never tried to question Rosalie?' said I.
"'Oh, to be sure I have, sir. But what is to be done? That girl is like
a wall. She knows something, but it is impossible to make her talk.'
"After chatting with me for a few minutes, my hostess left me a prey
to vague and sinister thoughts, to romantic curiosity, and a religious
dread, not unlike the deep emotion which comes upon us when we go into a
dark church at night and discern a feeble light glimmering under a lofty
vault--a dim figure glides across--the sweep of a gown or of a priest's
cassock is audible--and we shiver! La Grande Breteche, with its rank
grasses, its shuttered windows, its rusty iron-work, its locked doors,
its deserted rooms, suddenly rose before me in fantastic vividness. I
tried to get into the mysterious dwelling to search out the heart of
this solemn story, this drama which had killed three persons.
"Rosalie became in my eyes the most interesting being in Vendome. As
I studied her, I detected signs of an inmost thought, in spite of the
blooming health that glowed in her dimpled face. There was in her soul
some element of ruth or of hope; her manner suggested a secret, like
the expression of devout souls who pray in excess, or of a girl who has
killed her child and for ever hears its last cry. Nevertheless, she was
simple and clumsy in her ways; her vacant smile had nothing criminal
in it, and you would have pronounced her innocent only from seeing the
large red and blue checked kerchief that covered her stalwart bust,
tucked into the tight-laced bodice of a lilac- and white-striped gown.
'No,' said I to myself, 'I will not quit Vendome without knowing the
whole history of la Grande Breteche. To achieve this end, I will make
love to Rosalie if it proves necessary.'
"'Rosalie!' said I one evening.
"'Your servant, sir?'
"'You are not married?' She started a little.
"'Oh! there is no lack of men if ever I take a fancy to be miserable!'
she replied, laughing. She got over her agitation at once; for every
woman, from the highest lady to the inn-servant inclusive, has a native
presence of mind.
"'Yes; you are fresh and good-looking enough never to lack lovers! But
tell me, Rosalie, why did you become an inn-servant on leaving Madame de
Merret? Did she not leave you some little annuity?'
"'Oh yes, sir. But my place here is the best in all the town of
Vendome.'
"This reply was such an one as judges and attorneys call evasive.
Rosalie, as it seemed to me, held in this romantic affair the place of
the middle square of the chess-board: she was at the very centre of the
interest and of the truth; she appeared to me to be tied into the knot
of it. It was not a case for ordinary love-making; this girl contained
the last chapter of a romance, and from that moment all my attentions
were devoted to Rosalie. By dint of studying the girl, I observed in
her, as in every woman whom we make our ruling thought, a variety of
good qualities; she was clean and neat; she was handsome, I need not
say; she soon was possessed of every charm that desire can lend to a
woman in whatever rank of life. A fortnight after the notary's visit,
one evening, or rather one morning, in the small hours, I said to
Rosalie:
"'Come, tell me all you know about Madame de Merret.'
"'Oh!' she said, 'I will tell you; but keep the secret carefully.'
"'All right, my child; I will keep all your secrets with a thief's
honor, which is the most loyal known.'
"'If it is all the same to you,' said she, 'I would rather it should be
with your own.'
"Thereupon she set her head-kerchief straight, and settled herself to
tell the tale; for there is no doubt a particular attitude of confidence
and security is necessary to the telling of a narrative. The best tales
are told at a certain hour--just as we are all here at table. No one
ever told a story well standing up, or fasting.
"If I were to reproduce exactly Rosalie's diffuse eloquence, a whole
volume would scarcely contain it. Now, as the event of which she gave me
a confused account stands exactly midway between the notary's gossip and
that of Madame Lepas, as precisely as the middle term of a rule-of-three
sum stands between the first and third, I have only to relate it in as
few words as may be. I shall therefore be brief.
"The room at la Grande Breteche in which Madame de Merret slept was on
the ground floor; a little cupboard in the wall, about four feet deep,
served her to hang her dresses in. Three months before the evening of
which I have to relate the events, Madame de Merret had been seriously
ailing, so much so that her husband had left her to herself, and had his
own bedroom on the first floor. By one of those accidents which it is
impossible to foresee, he came in that evening two hours later than
usual from the club, where he went to read the papers and talk politics
with the residents in the neighborhood. His wife supposed him to have
come in, to be in bed and asleep. But the invasion of France had been
the subject of a very animated discussion; the game of billiards had
waxed vehement; he had lost forty francs, an enormous sum at Vendome,
where everybody is thrifty, and where social habits are restrained
within the bounds of a simplicity worthy of all praise, and the
foundation perhaps of a form of true happiness which no Parisian would
care for.
"For some time past Monsieur de Merret had been satisfied to ask Rosalie
whether his wife was in bed; on the girl's replying always in the
affirmative, he at once went to his own room, with the good faith that
comes of habit and confidence. But this evening, on coming in, he took
it into his head to go to see Madame de Merret, to tell her of his
ill-luck, and perhaps to find consolation. During dinner he had observed
that his wife was very becomingly dressed; he reflected as he came
home from the club that his wife was certainly much better, that
convalescence had improved her beauty, discovering it, as husbands
discover everything, a little too late. Instead of calling Rosalie,
who was in the kitchen at the moment watching the cook and the coachman
playing a puzzling hand at cards, Monsieur de Merret made his way to his
wife's room by the light of his lantern, which he set down at the lowest
step of the stairs. His step, easy to recognize, rang under the vaulted
passage.
"At the instant when the gentleman turned the key to enter his wife's
room, he fancied he heard the door shut of the closet of which I have
spoken; but when he went in, Madame de Merret was alone, standing in
front of the fireplace. The unsuspecting husband fancied that Rosalie
was in the cupboard; nevertheless, a doubt, ringing in his ears like a
peal of bells, put him on his guard; he looked at his wife, and read in
her eyes an indescribably anxious and haunted expression.
"'You are very late,' said she.--Her voice, usually so clear and sweet,
struck him as being slightly husky.
"Monsieur de Merret made no reply, for at this moment Rosalie came in.
This was like a thunder-clap. He walked up and down the room, going from
one window to another at a regular pace, his arms folded.
"'Have you had bad news, or are you ill?' his wife asked him timidly,
while Rosalie helped her to undress. He made no reply.
"'You can go, Rosalie,' said Madame de Merret to her maid; 'I can put in
my curl-papers myself.'--She scented disaster at the mere aspect of her
husband's face, and wished to be alone with him. As soon as Rosalie
was gone, or supposed to be gone, for she lingered a few minutes in the
passage, Monsieur de Merret came and stood facing his wife, and said
coldly, 'Madame, there is some one in your cupboard!' She looked at her
husband calmly, and replied quite simply, 'No, monsieur.'
"This 'No' wrung Monsieur de Merret's heart; he did not believe it; and
yet his wife had never appeared purer or more saintly than she seemed
to be at this moment. He rose to go and open the closet door. Madame de
Merret took his hand, stopped him, looked at him sadly, and said in a
voice of strange emotion, 'Remember, if you should find no one there,
everything must be at an end between you and me.'
"The extraordinary dignity of his wife's attitude filled him with deep
esteem for her, and inspired him with one of those resolves which need
only a grander stage to become immortal.
"'No, Josephine,' he said, 'I will not open it. In either event we
should be parted for ever. Listen; I know all the purity of your soul, I
know you lead a saintly life, and would not commit a deadly sin to save
your life.'--At these words Madame de Merret looked at her husband with
a haggard stare.--'See, here is your crucifix,' he went on. 'Swear to
me before God that there is no one in there; I will believe you--I will
never open that door.'
"Madame de Merret took up the crucifix and said, 'I swear it.'
"'Louder,' said her husband; 'and repeat: "I swear before God that there
is nobody in that closet."' She repeated the words without flinching.
"'That will do,' said Monsieur de Merret coldly. After a moment's
silence: 'You have there a fine piece of work which I never saw before,'
said he, examining the crucifix of ebony and silver, very artistically
wrought.
"'I found it at Duvivier's; last year when that troop of Spanish
prisoners came through Vendome, he bought it of a Spanish monk.'
"'Indeed,' said Monsieur de Merret, hanging the crucifix on its nail;
and he rang the bell.
"He had to wait for Rosalie. Monsieur de Merret went forward quickly
to meet her, led her into the bay of the window that looked on to the
garden, and said to her in an undertone:
"'I know that Gorenflot wants to marry you, that poverty alone prevents
your setting up house, and that you told him you would not be his wife
till he found means to become a master mason.--Well, go and fetch him;
tell him to come here with his trowel and tools. Contrive to wake no one
in his house but himself. His reward will be beyond your wishes. Above
all, go out without saying a word--or else!' and he frowned.
"Rosalie was going, and he called her back. 'Here, take my latch-key,'
said he.
"'Jean!' Monsieur de Merret called in a voice of thunder down the
passage. Jean, who was both coachman and confidential servant, left his
cards and came.
"'Go to bed, all of you,' said his master, beckoning him to come close;
and the gentleman added in a whisper, 'When they are all asleep--mind,
_asleep_--you understand?--come down and tell me.'
"Monsieur de Merret, who had never lost sight of his wife while giving
his orders, quietly came back to her at the fireside, and began to tell
her the details of the game of billiards and the discussion at the club.
When Rosalie returned she found Monsieur and Madame de Merret conversing
amiably.
"Not long before this Monsieur de Merret had had new ceilings made to
all the reception-rooms on the ground floor. Plaster is very scarce at
Vendome; the price is enhanced by the cost of carriage; the gentleman
had therefore had a considerable quantity delivered to him, knowing
that he could always find purchasers for what might be left. It was this
circumstance which suggested the plan he carried out.
"'Gorenflot is here, sir,' said Rosalie in a whisper.
"'Tell him to come in,' said her master aloud.
"Madame de Merret turned paler when she saw the mason.
"'Gorenflot,' said her husband, 'go and fetch some bricks from the
coach-house; bring enough to wall up the door of this cupboard; you can
use the plaster that is left for cement.' Then, dragging Rosalie and the
workman close to him--'Listen, Gorenflot,' said he, in a low voice,
'you are to sleep here to-night; but to-morrow morning you shall have a
passport to take you abroad to a place I will tell you of. I will give
you six thousand francs for your journey. You must live in that town for
ten years; if you find you do not like it, you may settle in another,
but it must be in the same country. Go through Paris and wait there till
I join you. I will there give you an agreement for six thousand francs
more, to be paid to you on your return, provided you have carried out
the conditions of the bargain. For that price you are to keep perfect
silence as to what you have to do this night. To you, Rosalie, I will
secure ten thousand francs, which will not be paid to you till your
wedding day, and on condition of your marrying Gorenflot; but, to get
married, you must hold your tongue. If not, no wedding gift!'
"'Rosalie,' said Madame de Merret, 'come and brush my hair.'
"Her husband quietly walked up and down the room, keeping an eye on the
door, on the mason, and on his wife, but without any insulting display
of suspicion. Gorenflot could not help making some noise. Madame de
Merret seized a moment when he was unloading some bricks, and when her
husband was at the other end of the room to say to Rosalie: 'My dear
child, I will give you a thousand francs a year if only you will tell
Gorenflot to leave a crack at the bottom.' Then she added aloud quite
coolly: 'You had better help him.'
"Monsieur and Madame de Merret were silent all the time while Gorenflot
was walling up the door. This silence was intentional on the husband's
part; he did not wish to give his wife the opportunity of saying
anything with a double meaning. On Madame de Merret's side it was pride
or prudence. When the wall was half built up the cunning mason took
advantage of his master's back being turned to break one of the two
panes in the top of the door with a blow of his pick. By this Madame de
Merret understood that Rosalie had spoken to Gorenflot. They all three
then saw the face of a dark, gloomy-looking man, with black hair and
flaming eyes.
"Before her husband turned round again the poor woman had nodded to the
stranger, to whom the signal was meant to convey, 'Hope.'
"At four o'clock, as the day was dawning, for it was the month of
September, the work was done. The mason was placed in charge of Jean,
and Monsieur de Merret slept in his wife's room.
"Next morning when he got up he said with apparent carelessness, 'Oh,
by the way, I must go to the Maire for the passport.' He put on his hat,
took two or three steps towards the door, paused, and took the crucifix.
His wife was trembling with joy.
"'He will go to Duvivier's,' thought she.
"As soon as he had left, Madame de Merret rang for Rosalie, and then in
a terrible voice she cried: 'The pick! Bring the pick! and set to work.
I saw how Gorenflot did it yesterday; we shall have time to make a gap
and build it up again.'
"In an instant Rosalie had brought her mistress a sort of cleaver; she,
with a vehemence of which no words can give an idea, set to work to
demolish the wall. She had already got out a few bricks, when, turning
to deal a stronger blow than before, she saw behind her Monsieur de
Merret. She fainted away.
"'Lay madame on her bed,' said he coldly.
"Foreseeing what would certainly happen in his absence, he had laid
this trap for his wife; he had merely written to the Maire and sent for
Duvivier. The jeweler arrived just as the disorder in the room had been
repaired.
"'Duvivier,' asked Monsieur de Merret, 'did not you buy some crucifixes
of the Spaniards who passed through the town?'
"'No, monsieur.'
"'Very good; thank you,' said he, flashing a tiger's glare at his wife.
'Jean,' he added, turning to his confidential valet, 'you can serve my
meals here in Madame de Merret's room. She is ill, and I shall not leave
her till she recovers.'
"The cruel man remained in his wife's room for twenty days. During
the earlier time, when there was some little noise in the closet,
and Josephine wanted to intercede for the dying man, he said, without
allowing her to utter a word, 'You swore on the Cross that there was no
one there.'"
After this story all the ladies rose from table, and thus the spell
under which Bianchon had held them was broken. But there were some among
them who had almost shivered at the last words.
ADDENDUM
The following personage appears in other stories of the Human Comedy.
Bianchon, Horace
Father Goriot
The Atheist's Mass
Cesar Birotteau
The Commission in Lunacy
Lost Illusions
A Distinguished Provincial at Paris
A Bachelor's Establishment
The Secrets of a Princess
The Government Clerks
Pierrette
A Study of Woman
Scenes from a Courtesan's Life
Honorine
The Seamy Side of History
The Magic Skin
A Second Home
A Prince of Bohemia
Letters of Two Brides
The Muse of the Department
The Imaginary Mistress
The Middle Classes
Cousin Betty
The Country Parson
In addition, M. Bianchon narrated the following:
Another Study of Woman
End of the Project Gutenberg EBook of La Grande Breteche, by Honore de Balzac
Produced by Sue Asscher
The Witch of Atlas
by
Percy Bysshe Shelley
TO MARY
(ON HER OBJECTING TO THE FOLLOWING POEM, UPON THE
SCORE OF ITS CONTAINING NO HUMAN INTEREST).
1.
How, my dear Mary,--are you critic-bitten
(For vipers kill, though dead) by some review,
That you condemn these verses I have written,
Because they tell no story, false or true?
What, though no mice are caught by a young kitten, _5
May it not leap and play as grown cats do,
Till its claws come? Prithee, for this one time,
Content thee with a visionary rhyme.
2.
What hand would crush the silken-winged fly,
The youngest of inconstant April's minions, _10
Because it cannot climb the purest sky,
Where the swan sings, amid the sun's dominions?
Not thine. Thou knowest 'tis its doom to die,
When Day shall hide within her twilight pinions
The lucent eyes, and the eternal smile, _15
Serene as thine, which lent it life awhile.
3.
To thy fair feet a winged Vision came,
Whose date should have been longer than a day,
And o'er thy head did beat its wings for fame,
And in thy sight its fading plumes display; _20
The watery bow burned in the evening flame.
But the shower fell, the swift Sun went his way--
And that is dead.--O, let me not believe
That anything of mine is fit to live!
4.
Wordsworth informs us he was nineteen years _25
Considering and retouching Peter Bell;
Watering his laurels with the killing tears
Of slow, dull care, so that their roots to Hell
Might pierce, and their wide branches blot the spheres
Of Heaven, with dewy leaves and flowers; this well _30
May be, for Heaven and Earth conspire to foil
The over-busy gardener's blundering toil.
5.
My Witch indeed is not so sweet a creature
As Ruth or Lucy, whom his graceful praise
Clothes for our grandsons--but she matches Peter, _35
Though he took nineteen years, and she three days
In dressing. Light the vest of flowing metre
She wears; he, proud as dandy with his stays,
Has hung upon his wiry limbs a dress
Like King Lear's 'looped and windowed raggedness.' _40
6.
If you strip Peter, you will see a fellow
Scorched by Hell's hyperequatorial climate
Into a kind of a sulphureous yellow:
A lean mark, hardly fit to fling a rhyme at;
In shape a Scaramouch, in hue Othello. _45
If you unveil my Witch, no priest nor primate
Can shrive you of that sin,--if sin there be
In love, when it becomes idolatry.
THE WITCH OF ATLAS.
1.
Before those cruel Twins, whom at one birth
Incestuous Change bore to her father Time, _50
Error and Truth, had hunted from the Earth
All those bright natures which adorned its prime,
And left us nothing to believe in, worth
The pains of putting into learned rhyme,
A lady-witch there lived on Atlas' mountain _55
Within a cavern, by a secret fountain.
2.
Her mother was one of the Atlantides:
The all-beholding Sun had ne'er beholden
In his wide voyage o'er continents and seas
So fair a creature, as she lay enfolden _60
In the warm shadow of her loveliness;--
He kissed her with his beams, and made all golden
The chamber of gray rock in which she lay--
She, in that dream of joy, dissolved away.
3.
'Tis said, she first was changed into a vapour, _65
And then into a cloud, such clouds as flit,
Like splendour-winged moths about a taper,
Round the red west when the sun dies in it:
And then into a meteor, such as caper
On hill-tops when the moon is in a fit: _70
Then, into one of those mysterious stars
Which hide themselves between the Earth and Mars.
4.
Ten times the Mother of the Months had bent
Her bow beside the folding-star, and bidden
With that bright sign the billows to indent _75
The sea-deserted sand--like children chidden,
At her command they ever came and went--
Since in that cave a dewy splendour hidden
Took shape and motion: with the living form
Of this embodied Power, the cave grew warm. _80
5.
A lovely lady garmented in light
From her own beauty--deep her eyes, as are
Two openings of unfathomable night
Seen through a Temple's cloven roof--her hair
Dark--the dim brain whirls dizzy with delight, _85
Picturing her form; her soft smiles shone afar,
And her low voice was heard like love, and drew
All living things towards this wonder new.
6.
And first the spotted cameleopard came,
And then the wise and fearless elephant; _90
Then the sly serpent, in the golden flame
Of his own volumes intervolved;--all gaunt
And sanguine beasts her gentle looks made tame.
They drank before her at her sacred fount;
And every beast of beating heart grew bold, _95
Such gentleness and power even to behold.
7.
The brinded lioness led forth her young,
That she might teach them how they should forego
Their inborn thirst of death; the pard unstrung
His sinews at her feet, and sought to know _100
With looks whose motions spoke without a tongue
How he might be as gentle as the doe.
The magic circle of her voice and eyes
All savage natures did imparadise.
8.
And old Silenus, shaking a green stick _105
Of lilies, and the wood-gods in a crew
Came, blithe, as in the olive copses thick
Cicadae are, drunk with the noonday dew:
And Dryope and Faunus followed quick,
Teasing the God to sing them something new; _110
Till in this cave they found the lady lone,
Sitting upon a seat of emerald stone.
9.
And universal Pan, 'tis said, was there,
And though none saw him,--through the adamant
Of the deep mountains, through the trackless air, _115
And through those living spirits, like a want,
He passed out of his everlasting lair
Where the quick heart of the great world doth pant,
And felt that wondrous lady all alone,--
And she felt him, upon her emerald throne. _120
10.
And every nymph of stream and spreading tree,
And every shepherdess of Ocean's flocks,
Who drives her white waves over the green sea,
And Ocean with the brine on his gray locks,
And quaint Priapus with his company, _125
All came, much wondering how the enwombed rocks
Could have brought forth so beautiful a birth;--
Her love subdued their wonder and their mirth.
11.
The herdsmen and the mountain maidens came,
And the rude kings of pastoral Garamant-- _130
Their spirits shook within them, as a flame
Stirred by the air under a cavern gaunt:
Pigmies, and Polyphemes, by many a name,
Centaurs, and Satyrs, and such shapes as haunt
Wet clefts,--and lumps neither alive nor dead, _135
Dog-headed, bosom-eyed, and bird-footed.
12.
For she was beautiful--her beauty made
The bright world dim, and everything beside
Seemed like the fleeting image of a shade:
No thought of living spirit could abide, _140
Which to her looks had ever been betrayed,
On any object in the world so wide,
On any hope within the circling skies,
But on her form, and in her inmost eyes.
13.
Which when the lady knew, she took her spindle _145
And twined three threads of fleecy mist, and three
Long lines of light, such as the dawn may kindle
The clouds and waves and mountains with; and she
As many star-beams, ere their lamps could dwindle
In the belated moon, wound skilfully; _150
And with these threads a subtle veil she wove--
A shadow for the splendour of her love.
14.
The deep recesses of her odorous dwelling
Were stored with magic treasures--sounds of air,
Which had the power all spirits of compelling, _155
Folded in cells of crystal silence there;
Such as we hear in youth, and think the feeling
Will never die--yet ere we are aware,
The feeling and the sound are fled and gone,
And the regret they leave remains alone. _160
15.
And there lay Visions swift, and sweet, and quaint,
Each in its thin sheath, like a chrysalis,
Some eager to burst forth, some weak and faint
With the soft burthen of intensest bliss.
It was its work to bear to many a saint _165
Whose heart adores the shrine which holiest is,
Even Love's:--and others white, green, gray, and black,
And of all shapes--and each was at her beck.
16.
And odours in a kind of aviary
Of ever-blooming Eden-trees she kept, _170
Clipped in a floating net, a love-sick Fairy
Had woven from dew-beams while the moon yet slept;
As bats at the wired window of a dairy,
They beat their vans; and each was an adept,
When loosed and missioned, making wings of winds, _175
To stir sweet thoughts or sad, in destined minds.
17.
And liquors clear and sweet, whose healthful might
Could medicine the sick soul to happy sleep,
And change eternal death into a night
Of glorious dreams--or if eyes needs must weep, _180
Could make their tears all wonder and delight,
She in her crystal vials did closely keep:
If men could drink of those clear vials, 'tis said
The living were not envied of the dead.
18.
Her cave was stored with scrolls of strange device, _185
The works of some Saturnian Archimage,
Which taught the expiations at whose price
Men from the Gods might win that happy age
Too lightly lost, redeeming native vice;
And which might quench the Earth-consuming rage _190
Of gold and blood--till men should live and move
Harmonious as the sacred stars above;
19.
And how all things that seem untameable,
Not to be checked and not to be confined,
Obey the spells of Wisdom's wizard skill; _195
Time, earth, and fire--the ocean and the wind,
And all their shapes--and man's imperial will;
And other scrolls whose writings did unbind
The inmost lore of Love--let the profane
Tremble to ask what secrets they contain. _200
20.
And wondrous works of substances unknown,
To which the enchantment of her father's power
Had changed those ragged blocks of savage stone,
Were heaped in the recesses of her bower;
Carved lamps and chalices, and vials which shone _205
In their own golden beams--each like a flower,
Out of whose depth a fire-fly shakes his light
Under a cypress in a starless night.
21.
At first she lived alone in this wild home,
And her own thoughts were each a minister, _210
Clothing themselves, or with the ocean foam,
Or with the wind, or with the speed of fire,
To work whatever purposes might come
Into her mind; such power her mighty Sire
Had girt them with, whether to fly or run, _215
Through all the regions which he shines upon.
22.
The Ocean-nymphs and Hamadryades,
Oreads and Naiads, with long weedy locks,
Offered to do her bidding through the seas,
Under the earth, and in the hollow rocks, _220
And far beneath the matted roots of trees,
And in the gnarled heart of stubborn oaks,
So they might live for ever in the light
Of her sweet presence--each a satellite.
23.
'This may not be,' the wizard maid replied; _225
'The fountains where the Naiades bedew
Their shining hair, at length are drained and dried;
The solid oaks forget their strength, and strew
Their latest leaf upon the mountains wide;
The boundless ocean like a drop of dew _230
Will be consumed--the stubborn centre must
Be scattered, like a cloud of summer dust.
24.
'And ye with them will perish, one by one;--
If I must sigh to think that this shall be,
If I must weep when the surviving Sun _235
Shall smile on your decay--oh, ask not me
To love you till your little race is run;
I cannot die as ye must--over me
Your leaves shall glance--the streams in which ye dwell
Shall be my paths henceforth, and so--farewell!'-- _240
25.
She spoke and wept:--the dark and azure well
Sparkled beneath the shower of her bright tears,
And every little circlet where they fell
Flung to the cavern-roof inconstant spheres
And intertangled lines of light:--a knell _245
Of sobbing voices came upon her ears
From those departing Forms, o'er the serene
Of the white streams and of the forest green.
26.
All day the wizard lady sate aloof,
Spelling out scrolls of dread antiquity, _250
Under the cavern's fountain-lighted roof;
Or broidering the pictured poesy
Of some high tale upon her growing woof,
Which the sweet splendour of her smiles could dye
In hues outshining heaven--and ever she _255
Added some grace to the wrought poesy.
27.
While on her hearth lay blazing many a piece
Of sandal wood, rare gums, and cinnamon;
Men scarcely know how beautiful fire is--
Each flame of it is as a precious stone _260
Dissolved in ever-moving light, and this
Belongs to each and all who gaze upon.
The Witch beheld it not, for in her hand
She held a woof that dimmed the burning brand.
28.
This lady never slept, but lay in trance _265
All night within the fountain--as in sleep.
Its emerald crags glowed in her beauty's glance;
Through the green splendour of the water deep
She saw the constellations reel and dance
Like fire-flies--and withal did ever keep _270
The tenour of her contemplations calm,
With open eyes, closed feet, and folded palm.
29.
And when the whirlwinds and the clouds descended
From the white pinnacles of that cold hill,
She passed at dewfall to a space extended, _275
Where in a lawn of flowering asphodel
Amid a wood of pines and cedars blended,
There yawned an inextinguishable well
Of crimson fire--full even to the brim,
And overflowing all the margin trim. _280
30.
Within the which she lay when the fierce war
Of wintry winds shook that innocuous liquor
In many a mimic moon and bearded star
O'er woods and lawns;--the serpent heard it flicker
In sleep, and dreaming still, he crept afar-- _285
And when the windless snow descended thicker
Than autumn leaves, she watched it as it came
Melt on the surface of the level flame.
31.
She had a boat, which some say Vulcan wrought
For Venus, as the chariot of her star; _290
But it was found too feeble to be fraught
With all the ardours in that sphere which are,
And so she sold it, and Apollo bought
And gave it to this daughter: from a car
Changed to the fairest and the lightest boat _295
Which ever upon mortal stream did float.
32.
And others say, that, when but three hours old,
The first-born Love out of his cradle lept,
And clove dun Chaos with his wings of gold,
And like a horticultural adept, _300
Stole a strange seed, and wrapped it up in mould,
And sowed it in his mother's star, and kept
Watering it all the summer with sweet dew,
And with his wings fanning it as it grew.
33.
The plant grew strong and green, the snowy flower _305
Fell, and the long and gourd-like fruit began
To turn the light and dew by inward power
To its own substance; woven tracery ran
Of light firm texture, ribbed and branching, o'er
The solid rind, like a leaf's veined fan-- _310
Of which Love scooped this boat--and with soft motion
Piloted it round the circumfluous ocean.
34.
This boat she moored upon her fount, and lit
A living spirit within all its frame,
Breathing the soul of swiftness into it. _315
Couched on the fountain like a panther tame,
One of the twain at Evan's feet that sit--
Or as on Vesta's sceptre a swift flame--
Or on blind Homer's heart a winged thought,--
In joyous expectation lay the boat. _320
35.
Then by strange art she kneaded fire and snow
Together, tempering the repugnant mass
With liquid love--all things together grow
Through which the harmony of love can pass;
And a fair Shape out of her hands did flow-- _325
A living Image, which did far surpass
In beauty that bright shape of vital stone
Which drew the heart out of Pygmalion.
36.
A sexless thing it was, and in its growth
It seemed to have developed no defect _330
Of either sex, yet all the grace of both,--
In gentleness and strength its limbs were decked;
The bosom swelled lightly with its full youth,
The countenance was such as might select
Some artist that his skill should never die, _335
Imaging forth such perfect purity.
37.
From its smooth shoulders hung two rapid wings,
Fit to have borne it to the seventh sphere,
Tipped with the speed of liquid lightenings,
Dyed in the ardours of the atmosphere: _340
She led her creature to the boiling springs
Where the light boat was moored, and said: 'Sit here!'
And pointed to the prow, and took her seat
Beside the rudder, with opposing feet.
38.
And down the streams which clove those mountains vast, _345
Around their inland islets, and amid
The panther-peopled forests whose shade cast
Darkness and odours, and a pleasure hid
In melancholy gloom, the pinnace passed;
By many a star-surrounded pyramid _350
Of icy crag cleaving the purple sky,
And caverns yawning round unfathomably.
39.
The silver noon into that winding dell,
With slanted gleam athwart the forest tops,
Tempered like golden evening, feebly fell; _355
A green and glowing light, like that which drops
From folded lilies in which glow-worms dwell,
When Earth over her face Night's mantle wraps;
Between the severed mountains lay on high,
Over the stream, a narrow rift of sky. _360
40.
And ever as she went, the Image lay
With folded wings and unawakened eyes;
And o'er its gentle countenance did play
The busy dreams, as thick as summer flies,
Chasing the rapid smiles that would not stay, _365
And drinking the warm tears, and the sweet sighs
Inhaling, which, with busy murmur vain,
They had aroused from that full heart and brain.
41.
And ever down the prone vale, like a cloud
Upon a stream of wind, the pinnace went: _370
Now lingering on the pools, in which abode
The calm and darkness of the deep content
In which they paused; now o'er the shallow road
Of white and dancing waters, all besprent
With sand and polished pebbles:--mortal boat _375
In such a shallow rapid could not float.
42.
And down the earthquaking cataracts which shiver
Their snow-like waters into golden air,
Or under chasms unfathomable ever
Sepulchre them, till in their rage they tear _380
A subterranean portal for the river,
It fled--the circling sunbows did upbear
Its fall down the hoar precipice of spray,
Lighting it far upon its lampless way.
43.
And when the wizard lady would ascend _385
The labyrinths of some many-winding vale,
Which to the inmost mountain upward tend--
She called 'Hermaphroditus!'--and the pale
And heavy hue which slumber could extend
Over its lips and eyes, as on the gale _390
A rapid shadow from a slope of grass,
Into the darkness of the stream did pass.
44.
And it unfurled its heaven-coloured pinions,
With stars of fire spotting the stream below;
And from above into the Sun's dominions _395
Flinging a glory, like the golden glow
In which Spring clothes her emerald-winged minions,
All interwoven with fine feathery snow
And moonlight splendour of intensest rime,
With which frost paints the pines in winter time. _400
45.
And then it winnowed the Elysian air
Which ever hung about that lady bright,
With its aethereal vans--and speeding there,
Like a star up the torrent of the night,
Or a swift eagle in the morning glare _405
Breasting the whirlwind with impetuous flight,
The pinnace, oared by those enchanted wings,
Clove the fierce streams towards their upper springs.
46.
The water flashed, like sunlight by the prow
Of a noon-wandering meteor flung to Heaven; _410
The still air seemed as if its waves did flow
In tempest down the mountains; loosely driven
The lady's radiant hair streamed to and fro:
Beneath, the billows having vainly striven
Indignant and impetuous, roared to feel _415
The swift and steady motion of the keel.
47.
Or, when the weary moon was in the wane,
Or in the noon of interlunar night,
The lady-witch in visions could not chain
Her spirit; but sailed forth under the light _420
Of shooting stars, and bade extend amain
Its storm-outspeeding wings, the Hermaphrodite;
She to the Austral waters took her way,
Beyond the fabulous Thamondocana,--
48.
Where, like a meadow which no scythe has shaven, _425
Which rain could never bend, or whirl-blast shake,
With the Antarctic constellations paven,
Canopus and his crew, lay the Austral lake--
There she would build herself a windless haven
Out of the clouds whose moving turrets make _430
The bastions of the storm, when through the sky
The spirits of the tempest thundered by:
49.
A haven beneath whose translucent floor
The tremulous stars sparkled unfathomably,
And around which the solid vapours hoar, _435
Based on the level waters, to the sky
Lifted their dreadful crags, and like a shore
Of wintry mountains, inaccessibly
Hemmed in with rifts and precipices gray,
And hanging crags, many a cove and bay. _440
50.
And whilst the outer lake beneath the lash
Of the wind's scourge, foamed like a wounded thing,
And the incessant hail with stony clash
Ploughed up the waters, and the flagging wing
Of the roused cormorant in the lightning flash _445
Looked like the wreck of some wind-wandering
Fragment of inky thunder-smoke--this haven
Was as a gem to copy Heaven engraven,--
51.
On which that lady played her many pranks,
Circling the image of a shooting star, _450
Even as a tiger on Hydaspes' banks
Outspeeds the antelopes which speediest are,
In her light boat; and many quips and cranks
She played upon the water, till the car
Of the late moon, like a sick matron wan, _455
To journey from the misty east began.
52.
And then she called out of the hollow turrets
Of those high clouds, white, golden and vermilion,
The armies of her ministering spirits--
In mighty legions, million after million, _460
They came, each troop emblazoning its merits
On meteor flags; and many a proud pavilion
Of the intertexture of the atmosphere
They pitched upon the plain of the calm mere.
53.
They framed the imperial tent of their great Queen _465
Of woven exhalations, underlaid
With lambent lightning-fire, as may be seen
A dome of thin and open ivory inlaid
With crimson silk--cressets from the serene
Hung there, and on the water for her tread _470
A tapestry of fleece-like mist was strewn,
Dyed in the beams of the ascending moon.
54.
And on a throne o'erlaid with starlight, caught
Upon those wandering isles of aery dew,
Which highest shoals of mountain shipwreck not, _475
She sate, and heard all that had happened new
Between the earth and moon, since they had brought
The last intelligence--and now she grew
Pale as that moon, lost in the watery night--
And now she wept, and now she laughed outright. _480
55.
These were tame pleasures; she would often climb
The steepest ladder of the crudded rack
Up to some beaked cape of cloud sublime,
And like Arion on the dolphin's back
Ride singing through the shoreless air;--oft-time _485
Following the serpent lightning's winding track,
She ran upon the platforms of the wind,
And laughed to hear the fire-balls roar behind.
56.
And sometimes to those streams of upper air
Which whirl the earth in its diurnal round, _490
She would ascend, and win the spirits there
To let her join their chorus. Mortals found
That on those days the sky was calm and fair,
And mystic snatches of harmonious sound
Wandered upon the earth where'er she passed, _495
And happy thoughts of hope, too sweet to last.
57.
But her choice sport was, in the hours of sleep,
To glide adown old Nilus, where he threads
Egypt and Aethiopia, from the steep
Of utmost Axume, until he spreads, _500
Like a calm flock of silver-fleeced sheep,
His waters on the plain: and crested heads
Of cities and proud temples gleam amid,
And many a vapour-belted pyramid.
58.
By Moeris and the Mareotid lakes, _505
Strewn with faint blooms like bridal chamber floors,
Where naked boys bridling tame water-snakes,
Or charioteering ghastly alligators,
Had left on the sweet waters mighty wakes
Of those huge forms--within the brazen doors _510
Of the great Labyrinth slept both boy and beast,
Tired with the pomp of their Osirian feast.
59.
And where within the surface of the river
The shadows of the massy temples lie,
And never are erased--but tremble ever _515
Like things which every cloud can doom to die,
Through lotus-paven canals, and wheresoever
The works of man pierced that serenest sky
With tombs, and towers, and fanes, 'twas her delight
To wander in the shadow of the night. _520
60.
With motion like the spirit of that wind
Whose soft step deepens slumber, her light feet
Passed through the peopled haunts of humankind.
Scattering sweet visions from her presence sweet,
Through fane, and palace-court, and labyrinth mined _525
With many a dark and subterranean street
Under the Nile, through chambers high and deep
She passed, observing mortals in their sleep.
61.
A pleasure sweet doubtless it was to see
Mortals subdued in all the shapes of sleep. _530
Here lay two sister twins in infancy;
There, a lone youth who in his dreams did weep;
Within, two lovers linked innocently
In their loose locks which over both did creep
Like ivy from one stem;--and there lay calm _535
Old age with snow-bright hair and folded palm.
62.
But other troubled forms of sleep she saw,
Not to be mirrored in a holy song--
Distortions foul of supernatural awe,
And pale imaginings of visioned wrong; _540
And all the code of Custom's lawless law
Written upon the brows of old and young:
'This,' said the wizard maiden, 'is the strife
Which stirs the liquid surface of man's life.'
63.
And little did the sight disturb her soul.-- _545
We, the weak mariners of that wide lake
Where'er its shores extend or billows roll,
Our course unpiloted and starless make
O'er its wild surface to an unknown goal:--
But she in the calm depths her way could take, _550
Where in bright bowers immortal forms abide
Beneath the weltering of the restless tide.
64.
And she saw princes couched under the glow
Of sunlike gems; and round each temple-court
In dormitories ranged, row after row, _555
She saw the priests asleep--all of one sort--
For all were educated to be so.--
The peasants in their huts, and in the port
The sailors she saw cradled on the waves,
And the dead lulled within their dreamless graves. _560
65.
And all the forms in which those spirits lay
Were to her sight like the diaphanous
Veils, in which those sweet ladies oft array
Their delicate limbs, who would conceal from us
Only their scorn of all concealment: they _565
Move in the light of their own beauty thus.
But these and all now lay with sleep upon them,
And little thought a Witch was looking on them.
66.
She, all those human figures breathing there,
Beheld as living spirits--to her eyes _570
The naked beauty of the soul lay bare,
And often through a rude and worn disguise
She saw the inner form most bright and fair--
And then she had a charm of strange device,
Which, murmured on mute lips with tender tone, _575
Could make that spirit mingle with her own.
67.
Alas! Aurora, what wouldst thou have given
For such a charm when Tithon became gray?
Or how much, Venus, of thy silver heaven
Wouldst thou have yielded, ere Proserpina _580
Had half (oh! why not all?) the debt forgiven
Which dear Adonis had been doomed to pay,
To any witch who would have taught you it?
The Heliad doth not know its value yet.
68.
'Tis said in after times her spirit free _585
Knew what love was, and felt itself alone--
But holy Dian could not chaster be
Before she stooped to kiss Endymion,
Than now this lady--like a sexless bee
Tasting all blossoms, and confined to none, _590
Among those mortal forms, the wizard-maiden
Passed with an eye serene and heart unladen.
69.
To those she saw most beautiful, she gave
Strange panacea in a crystal bowl:--
They drank in their deep sleep of that sweet wave, _595
And lived thenceforward as if some control,
Mightier than life, were in them; and the grave
Of such, when death oppressed the weary soul,
Was as a green and overarching bower
Lit by the gems of many a starry flower. _600
70.
For on the night when they were buried, she
Restored the embalmers' ruining, and shook
The light out of the funeral lamps, to be
A mimic day within that deathy nook;
And she unwound the woven imagery _605
Of second childhood's swaddling bands, and took
The coffin, its last cradle, from its niche,
And threw it with contempt into a ditch.
71.
And there the body lay, age after age.
Mute, breathing, beating, warm, and undecaying, _610
Like one asleep in a green hermitage,
With gentle smiles about its eyelids playing,
And living in its dreams beyond the rage
Of death or life; while they were still arraying
In liveries ever new, the rapid, blind _615
And fleeting generations of mankind.
72.
And she would write strange dreams upon the brain
Of those who were less beautiful, and make
All harsh and crooked purposes more vain
Than in the desert is the serpent's wake _620
Which the sand covers--all his evil gain
The miser in such dreams would rise and shake
Into a beggar's lap;--the lying scribe
Would his own lies betray without a bribe.
73.
The priests would write an explanation full, _625
Translating hieroglyphics into Greek,
How the God Apis really was a bull,
And nothing more; and bid the herald stick
The same against the temple doors, and pull
The old cant down; they licensed all to speak _630
Whate'er they thought of hawks, and cats, and geese,
By pastoral letters to each diocese.
74.
The king would dress an ape up in his crown
And robes, and seat him on his glorious seat,
And on the right hand of the sunlike throne _635
Would place a gaudy mock-bird to repeat
The chatterings of the monkey.--Every one
Of the prone courtiers crawled to kiss the feet
Of their great Emperor, when the morning came,
And kissed--alas, how many kiss the same! _640
75.
The soldiers dreamed that they were blacksmiths, and
Walked out of quarters in somnambulism;
Round the red anvils you might see them stand
Like Cyclopses in Vulcan's sooty abysm,
Beating their swords to ploughshares;--in a band _645
The gaolers sent those of the liberal schism
Free through the streets of Memphis, much, I wis,
To the annoyance of king Amasis.
76.
And timid lovers who had been so coy,
They hardly knew whether they loved or not, _650
Would rise out of their rest, and take sweet joy,
To the fulfilment of their inmost thought;
And when next day the maiden and the boy
Met one another, both, like sinners caught,
Blushed at the thing which each believed was done _655
Only in fancy--till the tenth moon shone;
77.
And then the Witch would let them take no ill:
Of many thousand schemes which lovers find,
The Witch found one,--and so they took their fill
Of happiness in marriage warm and kind. _660
Friends who, by practice of some envious skill,
Were torn apart--a wide wound, mind from mind!--
She did unite again with visions clear
Of deep affection and of truth sincere.
78.
These were the pranks she played among the cities _665
Of mortal men, and what she did to Sprites
And Gods, entangling them in her sweet ditties
To do her will, and show their subtle sleights,
I will declare another time; for it is
A tale more fit for the weird winter nights _670
Than for these garish summer days, when we
Scarcely believe much more than we can see.
End of Project Gutenberg's The Witch of Atlas, by Percy Bysshe Shelley
This etext was prepared by Sue Asscher <[email protected]>
CRITO
by Plato
Translated by Benjamin Jowett
INTRODUCTION.
The Crito seems intended to exhibit the character of Socrates in one light
only, not as the philosopher, fulfilling a divine mission and trusting in
the will of heaven, but simply as the good citizen, who having been
unjustly condemned is willing to give up his life in obedience to the laws
of the state...
The days of Socrates are drawing to a close; the fatal ship has been seen
off Sunium, as he is informed by his aged friend and contemporary Crito,
who visits him before the dawn has broken; he himself has been warned in a
dream that on the third day he must depart. Time is precious, and Crito
has come early in order to gain his consent to a plan of escape. This can
be easily accomplished by his friends, who will incur no danger in making
the attempt to save him, but will be disgraced for ever if they allow him
to perish. He should think of his duty to his children, and not play into
the hands of his enemies. Money is already provided by Crito as well as by
Simmias and others, and he will have no difficulty in finding friends in
Thessaly and other places.
Socrates is afraid that Crito is but pressing upon him the opinions of the
many: whereas, all his life long he has followed the dictates of reason
only and the opinion of the one wise or skilled man. There was a time when
Crito himself had allowed the propriety of this. And although some one
will say 'the many can kill us,' that makes no difference; but a good life,
in other words, a just and honourable life, is alone to be valued. All
considerations of loss of reputation or injury to his children should be
dismissed: the only question is whether he would be right in attempting to
escape. Crito, who is a disinterested person not having the fear of death
before his eyes, shall answer this for him. Before he was condemned they
had often held discussions, in which they agreed that no man should either
do evil, or return evil for evil, or betray the right. Are these
principles to be altered because the circumstances of Socrates are altered?
Crito admits that they remain the same. Then is his escape consistent with
the maintenance of them? To this Crito is unable or unwilling to reply.
Socrates proceeds:--Suppose the Laws of Athens to come and remonstrate with
him: they will ask 'Why does he seek to overturn them?' and if he replies,
'they have injured him,' will not the Laws answer, 'Yes, but was that the
agreement? Has he any objection to make to them which would justify him in
overturning them? Was he not brought into the world and educated by their
help, and are they not his parents? He might have left Athens and gone
where he pleased, but he has lived there for seventy years more constantly
than any other citizen.' Thus he has clearly shown that he acknowledged
the agreement, which he cannot now break without dishonour to himself and
danger to his friends. Even in the course of the trial he might have
proposed exile as the penalty, but then he declared that he preferred death
to exile. And whither will he direct his footsteps? In any well-ordered
state the Laws will consider him as an enemy. Possibly in a land of
misrule like Thessaly he may be welcomed at first, and the unseemly
narrative of his escape will be regarded by the inhabitants as an amusing
tale. But if he offends them he will have to learn another sort of lesson.
Will he continue to give lectures in virtue? That would hardly be decent.
And how will his children be the gainers if he takes them into Thessaly,
and deprives them of Athenian citizenship? Or if he leaves them behind,
does he expect that they will be better taken care of by his friends
because he is in Thessaly? Will not true friends care for them equally
whether he is alive or dead?
Finally, they exhort him to think of justice first, and of life and
children afterwards. He may now depart in peace and innocence, a sufferer
and not a doer of evil. But if he breaks agreements, and returns evil for
evil, they will be angry with him while he lives; and their brethren the
Laws of the world below will receive him as an enemy. Such is the mystic
voice which is always murmuring in his ears.
That Socrates was not a good citizen was a charge made against him during
his lifetime, which has been often repeated in later ages. The crimes of
Alcibiades, Critias, and Charmides, who had been his pupils, were still
recent in the memory of the now restored democracy. The fact that he had
been neutral in the death-struggle of Athens was not likely to conciliate
popular good-will. Plato, writing probably in the next generation,
undertakes the defence of his friend and master in this particular, not to
the Athenians of his day, but to posterity and the world at large.
Whether such an incident ever really occurred as the visit of Crito and the
proposal of escape is uncertain: Plato could easily have invented far more
than that (Phaedr.); and in the selection of Crito, the aged friend, as the
fittest person to make the proposal to Socrates, we seem to recognize the
hand of the artist. Whether any one who has been subjected by the laws of
his country to an unjust judgment is right in attempting to escape, is a
thesis about which casuists might disagree. Shelley (Prose Works) is of
opinion that Socrates 'did well to die,' but not for the 'sophistical'
reasons which Plato has put into his mouth. And there would be no
difficulty in arguing that Socrates should have lived and preferred to a
glorious death the good which he might still be able to perform. 'A
rhetorician would have had much to say upon that point.' It may be
observed however that Plato never intended to answer the question of
casuistry, but only to exhibit the ideal of patient virtue which refuses to
do the least evil in order to avoid the greatest, and to show his master
maintaining in death the opinions which he had professed in his life. Not
'the world,' but the 'one wise man,' is still the paradox of Socrates in
his last hours. He must be guided by reason, although her conclusions may
be fatal to him. The remarkable sentiment that the wicked can do neither
good nor evil is true, if taken in the sense, which he means, of moral
evil; in his own words, 'they cannot make a man wise or foolish.'
This little dialogue is a perfect piece of dialectic, in which granting the
'common principle,' there is no escaping from the conclusion. It is
anticipated at the beginning by the dream of Socrates and the parody of
Homer. The personification of the Laws, and of their brethren the Laws in
the world below, is one of the noblest and boldest figures of speech which
occur in Plato.
CRITO
by
Plato
Translated by Benjamin Jowett
PERSONS OF THE DIALOGUE: Socrates, Crito.
SCENE: The Prison of Socrates.
SOCRATES: Why have you come at this hour, Crito? it must be quite early.
CRITO: Yes, certainly.
SOCRATES: What is the exact time?
CRITO: The dawn is breaking.
SOCRATES: I wonder that the keeper of the prison would let you in.
CRITO: He knows me because I often come, Socrates; moreover, I have done
him a kindness.
SOCRATES: And are you only just arrived?
CRITO: No, I came some time ago.
SOCRATES: Then why did you sit and say nothing, instead of at once
awakening me?
CRITO: I should not have liked myself, Socrates, to be in such great
trouble and unrest as you are--indeed I should not: I have been watching
with amazement your peaceful slumbers; and for that reason I did not awake
you, because I wished to minimize the pain. I have always thought you to
be of a happy disposition; but never did I see anything like the easy,
tranquil manner in which you bear this calamity.
SOCRATES: Why, Crito, when a man has reached my age he ought not to be
repining at the approach of death.
CRITO: And yet other old men find themselves in similar misfortunes, and
age does not prevent them from repining.
SOCRATES: That is true. But you have not told me why you come at this
early hour.
CRITO: I come to bring you a message which is sad and painful; not, as I
believe, to yourself, but to all of us who are your friends, and saddest of
all to me.
SOCRATES: What? Has the ship come from Delos, on the arrival of which I
am to die?
CRITO: No, the ship has not actually arrived, but she will probably be
here to-day, as persons who have come from Sunium tell me that they have
left her there; and therefore to-morrow, Socrates, will be the last day of
your life.
SOCRATES: Very well, Crito; if such is the will of God, I am willing; but
my belief is that there will be a delay of a day.
CRITO: Why do you think so?
SOCRATES: I will tell you. I am to die on the day after the arrival of
the ship?
CRITO: Yes; that is what the authorities say.
SOCRATES: But I do not think that the ship will be here until to-morrow;
this I infer from a vision which I had last night, or rather only just now,
when you fortunately allowed me to sleep.
CRITO: And what was the nature of the vision?
SOCRATES: There appeared to me the likeness of a woman, fair and comely,
clothed in bright raiment, who called to me and said: O Socrates,
'The third day hence to fertile Phthia shalt thou go.' (Homer, Il.)
CRITO: What a singular dream, Socrates!
SOCRATES: There can be no doubt about the meaning, Crito, I think.
CRITO: Yes; the meaning is only too clear. But, oh! my beloved Socrates,
let me entreat you once more to take my advice and escape. For if you die
I shall not only lose a friend who can never be replaced, but there is
another evil: people who do not know you and me will believe that I might
have saved you if I had been willing to give money, but that I did not
care. Now, can there be a worse disgrace than this--that I should be
thought to value money more than the life of a friend? For the many will
not be persuaded that I wanted you to escape, and that you refused.
SOCRATES: But why, my dear Crito, should we care about the opinion of the
many? Good men, and they are the only persons who are worth considering,
will think of these things truly as they occurred.
CRITO: But you see, Socrates, that the opinion of the many must be
regarded, for what is now happening shows that they can do the greatest
evil to any one who has lost their good opinion.
SOCRATES: I only wish it were so, Crito; and that the many could do the
greatest evil; for then they would also be able to do the greatest good--
and what a fine thing this would be! But in reality they can do neither;
for they cannot make a man either wise or foolish; and whatever they do is
the result of chance.
CRITO: Well, I will not dispute with you; but please to tell me, Socrates,
whether you are not acting out of regard to me and your other friends: are
you not afraid that if you escape from prison we may get into trouble with
the informers for having stolen you away, and lose either the whole or a
great part of our property; or that even a worse evil may happen to us?
Now, if you fear on our account, be at ease; for in order to save you, we
ought surely to run this, or even a greater risk; be persuaded, then, and
do as I say.
SOCRATES: Yes, Crito, that is one fear which you mention, but by no means
the only one.
CRITO: Fear not--there are persons who are willing to get you out of
prison at no great cost; and as for the informers they are far from being
exorbitant in their demands--a little money will satisfy them. My means,
which are certainly ample, are at your service, and if you have a scruple
about spending all mine, here are strangers who will give you the use of
theirs; and one of them, Simmias the Theban, has brought a large sum of
money for this very purpose; and Cebes and many others are prepared to
spend their money in helping you to escape. I say, therefore, do not
hesitate on our account, and do not say, as you did in the court (compare
Apol.), that you will have a difficulty in knowing what to do with yourself
anywhere else. For men will love you in other places to which you may go,
and not in Athens only; there are friends of mine in Thessaly, if you like
to go to them, who will value and protect you, and no Thessalian will give
you any trouble. Nor can I think that you are at all justified, Socrates,
in betraying your own life when you might be saved; in acting thus you are
playing into the hands of your enemies, who are hurrying on your
destruction. And further I should say that you are deserting your own
children; for you might bring them up and educate them; instead of which
you go away and leave them, and they will have to take their chance; and if
they do not meet with the usual fate of orphans, there will be small thanks
to you. No man should bring children into the world who is unwilling to
persevere to the end in their nurture and education. But you appear to be
choosing the easier part, not the better and manlier, which would have been
more becoming in one who professes to care for virtue in all his actions,
like yourself. And indeed, I am ashamed not only of you, but of us who are
your friends, when I reflect that the whole business will be attributed
entirely to our want of courage. The trial need never have come on, or
might have been managed differently; and this last act, or crowning folly,
will seem to have occurred through our negligence and cowardice, who might
have saved you, if we had been good for anything; and you might have saved
yourself, for there was no difficulty at all. See now, Socrates, how sad
and discreditable are the consequences, both to us and you. Make up your
mind then, or rather have your mind already made up, for the time of
deliberation is over, and there is only one thing to be done, which must be
done this very night, and if we delay at all will be no longer practicable
or possible; I beseech you therefore, Socrates, be persuaded by me, and do
as I say.
SOCRATES: Dear Crito, your zeal is invaluable, if a right one; but if
wrong, the greater the zeal the greater the danger; and therefore we ought
to consider whether I shall or shall not do as you say. For I am and
always have been one of those natures who must be guided by reason,
whatever the reason may be which upon reflection appears to me to be the
best; and now that this chance has befallen me, I cannot repudiate my own
words: the principles which I have hitherto honoured and revered I still
honour, and unless we can at once find other and better principles, I am
certain not to agree with you; no, not even if the power of the multitude
could inflict many more imprisonments, confiscations, deaths, frightening
us like children with hobgoblin terrors (compare Apol.). What will be the
fairest way of considering the question? Shall I return to your old
argument about the opinions of men?--we were saying that some of them are
to be regarded, and others not. Now were we right in maintaining this
before I was condemned? And has the argument which was once good now
proved to be talk for the sake of talking--mere childish nonsense? That is
what I want to consider with your help, Crito:--whether, under my present
circumstances, the argument appears to be in any way different or not; and
is to be allowed by me or disallowed. That argument, which, as I believe,
is maintained by many persons of authority, was to the effect, as I was
saying, that the opinions of some men are to be regarded, and of other men
not to be regarded. Now you, Crito, are not going to die to-morrow--at
least, there is no human probability of this, and therefore you are
disinterested and not liable to be deceived by the circumstances in which
you are placed. Tell me then, whether I am right in saying that some
opinions, and the opinions of some men only, are to be valued, and that
other opinions, and the opinions of other men, are not to be valued. I ask
you whether I was right in maintaining this?
CRITO: Certainly.
SOCRATES: The good are to be regarded, and not the bad?
CRITO: Yes.
SOCRATES: And the opinions of the wise are good, and the opinions of the
unwise are evil?
CRITO: Certainly.
SOCRATES: And what was said about another matter? Is the pupil who
devotes himself to the practice of gymnastics supposed to attend to the
praise and blame and opinion of every man, or of one man only--his
physician or trainer, whoever he may be?
CRITO: Of one man only.
SOCRATES: And he ought to fear the censure and welcome the praise of that
one only, and not of the many?
CRITO: Clearly so.
SOCRATES: And he ought to act and train, and eat and drink in the way
which seems good to his single master who has understanding, rather than
according to the opinion of all other men put together?
CRITO: True.
SOCRATES: And if he disobeys and disregards the opinion and approval of
the one, and regards the opinion of the many who have no understanding,
will he not suffer evil?
CRITO: Certainly he will.
SOCRATES: And what will the evil be, whither tending and what affecting,
in the disobedient person?
CRITO: Clearly, affecting the body; that is what is destroyed by the evil.
SOCRATES: Very good; and is not this true, Crito, of other things which we
need not separately enumerate? In questions of just and unjust, fair and
foul, good and evil, which are the subjects of our present consultation,
ought we to follow the opinion of the many and to fear them; or the opinion
of the one man who has understanding? ought we not to fear and reverence
him more than all the rest of the world: and if we desert him shall we not
destroy and injure that principle in us which may be assumed to be improved
by justice and deteriorated by injustice;--there is such a principle?
CRITO: Certainly there is, Socrates.
SOCRATES: Take a parallel instance:--if, acting under the advice of those
who have no understanding, we destroy that which is improved by health and
is deteriorated by disease, would life be worth having? And that which has
been destroyed is--the body?
CRITO: Yes.
SOCRATES: Could we live, having an evil and corrupted body?
CRITO: Certainly not.
SOCRATES: And will life be worth having, if that higher part of man be
destroyed, which is improved by justice and depraved by injustice? Do we
suppose that principle, whatever it may be in man, which has to do with
justice and injustice, to be inferior to the body?
CRITO: Certainly not.
SOCRATES: More honourable than the body?
CRITO: Far more.
SOCRATES: Then, my friend, we must not regard what the many say of us:
but what he, the one man who has understanding of just and unjust, will
say, and what the truth will say. And therefore you begin in error when
you advise that we should regard the opinion of the many about just and
unjust, good and evil, honorable and dishonorable.--'Well,' some one will
say, 'but the many can kill us.'
CRITO: Yes, Socrates; that will clearly be the answer.
SOCRATES: And it is true; but still I find with surprise that the old
argument is unshaken as ever. And I should like to know whether I may say
the same of another proposition--that not life, but a good life, is to be
chiefly valued?
CRITO: Yes, that also remains unshaken.
SOCRATES: And a good life is equivalent to a just and honorable one--that
holds also?
CRITO: Yes, it does.
SOCRATES: From these premisses I proceed to argue the question whether I
ought or ought not to try and escape without the consent of the Athenians:
and if I am clearly right in escaping, then I will make the attempt; but if
not, I will abstain. The other considerations which you mention, of money
and loss of character and the duty of educating one's children, are, I
fear, only the doctrines of the multitude, who would be as ready to restore
people to life, if they were able, as they are to put them to death--and
with as little reason. But now, since the argument has thus far prevailed,
the only question which remains to be considered is, whether we shall do
rightly either in escaping or in suffering others to aid in our escape and
paying them in money and thanks, or whether in reality we shall not do
rightly; and if the latter, then death or any other calamity which may
ensue on my remaining here must not be allowed to enter into the
calculation.
CRITO: I think that you are right, Socrates; how then shall we proceed?
SOCRATES: Let us consider the matter together, and do you either refute me
if you can, and I will be convinced; or else cease, my dear friend, from
repeating to me that I ought to escape against the wishes of the Athenians:
for I highly value your attempts to persuade me to do so, but I may not be
persuaded against my own better judgment. And now please to consider my
first position, and try how you can best answer me.
CRITO: I will.
SOCRATES: Are we to say that we are never intentionally to do wrong, or
that in one way we ought and in another way we ought not to do wrong, or is
doing wrong always evil and dishonorable, as I was just now saying, and as
has been already acknowledged by us? Are all our former admissions which
were made within a few days to be thrown away? And have we, at our age,
been earnestly discoursing with one another all our life long only to
discover that we are no better than children? Or, in spite of the opinion
of the many, and in spite of consequences whether better or worse, shall we
insist on the truth of what was then said, that injustice is always an evil
and dishonour to him who acts unjustly? Shall we say so or not?
CRITO: Yes.
SOCRATES: Then we must do no wrong?
CRITO: Certainly not.
SOCRATES: Nor when injured injure in return, as the many imagine; for we
must injure no one at all? (E.g. compare Rep.)
CRITO: Clearly not.
SOCRATES: Again, Crito, may we do evil?
CRITO: Surely not, Socrates.
SOCRATES: And what of doing evil in return for evil, which is the morality
of the many--is that just or not?
CRITO: Not just.
SOCRATES: For doing evil to another is the same as injuring him?
CRITO: Very true.
SOCRATES: Then we ought not to retaliate or render evil for evil to any
one, whatever evil we may have suffered from him. But I would have you
consider, Crito, whether you really mean what you are saying. For this
opinion has never been held, and never will be held, by any considerable
number of persons; and those who are agreed and those who are not agreed
upon this point have no common ground, and can only despise one another
when they see how widely they differ. Tell me, then, whether you agree
with and assent to my first principle, that neither injury nor retaliation
nor warding off evil by evil is ever right. And shall that be the premiss
of our argument? Or do you decline and dissent from this? For so I have
ever thought, and continue to think; but, if you are of another opinion,
let me hear what you have to say. If, however, you remain of the same mind
as formerly, I will proceed to the next step.
CRITO: You may proceed, for I have not changed my mind.
SOCRATES: Then I will go on to the next point, which may be put in the
form of a question:--Ought a man to do what he admits to be right, or ought
he to betray the right?
CRITO: He ought to do what he thinks right.
SOCRATES: But if this is true, what is the application? In leaving the
prison against the will of the Athenians, do I wrong any? or rather do I
not wrong those whom I ought least to wrong? Do I not desert the
principles which were acknowledged by us to be just--what do you say?
CRITO: I cannot tell, Socrates, for I do not know.
SOCRATES: Then consider the matter in this way:--Imagine that I am about
to play truant (you may call the proceeding by any name which you like),
and the laws and the government come and interrogate me: 'Tell us,
Socrates,' they say; 'what are you about? are you not going by an act of
yours to overturn us--the laws, and the whole state, as far as in you lies?
Do you imagine that a state can subsist and not be overthrown, in which the
decisions of law have no power, but are set aside and trampled upon by
individuals?' What will be our answer, Crito, to these and the like words?
Any one, and especially a rhetorician, will have a good deal to say on
behalf of the law which requires a sentence to be carried out. He will
argue that this law should not be set aside; and shall we reply, 'Yes; but
the state has injured us and given an unjust sentence.' Suppose I say
that?
CRITO: Very good, Socrates.
SOCRATES: 'And was that our agreement with you?' the law would answer; 'or
were you to abide by the sentence of the state?' And if I were to express
my astonishment at their words, the law would probably add: 'Answer,
Socrates, instead of opening your eyes--you are in the habit of asking and
answering questions. Tell us,--What complaint have you to make against us
which justifies you in attempting to destroy us and the state? In the
first place did we not bring you into existence? Your father married your
mother by our aid and begat you. Say whether you have any objection to
urge against those of us who regulate marriage?' None, I should reply.
'Or against those of us who after birth regulate the nurture and education
of children, in which you also were trained? Were not the laws, which have
the charge of education, right in commanding your father to train you in
music and gymnastic?' Right, I should reply. 'Well then, since you were
brought into the world and nurtured and educated by us, can you deny in the
first place that you are our child and slave, as your fathers were before
you? And if this is true you are not on equal terms with us; nor can you
think that you have a right to do to us what we are doing to you. Would
you have any right to strike or revile or do any other evil to your father
or your master, if you had one, because you have been struck or reviled by
him, or received some other evil at his hands?--you would not say this?
And because we think right to destroy you, do you think that you have any
right to destroy us in return, and your country as far as in you lies?
Will you, O professor of true virtue, pretend that you are justified in
this? Has a philosopher like you failed to discover that our country is
more to be valued and higher and holier far than mother or father or any
ancestor, and more to be regarded in the eyes of the gods and of men of
understanding? also to be soothed, and gently and reverently entreated when
angry, even more than a father, and either to be persuaded, or if not
persuaded, to be obeyed? And when we are punished by her, whether with
imprisonment or stripes, the punishment is to be endured in silence; and if
she lead us to wounds or death in battle, thither we follow as is right;
neither may any one yield or retreat or leave his rank, but whether in
battle or in a court of law, or in any other place, he must do what his
city and his country order him; or he must change their view of what is
just: and if he may do no violence to his father or mother, much less may
he do violence to his country.' What answer shall we make to this, Crito?
Do the laws speak truly, or do they not?
CRITO: I think that they do.
SOCRATES: Then the laws will say: 'Consider, Socrates, if we are speaking
truly that in your present attempt you are going to do us an injury. For,
having brought you into the world, and nurtured and educated you, and given
you and every other citizen a share in every good which we had to give, we
further proclaim to any Athenian by the liberty which we allow him, that if
he does not like us when he has become of age and has seen the ways of the
city, and made our acquaintance, he may go where he pleases and take his
goods with him. None of us laws will forbid him or interfere with him.
Any one who does not like us and the city, and who wants to emigrate to a
colony or to any other city, may go where he likes, retaining his property.
But he who has experience of the manner in which we order justice and
administer the state, and still remains, has entered into an implied
contract that he will do as we command him. And he who disobeys us is, as
we maintain, thrice wrong: first, because in disobeying us he is
disobeying his parents; secondly, because we are the authors of his
education; thirdly, because he has made an agreement with us that he will
duly obey our commands; and he neither obeys them nor convinces us that our
commands are unjust; and we do not rudely impose them, but give him the
alternative of obeying or convincing us;--that is what we offer, and he
does neither.
'These are the sort of accusations to which, as we were saying, you,
Socrates, will be exposed if you accomplish your intentions; you, above all
other Athenians.' Suppose now I ask, why I rather than anybody else? they
will justly retort upon me that I above all other men have acknowledged the
agreement. 'There is clear proof,' they will say, 'Socrates, that we and
the city were not displeasing to you. Of all Athenians you have been the
most constant resident in the city, which, as you never leave, you may be
supposed to love (compare Phaedr.). For you never went out of the city
either to see the games, except once when you went to the Isthmus, or to
any other place unless when you were on military service; nor did you
travel as other men do. Nor had you any curiosity to know other states or
their laws: your affections did not go beyond us and our state; we were
your especial favourites, and you acquiesced in our government of you; and
here in this city you begat your children, which is a proof of your
satisfaction. Moreover, you might in the course of the trial, if you had
liked, have fixed the penalty at banishment; the state which refuses to let
you go now would have let you go then. But you pretended that you
preferred death to exile (compare Apol.), and that you were not unwilling
to die. And now you have forgotten these fine sentiments, and pay no
respect to us the laws, of whom you are the destroyer; and are doing what
only a miserable slave would do, running away and turning your back upon
the compacts and agreements which you made as a citizen. And first of all
answer this very question: Are we right in saying that you agreed to be
governed according to us in deed, and not in word only? Is that true or
not?' How shall we answer, Crito? Must we not assent?
CRITO: We cannot help it, Socrates.
SOCRATES: Then will they not say: 'You, Socrates, are breaking the
covenants and agreements which you made with us at your leisure, not in any
haste or under any compulsion or deception, but after you have had seventy
years to think of them, during which time you were at liberty to leave the
city, if we were not to your mind, or if our covenants appeared to you to
be unfair. You had your choice, and might have gone either to Lacedaemon
or Crete, both which states are often praised by you for their good
government, or to some other Hellenic or foreign state. Whereas you, above
all other Athenians, seemed to be so fond of the state, or, in other words,
of us her laws (and who would care about a state which has no laws?), that
you never stirred out of her; the halt, the blind, the maimed, were not
more stationary in her than you were. And now you run away and forsake
your agreements. Not so, Socrates, if you will take our advice; do not
make yourself ridiculous by escaping out of the city.
'For just consider, if you transgress and err in this sort of way, what
good will you do either to yourself or to your friends? That your friends
will be driven into exile and deprived of citizenship, or will lose their
property, is tolerably certain; and you yourself, if you fly to one of the
neighbouring cities, as, for example, Thebes or Megara, both of which are
well governed, will come to them as an enemy, Socrates, and their
government will be against you, and all patriotic citizens will cast an
evil eye upon you as a subverter of the laws, and you will confirm in the
minds of the judges the justice of their own condemnation of you. For he
who is a corrupter of the laws is more than likely to be a corrupter of the
young and foolish portion of mankind. Will you then flee from well-ordered
cities and virtuous men? and is existence worth having on these terms? Or
will you go to them without shame, and talk to them, Socrates? And what
will you say to them? What you say here about virtue and justice and
institutions and laws being the best things among men? Would that be
decent of you? Surely not. But if you go away from well-governed states
to Crito's friends in Thessaly, where there is great disorder and licence,
they will be charmed to hear the tale of your escape from prison, set off
with ludicrous particulars of the manner in which you were wrapped in a
goatskin or some other disguise, and metamorphosed as the manner is of
runaways; but will there be no one to remind you that in your old age you
were not ashamed to violate the most sacred laws from a miserable desire of
a little more life? Perhaps not, if you keep them in a good temper; but if
they are out of temper you will hear many degrading things; you will live,
but how?--as the flatterer of all men, and the servant of all men; and
doing what?--eating and drinking in Thessaly, having gone abroad in order
that you may get a dinner. And where will be your fine sentiments about
justice and virtue? Say that you wish to live for the sake of your
children--you want to bring them up and educate them--will you take them
into Thessaly and deprive them of Athenian citizenship? Is this the
benefit which you will confer upon them? Or are you under the impression
that they will be better cared for and educated here if you are still
alive, although absent from them; for your friends will take care of them?
Do you fancy that if you are an inhabitant of Thessaly they will take care
of them, and if you are an inhabitant of the other world that they will not
take care of them? Nay; but if they who call themselves friends are good
for anything, they will--to be sure they will.
'Listen, then, Socrates, to us who have brought you up. Think not of life
and children first, and of justice afterwards, but of justice first, that
you may be justified before the princes of the world below. For neither
will you nor any that belong to you be happier or holier or juster in this
life, or happier in another, if you do as Crito bids. Now you depart in
innocence, a sufferer and not a doer of evil; a victim, not of the laws,
but of men. But if you go forth, returning evil for evil, and injury for
injury, breaking the covenants and agreements which you have made with us,
and wronging those whom you ought least of all to wrong, that is to say,
yourself, your friends, your country, and us, we shall be angry with you
while you live, and our brethren, the laws in the world below, will receive
you as an enemy; for they will know that you have done your best to destroy
us. Listen, then, to us and not to Crito.'
This, dear Crito, is the voice which I seem to hear murmuring in my ears,
like the sound of the flute in the ears of the mystic; that voice, I say,
is humming in my ears, and prevents me from hearing any other. And I know
that anything more which you may say will be vain. Yet speak, if you have
anything to say.
CRITO: I have nothing to say, Socrates.
SOCRATES: Leave me then, Crito, to fulfil the will of God, and to follow
whither he leads.
As bats at the wired window of a dairy,
They beat their vans; and each was an adept,
When loosed and missioned, making wings of winds, _175
To stir sweet thoughts or sad, in destined minds.
17.
And liquors clear and sweet, whose healthful might
Could medicine the sick soul to happy sleep,
And change eternal death into a night
Of glorious dreams--or if eyes needs must weep, _180
Could make their tears all wonder and delight,
She in her crystal vials did closely keep:
If men could drink of those clear vials, 'tis said
The living were not envied of the dead.
18.
Her cave was stored with scrolls of strange device, _185
The works of some Saturnian Archimage,
Which taught the expiations at whose price
Men from the Gods might win that happy age
Too lightly lost, redeeming native vice;
And which might quench the Earth-consuming rage _190
Of gold and blood--till men should live and move
Harmonious as the sacred stars above;
19.
And how all things that seem untameable,
Not to be checked and not to be confined,
Obey the spells of Wisdom's wizard skill; _195
Time, earth, and fire--the ocean and the wind,
And all their shapes--and man's imperial will;
And other scrolls whose writings did unbind
The inmost lore of Love--let the profane
Tremble to ask what secrets they contain. _200
20.
And wondrous works of substances unknown,
To which the enchantment of her father's power
Had changed those ragged blocks of savage stone,
Were heaped in the recesses of her bower;
Carved lamps and chalices, and vials which shone _205
In their own golden beams--each like a flower,
Out of whose depth a fire-fly shakes his light
Under a cypress in a starless night.
21.
At first she lived alone in this wild home,
And her own thoughts were each a minister, _210
Clothing themselves, or with the ocean foam,
Or with the wind, or with the speed of fire,
To work whatever purposes might come
Into her mind; such power her mighty Sire
Had girt them with, whether to fly or run, _215
Through all the regions which he shines upon.
22.
The Ocean-nymphs and Hamadryades,
Oreads and Naiads, with long weedy locks,
Offered to do her bidding through the seas,
Under the earth, and in the hollow rocks, _220
And far beneath the matted roots of trees,
And in the gnarled heart of stubborn oaks,
So they might live for ever in the light
Of her sweet presence--each a satellite.
23.
'This may not be,' the wizard maid replied; _225
'The fountains where the Naiades bedew
Their shining hair, at length are drained and dried;
The solid oaks forget their strength, and strew
Their latest leaf upon the mountains wide;
The boundless ocean like a drop of dew _230
Will be consumed--the stubborn centre must
Be scattered, like a cloud of summer dust.
24.
'And ye with them will perish, one by one;--
If I must sigh to think that this shall be,
If I must weep when the surviving Sun _235
Shall smile on your decay--oh, ask not me
To love you till your little race is run;
I cannot die as ye must--over me
Your leaves shall glance--the streams in which ye dwell
Shall be my paths henceforth, and so--farewell!'-- _240
25.
She spoke and wept:--the dark and azure well
Sparkled beneath the shower of her bright tears,
And every little circlet where they fell
Flung to the cavern-roof inconstant spheres
And intertangled lines of light:--a knell _245
Of sobbing voices came upon her ears
From those departing Forms, o'er the serene
Of the white streams and of the forest green.
26.
All day the wizard lady sate aloof,
Spelling out scrolls of dread antiquity, _250
Under the cavern's fountain-lighted roof;
Or broidering the pictured poesy
Of some high tale upon her growing woof,
Which the sweet splendour of her smiles could dye
In hues outshining heaven--and ever she _255
Added some grace to the wrought poesy.
27.
While on her hearth lay blazing many a piece
Of sandal wood, rare gums, and cinnamon;
Men scarcely know how beautiful fire is--
Each flame of it is as a precious stone _260
Dissolved in ever-moving light, and this
Belongs to each and all who gaze upon.
The Witch beheld it not, for in her hand
She held a woof that dimmed the burning brand.
28.
This lady never slept, but lay in trance _265
All night within the fountain--as in sleep.
Its emerald crags glowed in her beauty's glance;
Through the green splendour of the water deep
She saw the constellations reel and dance
Like fire-flies--and withal did ever keep _270
The tenour of her contemplations calm,
With open eyes, closed feet, and folded palm.
29.
And when the whirlwinds and the clouds descended
From the white pinnacles of that cold hill,
She passed at dewfall to a space extended, _275
Where in a lawn of flowering asphodel
Amid a wood of pines and cedars blended,
There yawned an inextinguishable well
Of crimson fire--full even to the brim,
And overflowing all the margin trim. _280
30.
Within the which she lay when the fierce war
Of wintry winds shook that innocuous liquor
In many a mimic moon and bearded star
O'er woods and lawns;--the serpent heard it flicker
In sleep, and dreaming still, he crept afar-- _285
And when the windless snow descended thicker
Than autumn leaves, she watched it as it came
Melt on the surface of the level flame.
31.
She had a boat, which some say Vulcan wrought
For Venus, as the chariot of her star; _290
But it was found too feeble to be fraught
With all the ardours in that sphere which are,
And so she sold it, and Apollo bought
And gave it to this daughter: from a car
Changed to the fairest and the lightest boat _295
Which ever upon mortal stream did float.
32.
And others say, that, when but three hours old,
The first-born Love out of his cradle lept,
And clove dun Chaos with his wings of gold,
And like a horticultural adept, _300
Stole a strange seed, and wrapped it up in mould,
And sowed it in his mother's star, and kept
Watering it all the summer with sweet dew,
And with his wings fanning it as it grew.
33.
The plant grew strong and green, the snowy flower _305
Fell, and the long and gourd-like fruit began
To turn the light and dew by inward power
To its own substance; woven tracery ran
Of light firm texture, ribbed and branching, o'er
The solid rind, like a leaf's veined fan-- _310
Of which Love scooped this boat--and with soft motion
Piloted it round the circumfluous ocean.
34.
This boat she moored upon her fount, and lit
A living spirit within all its frame,
Breathing the soul of swiftness into it. _315
Couched on the fountain like a panther tame,
One of the twain at Evan's feet that sit--
Or as on Vesta's sceptre a swift flame--
Or on blind Homer's heart a winged thought,--
In joyous expectation lay the boat. _320
35.
Then by strange art she kneaded fire and snow
Together, tempering the repugnant mass
With liquid love--all things together grow
Through which the harmony of love can pass;
And a fair Shape out of her hands did flow-- _325
A living Image, which did far surpass
In beauty that bright shape of vital stone
Which drew the heart out of Pygmalion.
36.
A sexless thing it was, and in its growth
It seemed to have developed no defect _330
Of either sex, yet all the grace of both,--
In gentleness and strength its limbs were decked;
The bosom swelled lightly with its full youth,
The countenance was such as might select
Some artist that his skill should never die, _335
Imaging forth such perfect purity.
37.
From its smooth shoulders hung two rapid wings,
Fit to have borne it to the seventh sphere,
Tipped with the speed of liquid lightenings,
Dyed in the ardours of the atmosphere: _340
She led her creature to the boiling springs
Where the light boat was moored, and said: 'Sit here!'
And pointed to the prow, and took her seat
Beside the rudder, with opposing feet.
38.
And down the streams which clove those mountains vast, _345
Around their inland islets, and amid
The panther-peopled forests whose shade cast
Darkness and odours, and a pleasure hid
In melancholy gloom, the pinnace passed;
By many a star-surrounded pyramid _350
Of icy crag cleaving the purple sky,
And caverns yawning round unfathomably.
39.
The silver noon into that winding dell,
With slanted gleam athwart the forest tops,
Tempered like golden evening, feebly fell; _355
A green and glowing light, like that which drops
From folded lilies in which glow-worms dwell,
When Earth over her face Night's mantle wraps;
Between the severed mountains lay on high,
Over the stream, a narrow rift of sky. _360
40.
And ever as she went, the Image lay
With folded wings and unawakened eyes;
And o'er its gentle countenance did play
The busy dreams, as thick as summer flies,
Chasing the rapid smiles that would not stay, _365
And drinking the warm tears, and the sweet sighs
Inhaling, which, with busy murmur vain,
They had aroused from that full heart and brain.
41.
And ever down the prone vale, like a cloud
Upon a stream of wind, the pinnace went: _370
Now lingering on the pools, in which abode
The calm and darkness of the deep content
In which they paused; now o'er the shallow road
Of white and dancing waters, all besprent
With sand and polished pebbles:--mortal boat _375
In such a shallow rapid could not float.
42.
And down the earthquaking cataracts which shiver
Their snow-like waters into golden air,
Or under chasms unfathomable ever
Sepulchre them, till in their rage they tear _380
A subterranean portal for the river,
It fled--the circling sunbows did upbear
Its fall down the hoar precipice of spray,
Lighting it far upon its lampless way.
43.
And when the wizard lady would ascend _385
The labyrinths of some many-winding vale,
Which to the inmost mountain upward tend--
She called 'Hermaphroditus!'--and the pale
And heavy hue which slumber could extend
Over its lips and eyes, as on the gale _390
A rapid shadow from a slope of grass,
Into the darkness of the stream did pass.
44.
And it unfurled its heaven-coloured pinions,
With stars of fire spotting the stream below;
And from above into the Sun's dominions _395
Flinging a glory, like the golden glow
In which Spring clothes her emerald-winged minions,
All interwoven with fine feathery snow
And moonlight splendour of intensest rime,
With which frost paints the pines in winter time. _400
45.
And then it winnowed the Elysian air
Which ever hung about that lady bright,
With its aethereal vans--and speeding there,
Like a star up the torrent of the night,
Or a swift eagle in the morning glare _405
Breasting the whirlwind with impetuous flight,
The pinnace, oared by those enchanted wings,
Clove the fierce streams towards their upper springs.
46.
The water flashed, like sunlight by the prow
Of a noon-wandering meteor flung to Heaven; _410
The still air seemed as if its waves did flow
In tempest down the mountains; loosely driven
The lady's radiant hair streamed to and fro:
Beneath, the billows having vainly striven
Indignant and impetuous, roared to feel _415
The swift and steady motion of the keel.
47.
Or, when the weary moon was in the wane,
Or in the noon of interlunar night,
The lady-witch in visions could not chain
Her spirit; but sailed forth under the light _420
Of shooting stars, and bade extend amain
Its storm-outspeeding wings, the Hermaphrodite;
She to the Austral waters took her way,
Beyond the fabulous Thamondocana,--
48.
Where, like a meadow which no scythe has shaven, _425
Which rain could never bend, or whirl-blast shake,
With the Antarctic constellations paven,
Canopus and his crew, lay the Austral lake--
There she would build herself a windless haven
Out of the clouds whose moving turrets make _430
The bastions of the storm, when through the sky
The spirits of the tempest thundered by:
49.
A haven beneath whose translucent floor
The tremulous stars sparkled unfathomably,
And around which the solid vapours hoar, _435
Based on the level waters, to the sky
Lifted their dreadful crags, and like a shore
Of wintry mountains, inaccessibly
Hemmed in with rifts and precipices gray,
And hanging crags, many a cove and bay. _440
50.
And whilst the outer lake beneath the lash
Of the wind's scourge, foamed like a wounded thing,
And the incessant hail with stony clash
Ploughed up the waters, and the flagging wing
Of the roused cormorant in the lightning flash _445
Looked like the wreck of some wind-wandering
Fragment of inky thunder-smoke--this haven
Was as a gem to copy Heaven engraven,--
51.
On which that lady played her many pranks,
Circling the image of a shooting star, _450
Even as a tiger on Hydaspes' banks
Outspeeds the antelopes which speediest are,
In her light boat; and many quips and cranks
She played upon the water, till the car
Of the late moon, like a sick matron wan, _455
To journey from the misty east began.
52.
And then she called out of the hollow turrets
Of those high clouds, white, golden and vermilion,
The armies of her ministering spirits--
In mighty legions, million after million, _460
They came, each troop emblazoning its merits
On meteor flags; and many a proud pavilion
Of the intertexture of the atmosphere
They pitched upon the plain of the calm mere.
53.
They framed the imperial tent of their great Queen _465
Of woven exhalations, underlaid
With lambent lightning-fire, as may be seen
A dome of thin and open ivory inlaid
With crimson silk--cressets from the serene
Hung there, and on the water for her tread _470
A tapestry of fleece-like mist was strewn,
Dyed in the beams of the ascending moon.
54.
And on a throne o'erlaid with starlight, caught
Upon those wandering isles of aery dew,
Which highest shoals of mountain shipwreck not, _475
She sate, and heard all that had happened new
Between the earth and moon, since they had brought
The last intelligence--and now she grew
Pale as that moon, lost in the watery night--
And now she wept, and now she laughed outright. _480
55.
These were tame pleasures; she would often climb
The steepest ladder of the crudded rack
Up to some beaked cape of cloud sublime,
And like Arion on the dolphin's back
Ride singing through the shoreless air;--oft-time _485
Following the serpent lightning's winding track,
She ran upon the platforms of the wind,
And laughed to hear the fire-balls roar behind.
56.
And sometimes to those streams of upper air
Which whirl the earth in its diurnal round, _490
She would ascend, and win the spirits there
To let her join their chorus. Mortals found
That on those days the sky was calm and fair,
And mystic snatches of harmonious sound
Wandered upon the earth where'er she passed, _495
And happy thoughts of hope, too sweet to last.
57.
But her choice sport was, in the hours of sleep,
To glide adown old Nilus, where he threads
Egypt and Aethiopia, from the steep
Of utmost Axume, until he spreads, _500
Like a calm flock of silver-fleeced sheep,
His waters on the plain: and crested heads
Of cities and proud temples gleam amid,
And many a vapour-belted pyramid.
58.
By Moeris and the Mareotid lakes, _505
Strewn with faint blooms like bridal chamber floors,
Where naked boys bridling tame water-snakes,
Or charioteering ghastly alligators,
Had left on the sweet waters mighty wakes
Of those huge forms--within the brazen doors _510
Of the great Labyrinth slept both boy and beast,
Tired with the pomp of their Osirian feast.
59.
And where within the surface of the river
The shadows of the massy temples lie,
And never are erased--but tremble ever _515
Like things which every cloud can doom to die,
Through lotus-paven canals, and wheresoever
The works of man pierced that serenest sky
With tombs, and towers, and fanes, 'twas her delight
To wander in the shadow of the night. _520
60.
With motion like the spirit of that wind
Whose soft step deepens slumber, her light feet
Passed through the peopled haunts of humankind.
Scattering sweet visions from her presence sweet,
Through fane, and palace-court, and labyrinth mined _525
With many a dark and subterranean street
Under the Nile, through chambers high and deep
She passed, observing mortals in their sleep.
61.
A pleasure sweet doubtless it was to see
Mortals subdued in all the shapes of sleep. _530
Here lay two sister twins in infancy;
There, a lone youth who in his dreams did weep;
Within, two lovers linked innocently
In their loose locks which over both did creep
Like ivy from one stem;--and there lay calm _535
Old age with snow-bright hair and folded palm.
62.
But other troubled forms of sleep she saw,
Not to be mirrored in a holy song--
Distortions foul of supernatural awe,
And pale imaginings of visioned wrong; _540
And all the code of Custom's lawless law
Written upon the brows of old and young:
'This,' said the wizard maiden, 'is the strife
Which stirs the liquid surface of man's life.'
63.
And little did the sight disturb her soul.-- _545
We, the weak mariners of that wide lake
Where'er its shores extend or billows roll,
Our course unpiloted and starless make
O'er its wild surface to an unknown goal:--
But she in the calm depths her way could take, _550
Where in bright bowers immortal forms abide
Beneath the weltering of the restless tide.
64.
And she saw princes couched under the glow
Of sunlike gems; and round each temple-court
In dormitories ranged, row after row, _555
She saw the priests asleep--all of one sort--
For all were educated to be so.--
The peasants in their huts, and in the port
The sailors she saw cradled on the waves,
And the dead lulled within their dreamless graves. _560
65.
And all the forms in which those spirits lay
Were to her sight like the diaphanous
Veils, in which those sweet ladies oft array
Their delicate limbs, who would conceal from us
Only their scorn of all concealment: they _565
Move in the light of their own beauty thus.
But these and all now lay with sleep upon them,
And little thought a Witch was looking on them.
66.
She, all those human figures breathing there,
Beheld as living spirits--to her eyes _570
The naked beauty of the soul lay bare,
And often through a rude and worn disguise
She saw the inner form most bright and fair--
And then she had a charm of strange device,
Which, murmured on mute lips with tender tone, _575
Could make that spirit mingle with her own.
67.
Alas! Aurora, what wouldst thou have given
For such a charm when Tithon became gray?
Or how much, Venus, of thy silver heaven
Wouldst thou have yielded, ere Proserpina _580
Had half (oh! why not all?) the debt forgiven
Which dear Adonis had been doomed to pay,
To any witch who would have taught you it?
The Heliad doth not know its value yet.
68.
'Tis said in after times her spirit free _585
Knew what love was, and felt itself alone--
But holy Dian could not chaster be
Before she stooped to kiss Endymion,
Than now this lady--like a sexless bee
Tasting all blossoms, and confined to none, _590
Among those mortal forms, the wizard-maiden
Passed with an eye serene and heart unladen.
69.
To those she saw most beautiful, she gave
Strange panacea in a crystal bowl:--
They drank in their deep sleep of that sweet wave, _595
And lived thenceforward as if some control,
Mightier than life, were in them; and the grave
Of such, when death oppressed the weary soul,
Was as a green and overarching bower
Lit by the gems of many a starry flower. _600
70.
For on the night when they were buried, she
Restored the embalmers' ruining, and shook
The light out of the funeral lamps, to be
A mimic day within that deathy nook;
And she unwound the woven imagery _605
Of second childhood's swaddling bands, and took
The coffin, its last cradle, from its niche,
And threw it with contempt into a ditch.
71.
And there the body lay, age after age.
Mute, breathing, beating, warm, and undecaying, _610
Like one asleep in a green hermitage,
With gentle smiles about its eyelids playing,
And living in its dreams beyond the rage
Of death or life; while they were still arraying
In liveries ever new, the rapid, blind _615
And fleeting generations of mankind.
72.
And she would write strange dreams upon the brain
Of those who were less beautiful, and make
All harsh and crooked purposes more vain
Than in the desert is the serpent's wake _620
Which the sand covers--all his evil gain
The miser in such dreams would rise and shake
Into a beggar's lap;--the lying scribe
Would his own lies betray without a bribe.
73.
The priests would write an explanation full, _625
Translating hieroglyphics into Greek,
How the God Apis really was a bull,
And nothing more; and bid the herald stick
The same against the temple doors, and pull
The old cant down; they licensed all to speak _630
Whate'er they thought of hawks, and cats, and geese,
By pastoral letters to each diocese.
74.
The king would dress an ape up in his crown
And robes, and seat him on his glorious seat,
And on the right hand of the sunlike throne _635
Would place a gaudy mock-bird to repeat
The chatterings of the monkey.--Every one
Of the prone courtiers crawled to kiss the feet
Of their great Emperor, when the morning came,
And kissed--alas, how many kiss the same! _640
75.
The soldiers dreamed that they were blacksmiths, and
Walked out of quarters in somnambulism;
Round the red anvils you might see them stand
Like Cyclopses in Vulcan's sooty abysm,
Beating their swords to ploughshares;--in a band _645
The gaolers sent those of the liberal schism
Free through the streets of Memphis, much, I wis,
To the annoyance of king Amasis.
76.
And timid lovers who had been so coy,
They hardly knew whether they loved or not, _650
Would rise out of their rest, and take sweet joy,
To the fulfilment of their inmost thought;
And when next day the maiden and the boy
Met one another, both, like sinners caught,
Blushed at the thing which each believed was done _655
Only in fancy--till the tenth moon shone;
77.
And then the Witch would let them take no ill:
Of many thousand schemes which lovers find,
The Witch found one,--and so they took their fill
Of happiness in marriage warm and kind. _660
Friends who, by practice of some envious skill,
Were torn apart--a wide wound, mind from mind!--
She did unite again with visions clear
Of deep affection and of truth sincere.
80.
These were the pranks she played among the cities _665
Of mortal men, and what she did to Sprites
And Gods, entangling them in her sweet ditties
To do her will, and show their subtle sleights,
I will declare another time; for it is
A tale more fit for the weird winter nights _670
Than for these garish summer days, when we
Scarcely believe much more than we can see.
End of Project Gutenberg's The Witch of Atlas, by Percy Bysshe Shelley
|
What of mankind was the Witch able to perceive?
|
The fears and desires of mankind.
| 5,401
|
narrativeqa
|
8k
|
Introduction
Multi-document summarization (MDS), the transformation of a set of documents into a short text containing their most important aspects, is a long-studied problem in NLP. Generated summaries have been shown to support humans dealing with large document collections in information seeking tasks BIBREF0 , BIBREF1 , BIBREF2 . However, when exploring a set of documents manually, humans rarely write a fully-formulated summary for themselves. Instead, user studies BIBREF3 , BIBREF4 show that they note down important keywords and phrases, try to identify relationships between them and organize them accordingly. Therefore, we believe that the study of summarization with similarly structured outputs is an important extension of the traditional task.
A representation that is more in line with observed user behavior is a concept map BIBREF5 , a labeled graph showing concepts as nodes and relationships between them as edges (Figure FIGREF2 ). Introduced in 1972 as a teaching tool BIBREF6 , concept maps have found many applications in education BIBREF7 , BIBREF8 , for writing assistance BIBREF9 or to structure information repositories BIBREF10 , BIBREF11 . For summarization, concept maps make it possible to represent a summary concisely and clearly reveal relationships. Moreover, we see a second interesting use case that goes beyond the capabilities of textual summaries: When concepts and relations are linked to corresponding locations in the documents they have been extracted from, the graph can be used to navigate in a document collection, similar to a table of contents. An implementation of this idea has been recently described by BIBREF12 .
The corresponding task that we propose is concept-map-based MDS, the summarization of a document cluster in the form of a concept map. In order to develop and evaluate methods for the task, gold-standard corpora are necessary, but no suitable corpus is available. The manual creation of such a dataset is very time-consuming, as the annotation includes many subtasks. In particular, an annotator would need to manually identify all concepts in the documents, while only a few of them will eventually end up in the summary.
To overcome these issues, we present a corpus creation method that effectively combines automatic preprocessing, scalable crowdsourcing and high-quality expert annotations. Using it, we can avoid the high effort for single annotators, allowing us to scale to document clusters that are 15 times larger than in traditional summarization corpora. We created a new corpus of 30 topics, each with around 40 source documents on educational topics and a summarizing concept map that is the consensus of many crowdworkers (see Figure FIGREF3 ).
As a crucial step of the corpus creation, we developed a new crowdsourcing scheme called low-context importance annotation. In contrast to traditional approaches, it allows us to determine important elements in a document cluster without requiring annotators to read all documents, making it feasible to crowdsource the task and overcome quality issues observed in previous work BIBREF13 . We show that the approach creates reliable data for our focused summarization scenario and, when tested on traditional summarization corpora, creates annotations that are similar to those obtained by earlier efforts.
To summarize, we make the following contributions: (1) We propose a novel task, concept-map-based MDS (§ SECREF2 ), (2) present a new crowdsourcing scheme to create reference summaries (§ SECREF4 ), (3) publish a new dataset for the proposed task (§ SECREF5 ) and (4) provide an evaluation protocol and baseline (§ SECREF7 ). We make these resources publicly available under a permissive license.
Task
Concept-map-based MDS is defined as follows: Given a set of related documents, create a concept map that represents its most important content, satisfies a specified size limit and is connected.
We define a concept map as a labeled graph showing concepts as nodes and relationships between them as edges. Labels are arbitrary sequences of tokens taken from the documents, making the summarization task extractive. A concept can be an entity, abstract idea, event or activity, designated by its unique label. Good maps should be propositionally coherent, meaning that every relation together with the two connected concepts form a meaningful proposition.
The task is complex, consisting of several interdependent subtasks. One has to extract appropriate labels for concepts and relations and recognize different expressions that refer to the same concept across multiple documents. Further, one has to select the most important concepts and relations for the summary and finally organize them in a graph satisfying the connectedness and size constraints.
Related Work
Some attempts have been made to automatically construct concept maps from text, working with either single documents BIBREF14 , BIBREF9 , BIBREF15 , BIBREF16 or document clusters BIBREF17 , BIBREF18 , BIBREF19 . These approaches extract concept and relation labels from syntactic structures and connect them to build a concept map. However, common task definitions and comparable evaluations are missing. In addition, only a few of them, namely Villalon.2012 and Valerio.2006, define summarization as their goal and try to compress the input to a substantially smaller size. Our newly proposed task and the created large-cluster dataset fill these gaps as they emphasize the summarization aspect of the task.
For the subtask of selecting summary-worthy concepts and relations, techniques developed for traditional summarization BIBREF20 and keyphrase extraction BIBREF21 are related and applicable. Approaches that build graphs of propositions to create a summary BIBREF22, BIBREF23, BIBREF24, BIBREF25 seem to be particularly related; however, there is one important difference: while they use graphs as an intermediate representation from which a textual summary is then generated, the goal of the proposed task is to create a graph that is directly interpretable and useful for a user. In contrast, these intermediate graphs, e.g. AMR, are hardly useful for a typical, non-linguist user.
For traditional summarization, the most well-known datasets emerged out of the DUC and TAC competitions. They provide clusters of news articles with gold-standard summaries. Extending these efforts, several more specialized corpora have been created: With regard to size, Nakano.2010 present a corpus of summaries for large-scale collections of web pages. Recently, corpora with more heterogeneous documents have been suggested, e.g. BIBREF26 and BIBREF27 . The corpus we present combines these aspects, as it has large clusters of heterogeneous documents, and provides a necessary benchmark to evaluate the proposed task.
For concept map generation, one corpus with human-created summary concept maps for student essays has been created BIBREF28. In contrast to our corpus, it only deals with single documents, requires two orders of magnitude less compression of the input, and is not publicly available.
Other types of information representation that also model concepts and their relationships are knowledge bases, such as Freebase BIBREF29 , and ontologies. However, they both differ in important aspects: Whereas concept maps follow an open label paradigm and are meant to be interpretable by humans, knowledge bases and ontologies are usually more strictly typed and made to be machine-readable. Moreover, approaches to automatically construct them from text typically try to extract as much information as possible, while we want to summarize a document.
Low-Context Importance Annotation
Lloret.2013 describe several experiments to crowdsource reference summaries. Workers are asked to read 10 documents and then select 10 summary sentences from them for a reward of $0.05. They discovered several challenges, including poor work quality and the subjectiveness of the annotation task, indicating that crowdsourcing is not useful for this purpose.
To overcome these issues, we introduce a new task design, low-context importance annotation, to determine summary-worthy parts of documents. Compared to Lloret et al.'s approach, it is more in line with crowdsourcing best practices, as the tasks are simple, intuitive and small BIBREF30 and workers receive reasonable payment BIBREF31 . Most importantly, it is also much more efficient and scalable, as it does not require workers to read all documents in a cluster.
Task Design
We break down the task of importance annotation to the level of single propositions. The goal of our crowdsourcing scheme is to obtain a score for each proposition indicating its importance in a document cluster, such that a ranking according to the score would reveal what is most important and should be included in a summary. In contrast to other work, we do not show the documents to the workers at all, but provide only a description of the document cluster's topic along with the propositions. This ensures that tasks are small, simple and can be done quickly (see Figure FIGREF4 ).
In preliminary tests, we found that this design, despite the minimal context, works reasonably on our focused clusters on common educational topics. For instance, consider Figure FIGREF4 : One can easily say that P1 is more important than P2 without reading the documents.
We distinguish two task variants:
Instead of enforcing binary importance decisions, we use a 5-point Likert-scale to allow more fine-grained annotations. The obtained labels are translated into scores (5..1) and the average of all scores for a proposition is used as an estimate for its importance. This follows the idea that while single workers might find the task subjective, the consensus of multiple workers, represented in the average score, tends to be less subjective due to the “wisdom of the crowd”. We randomly group five propositions into a task.
As an alternative, we use a second task design based on pairwise comparisons. Comparisons are known to be easier to make and more consistent BIBREF32, but also more expensive, as the number of pairs grows quadratically with the number of objects. To reduce the cost, we group five propositions into a task and ask workers to order them by importance via drag-and-drop. From the results, we derive pairwise comparisons and use TrueSkill BIBREF35, a powerful Bayesian rank induction model BIBREF34, to obtain importance estimates for each proposition.
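A minimal sketch of this ranking step, assuming the third-party Python package trueskill and an invented worker ordering, could look as follows (each ordering of five propositions is unrolled into its ten pairwise comparisons, and the conservative estimate mu - 3*sigma serves as the importance score):

import itertools
import trueskill

propositions = ["P1", "P2", "P3", "P4", "P5"]
ratings = {p: trueskill.Rating() for p in propositions}

# One worker's ordering of a five-proposition task, most to least important (invented).
worker_order = ["P3", "P1", "P5", "P2", "P4"]
pairs = list(itertools.combinations(worker_order, 2))  # (winner, loser) pairs

for winner, loser in pairs:
    ratings[winner], ratings[loser] = trueskill.rate_1vs1(ratings[winner], ratings[loser])

# Conservative skill estimate as the importance score of each proposition.
ranking = sorted(propositions, key=lambda p: ratings[p].mu - 3 * ratings[p].sigma, reverse=True)
print(ranking)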
Pilot Study
To verify the proposed approach, we conducted a pilot study on Amazon Mechanical Turk using data from TAC2008 BIBREF36 . We collected importance estimates for 474 propositions extracted from the first three clusters using both task designs. Each Likert-scale task was assigned to 5 different workers and awarded $0.06. For comparison tasks, we also collected 5 labels each, paid $0.05 and sampled around 7% of all possible pairs. We submitted them in batches of 100 pairs and selected pairs for subsequent batches based on the confidence of the TrueSkill model.
Following the observations of Lloret.2013, we established several measures for quality control. First, we restricted our tasks to workers from the US with an approval rate of at least 95%. Second, we identified low quality workers by measuring the correlation of each worker's Likert-scores with the average of the other four scores. The worst workers (at most 5% of all labels) were removed.
In addition, we included trap sentences, similar to those in BIBREF13, in around 80 of the tasks. In contrast to Lloret et al.'s findings, both an obvious trap sentence (This sentence is not important) and a less obvious but unimportant one (Barack Obama graduated from Harvard Law) were consistently labeled as unimportant (1.08 and 1.14), indicating that the workers did the task properly.
For Likert-scale tasks, we follow Snow.2008 and calculate agreement as the average Pearson correlation of a worker's Likert-score with the average score of the remaining workers. This measure is less strict than exact label agreement and can account for close labels and high- or low-scoring workers. We observe a correlation of 0.81, indicating substantial agreement. For comparisons, the majority agreement is 0.73. To further examine the reliability of the collected data, we followed the approach of Kiritchenko.2016 and simply repeated the crowdsourcing for one of the three topics. Between the importance estimates calculated from the first and second run, we found a Pearson correlation of 0.82 (Spearman 0.78) for Likert-scale tasks and 0.69 (Spearman 0.66) for comparison tasks. This shows that the approach, despite the subjectiveness of the task, allows us to collect reliable annotations.
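The leave-one-out agreement used here, and the worker filtering above, both reduce to the same computation; a small sketch with numpy and invented Likert scores:

import numpy as np

# Rows are propositions, columns are workers (invented scores).
scores = np.array([
    [5, 4, 5, 4, 5],
    [2, 1, 2, 3, 1],
    [4, 4, 3, 5, 4],
    [1, 2, 1, 1, 2],
])

correlations = []
for w in range(scores.shape[1]):
    others = np.delete(scores, w, axis=1).mean(axis=1)
    correlations.append(np.corrcoef(scores[:, w], others)[0, 1])

print("per-worker correlation:", np.round(correlations, 2))
print("average agreement:", round(float(np.mean(correlations)), 2))
# Workers whose correlation falls far below the rest would be removed as low quality.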
In addition to the reliability studies, we extrinsically evaluated the annotations in the task of summary evaluation. For each of the 58 peer summaries in TAC2008, we calculated a score as the sum of the importance estimates of the propositions it contains. Table TABREF13 shows how these peer scores, averaged over the three topics, correlate with the manual responsiveness scores assigned during TAC in comparison to ROUGE-2 and Pyramid scores. The results demonstrate that with both task designs, we obtain importance annotations that are similarly useful for summary evaluation as pyramid annotations or gold-standard summaries (used for ROUGE).
Based on the pilot study, we conclude that the proposed crowdsourcing scheme allows us to obtain proper importance annotations for propositions. As workers are not required to read all documents, the annotation is much more efficient and scalable than with traditional methods.
Corpus Creation
This section presents the corpus construction process, as outlined in Figure FIGREF16 , combining automatic preprocessing, scalable crowdsourcing and high-quality expert annotations to be able to scale to the size of our document clusters. For every topic, we spent about $150 on crowdsourcing and 1.5h of expert annotations, while just a single annotator would already need over 8 hours (at 200 words per minute) to read all documents of a topic.
Source Data
As a starting point, we used the DIP corpus BIBREF37 , a collection of 49 clusters of 100 web pages on educational topics (e.g. bullying, homeschooling, drugs) with a short description of each topic. It was created from a large web crawl using state-of-the-art information retrieval. We selected 30 of the topics for which we created the necessary concept map annotations.
Proposition Extraction
As concept maps consist of propositions expressing the relation between concepts (see Figure FIGREF2), we need to impose such a structure upon the plain text in the document clusters. This could be done by manually annotating spans representing concepts and relations; however, the size of our clusters makes this a huge effort: 2288 sentences per topic (69k in total) would need to be processed. Therefore, we resort to an automatic approach.
The Open Information Extraction paradigm BIBREF38 offers a representation very similar to the desired one. For instance, from
Students with bad credit history should not lose hope and apply for federal loans with the FAFSA.
Open IE systems extract tuples of two arguments and a relation phrase representing propositions:
(s. with bad credit history, should not lose, hope)
(s. with bad credit history, apply for, federal loans with the FAFSA)
While the relation phrase is similar to a relation in a concept map, many arguments in these tuples represent useful concepts. We used Open IE 4, a state-of-the-art system BIBREF39 to process all sentences. After removing duplicates, we obtained 4137 tuples per topic.
Since we want to create a gold-standard corpus, we have to ensure that we produce high-quality data. We therefore made use of the confidence assigned to every extracted tuple to filter out low quality ones. To ensure that we do not filter too aggressively (and miss important aspects in the final summary), we manually annotated 500 tuples sampled from all topics for correctness. On the first 250 of them, we tuned the filter threshold to 0.5, which keeps 98.7% of the correct extractions in the unseen second half. After filtering, a topic had on average 2850 propositions (85k in total).
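A sketch of this threshold tuning, with an invented manual sample standing in for the 250 annotated tuples (the target retention rate is a parameter here, not the value used in the study):

def tune_threshold(annotated, target_retention=0.98):
    # annotated: list of (confidence, is_correct) pairs from a manually labeled sample.
    correct_confs = [c for c, ok in annotated if ok]
    best = 0.0
    for t in sorted({c for c, _ in annotated}):
        kept = sum(1 for c in correct_confs if c >= t)
        if kept / len(correct_confs) >= target_retention:
            best = t  # highest threshold that still keeps enough correct extractions
    return best

sample = [(0.91, True), (0.42, False), (0.73, True), (0.55, True), (0.30, False), (0.88, True)]
threshold = tune_threshold(sample)
print(threshold, [c for c, _ in sample if c >= threshold])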
Proposition Filtering
Despite the similarity of the Open IE paradigm, not every extracted tuple is a suitable proposition for a concept map. To reduce the effort in the subsequent steps, we therefore want to filter out unsuitable ones. A tuple is suitable if it (1) is a correct extraction, (2) is meaningful without any context and (3) has arguments that represent proper concepts. We created a guideline explaining when to label a tuple as suitable for a concept map and performed a small annotation study. Three annotators independently labeled 500 randomly sampled tuples. The agreement was 82% ( INLINEFORM0 ). We found tuples to be unsuitable mostly because they had unresolvable pronouns, conflicting with (2), or arguments that were full clauses or propositions, conflicting with (3), while (1) was mostly taken care of by the confidence filtering in § SECREF21 .
Due to the high number of tuples we decided to automate the filtering step. We trained a linear SVM on the majority voted annotations. As features, we used the extraction confidence, length of arguments and relations as well as part-of-speech tags, among others. To ensure that the automatic classification does not remove suitable propositions, we tuned the classifier to avoid false negatives. In particular, we introduced class weights, improving precision on the negative class at the cost of a higher fraction of positive classifications. Additionally, we manually verified a certain number of the most uncertain negative classifications to further improve performance. When 20% of the classifications are manually verified and corrected, we found that our model trained on 350 labeled instances achieves 93% precision on negative classifications on the unseen 150 instances. We found this to be a reasonable trade-off of automation and data quality and applied the model to the full dataset.
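As an illustration of the class-weighted classifier, here is a minimal sketch with scikit-learn; the two toy features (extraction confidence and argument length) and the labels are invented stand-ins for the richer feature set described above:

import numpy as np
from sklearn.svm import LinearSVC

X = np.array([[0.9, 4], [0.8, 6], [0.3, 14], [0.5, 12], [0.95, 3], [0.4, 18]])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = suitable proposition, 0 = unsuitable

# Weighting the positive class more heavily penalizes false negatives,
# trading a higher fraction of positive predictions for precise negatives.
clf = LinearSVC(class_weight={1: 3.0, 0: 1.0}).fit(X, y)

margins = clf.decision_function(X)
predictions = clf.predict(X)
order = np.argsort(np.abs(margins))  # smallest margin first
uncertain_negatives = [i for i in order if predictions[i] == 0]
print(predictions, uncertain_negatives)  # candidates for manual verification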
The classifier filtered out 43% of the propositions, leaving 1622 per topic. We manually examined the 17k least confident negative classifications and corrected 955 of them. We also corrected positive classifications for certain types of tuples for which we knew the classifier to be imprecise. Finally, each topic was left with an average of 1554 propositions (47k in total).
Importance Annotation
Given the propositions identified in the previous step, we now applied our crowdsourcing scheme as described in § SECREF4 to determine their importance. To cope with the large number of propositions, we combine the two task designs: First, we collect Likert-scores from 5 workers for each proposition, clean the data and calculate average scores. Then, using only the top 100 propositions according to these scores, we crowdsource 10% of all possible pairwise comparisons among them. Using TrueSkill, we obtain a fine-grained ranking of the 100 most important propositions.
For Likert-scores, the average agreement over all topics is 0.80, while the majority agreement for comparisons is 0.78. We repeated the data collection for three randomly selected topics and found the Pearson correlation between both runs to be 0.73 (Spearman 0.73) for Likert-scores and 0.72 (Spearman 0.71) for comparisons. These figures show that the crowdsourcing approach works on this dataset as reliably as on the TAC documents.
In total, we uploaded 53k scoring and 12k comparison tasks to Mechanical Turk, spending $4425.45 including fees. From the fine-grained ranking of the 100 most important propositions, we select the top 50 per topic to construct a summary concept map in the subsequent steps.
Proposition Revision
Having a manageable number of propositions, an annotator then applied a few straightforward transformations that correct common errors of the Open IE system. First, we break down propositions with conjunctions in either of the arguments into separate propositions per conjunct, which the Open IE system sometimes fails to do. Second, we correct span errors that might occur in the argument or relation phrases, especially when sentences were not properly segmented. As a result, we have a set of high-quality propositions for our concept map, which, due to the first transformation, contains 56.1 propositions per topic on average.
Concept Map Construction
In this final step, we connect the set of important propositions to form a graph. For instance, given the following two propositions
(student, may borrow, Stafford Loan)
(the student, does not have, a credit history)
one can easily see, although the first arguments differ slightly, that both labels describe the concept student, allowing us to build a concept map with the concepts student, Stafford Loan and credit history. The annotation task thus involves deciding which of the available propositions to include in the map, which of their concepts to merge and, when merging, which of the available labels to use. As these decisions highly depend upon each other and require context, we decided to use expert annotators rather than crowdsource the subtasks.
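A small sketch of the resulting data structure, assuming networkx and using a hand-written mapping in place of the annotators' merging decisions:

import networkx as nx

propositions = [
    ("student", "may borrow", "Stafford Loan"),
    ("the student", "does not have", "a credit history"),
]
canonical = {"the student": "student", "a credit history": "credit history"}

concept_map = nx.DiGraph()
for arg1, relation, arg2 in propositions:
    concept_map.add_edge(canonical.get(arg1, arg1), canonical.get(arg2, arg2), label=relation)

print(sorted(concept_map.nodes()))            # merged concepts
print(list(concept_map.edges(data="label")))  # labeled relations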
Annotators were given the topic description and the most important, ranked propositions. Using a simple annotation tool providing a visualization of the graph, they could connect the propositions step by step. They were instructed to reach a size of 25 concepts, the recommended maximum size for a concept map BIBREF6 . Further, they should prefer more important propositions and ensure connectedness. When connecting two propositions, they were asked to keep the concept label that was appropriate for both propositions. To support the annotators, the tool used ADW BIBREF40 , a state-of-the-art approach for semantic similarity, to suggest possible connections. The annotation was carried out by graduate students with a background in NLP after receiving an introduction into the guidelines and tool and annotating a first example.
If an annotator was not able to connect 25 concepts, she was allowed to create up to three synthetic relations with freely defined labels, making the maps slightly abstractive. On average, the constructed maps have 0.77 synthetic relations, mostly connecting concepts whose relation is too obvious to be explicitly stated in text (e.g. between Montessori teacher and Montessori education).
To assess the reliability of this annotation step, we had the first three maps created by two annotators. First, we cast the task of selecting propositions to be included in the map as a binary decision task and observed an agreement of 84% ( INLINEFORM0 ). Second, we modeled the decision of which concepts to join as a binary decision on all pairs of common concepts, observing an agreement of 95% ( INLINEFORM1 ). Finally, we compared which concept labels the annotators decided to include in the final map, observing 85% ( INLINEFORM2 ) agreement. Hence, the annotation shows substantial agreement BIBREF41.
Corpus Analysis
In this section, we describe our newly created corpus, which, in addition to having summaries in the form of concept maps, differs from traditional summarization corpora in several aspects.
Document Clusters
The corpus consists of document clusters for 30 different topics. Each of them contains around 40 documents with on average 2413 tokens, which leads to an average cluster size of 97,880 tokens. With these characteristics, the document clusters are 15 times larger than typical DUC clusters of ten documents and five times larger than the 25-document clusters (Table TABREF26). In addition, the documents are also more variable in terms of length, as the (length-adjusted) standard deviation is twice as high as in the other corpora. With these properties, the corpus represents an interesting challenge towards real-world application scenarios, in which users typically have to deal with much more than ten documents.
Because we used a large web crawl as the source for our corpus, it contains documents from a variety of genres. To further analyze this property, we categorized a sample of 50 documents from the corpus. Among them, we found professionally written articles and blog posts (28%), educational material for parents and kids (26%), personal blog posts (16%), forum discussions and comments (12%), commented link collections (12%) and scientific articles (6%).
In addition to the variety of genres, the documents also differ in terms of language use. To capture this property, we follow Zopf.2016 and compute, for every topic, the average Jensen-Shannon divergence between the word distribution of one document and the word distribution in the remaining documents. The higher this value is, the more the language differs between documents. We found the average divergence over all topics to be 0.3490, whereas it is 0.3019 in DUC 2004 and 0.3188 in TAC 2008A.
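A sketch of this heterogeneity measure, with invented toy documents and base-2 logarithms (the log base used in the study is not stated here):

import numpy as np

def distribution(tokens, vocab):
    counts = np.array([tokens.count(w) for w in vocab], dtype=float)
    return counts / counts.sum()

def js_divergence(p, q):
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a[a > 0] * np.log2(a[a > 0] / b[a > 0]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

docs = [["loans", "federal", "loans", "credit"],
        ["credit", "history", "credit", "score"],
        ["loans", "interest", "rates", "federal"]]
vocab = sorted({w for d in docs for w in d})

divergences = []
for i, doc in enumerate(docs):
    rest = [w for j, d in enumerate(docs) if j != i for w in d]
    divergences.append(js_divergence(distribution(doc, vocab), distribution(rest, vocab)))

print(round(float(np.mean(divergences)), 4))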
Concept Maps
As Table TABREF33 shows, each of the 30 reference concept maps has exactly 25 concepts and between 24 and 28 relations. Labels for both concepts and relations consist of 3.2 tokens on average, though relation labels are a bit shorter in characters.
To obtain a better picture of what kind of text spans have been used as labels, we automatically tagged them with their part-of-speech and determined their head with a dependency parser. Concept labels tend to be headed by nouns (82%) or verbs (15%), while they also contain adjectives, prepositions and determiners. Relation labels, on the other hand, are almost always headed by a verb (94%) and contain prepositions, nouns and particles in addition. These distributions are very similar to those reported by Villalon.2010 for their (single-document) concept map corpus.
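A sketch of this label analysis, assuming spaCy with an English model (the model name below is an assumption, not necessarily the tool used in the study):

import spacy

nlp = spacy.load("en_core_web_sm")
labels = ["bad credit history", "apply for", "federal student loans"]

for label in labels:
    doc = nlp(label)
    head = doc[:].root  # syntactic head of the whole span
    print(f"{label!r}: head={head.text} ({head.pos_})")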
Analyzing the graph structure of the maps, we found that all of them are connected. They have on average 7.2 central concepts with more than one relation, while the remaining ones occur in only one proposition. We found that achieving a higher number of connections would mean compromising importance, i.e. including less important propositions, and decided against it.
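The two structural checks mentioned here, connectedness and the number of central concepts, are straightforward on a graph representation; a sketch with networkx and a toy map:

import networkx as nx

cmap = nx.Graph()
cmap.add_edge("student", "Stafford Loan", label="may borrow")
cmap.add_edge("student", "credit history", label="does not have")
cmap.add_edge("Stafford Loan", "interest rate", label="has a fixed")

print(nx.is_connected(cmap))
print([c for c in cmap.nodes() if cmap.degree(c) > 1])  # concepts in more than one proposition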
Baseline Experiments
In this section, we briefly describe a baseline and evaluation scripts that we release, with a detailed documentation, along with the corpus.
Conclusion
In this work, we presented low-context importance annotation, a novel crowdsourcing scheme that we used to create a new benchmark corpus for concept-map-based MDS. The corpus has large-scale document clusters of heterogeneous web documents, posing a challenging summarization task. Together with the corpus, we provide implementations of a baseline method and evaluation scripts and hope that our efforts facilitate future research on this variant of summarization.
Acknowledgments
We would like to thank Teresa Botschen, Andreas Hanselowski and Markus Zopf for their help with the annotation work and Christian Meyer for his valuable feedback. This work has been supported by the German Research Foundation as part of the Research Training Group “Adaptive Preparation of Information from Heterogeneous Sources” (AIPHES) under grant No. GRK 1994/1.
|
What type of evaluation is proposed for this task?
|
Answer with content missing: (Evaluation Metrics section) Precision, Recall, F1-scores, Strict match, METEOR, ROUGE-2
| 4,263
|
qasper
|
8k
|
Introduction
Language model pretraining has advanced the state of the art in many NLP tasks ranging from sentiment analysis to question answering, natural language inference, named entity recognition, and textual similarity. State-of-the-art pretrained models include ELMo BIBREF1, GPT BIBREF2, and more recently Bidirectional Encoder Representations from Transformers (Bert; BIBREF0). Bert combines both word and sentence representations in a single very large Transformer BIBREF3; it is pretrained on vast amounts of text, with an unsupervised objective of masked language modeling and next-sentence prediction, and can be fine-tuned with various task-specific objectives.
In most cases, pretrained language models have been employed as encoders for sentence- and paragraph-level natural language understanding problems BIBREF0 involving various classification tasks (e.g., predicting whether any two sentences are in an entailment relationship; or determining the completion of a sentence among four alternative sentences). In this paper, we examine the influence of language model pretraining on text summarization. Different from previous tasks, summarization requires wide-coverage natural language understanding going beyond the meaning of individual words and sentences. The aim is to condense a document into a shorter version while preserving most of its meaning. Furthermore, under abstractive modeling formulations, the task requires language generation capabilities in order to create summaries containing novel words and phrases not featured in the source text, while extractive summarization is often defined as a binary classification task with labels indicating whether a text span (typically a sentence) should be included in the summary.
We explore the potential of Bert for text summarization under a general framework encompassing both extractive and abstractive modeling paradigms. We propose a novel document-level encoder based on Bert which is able to encode a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several inter-sentence Transformer layers to capture document-level features for extracting sentences. Our abstractive model adopts an encoder-decoder architecture, combining the same pretrained Bert encoder with a randomly-initialized Transformer decoder BIBREF3. We design a new training schedule which separates the optimizers of the encoder and the decoder in order to accommodate the fact that the former is pretrained while the latter must be trained from scratch. Finally, motivated by previous work showing that the combination of extractive and abstractive objectives can help generate better summaries BIBREF4, we present a two-stage approach where the encoder is fine-tuned twice, first with an extractive objective and subsequently on the abstractive summarization task.
We evaluate the proposed approach on three single-document news summarization datasets representative of different writing conventions (e.g., important information is concentrated at the beginning of the document or distributed more evenly throughout) and summary styles (e.g., verbose vs. more telegraphic; extractive vs. abstractive). Across datasets, we experimentally show that the proposed models achieve state-of-the-art results under both extractive and abstractive settings. Our contributions in this work are three-fold: a) we highlight the importance of document encoding for the summarization task; a variety of recently proposed techniques aim to enhance summarization performance via copying mechanisms BIBREF5, BIBREF6, BIBREF7, reinforcement learning BIBREF8, BIBREF9, BIBREF10, and multiple communicating encoders BIBREF11. We achieve better results with a minimum-requirement model without using any of these mechanisms; b) we showcase ways to effectively employ pretrained language models in summarization under both extractive and abstractive settings; we would expect any improvements in model pretraining to translate into better summarization in the future; and c) the proposed models can be used as a stepping stone to further improve summarization performance as well as baselines against which new proposals are tested.
Background ::: Pretrained Language Models
Pretrained language models BIBREF1, BIBREF2, BIBREF0, BIBREF12, BIBREF13 have recently emerged as a key technology for achieving impressive gains in a wide variety of natural language tasks. These models extend the idea of word embeddings by learning contextual representations from large-scale corpora using a language modeling objective. Bidirectional Encoder Representations from Transformers (Bert; BIBREF0) is a new language representation model which is trained with a masked language modeling and a “next sentence prediction” task on a corpus of 3,300M words.
The general architecture of Bert is shown in the left part of Figure FIGREF2. Input text is first preprocessed by inserting two special tokens. A [cls] token is prepended to the text; the output representation of this token is used to aggregate information from the whole sequence (e.g., for classification tasks). A [sep] token is inserted after each sentence to mark sentence boundaries. The modified text is then represented as a sequence of tokens $X=[w_1,w_2,\cdots ,w_n]$. Each token $w_i$ is assigned three kinds of embeddings: token embeddings indicate the meaning of each token, segmentation embeddings discriminate between two sentences (e.g., in a sentence-pair classification task), and position embeddings indicate the position of each token within the text sequence. These three embeddings are summed into a single input vector $x_i$ and fed to a bidirectional Transformer with multiple layers:
where $h^0=x$ are the input vectors; $\mathrm {LN}$ is the layer normalization operation BIBREF14; $\mathrm {MHAtt}$ is the multi-head attention operation BIBREF3; superscript $l$ indicates the depth of the stacked layer. On the top layer, Bert will generate an output vector $t_i$ for each token with rich contextual information.
Pretrained language models are usually used to enhance performance in language understanding tasks. Very recently, there have been attempts to apply pretrained models to various generation problems BIBREF15, BIBREF16. When fine-tuning for a specific task, unlike ELMo whose parameters are usually fixed, parameters in Bert are jointly fine-tuned with additional task-specific parameters.
Background ::: Extractive Summarization
Extractive summarization systems create a summary by identifying (and subsequently concatenating) the most important sentences in a document. Neural models consider extractive summarization as a sentence classification problem: a neural encoder creates sentence representations and a classifier predicts which sentences should be selected as summaries. SummaRuNNer BIBREF7 is one of the earliest neural approaches adopting an encoder based on Recurrent Neural Networks. Refresh BIBREF8 is a reinforcement learning-based system trained by globally optimizing the ROUGE metric. More recent work achieves higher performance with more sophisticated model structures. Latent BIBREF17 frames extractive summarization as a latent variable inference problem; instead of maximizing the likelihood of “gold” standard labels, their latent model directly maximizes the likelihood of human summaries given selected sentences. Sumo BIBREF18 capitalizes on the notion of structured attention to induce a multi-root dependency tree representation of the document while predicting the output summary. NeuSum BIBREF19 scores and selects sentences jointly and represents the state of the art in extractive summarization.
Background ::: Abstractive Summarization
Neural approaches to abstractive summarization conceptualize the task as a sequence-to-sequence problem, where an encoder maps a sequence of tokens in the source document $\mathbf {x} = [x_1, ..., x_n]$ to a sequence of continuous representations $\mathbf {z} = [z_1, ..., z_n]$, and a decoder then generates the target summary $\mathbf {y} = [y_1, ..., y_m]$ token-by-token, in an auto-regressive manner, hence modeling the conditional probability: $p(y_1, ..., y_m|x_1, ..., x_n)$.
BIBREF20 and BIBREF21 were among the first to apply the neural encoder-decoder architecture to text summarization. BIBREF6 enhance this model with a pointer-generator network (PTgen) which allows it to copy words from the source text, and a coverage mechanism (Cov) which keeps track of words that have been summarized. BIBREF11 propose an abstractive system where multiple agents (encoders) represent the document together with a hierarchical attention mechanism (over the agents) for decoding. Their Deep Communicating Agents (DCA) model is trained end-to-end with reinforcement learning. BIBREF9 also present a deep reinforced model (DRM) for abstractive summarization which handles the coverage problem with an intra-attention mechanism where the decoder attends over previously generated words. BIBREF4 follow a bottom-up approach (BottomUp); a content selector first determines which phrases in the source document should be part of the summary, and a copy mechanism is applied only to preselected phrases during decoding. BIBREF22 propose an abstractive model which is particularly suited to extreme summarization (i.e., single sentence summaries), based on convolutional neural networks and additionally conditioned on topic distributions (TConvS2S).
Fine-tuning Bert for Summarization ::: Summarization Encoder
Although Bert has been used to fine-tune various NLP tasks, its application to summarization is not as straightforward. Since Bert is trained as a masked-language model, the output vectors are grounded to tokens instead of sentences, while in extractive summarization, most models manipulate sentence-level representations. Although segmentation embeddings represent different sentences in Bert, they only apply to sentence-pair inputs, while in summarization we must encode and manipulate multi-sentential inputs. Figure FIGREF2 illustrates our proposed Bert architecture for Summarization (which we call BertSum).
In order to represent individual sentences, we insert external [cls] tokens at the start of each sentence, and each [cls] symbol collects features for the sentence preceding it. We also use interval segment embeddings to distinguish multiple sentences within a document. For $sent_i$ we assign segment embedding $E_A$ or $E_B$ depending on whether $i$ is odd or even. For example, for document $[sent_1, sent_2, sent_3, sent_4, sent_5]$, we would assign embeddings $[E_A, E_B, E_A,E_B, E_A]$. This way, document representations are learned hierarchically where lower Transformer layers represent adjacent sentences, while higher layers, in combination with self-attention, represent multi-sentence discourse.
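As a concrete illustration, the following minimal sketch (an assumption about implementation details rather than the released code; the tokenizer methods follow standard BERT tokenizer conventions) builds the token ids, interval segment ids, and per-sentence [cls] positions for a document:

```python
# Minimal sketch of BertSum-style input construction (illustrative, not the authors' code).
# Assumes a BERT-style subword tokenizer exposing tokenize() and convert_tokens_to_ids().

def build_bertsum_inputs(sentences, tokenizer):
    """sentences: list of sentence strings.
    Returns token ids, interval segment ids, and the position of each sentence's [CLS]."""
    token_ids, segment_ids, cls_positions = [], [], []
    for i, sent in enumerate(sentences):
        pieces = ["[CLS]"] + tokenizer.tokenize(sent) + ["[SEP]"]
        ids = tokenizer.convert_tokens_to_ids(pieces)
        cls_positions.append(len(token_ids))   # index of this sentence's [CLS] symbol
        segment = 0 if i % 2 == 0 else 1       # alternate E_A / E_B per sentence
        token_ids.extend(ids)
        segment_ids.extend([segment] * len(ids))
    return token_ids, segment_ids, cls_positions
```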
Position embeddings in the original Bert model have a maximum length of 512; we overcome this limitation by adding more position embeddings that are initialized randomly and fine-tuned with other parameters in the encoder.
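A sketch of how the position embedding table could be extended beyond 512 positions (attribute names follow the common Hugging Face BertModel layout and are assumptions; a full implementation would also update any cached position-id buffers):

```python
import torch.nn as nn

def extend_position_embeddings(bert_model, max_len=800):
    """Copy the 512 pretrained position embeddings and append randomly initialized
    rows for positions 512..max_len-1, to be fine-tuned with the rest of the encoder."""
    old = bert_model.embeddings.position_embeddings       # nn.Embedding(512, hidden)
    hidden = old.weight.size(1)
    new = nn.Embedding(max_len, hidden)
    new.weight.data[:old.num_embeddings] = old.weight.data    # keep pretrained rows
    bert_model.embeddings.position_embeddings = new            # extra rows stay random
    return bert_model
```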
Fine-tuning Bert for Summarization ::: Extractive Summarization
Let $d$ denote a document containing sentences $[sent_1, sent_2, \cdots , sent_m]$, where $sent_i$ is the $i$-th sentence in the document. Extractive summarization can be defined as the task of assigning a label $y_i \in \lbrace 0, 1\rbrace $ to each $sent_i$, indicating whether the sentence should be included in the summary. It is assumed that summary sentences represent the most important content of the document.
With BertSum, vector $t_i$, the output vector of the $i$-th [cls] symbol from the top layer, can be used as the representation for $sent_i$. Several inter-sentence Transformer layers are then stacked on top of the Bert outputs to capture document-level features for extracting summaries:
where $h^0=\mathrm {PosEmb}(T)$; $T$ denotes the sentence vectors output by BertSum, and function $\mathrm {PosEmb}$ adds sinusoid positional embeddings BIBREF3 to $T$, indicating the position of each sentence.
The final output layer is a sigmoid classifier:
where $h^L_i$ is the vector for $sent_i$ from the top (i.e., $L$-th) layer of the Transformer. In our experiments, we implemented Transformers with $L=1, 2, 3$ and found that a Transformer with $L=2$ performed best. We name this model BertSumExt.
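The extractive head just described could be approximated as follows (an illustrative sketch using PyTorch's built-in Transformer layers rather than the authors' exact implementation; sinusoidal position embeddings are assumed to be added to the sentence vectors beforehand):

```python
import torch
import torch.nn as nn

class ExtractiveHead(nn.Module):
    """Inter-sentence Transformer stack over the per-sentence [CLS] vectors,
    followed by a sigmoid classifier (L=2 layers performed best)."""
    def __init__(self, hidden=768, n_layers=2, n_heads=8, ff=2048):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=n_heads,
                                           dim_feedforward=ff)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.classifier = nn.Linear(hidden, 1)

    def forward(self, sent_vecs):
        # sent_vecs: (num_sents, batch, hidden) -- the t_i vectors from BertSum
        h = self.encoder(sent_vecs)
        return torch.sigmoid(self.classifier(h)).squeeze(-1)   # P(y_i = 1) per sentence
```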
The loss of the model is the binary cross-entropy of prediction $\hat{y}_i$ against gold label $y_i$. Inter-sentence Transformer layers are jointly fine-tuned with BertSum. We use the Adam optimizer with $\beta _1=0.9$ and $\beta _2=0.999$. Our learning rate schedule follows BIBREF3 with warming-up ($ \operatorname{\operatorname{warmup}}=10,000$):
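The warmup schedule amounts to a linear warmup followed by inverse-square-root decay; a small sketch with the stated warmup of 10,000 steps (the base learning rate value here is an assumption, not stated in this paragraph):

```python
def extractive_lr(step, base_lr=2e-3, warmup=10000):
    """Learning rate at a given training step: linear warmup for `warmup` steps,
    then inverse-square-root decay (the schedule of BIBREF3)."""
    step = max(step, 1)
    return base_lr * min(step ** -0.5, step * warmup ** -1.5)
```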
Fine-tuning Bert for Summarization ::: Abstractive Summarization
We use a standard encoder-decoder framework for abstractive summarization BIBREF6. The encoder is the pretrained BertSum and the decoder is a 6-layered Transformer initialized randomly. It is conceivable that there is a mismatch between the encoder and the decoder, since the former is pretrained while the latter must be trained from scratch. This can make fine-tuning unstable; for example, the encoder might overfit the data while the decoder underfits, or vice versa. To circumvent this, we design a new fine-tuning schedule which separates the optimizers of the encoder and the decoder.
We use two Adam optimizers with $\beta _1=0.9$ and $\beta _2=0.999$ for the encoder and the decoder, respectively, each with different warmup-steps and learning rates:
where $\tilde{lr}_{\mathcal {E}}=2e^{-3}$, and $\operatorname{\operatorname{warmup}}_{\mathcal {E}}=20,000$ for the encoder and $\tilde{lr}_{\mathcal {D}}=0.1$, and $\operatorname{\operatorname{warmup}}_{\mathcal {D}}=10,000$ for the decoder. This is based on the assumption that the pretrained encoder should be fine-tuned with a smaller learning rate and smoother decay (so that the encoder can be trained with more accurate gradients when the decoder is becoming stable).
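A sketch of this separate-optimizer setup (the `encoder` and `decoder` attribute names are assumptions about how the model object is organized):

```python
import torch

def build_optimizers(model, lr_enc=2e-3, lr_dec=0.1,
                     warmup_enc=20000, warmup_dec=10000):
    """Two Adam optimizers: one for the pretrained encoder (small lr, long warmup),
    one for the randomly-initialized decoder (larger lr, shorter warmup)."""
    opt_enc = torch.optim.Adam(model.encoder.parameters(), lr=lr_enc, betas=(0.9, 0.999))
    opt_dec = torch.optim.Adam(model.decoder.parameters(), lr=lr_dec, betas=(0.9, 0.999))

    def lr_at(step, base_lr, warmup):
        step = max(step, 1)
        return base_lr * min(step ** -0.5, step * warmup ** -1.5)

    def set_learning_rates(step):
        for group in opt_enc.param_groups:
            group["lr"] = lr_at(step, lr_enc, warmup_enc)
        for group in opt_dec.param_groups:
            group["lr"] = lr_at(step, lr_dec, warmup_dec)

    return opt_enc, opt_dec, set_learning_rates
```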
In addition, we propose a two-stage fine-tuning approach, where we first fine-tune the encoder on the extractive summarization task (Section SECREF8) and then fine-tune it on the abstractive summarization task (Section SECREF13). Previous work BIBREF4, BIBREF23 suggests that using extractive objectives can boost the performance of abstractive summarization. Notice also that this two-stage approach is conceptually very simple: the model can take advantage of information shared between the two tasks without fundamentally changing its architecture. We name the default abstractive model BertSumAbs and the two-stage fine-tuned model BertSumExtAbs.
Experimental Setup
In this section, we describe the summarization datasets used in our experiments and discuss various implementation details.
Experimental Setup ::: Summarization Datasets
We evaluated our model on three benchmark datasets, namely the CNN/DailyMail news highlights dataset BIBREF24, the New York Times Annotated Corpus (NYT; BIBREF25), and XSum BIBREF22. These datasets represent different summary styles ranging from highlights to very brief one sentence summaries. The summaries also vary with respect to the type of rewriting operations they exemplify (e.g., some showcase more cut and paste operations while others are genuinely abstractive). Table TABREF12 presents statistics on these datasets (test set); example (gold-standard) summaries are provided in the supplementary material.
Experimental Setup ::: Summarization Datasets ::: CNN/DailyMail
contains news articles and associated highlights, i.e., a few bullet points giving a brief overview of the article. We used the standard splits of BIBREF24 for training, validation, and testing (90,266/1,220/1,093 CNN documents and 196,961/12,148/10,397 DailyMail documents). We did not anonymize entities. We first split sentences with the Stanford CoreNLP toolkit BIBREF26 and pre-processed the dataset following BIBREF6. Input documents were truncated to 512 tokens.
Experimental Setup ::: Summarization Datasets ::: NYT
contains 110,540 articles with abstractive summaries. Following BIBREF27, we split these into 100,834/9,706 training/test examples, based on the date of publication (the test set contains all articles published from January 1, 2007 onward). We used 4,000 examples from the training set for validation. We also followed their filtering procedure: documents with summaries of fewer than 50 words were removed from the dataset. The filtered test set (NYT50) includes 3,452 examples. Sentences were split with the Stanford CoreNLP toolkit BIBREF26 and pre-processed following BIBREF27. Input documents were truncated to 800 tokens.
Experimental Setup ::: Summarization Datasets ::: XSum
contains 226,711 news articles accompanied with a one-sentence summary, answering the question “What is this article about?”. We used the splits of BIBREF22 for training, validation, and testing (204,045/11,332/11,334) and followed the pre-processing introduced in their work. Input documents were truncated to 512 tokens.
Aside from various statistics on the three datasets, Table TABREF12 also reports the proportion of novel bi-grams in gold summaries as a measure of their abstractiveness. We would expect models with extractive biases to perform better on datasets with (mostly) extractive summaries, and abstractive models to perform more rewrite operations on datasets with abstractive summaries. CNN/DailyMail and NYT are somewhat abstractive, while XSum is highly abstractive.
Experimental Setup ::: Implementation Details
For both extractive and abstractive settings, we used PyTorch, OpenNMT BIBREF28 and the `bert-base-uncased' version of Bert to implement BertSum. Both source and target texts were tokenized with Bert's subwords tokenizer.
Experimental Setup ::: Implementation Details ::: Extractive Summarization
All extractive models were trained for 50,000 steps on 3 GPUs (GTX 1080 Ti) with gradient accumulation every two steps. Model checkpoints were saved and evaluated on the validation set every 1,000 steps. We selected the top-3 checkpoints based on the evaluation loss on the validation set, and report the averaged results on the test set. We used a greedy algorithm similar to BIBREF7 to obtain an oracle summary for each document to train extractive models. The algorithm generates an oracle consisting of multiple sentences which maximize the ROUGE-2 score against the gold summary.
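The greedy oracle construction could look roughly like this (a simplified sketch; `rouge2` is a placeholder for a ROUGE-2 scoring function, and the cap on the number of oracle sentences is an assumption):

```python
def greedy_oracle(doc_sents, gold_summary, rouge2, max_sents=3):
    """Greedily add the document sentence that most increases ROUGE-2 against the
    gold summary; stop when no remaining sentence improves the score."""
    selected, best = [], 0.0
    while len(selected) < max_sents:
        gains = []
        for i in range(len(doc_sents)):
            if i in selected:
                continue
            candidate = " ".join(doc_sents[j] for j in selected + [i])
            gains.append((rouge2(candidate, gold_summary), i))
        if not gains:
            break
        score, idx = max(gains)
        if score <= best:          # no sentence improves the oracle any further
            break
        best, selected = score, selected + [idx]
    return sorted(selected)        # indices of the oracle sentences
```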
When predicting summaries for a new document, we first use the model to obtain the score for each sentence. We then rank these sentences by their scores from highest to lowest, and select the top-3 sentences as the summary.
During sentence selection we use Trigram Blocking to reduce redundancy BIBREF9. Given summary $S$ and candidate sentence $c$, we skip $c$ if there exists a trigram overlapping between $c$ and $S$. The intuition is similar to Maximal Marginal Relevance (MMR; BIBREF29); we wish to minimize the similarity between the sentence being considered and sentences which have been already selected as part of the summary.
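A sketch of top-3 sentence selection with Trigram Blocking (whitespace tokenization is a simplification of the real pipeline):

```python
def trigrams(text):
    tokens = text.lower().split()
    return {tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)}

def select_summary(sentences, scores, k=3):
    """Rank sentences by model score and take the top k, skipping any candidate
    that shares a trigram with the sentences already selected."""
    order = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    chosen, seen = [], set()
    for i in order:
        tri = trigrams(sentences[i])
        if tri & seen:                 # overlaps with the partial summary: skip
            continue
        chosen.append(i)
        seen |= tri
        if len(chosen) == k:
            break
    return [sentences[i] for i in sorted(chosen)]   # restore document order
```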
Experimental Setup ::: Implementation Details ::: Abstractive Summarization
In all abstractive models, we applied dropout (with probability $0.1$) before all linear layers; label smoothing BIBREF30 with smoothing factor $0.1$ was also used. Our Transformer decoder has 768 hidden units and the hidden size for all feed-forward layers is 2,048. All models were trained for 200,000 steps on 4 GPUs (GTX 1080 Ti) with gradient accumulation every five steps. Model checkpoints were saved and evaluated on the validation set every 2,500 steps. We selected the top-3 checkpoints based on their evaluation loss on the validation set, and report the averaged results on the test set.
During decoding we used beam search (size 5), and tuned the $\alpha $ for the length penalty BIBREF31 between $0.6$ and 1 on the validation set; we decode until an end-of-sequence token is emitted and repeated trigrams are blocked BIBREF9. It is worth noting that our decoder applies neither a copy nor a coverage mechanism BIBREF6, despite their popularity in abstractive summarization. This is mainly because we focus on building a minimum-requirements model and these mechanisms may introduce additional hyper-parameters to tune. Thanks to the subwords tokenizer, we also rarely observe issues with out-of-vocabulary words in the output; moreover, trigram-blocking produces diverse summaries managing to reduce repetitions.
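For reference, the length penalty of BIBREF31 takes the following form (a sketch; beam scores are divided by this value, and alpha is the parameter tuned on the validation set):

```python
def length_penalty(length, alpha=0.8):
    """GNMT-style length penalty: lp(Y) = ((5 + |Y|) / 6) ** alpha."""
    return ((5.0 + length) / 6.0) ** alpha
```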
Results ::: Automatic Evaluation
We evaluated summarization quality automatically using ROUGE BIBREF32. We report unigram and bigram overlap (ROUGE-1 and ROUGE-2) as a means of assessing informativeness and the longest common subsequence (ROUGE-L) as a means of assessing fluency. Table TABREF23 summarizes our results on the CNN/DailyMail dataset. The first block in the table includes the results of an extractive Oracle system as an upper bound. We also present the Lead-3 baseline (which simply selects the first three sentences in a document). The second block in the table includes various extractive models trained on the CNN/DailyMail dataset (see Section SECREF5 for an overview). For comparison to our own model, we also implemented a non-pretrained Transformer baseline (TransformerExt) which uses the same architecture as BertSumExt, but with fewer parameters. It is randomly initialized and only trained on the summarization task. TransformerExt has 6 layers, the hidden size is 512, and the feed-forward filter size is 2,048. The model was trained with the same settings as in BIBREF3. The third block in Table TABREF23 highlights the performance of several abstractive models on the CNN/DailyMail dataset (see Section SECREF6 for an overview). We also include an abstractive Transformer baseline (TransformerAbs) which has the same decoder as our abstractive BertSum models; the encoder is a 6-layer Transformer with 768 hidden size and 2,048 feed-forward filter size. The fourth block reports results with fine-tuned Bert models: BertSumExt and its two variants (one without interval embeddings, and one with the large version of Bert), BertSumAbs, and BertSumExtAbs. Bert-based models outperform the Lead-3 baseline, which is not a strawman; on the CNN/DailyMail corpus it is indeed superior to several extractive BIBREF7, BIBREF8, BIBREF19 and abstractive models BIBREF6. Bert models collectively outperform all previously proposed extractive and abstractive systems, only falling behind the Oracle upper bound. Among Bert variants, BertSumExt performs best, which is not entirely surprising; CNN/DailyMail summaries are somewhat extractive and even abstractive models are prone to copying sentences from the source document when trained on this dataset BIBREF6. Perhaps unsurprisingly, we observe that larger versions of Bert lead to performance improvements and that interval embeddings bring only slight gains.

Table TABREF24 presents results on the NYT dataset. Following the evaluation protocol in BIBREF27, we use limited-length ROUGE Recall, where predicted summaries are truncated to the length of the gold summaries. Again, we report the performance of the Oracle upper bound and Lead-3 baseline. The second block in the table contains previously proposed extractive models as well as our own Transformer baseline. Compress BIBREF27 is an ILP-based model which combines compression and anaphoricity constraints. The third block includes abstractive models from the literature, and our Transformer baseline. Bert-based models are shown in the fourth block. Again, we observe that they outperform previously proposed approaches. On this dataset, abstractive Bert models generally perform better compared to BertSumExt, almost approaching Oracle performance.
Table TABREF26 summarizes our results on the XSum dataset. Recall that summaries in this dataset are highly abstractive (see Table TABREF12) consisting of a single sentence conveying the gist of the document. Extractive models here perform poorly as corroborated by the low performance of the Lead baseline (which simply selects the leading sentence from the document), and the Oracle (which selects a single-best sentence in each document) in Table TABREF26. As a result, we do not report results for extractive models on this dataset. The second block in Table TABREF26 presents the results of various abstractive models taken from BIBREF22 and also includes our own abstractive Transformer baseline. In the third block we show the results of our Bert summarizers which again are superior to all previously reported models (by a wide margin).
Results ::: Model Analysis ::: Learning Rates
Recall that our abstractive model uses separate optimizers for the encoder and decoder. In Table TABREF27 we examine whether the combination of different learning rates ($\tilde{lr}_{\mathcal {E}}$ and $\tilde{lr}_{\mathcal {D}}$) is indeed beneficial. Specifically, we report model perplexity on the CNN/DailyMail validation set for varying encoder/decoder learning rates. We can see that the model performs best with $\tilde{lr}_{\mathcal {E}}=2e-3$ and $\tilde{lr}_{\mathcal {D}}=0.1$.
Results ::: Model Analysis ::: Position of Extracted Sentences
In addition to the evaluation based on ROUGE, we also analyzed in more detail the summaries produced by our model. For the extractive setting, we looked at the position (in the source document) of the sentences which were selected to appear in the summary. Figure FIGREF31 shows the proportion of selected summary sentences which appear in the source document at positions 1, 2, and so on. The analysis was conducted on the CNN/DailyMail dataset for Oracle summaries, and those produced by BertSumExt and the TransformerExt. We can see that Oracle summary sentences are fairly smoothly distributed across documents, while summaries created by TransformerExt mostly concentrate on the first document sentences. BertSumExt outputs are more similar to Oracle summaries, indicating that with the pretrained encoder, the model relies less on shallow position features, and learns deeper document representations.
Results ::: Model Analysis ::: Novel N-grams
We also analyzed the output of abstractive systems by calculating the proportion of novel n-grams that appear in the summaries but not in the source texts. The results are shown in Figure FIGREF33. In the CNN/DailyMail dataset, the proportion of novel n-grams in automatically generated summaries is much lower compared to reference summaries, but in XSum, this gap is much smaller. We also observe that on CNN/DailyMail, BertSumExtAbs produces fewer novel n-grams than BertSumAbs, which is not surprising. BertSumExtAbs is more biased towards selecting sentences from the source document since it is initially trained as an extractive model. The supplementary material includes examples of system output and additional ablation studies.
Results ::: Human Evaluation
In addition to automatic evaluation, we also evaluated system output by eliciting human judgments. We report experiments following a question-answering (QA) paradigm BIBREF33, BIBREF8 which quantifies the degree to which summarization models retain key information from the document. Under this paradigm, a set of questions is created based on the gold summary under the assumption that it highlights the most important document content. Participants are then asked to answer these questions by reading system summaries alone without access to the article. The more questions a system can answer, the better it is at summarizing the document as a whole. Moreover, we also assessed the overall quality of the summaries produced by abstractive systems which due to their ability to rewrite content may produce disfluent or ungrammatical output. Specifically, we followed the Best-Worst Scaling BIBREF34 method where participants were presented with the output of two systems (and the original document) and asked to decide which one was better according to the criteria of Informativeness, Fluency, and Succinctness.
Both types of evaluation were conducted on the Amazon Mechanical Turk platform. For the CNN/DailyMail and NYT datasets we used the same documents (20 in total) and questions from previous work BIBREF8, BIBREF18. For XSum, we randomly selected 20 documents (and their questions) from the release of BIBREF22. We elicited 3 responses per HIT. With regard to QA evaluation, we adopted the scoring mechanism from BIBREF33; correct answers were marked with a score of one, partially correct answers with 0.5, and zero otherwise. For quality-based evaluation, the rating of each system was computed as the percentage of times it was chosen as better minus the times it was selected as worse. Ratings thus range from -1 (worst) to 1 (best).
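The quality-based rating reduces to a simple computation (an illustrative helper):

```python
def bws_rating(times_best, times_worst, total_comparisons):
    """Best-Worst Scaling score: fraction of comparisons in which the system was
    chosen as best minus the fraction in which it was chosen as worst, in [-1, 1]."""
    return (times_best - times_worst) / total_comparisons
```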
Results for extractive and abstractive systems are shown in Tables TABREF37 and TABREF38, respectively. We compared the best performing BertSum model in each setting (extractive or abstractive) against various state-of-the-art systems (whose output is publicly available), the Lead baseline, and the Gold standard as an upper bound. As shown in both tables participants overwhelmingly prefer the output of our model against comparison systems across datasets and evaluation paradigms. All differences between BertSum and comparison models are statistically significant ($p<0.05$), with the exception of TConvS2S (see Table TABREF38; XSum) in the QA evaluation setting.
Conclusions
In this paper, we showcased how pretrained Bert can be usefully applied in text summarization. We introduced a novel document-level encoder and proposed a general framework for both abstractive and extractive summarization. Experimental results across three datasets show that our model achieves state-of-the-art results across the board under automatic and human-based evaluation protocols. Although we mainly focused on document encoding for summarization, in the future, we would like to take advantage of the capabilities of Bert for language generation.
Acknowledgments
This research is supported by a Google PhD Fellowship to the first author. We gratefully acknowledge the support of the European Research Council (Lapata, award number 681760, “Translating Multiple Modalities into Text”). We would also like to thank Shashi Narayan for providing us with the XSum dataset.
Question: What are the datasets used for evaluation?

Answer: CNN/DailyMail news highlights, New York Times Annotated Corpus, XSum
Introduction
Since humans amass more and more generally available data in the form of unstructured text, it would be very useful to teach machines to read and comprehend such data and then use this understanding to answer our questions. A significant amount of research has recently focused on answering one particular kind of question, the answer to which depends on understanding a context document. These are cloze-style questions BIBREF0, which require the reader to fill in a missing word in a sentence. An important advantage of such questions is that they can be generated automatically from a suitable text corpus, which allows us to produce a practically unlimited amount of them. That opens the task to notoriously data-hungry deep-learning techniques, which now seem to outperform all alternative approaches.
Two such large-scale datasets have recently been proposed by researchers from Google DeepMind and Facebook AI: the CNN/Daily Mail dataset BIBREF1 and the Children's Book Test (CBT) BIBREF2 respectively. These have attracted a lot of attention from the research community BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 with a new state-of-the-art model coming out every few weeks.
However, if our goal is a production-level system actually capable of helping humans, we want the model to use all available resources as efficiently as possible. Given this, we believe that if the community is striving to push performance as far as possible, it should move its work to larger data.
This thinking goes in line with recent developments in the area of language modelling. For a long time models were being compared on several "standard" datasets with publications often presenting minuscule improvements in performance. Then the large-scale One Billion Word corpus dataset appeared BIBREF15 and it allowed Jozefowicz et al. to train much larger LSTM models BIBREF16 that almost halved the state-of-the-art perplexity on this dataset.
We think it is time to make a similar step in the area of text comprehension. Hence we are introducing the BookTest, a new dataset very similar to the Children's Book test but more than 60 times larger to enable training larger models even in the domain of text comprehension. Furthermore the methodology used to create our data can later be used to create even larger datasets when the need arises thanks to further technological progress.
We show that if we evaluate a model trained on the new dataset on the now standard Children's Book Test dataset, we see an improvement in accuracy much larger than other research groups achieved by enhancing the model architecture itself (while still using the original CBT training data). By training on the new dataset, we reduce the prediction error by almost one third. On the named-entity version of CBT this brings the ensemble of our models to the level of human baseline as reported by Facebook BIBREF2 . However in the final section we show in our own human study that there is still room for improvement on the CBT beyond the performance of our model.
Task Description
A natural way of testing a reader's comprehension of a text is to ask her a question the answer to which can be deduced from the text. Hence the task we are trying to solve consists of answering a cloze-style question, the answer to which depends on the understanding of a context document provided with the question. The model is also provided with a set of possible answers from which the correct one is to be selected. This can be formalized as follows:
The training data consist of tuples $(q, d, A, a)$, where $q$ is a question, $d$ is a document that contains the answer to question $q$, $A$ is a set of possible answers and $a \in A$ is the ground-truth answer. Both $q$ and $d$ are sequences of words from vocabulary $V$. We also assume that all possible answers are words from the vocabulary, that is $A \subseteq V$. In the CBT and CNN/Daily Mail datasets it is also true that the ground-truth answer $a$ appears in the document. This is exploited by many machine learning models BIBREF2, BIBREF4, BIBREF6, BIBREF7, BIBREF8, BIBREF10, BIBREF11, BIBREF12; however, some do not explicitly depend on this property BIBREF1, BIBREF3, BIBREF5, BIBREF9.
Current Landscape
We will now briefly review what datasets for text comprehension have been published up to date and look at models which have been recently applied to solving the task we have just described.
Datasets
A crucial condition for applying deep-learning techniques is to have a huge amount of data available for training. For question answering this specifically means having a large number of document-question-answer triples available. While there is an unlimited amount of text available, coming up with relevant questions and the corresponding answers can be extremely labour-intensive if done by human annotators. There have been efforts to provide such human-generated datasets, e.g. Microsoft's MCTest BIBREF17; however, their scale is not suitable for deep learning without pre-training on other data BIBREF18 (such as using pre-trained word embedding vectors).
Google DeepMind managed to avoid this scale issue with their way of generating document-question-answer triples automatically, closely followed by Facebook with a similar method. Let us now briefly introduce the two resulting datasets whose properties are summarized in Table TABREF8 .
These two datasets BIBREF1 exploit a useful feature of online news articles – many articles include a short summarizing sentence near the top of the page. Since all information in the summary sentence is also presented in the article body, we get a nice cloze-style question about the article contents by removing a word from the short summary.
The dataset's authors also replaced all named entities in the dataset by anonymous tokens which are further shuffled for each new batch. This forces the model to rely solely on information from the context document, not being able to transfer any meaning of the named entities between documents.
This restricts the task to one specific aspect of context-dependent question answering, which may be useful; however, it moves the task further from the real application scenario, where we would like the model to use all information available to answer questions. Furthermore, Chen et al. BIBREF5 have suggested that this can make about 17% of the questions unanswerable even by humans. They also claim that more than a half of the question sentences are mere paraphrases or exact matches of a single sentence from the context document. This raises the question of to what extent the dataset can test deeper understanding of the articles.
The Children's Book Test BIBREF2 uses a different source - books freely available thanks to Project Gutenberg. Since no summary is available, each example consists of a context document formed from 20 consecutive sentences from the story together with a question formed from the subsequent sentence.
The dataset comes in four flavours depending on what type of word is omitted from the question sentence. Based on human evaluation done in BIBREF2, it seems that named entities (NE) and common nouns (CN) are more context dependent than the other two types – prepositions and verbs. Therefore we (and all of the recent publications) focus only on these two word types.
Several new datasets related to the (now almost standard) ones above emerged recently. We will now briefly present them and explain how the dataset we are introducing in this article differs from them.
The LAMBADA dataset BIBREF19 is designed to measure progress in understanding common-sense questions about short stories that can be easily answered by humans but cannot be answered by current standard machine-learning models (e.g. plain LSTM language models). This dataset is useful for measuring the gap between humans and machine learning algorithms. However, by contrast to our BookTest dataset, it will not allow us to track progress towards the performance of the baseline systems or on examples where machine learning may show super-human performance. Also LAMBADA is just a diagnostic dataset and does not provide ready-to-use question-answering training data, just a plain-text corpus which may moreover include copyrighted books making its use potentially problematic for some purposes. We are providing ready training data consisting of copyright-free books only.
The SQuAD dataset BIBREF20 based on Wikipedia and the Who-did-What dataset BIBREF21 based on Gigaword news articles are factoid question-answering datasets where a multi-word answer should be extracted from a context document. This is in contrast to the previous datasets, including CNN/DM, CBT, LAMBADA and our new dataset, which require only single-word answers. Both these datasets however provide less than 130,000 training questions, two orders of magnitude less than our dataset does.
The Story Cloze Test BIBREF22 provides a crowd-sourced corpus of 49,255 commonsense stories for training and 3,744 testing stories with right and wrong endings. Hence the dataset is again rather small. Similarly to LAMBADA, the Story Cloze Test was designed to be easily answerable by humans.
In the WikiReading BIBREF23 dataset the context document is formed from a Wikipedia article and the question-answer pair is taken from the corresponding WikiData page. For each entity (e.g. Hillary Clinton), WikiData contains a number of property-value pairs (e.g. place of birth: Chicago) which form the dataset's question-answer pairs. The dataset is certainly relevant to the community, however the questions are of very limited variety with only 20 properties (and hence unique questions) covering INLINEFORM0 of the dataset. Furthermore many of the frequent properties are mentioned at a set spot within the article (e.g. the date of birth is almost always in brackets behind the name of a person) which may make the task easier for machines. We are trying to provide a more varied dataset.
Although there are several datasets related to the task we are aiming to solve, they differ sufficiently for our dataset to bring new value to the community. Its biggest advantage is its size, which can furthermore be easily upscaled without expensive human annotation. Finally, while we are emphasizing the differences, models could certainly benefit from as diverse a collection of datasets as possible.
Machine Learning Models
A first major work applying deep-learning techniques to text comprehension was Hermann et al. BIBREF1. This work was followed by the application of Memory Networks to the same task BIBREF2. Later, three models emerged around the same time BIBREF3, BIBREF4, BIBREF5, including our AS Reader model BIBREF4. The AS Reader inspired several subsequent models that use it as a sub-component in a diverse ensemble BIBREF8; extend it with a hierarchical structure BIBREF6, BIBREF24, BIBREF7; compute attention over the context document for every word in the query BIBREF10; or use a two-way context-query attention mechanism for every word in the context and the query BIBREF11, which is similar in spirit to models recently proposed in different domains, e.g. BIBREF25 in information retrieval. Other neural approaches to text comprehension are explored in BIBREF9, BIBREF12.
Possible Directions for Improvements
Accuracy in any machine learning tasks can be enhanced either by improving a machine learning model or by using more in-domain training data. Current state of the art models BIBREF6 , BIBREF7 , BIBREF8 , BIBREF11 improve over AS Reader's accuracy on CBT NE and CN datasets by 1-2 percent absolute. This suggests that with current techniques there is only limited room for improvement on the algorithmic side.
The other possibility to improve performance is simply to use more training data. The importance of training data was highlighted by the frequently quoted Mercer's statement that “There is no data like more data.” The observation that having more data is often more important than having better algorithms has been frequently stressed since then BIBREF13 , BIBREF14 .
As a step in the direction of exploiting the potential of more data in the domain of text comprehension, we created a new dataset called BookTest similar to, but much larger than the widely used CBT and CNN/DM datasets.
BookTest
Similarly to the CBT, our BookTest dataset is derived from books available through Project Gutenberg. We used 3,555 copyright-free books to extract CN examples and 10,507 books for NE examples; for comparison, the CBT dataset was extracted from just 108 books.
When creating our dataset we follow the same procedure as was used to create the CBT dataset BIBREF2 . That is, we detect whether each sentence contains either a named entity or a common noun that already appeared in one of the preceding twenty sentences. This word is then replaced by a gap tag (XXXXX) in this sentence which is hence turned into a cloze-style question. The preceding 20 sentences are used as the context document. For common noun and named entity detection we use the Stanford POS tagger BIBREF27 and Stanford NER BIBREF28 .
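A rough sketch of this example-generation procedure (whitespace tokenization and the one-question-per-sentence restriction are simplifications; in practice the candidate words come from the Stanford POS tagger and NER):

```python
def make_cloze_examples(sentences, candidate_words, window=20):
    """For each sentence, if it contains a candidate word (a detected named entity
    or common noun) that already appeared in one of the preceding `window`
    sentences, replace that word with XXXXX to form a cloze-style question and
    use the preceding sentences as the context document."""
    examples = []
    for i in range(window, len(sentences)):
        context = sentences[i - window:i]
        context_words = {w for sent in context for w in sent.split()}
        for word in sentences[i].split():
            if word in candidate_words and word in context_words:
                question = sentences[i].replace(word, "XXXXX", 1)
                examples.append({"context": context,
                                 "question": question,
                                 "answer": word})
                break   # at most one question per sentence in this sketch
    return examples
```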
The training dataset consists of the original CBT NE and CN data extended with new NE and CN examples. The new BookTest dataset hence contains INLINEFORM0 training examples and INLINEFORM1 tokens.
The validation dataset consists of INLINEFORM0 NE and INLINEFORM1 CN questions. We have one test set for NE and one for CN, each containing INLINEFORM2 examples. The training, validation and test sets were generated from non-overlapping sets of books.
When generating the dataset we removed all editions of books used to create CBT validation and test sets from our training dataset. Therefore the models trained on the BookTest corpus can be evaluated on the original CBT data and they can be compared with recent text-comprehension models utilizing this dataset BIBREF2 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 .
Baselines
We will now use our AS Reader model to evaluate the performance gain from increasing the dataset size.
AS Reader
In BIBREF4 we introduced the AS Reader, which at the time of publication significantly outperformed all other architectures on the CNN, DM and CBT datasets. This model is built to leverage the fact that the answer is a single word from the context document. Similarly to many other models, it uses attention over the document – intuitively a measure of how relevant each word is to answering the question. However, while most previous models used this attention as weights to calculate a blended representation of the answer word, we simply sum the attention across all occurrences of each unique word and select the word with the highest sum as the final answer. While simple, this trick seems both to improve accuracy and to speed up training. It was adopted by many subsequent models BIBREF8, BIBREF6, BIBREF7, BIBREF10, BIBREF11, BIBREF24.
Let us now describe the model in more detail. Figure FIGREF21 may help you in understanding the following paragraphs.
The words from the document and the question are first converted into vector embeddings using a look-up matrix INLINEFORM0 . The document is then read by a bidirectional GRU network BIBREF29 . A concatenation of the hidden states of the forward and backward GRUs at each word is then used as a contextual embedding of this word, intuitively representing the context in which the word is appearing. We can also understand it as representing the set of questions to which this word may be an answer.
Similarly the question is read by a bidirectional GRU but in this case only the final hidden states are concatenated to form the question embedding.
The attention over each word in the context is then calculated as the dot product of its contextual embedding with the question embedding. This attention is then normalized by the softmax function and summed across all occurrences of each answer candidate. The candidate with most accumulated attention is selected as the final answer.
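The answer-selection step can be sketched as follows (an illustrative PyTorch re-implementation for a single example, not the original code):

```python
import torch
import torch.nn.functional as F

def attention_sum_answer(context_emb, question_emb, context_tokens, candidates):
    """AS Reader answer selection.
    context_emb:  (doc_len, 2*hidden) contextual embeddings of the document words
    question_emb: (2*hidden,) final question embedding
    Attention is the softmax-normalized dot product; the attention mass is summed
    over all occurrences of each candidate and the highest-scoring one is returned."""
    scores = context_emb @ question_emb            # (doc_len,)
    attention = F.softmax(scores, dim=0)
    totals = {}
    for cand in candidates:
        mask = torch.tensor([tok == cand for tok in context_tokens],
                            dtype=attention.dtype)
        totals[cand] = float((attention * mask).sum())
    return max(totals, key=totals.get)
```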
For a more detailed description of the model including equations check BIBREF4 . More details about the training setup and model hyperparameters can be found in the Appendix.
During our past experiments on the CNN, DM and CBT datasets BIBREF4, each unique word from the training, validation and test datasets had its own row in the look-up matrix INLINEFORM0. However, as we radically increased the dataset size, this would have resulted in an extremely large number of model parameters, so we decided to limit the vocabulary to the INLINEFORM1 most frequent words. For each example, each unique out-of-vocabulary word is now mapped to one of 1,000 anonymous tokens, which are randomly initialized and untrained. Fixing the embeddings of these anonymous tags proved to significantly improve performance.
While mostly using the original AS Reader model, we have also tried introducing a minor tweak in some instances of the model. We tried initializing the context encoder GRU's hidden state by letting the encoder read the question first before proceeding to read the context document. Intuitively this allows the encoder to know in advance what to look for when reading over the context document.
Including models of this kind in the ensemble helped to improve the performance.
Results
Table TABREF25 shows the accuracy of the AS Reader and other architectures on the CBT validation and test data. The last two rows show the performance of the AS Reader trained on the BookTest dataset; all the other models have been trained on the original CBT training data.
If we take the best AS Reader ensemble trained on CBT as a baseline, improving the model architecture as in BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, while continuing to use the original CBT training data, led to improvements of INLINEFORM0 and INLINEFORM1 absolute on named entities and common nouns respectively. By contrast, inflating the training dataset provided a boost of INLINEFORM2 while using the same model. The ensemble of our models even exceeded the human baseline provided by Facebook BIBREF2 on the Common Noun dataset.
Our model takes approximately two weeks to converge when trained on the BookTest dataset on a single Nvidia Tesla K40 GPU.
Discussion
Embracing the abundance of data may mean focusing on other aspects of system design than with smaller data. Here are some of the challenges that we need to face in this situation.
Firstly, since the amount of data is practically unlimited – we could even generate them on the fly, resulting in continuous learning similar to the Never-Ending Language Learning by Carnegie Mellon University BIBREF30 – it is now the speed of training that determines how much data the model is able to see. Since more training data significantly help model performance, focusing on speeding up the algorithm may be more important than ever before. This may, for instance, influence the decision whether to use regularization such as dropout, which does seem to improve model performance somewhat, though usually at the cost of slower training.
Thanks to its simplicity, the AS Reader seems to train fast, for example around seven times faster than the models proposed by Chen et al. BIBREF5. Hence the AS Reader may be particularly suitable for training on large datasets.
The second challenge is how to generalize the performance gains from large data to a specific target domain. While there are huge amounts of natural language data in general, it may not be the case in the domain where we may want to ultimately apply our model.
Hence we are usually not facing a scenario of simply using a larger amount of the same training data, but rather extending training to a related domain of data, hoping that some of what the model learns on the new data will still help it on the original task.
This is highlighted by our observations from applying a model trained on the BookTest to Children's Book Test test data. If we move model training from joint CBT NE+CN training data to a subset of the BookTest of the same size (230k examples), we see a drop in accuracy of around 10% on the CBT test datasets.
Hence even though the Children's Book Test and BookTest datasets are almost as close as two disjoint datasets can get, the transfer is still very imperfect. Choosing the right data to augment the in-domain training data is certainly a problem worth exploring in future work.
Our results show that given enough data the AS Reader was able to exceed the human performance on CBT CN reported by Facebook. However we hypothesized that the system is still not achieving its full potential so we decided to examine the room for improvement in our own small human study.
Human Study
After adding more data, performance on the CBT validation and test datasets soars. However, is there still potential for much further growth beyond the results we have observed?
We decided to explore the remaining space for improvement on the CBT by testing humans on a random subset of 50 named entity and 50 common noun validation questions that the AS Reader ensemble could not answer correctly. These questions were answered by 10 non-native English speakers from our research laboratory, each on a disjoint subset of questions. Participants had unlimited time to answer the questions and were told that these questions were not correctly answered by a machine, providing additional motivation to prove they are better than computers. The results of the human study are summarized in Table TABREF28. They show that a majority of questions that our system could not answer so far are in fact answerable. This suggests that 1) the original human baselines might have been underestimated; however, it might also be the case that there are some examples that can be answered by machines and not by humans; 2) there is still space for improvement.
A system that would answer correctly every time when either our ensemble or a human answered correctly would achieve accuracy of over 92% on both validation and test NE datasets and over 96% on both CN datasets. Hence it still makes sense to use the CBT dataset to study further improvements of text-comprehension systems.
Conclusion
Few ways of improving model performance are as solidly established as using more training data. Yet we believe this principle has been somewhat neglected by recent research in text comprehension. While there is a practically unlimited amount of data available in this field, most research was performed on unnecessarily small datasets.
As a gentle reminder to the community, we have shown that simply infusing a model with more data can yield performance improvements of up to INLINEFORM0, whereas several attempts to improve the model architecture on the same training data have given gains of at most INLINEFORM1 compared to our best ensemble result. Yes, experiments on small datasets certainly can bring useful insights. However, we believe that the community should also embrace the real-world scenario of data abundance.
The BookTest dataset we are proposing gives the reading-comprehension community an opportunity to make a step in that direction.
Training Details
The training details are similar to those in BIBREF4 however we are including them here for completeness.
To train the model we used stochastic gradient descent with the ADAM update rule BIBREF32 and learning rates of INLINEFORM0 , INLINEFORM1 and INLINEFORM2 . The best learning rate in our experiments was INLINEFORM3 . We minimized negative log-likelihood as the training objective.
The initial weights in the word-embedding matrix were drawn uniformly at random from the interval INLINEFORM0. Weights in the GRU networks were initialized by random orthogonal matrices BIBREF34 and biases were initialized to zero. We also used a gradient clipping BIBREF33 threshold of 10 and batch sizes between 32 and 256. Increasing the batch size from 32 to 128 seems to significantly improve performance on the large dataset, something we did not observe on the original CBT data. Increasing the batch size much above 128 is currently difficult due to memory constraints of the GPU.
During training we randomly shuffled all examples at the beginning of each epoch. To speed up training, we always pre-fetched 10 batches worth of examples and sorted them according to document length. Hence each batch contained documents of roughly the same length.
We also did not use pre-trained word embeddings.
We did not perform any text pre-processing since the datasets were already tokenized.
During training we evaluated the model performance every 12 hours and at the end of each epoch and stopped training when the error on the 20k BookTest validation set started increasing. We explored the hyperparameter space by training 67 different models. The region of the parameter space that we explored together with the parameters of the model with the best validation accuracy are summarized in Table TABREF29.
Our model was implemented using Theano BIBREF31 and Blocks BIBREF35 .
The ensembles were formed by simply averaging the predictions from the constituent single models. These single models were selected using the following algorithm.
We started with the best performing model according to validation performance. Then in each step we tried adding the best performing model that had not been previously tried. We kept it in the ensemble if it did improve its validation performance and discarded it otherwise. This way we gradually tried each model once. We call the resulting model a greedy ensemble. We used the INLINEFORM0 BookTest validation dataset for this procedure.
The algorithm was offered 10 models and selected 5 of them for the final ensemble.
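Putting the selection procedure into code, a sketch might look like this (the `validation_accuracy` helper is hypothetical and stands for evaluating a prediction-averaging ensemble on the validation set):

```python
def greedy_ensemble(models, validation_accuracy):
    """Greedy ensemble construction: start from the best single model, then try
    the remaining models in order of their individual validation accuracy and
    keep each one only if it improves the ensemble's validation accuracy."""
    ranked = sorted(models, key=lambda m: validation_accuracy([m]), reverse=True)
    ensemble = [ranked[0]]
    best = validation_accuracy(ensemble)
    for model in ranked[1:]:
        score = validation_accuracy(ensemble + [model])
        if score > best:
            ensemble.append(model)
            best = score
    return ensemble
```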
Question: How does their ensemble method work?

Answer: Simply averaging the predictions from the constituent single models.
Introduction
There has been significant progress on Named Entity Recognition (NER) in recent years using models based on machine learning algorithms BIBREF0 , BIBREF1 , BIBREF2 . As with other Natural Language Processing (NLP) tasks, building NER systems typically requires a massive amount of labeled training data which are annotated by experts. In real applications, we often need to consider new types of entities in new domains where we do not have existing annotated data. For such new types of entities, however, it is very hard to find experts to annotate the data within short time limits and hiring experts is costly and non-scalable, both in terms of time and money.
In order to obtain new training data quickly, we can use crowdsourcing as an alternative, at lower cost and in a short time. In exchange, however, crowd annotations from non-experts may be of lower quality than those from experts. Building a powerful NER system on such low-quality annotated data is a major challenge. Although we can obtain high-quality annotations for each input sentence by majority voting, achieving this can waste human labor, especially for ambiguous sentences which may require many annotations to reach an agreement. Thus most work builds models directly on crowd annotations, trying to model the differences among annotators; for example, some annotators may be more trustworthy BIBREF3, BIBREF4.
Here we focus mainly on Chinese NER, which is more difficult than NER for languages such as English because of the lack of morphological cues such as capitalization and, in particular, the uncertainty in word segmentation. Chinese NE taggers trained on the news domain often perform poorly in other domains. Although we can alleviate the problem by using character-level tagging to resolve the problem of poor word segmentation performance BIBREF5, a large gap still exists when the target domain changes, especially for the texts of social media. Thus, in order to obtain a good tagger for new domains and new entity types, we require large amounts of labeled data. Therefore, crowdsourcing is a reasonable solution for these situations.
In this paper, we propose an approach to training a Chinese NER system on crowd-annotated data. Our goal is to extract additional annotator-independent features by adversarial training, alleviating the annotation noise of non-experts. The idea of adversarial training in neural networks has been used successfully in several NLP tasks, such as cross-lingual POS tagging BIBREF6 and cross-domain POS tagging BIBREF7. They use it to reduce the negative influence of input divergences among different domains or languages, while we use adversarial training to reduce the negative influence brought by different crowd annotators. To our best knowledge, we are the first to apply adversarial training to crowd annotation learning.
In the learning framework, we perform adversarial training between the basic NER and an additional worker discriminator. We have a common Bi-LSTM for representing annotator-generic information and a private Bi-LSTM for representing annotator-specific information. We build another label Bi-LSTM by the crowd-annotated NE label sequence which reflects the mind of the crowd annotators who learn entity definitions by reading the annotation guidebook. The common and private Bi-LSTMs are used for NER, while the common and label Bi-LSTMs are used as inputs for the worker discriminator. The parameters of the common Bi-LSTM are learned by adversarial training, maximizing the worker discriminator loss and meanwhile minimizing the NER loss. Thus the resulting features of the common Bi-LSTM are worker invariant and NER sensitive.
For evaluation, we create two Chinese NER datasets in two domains: dialog and e-commerce. We require the crowd annotators to label the types of entities, including person, song, brand, product, and so on. Identifying these entities is useful for chatbot and e-commerce platforms BIBREF8 . Then we conduct experiments on the newly created datasets to verify the effectiveness of the proposed adversarial neural network model. The results show that our system outperforms very strong baseline systems. In summary, we make the following contributions:
Related Work
Our work is related to three lines of research: Sequence labeling, Adversarial training, and Crowdsourcing.
Sequence labeling. NER is widely treated as a sequence labeling problem, by assigning a unique label over each sentential word BIBREF9 . Early studies on sequence labeling often use the models of HMM, MEMM, and CRF BIBREF10 based on manually-crafted discrete features, which can suffer the feature sparsity problem and require heavy feature engineering. Recently, neural network models have been successfully applied to sequence labeling BIBREF1 , BIBREF11 , BIBREF2 . Among these work, the model which uses Bi-LSTM for feature extraction and CRF for decoding has achieved state-of-the-art performances BIBREF11 , BIBREF2 , which is exploited as the baseline model in our work.
Adversarial Training. Adversarial networks have achieved great success in computer vision, for example in image generation BIBREF12 , BIBREF13 . In the NLP community, the method has mainly been exploited in the settings of domain adaptation BIBREF14 , BIBREF7 , cross-lingual learning BIBREF15 , BIBREF6 and multi-task learning BIBREF16 , BIBREF17 . All these settings involve feature divergences between the training and test examples, and aim to learn features invariant across the divergences via an additional adversarial discriminator, such as a domain discriminator. Our work is similar to these studies but applies adversarial training to crowdsourcing learning, aiming to find features that are invariant among different crowdsourcing workers.
Crowdsourcing. Most NLP tasks require a massive amount of labeled training data annotated by experts. However, hiring experts is costly and non-scalable, both in terms of time and money. Crowdsourcing is an alternative way to obtain labeled data at a lower cost but with relatively lower quality than expert annotation. BIBREF18 snow2008cheap collected labeled results for several NLP tasks from Amazon Mechanical Turk and demonstrated that non-expert annotations were quite useful for training new systems. In recent years, a line of work has focused on how to use crowdsourcing data efficiently in tasks such as classification BIBREF19 , BIBREF20 , and on comparing the quality of crowd and expert labels BIBREF21 .
In sequence labeling tasks, BIBREF22 dredze2009sequence viewed the task as a multi-label problem, while BIBREF3 rodrigues2014sequence took worker identities into account by assuming that each sentential word was tagged correctly by one of the crowdsourcing workers and proposed a CRF-based model with multiple annotators. BIBREF4 nguyen2017aggregating introduced a crowd representation in which crowd vectors were added into the LSTM-CRF model at training time but ignored at test time. In this paper, we apply adversarial training to crowd annotations for Chinese NER in new domains, and achieve better performance than previous studies on crowdsourcing learning.
Baseline: LSTM-CRF
We use a neural CRF model as the baseline system BIBREF9 , treating NER as a sequence labeling problem over Chinese characters, which has achieved state-of-the-art performance BIBREF5 . To this end, we adopt the BIEO schema to convert NER into sequence labeling, following BIBREF2 lample-EtAl:2016:N16-1, where each sentential character is assigned one unique tag. Concretely, we tag a non-entity character with label “O”, the beginning character of an entity with “B-XX”, the ending character of an entity with “E-XX” and all other characters inside an entity with “I-XX”, where “XX” denotes the entity type.
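As a minimal illustrative sketch, such a conversion might look as follows; the span-based input format, the helper name, and the example sentence are our own assumptions and not part of the original system.

```python
# Minimal sketch: convert character-level entity spans into BIEO tags.
# The (start, end, type) span format and the example are illustrative assumptions.
def to_bieo(chars, spans):
    """chars: list of characters; spans: list of (start, end, entity_type), end exclusive."""
    tags = ["O"] * len(chars)
    for start, end, etype in spans:
        if end - start == 1:
            # The paper does not describe single-character entities; we tag them "B-XX" here.
            tags[start] = "B-" + etype
            continue
        tags[start] = "B-" + etype
        for i in range(start + 1, end - 1):
            tags[i] = "I-" + etype
        tags[end - 1] = "E-" + etype
    return tags

# Example: a six-character query containing a three-character song name.
print(to_bieo(list("我想听七里香"), [(3, 6, "Song")]))
# ['O', 'O', 'O', 'B-Song', 'I-Song', 'E-Song']
```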
We build high-level neural features from the input character sequence by a bi-directional LSTM BIBREF2 . The resulting features are combined and then are fed into an output CRF layer for decoding. In summary, the baseline model has three main components. First, we make vector representations for sentential characters $\mathbf {x}_1\mathbf {x}_2\cdots \mathbf {x}_n$ , transforming the discrete inputs into low-dimensional neural inputs. Second, feature extraction is performed to obtain high-level features $\mathbf {h}_1^{\text{ner}}\mathbf {h}_2^{\text{ner}}\cdots \mathbf {h}_n^{\text{ner}}$ , by using a bi-directional LSTM (Bi-LSTM) structure together with a linear transformation over $\mathbf {x}_1\mathbf {x}_2\cdots \mathbf {x}_n$ . Third, we apply a CRF tagging module over $\mathbf {h}_1^{\text{ner}}\mathbf {h}_2^{\text{ner}}\cdots \mathbf {h}_n^{\text{ner}}$ , obtaining the final output NE labels. The overall framework of the baseline model is shown by the right part of Figure 1 .
Vector Representation of Characters
To represent Chinese characters, we simply exploit a neural embedding layer to map discrete characters into low-dimensional vector representations. This is achieved by a look-up table $\mathbf {E}^W$ , which is a model parameter and is fine-tuned during training. The look-up table can be initialized either randomly or with pretrained embeddings from a large-scale raw corpus. For a given Chinese character sequence $c_1c_2\cdots c_n$ , we obtain the vector representation of each sentential character by: $ \mathbf {x}_t = \text{look-up}(c_t, \mathbf {E}^W), \text{~~~} t \in [1, n]$ .
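As a rough illustration, the look-up table can be realized with a standard embedding layer. The sketch below uses PyTorch; the vocabulary size is an arbitrary assumption, while the 100-dimensional size follows the settings reported later.

```python
import torch
import torch.nn as nn

# Sketch of the character look-up table E^W; vocabulary size is an assumption.
char_vocab_size, char_dim = 5000, 100
embed = nn.Embedding(char_vocab_size, char_dim)   # E^W, fine-tuned during training

# Optional: initialize from pretrained vectors (e.g., word2vec on raw text).
# embed.weight.data.copy_(pretrained_tensor)      # pretrained_tensor is hypothetical

char_ids = torch.tensor([[12, 45, 7, 300]])       # c_1 ... c_n for one sentence
x = embed(char_ids)                               # shape: (1, n, char_dim)
```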
Feature Extraction
Based on the vector sequence $\mathbf {x}_1\mathbf {x}_2\cdots \mathbf {x}_n$ , we extract higher-level features $\mathbf {h}_1^{\text{ner}}\mathbf {h}_2^{\text{ner}}\cdots \mathbf {h}_n^{\text{ner}}$ by using a bidirectional LSTM module and a simple feed-forward neural layer, which are then used for CRF tagging at the next step.
LSTM is a type of recurrent neural network (RNN) designed to address the exploding and vanishing gradient problems of basic RNNs BIBREF23 . It has been widely used in a number of NLP tasks, including POS tagging BIBREF11 , BIBREF24 , parsing BIBREF25 and machine translation BIBREF26 , because of its strong capability of modeling natural language sentences.
By traversing $\mathbf {x}_1\mathbf {x}_2\cdots \mathbf {x}_n$ in order and in reverse, we obtain the output features $\mathbf {h}_1^{\text{private}}\mathbf {h}_2^{\text{private}}\cdots \mathbf {h}_n^{\text{private}}$ of the Bi-LSTM, where $\mathbf {h}_t^{\text{private}} = \overrightarrow{\mathbf {h}}_t \oplus \overleftarrow{\mathbf {h}}_t $ . We refer to this Bi-LSTM as private in order to differentiate it from the common Bi-LSTM over the same character inputs, which will be introduced in the next section.
Further, we integrate the output vectors of the bi-directional LSTM with a linear feed-forward layer, producing the features $\mathbf {h}_1^{\text{ner}}\mathbf {h}_2^{\text{ner}}\cdots \mathbf {h}_n^{\text{ner}}$ as follows:
$$\mathbf {h}_t^{\text{ner}} = \mathbf {W} \mathbf {h}_t^{\text{private}} + \mathbf {b},$$ (Eq. 6)
where $\mathbf {W}$ and $\mathbf {b}$ are both model parameters.
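A minimal PyTorch sketch of the private Bi-LSTM followed by the linear combination of Eq. 6 might look as follows; the exact layer configuration is an assumption, with the hidden size of 200 taken from the settings section.

```python
import torch
import torch.nn as nn

# Sketch of the private Bi-LSTM plus the linear combination layer of Eq. 6.
char_dim, hidden_dim, ner_dim = 100, 200, 200
bilstm_private = nn.LSTM(char_dim, hidden_dim // 2, batch_first=True, bidirectional=True)
linear = nn.Linear(hidden_dim, ner_dim)           # W and b of Eq. 6

x = torch.randn(1, 10, char_dim)                  # x_1 ... x_n for one sentence
h_private, _ = bilstm_private(x)                  # forward and backward states, concatenated
h_ner = linear(h_private)                         # h_t^ner = W h_t^private + b
```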
CRF Tagging
Finally we feed the resulting features $\mathbf {h}_t^{\text{ner}}, t\in [1, n]$ into a CRF layer for NER decoding. CRF tagging is a globally normalized model, aiming to find the best output sequence while considering the dependencies between successive labels. In the sequence labeling setting for NER, the output label at one position has a strong dependency on the label at the previous position. For example, the label before “I-XX” must be either “B-XX” or “I-XX”, where “XX” should be exactly the same.
CRF involves two parts for prediction. First we compute the scores for each label based on $\mathbf {h}_t^{\text{ner}}$ , resulting in $\mathbf {o}_t^{\text{ner}}$ , whose dimension is the number of output labels. The other part is a transition matrix $\mathbf {T}$ which defines the scores of two successive labels; $\mathbf {T}$ is also a model parameter. Based on $\mathbf {o}_t^{\text{ner}}$ and $\mathbf {T}$ , we use the Viterbi algorithm to find the best-scoring label sequence.
We can formalize the CRF tagging process as follows:
$$\begin{split} & \mathbf {o}_t^{\text{ner}} = \mathbf {W}^{\text{ner}} \mathbf {h}_t^{\text{ner}}, \text{~~~~} t \in [1,n] \\ & \text{score}(\mathbf {X}, \mathbf {y}) = \sum _{t = 1}^{n}(\mathbf {o}_{t,y_t} + T_{y_{t-1},y_t}) \\ & \mathbf {y}^{\text{ner}} = \mathop {arg~max}_{\mathbf {y}}\big (\text{score}(\mathbf {X}, \mathbf {y})\big ), \\ \end{split}$$ (Eq. 8)
where $\text{score}(\cdot )$ is the scoring function for a given output label sequence $\mathbf {y} = y_1y_2 \cdots y_n$ given input $\mathbf {X}$ , $\mathbf {y}^{\text{ner}}$ is the resulting label sequence, and $\mathbf {W}^{\text{ner}}$ is a model parameter.
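A compact NumPy sketch of Viterbi decoding over the emission scores and transition matrix is given below; start/stop transitions and the exact score parameterization are simplifying assumptions.

```python
import numpy as np

# Minimal Viterbi decoder over emission scores o_t^ner and transition matrix T.
def viterbi(emissions, transitions):
    """emissions: (n, L) scores o_t; transitions: (L, L) scores T[prev, cur]."""
    n, L = emissions.shape
    score = emissions[0].copy()                   # best score ending at each label, t = 0
    back = np.zeros((n, L), dtype=int)
    for t in range(1, n):
        # score[prev] + T[prev, cur] + o_t[cur], maximized over prev
        total = score[:, None] + transitions + emissions[t][None, :]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    # Follow back-pointers from the best final label.
    best = [int(score.argmax())]
    for t in range(n - 1, 0, -1):
        best.append(int(back[t, best[-1]]))
    return best[::-1]

print(viterbi(np.random.randn(5, 9), np.random.randn(9, 9)))
```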
Training
To train model parameters, we exploit a negative log-likelihood objective as the loss function. We apply softmax over all candidate output label sequences, thus the probability of the crowd-annotated label sequence is computed by:
$$p(\mathbf {\bar{y}}|\mathbf {X}) = \frac{\exp \big (\text{score}(\mathbf {X}, \mathbf {\bar{y}})\big )}{\sum _{\mathbf {y} \in \mathbf {Y}_{\mathbf {X}}} \exp \big (\text{score}(\mathbf {X}, \mathbf {y})\big )},$$ (Eq. 10)
where $\mathbf {\bar{y}}$ is the crowd-annotated label sequence and $\mathbf {Y}_{\mathbf {X}}$ is the set of all candidate label sequences for input $\mathbf {X}$ .
Based on the above formula, the loss function of our baseline model is:
$$\text{loss}(\Theta , \mathbf {X}, \mathbf {\bar{y}}) = -\log p(\mathbf {\bar{y}}|\mathbf {X}),$$ (Eq. 11)
where $\Theta $ is the set of all model parameters. We use the standard back-propagation method to minimize the loss function of the baseline CRF model.
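The denominator of Eq. 10 sums over exponentially many label sequences, so it is usually computed with the forward algorithm. The sketch below is an illustrative NumPy implementation of the loss in Eq. 11 under that standard formulation; it is not the authors' code, and start/stop transitions are omitted for brevity.

```python
import numpy as np
from scipy.special import logsumexp

# Sketch of the loss in Eq. 11: log-partition via the forward algorithm.
def crf_nll(emissions, transitions, gold):
    """emissions: (n, L); transitions: (L, L); gold: length-n label index sequence."""
    n, L = emissions.shape
    # score(X, y-bar) for the crowd-annotated sequence
    gold_score = emissions[0, gold[0]] + sum(
        transitions[gold[t - 1], gold[t]] + emissions[t, gold[t]] for t in range(1, n))
    # log of the sum over all candidate sequences (denominator of Eq. 10)
    alpha = emissions[0].copy()
    for t in range(1, n):
        alpha = logsumexp(alpha[:, None] + transitions, axis=0) + emissions[t]
    log_partition = logsumexp(alpha)
    return log_partition - gold_score             # -log p(y-bar | X)

print(crf_nll(np.random.randn(4, 9), np.random.randn(9, 9), [0, 3, 4, 1]))
```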
Worker Adversarial
Adversarial learning has been an effective mechanism for handling large divergences between the input features of training and test examples BIBREF27 , BIBREF13 . It has been successfully applied to domain adaptation BIBREF7 , cross-lingual learning BIBREF15 and multi-task learning BIBREF17 . All these settings involve feature shifts between training and testing.
In this paper, our setting is different. We use annotations from non-experts, which are noisy and can hurt the final performance if not properly handled. Directly learning from the resulting corpus may adapt the neural feature extraction to the biased annotations. In this work, we assume that individual workers have their own guidelines in mind after a short training. For example, a perfect worker annotates highly consistently with an expert, while common crowdsourcing workers may be confused and have different understandings of certain contexts. Based on this assumption, we adapt the original adversarial neural network to our setting.
Our adaptation is very simple. Briefly speaking, original adversarial learning adds a discriminator to classify the type of the source input, for example the domain category in the domain adaptation setting, while we add a discriminator to classify the annotation workers. The features from the input sentence alone are not enough for worker classification; the worker's annotation result is also required, so the inputs of our discriminator are different. Here we exploit both the source sentence and the crowd-annotated NE labels as the basic inputs for worker discrimination.
In the following, we describe the proposed adversarial learning module, including both the submodels and the training method. As shown by the left part of Figure 1 , the submodel consists of four parts: (1) a common Bi-LSTM over input characters; (2) an additional Bi-LSTM to encode crowd-annotated NE label sequence; (3) a convolutional neural network (CNN) to extract features for worker discriminator; (4) output and prediction.
Common Bi-LSTM over Characters
To build the adversarial part, we first create a new bi-directional LSTM, which we call the common Bi-LSTM:
$$\mathbf {h}_1^{\text{\tiny common}} \mathbf {h}_2^{\text{\tiny common}} \cdots \mathbf {h}_n^{\text{\tiny common}} = \text{Bi-LSTM}(\mathbf {x}_1\mathbf {x}_2\cdots \mathbf {x}_n).$$ (Eq. 13)
As shown in Figure 1 , this Bi-LSTM is constructed over the same input character representations of the private Bi-LSTM, in order to extract worker independent features.
The resulting features of the common Bi-LSTM are used for both NER and the worker discriminator, unlike the features of the private Bi-LSTM, which are used for NER only. As shown in Figure 1 , we concatenate the outputs of the common and private Bi-LSTMs, and then feed the result into the feed-forward combination layer of the NER part. Thus Formula 6 can be rewritten as:
$$\mathbf {h}_t^{\text{ner}} = \mathbf {W} (\mathbf {h}_t^{\text{common}} \oplus \mathbf {h}_t^{\text{private}}) + \mathbf {b},$$ (Eq. 14)
where $\mathbf {W}$ is wider than in the original combination because of the newly-added $\mathbf {h}_t^{\text{common}}$ .
Noticeably, although the resulting common features are fed into the worker discriminator, they actually have no capability to distinguish the workers, because this part is trained to maximize the worker discriminator loss, as will be explained in the training subsection below. These features are invariant among different workers and thus carry less noise for NER. This is the goal of adversarial learning, and we hope the NER module is able to find useful features among these worker-independent features.
Additional Bi-LSTM over Annotated NER Labels
In order to incorporate the annotated NE labels to predict the exact worker, we build another bi-directional LSTM (named the label Bi-LSTM) over the crowd-annotated NE label sequence. This Bi-LSTM is used for the worker discriminator only. During decoding at test time, this Bi-LSTM is not used, because the worker discriminator is no longer required.
Assuming the crowd-annotated NE label sequence produced by one worker is $\mathbf {\bar{y}} = \bar{y}_1\bar{y}_2 \cdots \bar{y}_n$ , we exploit a look-up table $\mathbf {E}^{L}$ to obtain the corresponding sequence of vector representations $\mathbf {x^{\prime }}_1\mathbf {x^{\prime }}_2\cdots \mathbf {x^{\prime }}_n$ , similar to the method that maps characters into their neural representations. Concretely, for one NE label $\bar{y}_t$ ( $t \in [1, n]$ ), we obtain its neural vector by: $\mathbf {x^{\prime }}_t = \text{look-up}(\bar{y}_t, \mathbf {E}^L)$ .
In the next step, we apply a bi-directional LSTM over the sequence $\mathbf {x^{\prime }}_1\mathbf {x^{\prime }}_2\cdots \mathbf {x^{\prime }}_n$ , which can be formalized as:
$$\mathbf {h}_1^{\text{label}} \mathbf {h}_2^{\text{label}} \cdots \mathbf {h}_n^{\text{label}} = \text{Bi-LSTM}(\mathbf {x^{\prime }}_1\mathbf {x^{\prime }}_2\cdots \mathbf {x^{\prime }}_n).$$ (Eq. 16)
The resulting feature sequence is concatenated with the outputs of the common Bi-LSTM and further used for worker classification.
CNN
Next, we add a convolutional neural network (CNN) module over the concatenated outputs of the common Bi-LSTM and the label Bi-LSTM to produce the final features for the worker discriminator. A convolutional operator with window size 5 is used, and then a max-pooling strategy is applied over the convolution outputs to obtain the final fixed-dimensional feature vector. The whole process can be described by the following equations:
$$\begin{split} &\mathbf {h}_t^{\text{worker}} = \mathbf {h}_t^{\text{common}} \oplus \mathbf {h}_t^{\text{label}} \\ &\mathbf {\tilde{h}}_t^{\text{worker}} = \tanh (\mathbf {W}^{\text{cnn}}[\mathbf {h}_{t-2}^{\text{worker}}, \mathbf {h}_{t-1}^{\text{worker}}, \cdots , \mathbf {h}_{t+2}^{\text{worker}}]) \\ &\mathbf {h}^{\text{worker}} = \text{max-pooling}(\mathbf {\tilde{h}}_1^{\text{worker}}\mathbf {\tilde{h}}_2^{\text{worker}} \cdots \mathbf {\tilde{h}}_n^{\text{worker}}) \\ \end{split}$$ (Eq. 18)
where $t \in [1,n]$ and $\mathbf {W}^{\text{cnn}}$ is a model parameter. We use zero vectors to pad the out-of-range positions.
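A PyTorch sketch of this CNN module might look as follows; the dimensions and the use of a 1-D convolution with symmetric zero padding are assumptions made for illustration.

```python
import torch
import torch.nn as nn

# Sketch of the worker-discriminator CNN (Eq. 18): a window-5 convolution with tanh,
# followed by max-pooling over positions. Dimensions are illustrative.
hidden_dim, label_dim, cnn_dim = 200, 200, 200
conv = nn.Conv1d(hidden_dim + label_dim, cnn_dim, kernel_size=5, padding=2)  # zero padding

h_common = torch.randn(1, 10, hidden_dim)             # common Bi-LSTM outputs
h_label = torch.randn(1, 10, label_dim)               # label Bi-LSTM outputs
h_worker = torch.cat([h_common, h_label], dim=-1)     # h_t^worker
h_tilde = torch.tanh(conv(h_worker.transpose(1, 2)))  # (1, cnn_dim, n)
h_final = h_tilde.max(dim=2).values                   # max-pooling over positions
```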
Output and Prediction
After obtaining the final feature vector for the worker discriminator, we use it to compute the output vector, which scores all the annotation workers. The score function is defined by:
$$\mathbf {o}^{\text{worker}} = \mathbf {W}^{\text{worker}} \mathbf {h}^{\text{worker}},$$ (Eq. 20)
where $\mathbf {W}^{\text{worker}}$ is a model parameter and the output dimension equals the total number of non-expert annotators. The prediction finds the worker who is responsible for this annotation.
Adversarial Training
The training objective of the adversarial neural network differs from that of the baseline model, as it includes the extra worker discriminator. The new objective thus includes two parts: the negative log-likelihood of the NER model, which is the same as in the baseline, and the negative log-likelihood of the worker discriminator, which enters the objective with a negative sign, as shown below.
In order to obtain the negative log-likelihood of the worker discriminator, we use softmax to compute the probability of the actual worker $\bar{z}$ as well, which is defined by:
$$p(\bar{z}|\mathbf {X}, \mathbf {\bar{y}}) = \frac{\exp (\mathbf {o}^{\text{worker}}_{\bar{z}})}{\sum _{z} \exp (\mathbf {o}^{\text{worker}}_z)},$$ (Eq. 22)
where $z$ should enumerate all workers.
Based on the above definition of probability, our new objective is defined as follows:
$$\begin{split} \text{R}(\Theta , \Theta ^{\prime }, \mathbf {X}, \mathbf {\bar{y}}, \bar{z}) &= \text{loss}(\Theta , \mathbf {X}, \mathbf {\bar{y}}) - \text{loss}(\Theta , \Theta ^{\prime }, \mathbf {X}) \\ \text{~~~~~~} &= -\log p(\mathbf {\bar{y}}|\mathbf {X}) + \log p(\bar{z}|\mathbf {X}, \mathbf {\bar{y}}), \end{split}$$ (Eq. 23)
where $\Theta $ is the set of all model parameters related to NER, and $\Theta ^{\prime }$ is the set of the remaining parameters which are only related to the worker discriminator, $\mathbf {X}$ , $\mathbf {\bar{y}}$ and $\bar{z}$ are the input sentence, the crowd-annotated NE labels and the corresponding annotator for this annotation, respectively. It is worth noting that the parameters of the common Bi-LSTM are included in the set of $\Theta $ by definition.
In particular, our goal is not to simply minimize the new objective. Actually, we aim for a saddle point, finding the parameters $\Theta $ and $\Theta ^{\prime }$ satisfying the following conditions:
$$\begin{split} \hat{\Theta } &= \mathop {arg~min}_{\Theta }\text{R}(\Theta , \Theta ^{\prime }, \mathbf {X}, \mathbf {\bar{y}}, \bar{z}) \\ \hat{\Theta }^{\prime } &= \mathop {arg~max}_{\Theta ^{\prime }}\text{R}(\hat{\Theta }, \Theta ^{\prime }, \mathbf {X}, \mathbf {\bar{y}}, \bar{z}) \\ \end{split}$$ (Eq. 24)
where the first equation aims to find one $\Theta $ that minimizes our new objective $\text{R}(\cdot )$ , and the second equation aims to find one $\Theta ^{\prime }$ maximizing the same objective.
Intuitively, the first equation of Formula 24 tries to minimize the NER loss, but at the same time maximize the worker discriminator loss by the shared parameters of the common Bi-LSTM. Thus the resulting features of common Bi-LSTM actually attempt to hurt the worker discriminator, which makes these features worker independent since they are unable to distinguish different workers. The second equation tries to minimize the worker discriminator loss by its own parameter $\Theta ^{\prime }$ .
We use the standard back-propagation method to train the model parameters, the same as for the baseline model. In order to incorporate the arg max part of Formula 24 , we follow previous work on adversarial training BIBREF13 , BIBREF15 , BIBREF17 and introduce a gradient reversal layer between the common Bi-LSTM and the CNN module, whose forward pass is the identity while its backward pass simply negates the gradients.
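A gradient reversal layer is commonly implemented as a custom autograd function; the following PyTorch sketch shows one standard way to do it and is an assumption about the implementation, not the authors' code.

```python
import torch

# Gradient reverse layer: identity in the forward pass, negated gradient backward.
class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output                        # flip the sign of the gradient

def grad_reverse(x):
    return GradReverse.apply(x)

# Usage: the common Bi-LSTM output is reversed before entering the worker CNN, so
# minimizing the discriminator loss w.r.t. its own parameters maximizes it w.r.t.
# the shared common Bi-LSTM parameters.
h_common = torch.randn(1, 10, 200, requires_grad=True)
h_for_discriminator = grad_reverse(h_common)
```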
Data Sets
To obtain evaluation datasets from crowd annotators, we collect sentences from two domains: the dialog domain and the e-commerce domain. We hire undergraduate students to annotate the sentences; they are asked to identify the predefined types of entities in the sentences. Together with the guideline document, the annotators are given a fifteen-minute tutorial with practical tips and 20 example sentences.
Labeled Data: DL-PS. In the dialog domain (DL), we collect raw sentences from a chatbot application. We then randomly select 20K sentences as our pool and hire 43 students to annotate them. We ask the annotators to label two types of entities: Person-Name and Song-Name. The annotators label the sentences independently, and each sentence is assigned to three annotators for this dataset. Although this setting can be wasteful of labor, the resulting dataset allows us to test several well-known baselines such as majority voting.
After annotation, we remove some invalid sentences reported by the annotators. Finally, we have 16,948 sentences annotated by the students. Table 1 shows the statistics of the annotated data. The average Kappa value among the annotators is 0.6033, indicating that the crowd annotators have moderate agreement on identifying entities in this data.
In order to evaluate system performance, we create a corpus with gold annotations. Concretely, we randomly select 1,000 sentences from the final dataset and let two experts generate the gold annotations. Among them, we use 300 sentences as the development set and the remaining 700 as the test set. The rest of the sentences, with only student annotations, are used as the training set.
Labeled data: EC-MT and EC-UQ. In the e-commerce domain (EC), we collect raw sentences from two types of text: titles of merchandise entries (EC-MT) and user queries (EC-UQ). The annotators label five types of entities: Brand, Product, Model, Material, and Specification. These five entity types are very important for e-commerce platforms, for example for building knowledge graphs of merchandise. Five students participate in the annotation for this domain since the number of sentences is small. We use a similar strategy to that of DL-PS, except that only two annotators are assigned to each sentence, because we aim to test system performance with very few duplicated annotations.
Finally, we obtain 2,337 sentences for EC-MT and 2,300 for EC-UQ. Table 1 shows the statistics of the annotated results. Similarly, we produce the development and test datasets for system evaluation by randomly selecting 400 sentences and letting two experts generate the ground-truth annotations. Among them, we use 100 sentences as the development set and the remaining 300 as the test set. The rest of the sentences, with only crowdsourcing annotations, are used as the training set.
Unlabeled data. The vector representations of characters are the basic inputs of our baseline and proposed models, obtained through the look-up table $\mathbf {E}^W$ . As introduced before, we can use pretrained embeddings from a large-scale raw corpus to initialize the table. In order to pretrain the character embeddings, we use a large-scale unlabeled corpus of user-generated content from the Internet, containing 5M sentences in total. We use the word2vec tool to pretrain the character embeddings on this unlabeled dataset.
Settings
For evaluation, we use the entity-level metrics of Precision (P), Recall (R), and their F1 value in our experiments, treating one tagged entity as correct only when it matches the gold entity exactly.
There are several hyper-parameters in the baseline LSTM-CRF and our final models. We set them empirically according to development-set performance. Concretely, we set the dimension of the character embeddings to 100, the dimension of the NE label embeddings to 50, and the dimensions of all other hidden features to 200.
We exploit online training with a mini-batch size of 128 to learn model parameters. The maximum number of epochs is set to 200, and the best-epoch model is chosen according to development-set performance. We use RMSprop BIBREF28 with a learning rate of $10^{-3}$ to update model parameters, and apply $l_2$ -regularization with a coefficient of $10^{-5}$ . We adopt the dropout technique to avoid overfitting, with a drop rate of $0.2$ .
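For concreteness, these optimization settings roughly correspond to the following PyTorch configuration; the placeholder model is illustrative only and not the actual network.

```python
import torch

# Illustrative training setup mirroring the reported hyper-parameters.
model = torch.nn.Linear(200, 9)                    # placeholder stand-in for the full model
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3, weight_decay=1e-5)
dropout = torch.nn.Dropout(p=0.2)
batch_size, max_epochs = 128, 200
```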
Comparison Systems
The proposed approach (henceforward referred to as “ALCrowd”) is compared with the following systems:
CRF: We use the Crfsuite tool to train a model on the crowdsourcing labeled data. As for the feature settings, we use the supervised version of BIBREF0 zhao2008unsupervised.
CRF-VT: We use the same settings as the CRF system, except that the training data is the voted version, whose ground truths are produced by majority voting at the character level for each annotated sentence.
CRF-MA: The CRF model proposed by BIBREF3 rodrigues2014sequence, which uses a prior distribution to model multiple crowdsourcing annotators. We use the source code provided by the authors.
LSTM-CRF: Our baseline system trained on the crowdsourcing labeled data.
LSTM-CRF-VT: Our baseline system trained on the voted corpus, which is the same as CRF-VT.
LSTM-Crowd: The LSTM-CRF model with crowd annotation learning proposed by BIBREF4 nguyen2017aggregating. We use the source code provided by the authors.
The first three systems are based on the CRF model using traditional handcrafted features, and the last three systems are based on the neural LSTM-CRF model. Among them, CRF-MA, LSTM-Crowd and our system with adversarial learning (ALCrowd) are based on crowd annotation learning, directly training the model on the crowd annotations. Five systems, including CRF, CRF-MA, LSTM-CRF, LSTM-Crowd, and ALCrowd, are trained on the original version of the labeled data, while CRF-VT and LSTM-CRF-VT are trained on the voted version. Since CRF-VT, CRF-MA and LSTM-CRF-VT all require ground-truth answers for each training sentence, which are difficult to produce with only two annotations, we do not apply these three models to the two EC datasets.
Main Results
In this section, we show the performance of our proposed crowdsourcing learning system (ALCrowd) and compare it with the other systems mentioned above. Table 2 shows the experimental results on the DL-PS dataset and Table 3 shows the results on the EC-MT and EC-UQ datasets, respectively.
The results of CRF and LSTM-CRF show that crowd annotation is a low-cost alternative for labeling data that can be used to train an NER system even when it contains some inconsistencies. Compared with CRF, LSTM-CRF achieves much better performance on all three datasets, showing a +6.12 F1 improvement on DL-PS, +4.51 on EC-MT, and +9.19 on EC-UQ. This indicates that LSTM-CRF is a very strong baseline system, demonstrating the effectiveness of neural networks.
Interestingly, when compared with CRF and LSTM-CRF, CRF-VT and LSTM-CRF-VT trained on the voted version perform worse on the DL-PS dataset. This trend is also mentioned in BIBREF4 nguyen2017aggregating. This fact shows that the majority voting method might be unsuitable for our task. There are two possible reasons for this observation. On the one hand, simple character-level voting based on three annotations per sentence may still not be enough. In the DL-PS dataset, even with only two predefined entity types, one character can receive nine NE labels, so majority voting may be incapable of handling some cases, while the cost of adding more annotations for each sentence would greatly increase. On the other hand, the information discarded by majority voting may be important; at the very least, ambiguous annotations indicate that the input sentence is difficult for NER. The normal CRF and LSTM-CRF models, which do not discard any annotations, can learn to differentiate these difficult contexts.
The three crowd-annotation learning systems perform better than their counterpart systems (CRF-MA vs. CRF, and LSTM-Crowd/ALCrowd vs. LSTM-CRF). Compared with the strong baseline LSTM-CRF, ALCrowd shows its advantage with +1.08 F1 improvement on DL-PS, +1.24 on EC-MT, and +2.38 on EC-UQ, respectively. This indicates that adding crowd-annotation learning is quite useful for building NER systems. In addition, ALCrowd also outperforms LSTM-Crowd on all datasets consistently, demonstrating the effectiveness of ALCrowd in extracting worker-independent features. Among all the systems, ALCrowd performs best, and significantly better than all the other models (the p-value is below $10^{-5}$ using a t-test). The results indicate that, with the help of adversarial training, our system can learn a better feature representation from crowd annotations.
Discussion
Impact of Character Embeddings. First, we investigate the effect of the pretrained character embeddings in our proposed crowdsourcing learning model. The comparison results are shown in Figure 2 , where Random refers to randomly initialized character embeddings, and Pretrained refers to the embeddings pretrained on the unlabeled data. According to the results, our model with the pretrained embeddings significantly outperforms the one using random embeddings, demonstrating that the pretrained embeddings provide useful information.
Case Studies. Second, we present several case studies in order to examine the differences between our baseline and the worker-adversarial models. We conduct a closed test on the training set, the results of which can be regarded as modifications of the training corpus, since there exist inconsistent annotations for each training sentence among the different workers. Figure 3 shows two examples from the DL-PS dataset, comparing the outputs of the baseline and our final models, as well as the majority-voting strategy.
In the first case, none of the annotations gives the correct NER result, but our proposed model can capture it; the result of LSTM-CRF is the same as majority voting. In the second example, the output of majority voting is the worst, which helps explain why the same model trained on the voted corpus performs so badly, as shown in Table 2 . The LSTM-CRF model fails to recognize the named entity “Xiexie” because it does not trust the second annotation, treating it as noise. Our proposed model is able to recognize it, because of its ability to extract worker-independent features.
Conclusions
In this paper, we presented an approach to crowd annotation learning based on adversarial training for Chinese Named Entity Recognition (NER). In our approach, we use common and private Bi-LSTMs to represent annotator-generic and annotator-specific information, and learn a label Bi-LSTM from the crowd-annotated NE label sequences. The proposed approach adopts an LSTM-CRF model to perform tagging. In our experiments, we create two datasets for Chinese NER in the dialog and e-commerce domains. The experimental results show that the proposed approach outperforms strong baseline systems.
Acknowledgments
This work is supported by the National Natural Science Foundation of China (Grant No. 61572338, 61525205, and 61602160). This work is also partially supported by the joint research project of Alibaba and Soochow University. Wenliang is also partially supported by Collaborative Innovation Center of Novel Software Technology and Industrialization.
|
What accuracy does the proposed system achieve?
|
F1 scores of 85.99 on the DL-PS data, 75.15 on the EC-MT data and 71.53 on the EC-UQ data
| 5,310
|
qasper
|
8k
|
Introduction
Deep neural networks have been widely used in text classification and have achieved promising results BIBREF0 , BIBREF1 , BIBREF2 . Most approaches focus on content information and use models such as convolutional neural networks (CNNs) BIBREF3 or recursive neural networks BIBREF4 . However, user-generated posts on social media like Facebook or Twitter carry more information that should not be ignored. On social media platforms, a user can act either as the author of a post or as a reader who expresses his or her comments about the post.
In this paper, we classify posts taking into account post authorship, likes, topics, and comments. In particular, users and their “likes” hold strong potential for text mining. For example, given a set of posts related to a specific topic, a user's likes and dislikes provide clues for stance labeling. From a user point of view, users with positive attitudes toward the issue leave positive comments on the posts with praise, or simply “like” the posts' content; from a post point of view, positive posts attract users who hold positive stances. We also investigate the influence of topics: different topics are associated with different stance labeling tendencies and word usage. For example, we discuss women's rights and unwanted babies on the topic of abortion, but we criticize medicine usage or crime on the topic of marijuana BIBREF5 . Even for posts on a specific topic like nuclear power, a variety of arguments are raised: green energy, radiation, air pollution, and so on. As for comments, we treat them as additional textual information. The arguments in the comments and the commenters (the users who leave the comments) provide hints about the post's content and further facilitate stance classification.
In this paper, we propose the user-topic-comment neural network (UTCNN), a deep learning model that utilizes user, topic, and comment information. We attempt to learn user and topic representations which encode user interactions and topic influences to further enhance text classification, and we also incorporate comment information. We evaluate this model on a post stance classification task on forum-style social media platforms. The contributions of this paper are as follows: 1. We propose UTCNN, a neural network for text in modern social media channels as well as legacy social media, forums, and message boards — anywhere that reveals users, their tastes, as well as their replies to posts. 2. When classifying social media post stances, we leverage users, including authors and likers. User embeddings can be generated even for users who have never posted anything. 3. We incorporate a topic model to automatically assign topics to each post in a single topic dataset. 4. We show that overall, the proposed method achieves the highest performance in all instances, and that all of the information extracted, whether users, topics, or comments, still has its contributions.
Extra-Linguistic Features for Stance Classification
In this paper we aim to use text as well as other features to see how they complement each other in a deep learning model. In the stance classification domain, previous work has shown that text features are limited, suggesting that adding extra-linguistic constraints could improve performance BIBREF6 , BIBREF7 , BIBREF8 . For example, Hasan and Ng as well as Thomas et al. require that posts written by the same author have the same stance BIBREF9 , BIBREF10 . The addition of this constraint yields accuracy improvements of 1–7% for some models and datasets. Hasan and Ng later added user-interaction constraints and ideology constraints BIBREF7 : the former models the relationship among posts in a sequence of replies and the latter models inter-topic relationships, e.g., users who oppose abortion could be conservative and thus are likely to oppose gay rights.
For work focusing on online forum text, since posts are linked through user replies, sequential labeling methods have been used to model relationships between posts. For example, Hasan and Ng use hidden Markov models (HMMs) to model dependent relationships to the preceding post BIBREF9 ; Burfoot et al. use iterative classification to repeatedly generate new estimates based on the current state of knowledge BIBREF11 ; Sridhar et al. use probabilistic soft logic (PSL) to model reply links via collaborative filtering BIBREF12 . In the Facebook dataset we study, we use comments instead of reply links. However, as the ultimate goal in this paper is predicting not comment stance but post stance, we treat comments as extra information for use in predicting post stance.
Deep Learning on Extra-Linguistic Features
In recent years neural network models have been applied to document sentiment classification BIBREF13 , BIBREF4 , BIBREF14 , BIBREF15 , BIBREF2 . Text features can be used in deep networks to capture text semantics or sentiment. For example, Dong et al. use an adaptive layer in a recursive neural network for target-dependent Twitter sentiment analysis, where targets are topics such as windows 7 or taylor swift BIBREF16 , BIBREF17 ; recursive neural tensor networks (RNTNs) utilize sentence parse trees to capture sentence-level sentiment for movie reviews BIBREF4 ; Le and Mikolov predict sentiment by using paragraph vectors to model each paragraph as a continuous representation BIBREF18 . They show that performance can thus be improved by more delicate text models.
Others have suggested using extra-linguistic features to improve the deep learning model. The user-word composition vector model (UWCVM) BIBREF19 is inspired by the possibility that the strength of sentiment words is user-specific; to capture this they add user embeddings to their model. In UPNN, a later extension, they further add a product-word composition as product embeddings, arguing that products can also show different tendencies of being rated or reviewed BIBREF20 . Their addition of user information yielded 2–10% improvements in accuracy as compared to the above-mentioned RNTN and paragraph vector methods. We also seek to inject user information into the neural network model. In comparison to the research of Tang et al. on sentiment classification for product reviews, the difference is two-fold. First, we take into account multiple users (one author and potentially many likers) for one post, whereas only one user (the reviewer) is involved in a review. Second, we add comment information to provide more features for post stance classification. Neither of these two factors has been considered previously in a deep learning model for text stance classification. Therefore, we propose UTCNN, which generates and utilizes user embeddings for all users — even for those who have not authored any posts — and incorporates comments to further improve performance.
Method
In this section, we first describe CNN-based document composition, which captures user- and topic-dependent document-level semantic representation from word representations. Then we show how to add comment information to construct the user-topic-comment neural network (UTCNN).
User- and Topic-dependent Document Composition
As shown in Figure FIGREF4 , we use a general CNN BIBREF3 and two semantic transformations for document composition . We are given a document with an engaged user INLINEFORM0 , a topic INLINEFORM1 , and its composite INLINEFORM2 words, each word INLINEFORM3 of which is associated with a word embedding INLINEFORM4 where INLINEFORM5 is the vector dimension. For each word embedding INLINEFORM6 , we apply two dot operations as shown in Equation EQREF6 : DISPLAYFORM0
where INLINEFORM0 models the user reading preference for certain semantics, and INLINEFORM1 models the topic semantics; INLINEFORM2 and INLINEFORM3 are the dimensions of transformed user and topic embeddings respectively. We use INLINEFORM4 to model semantically what each user prefers to read and/or write, and use INLINEFORM5 to model the semantics of each topic. The dot operation of INLINEFORM6 and INLINEFORM7 transforms the global representation INLINEFORM8 to a user-dependent representation. Likewise, the dot operation of INLINEFORM9 and INLINEFORM10 transforms INLINEFORM11 to a topic-dependent representation.
After the two dot operations on INLINEFORM0 , we have user-dependent and topic-dependent word vectors INLINEFORM1 and INLINEFORM2 , which are concatenated to form a user- and topic-dependent word vector INLINEFORM3 . Then the transformed word embeddings INLINEFORM4 are used as the CNN input. Here we apply three convolutional layers on the concatenated transformed word embeddings INLINEFORM5 : DISPLAYFORM0
where INLINEFORM0 is the index of words; INLINEFORM1 is a non-linear activation function (we use INLINEFORM2 ); INLINEFORM5 is the convolutional filter with input length INLINEFORM6 and output length INLINEFORM7 , where INLINEFORM8 is the window size of the convolutional operation; and INLINEFORM9 and INLINEFORM10 are the output and bias of the convolution layer INLINEFORM11 , respectively. In our experiments, the three window sizes INLINEFORM12 in the three convolution layers are one, two, and three, encoding unigram, bigram, and trigram semantics accordingly.
After the convolutional layer, we add a maximum pooling layer among convolutional outputs to obtain the unigram, bigram, and trigram n-gram representations. This is succeeded by an average pooling layer for an element-wise average of the three maximized convolution outputs.
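As a rough sketch, the three-window convolution with max and average pooling could be implemented as below; the activation function (ReLU here), dimensions, and padding scheme are assumptions for illustration, since the paper's exact equations are not reproduced in this excerpt.

```python
import torch
import torch.nn as nn

# Sketch of document composition: convolutions with window sizes 1, 2, 3 over the
# transformed word embeddings, each max-pooled over time, then element-wise averaged.
emb_dim, conv_dim, n_words = 100, 200, 20
convs = nn.ModuleList([nn.Conv1d(emb_dim, conv_dim, k, padding=k - 1) for k in (1, 2, 3)])

e = torch.randn(1, n_words, emb_dim)               # user- and topic-transformed embeddings
pooled = [torch.relu(c(e.transpose(1, 2))).max(dim=2).values for c in convs]
doc_vec = torch.stack(pooled, dim=0).mean(dim=0)   # element-wise average of the three
```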
UTCNN Model Description
Figure FIGREF10 illustrates the UTCNN model. As more than one user may interact with a given post, we first add a maximum pooling layer after the user matrix embedding layer and user vector embedding layer to form a moderator matrix embedding INLINEFORM0 and a moderator vector embedding INLINEFORM1 for moderator INLINEFORM2 respectively, where INLINEFORM3 is used for the semantic transformation in the document composition process, as mentioned in the previous section. The term moderator here is to denote the pseudo user who provides the overall semantic/sentiment of all the engaged users for one document. The embedding INLINEFORM4 models the moderator stance preference, that is, the pattern of the revealed user stance: whether a user is willing to show his preference, whether a user likes to show impartiality with neutral statements and reasonable arguments, or just wants to show strong support for one stance. Ideally, the latent user stance is modeled by INLINEFORM5 for each user. Likewise, for topic information, a maximum pooling layer is added after the topic matrix embedding layer and topic vector embedding layer to form a joint topic matrix embedding INLINEFORM6 and a joint topic vector embedding INLINEFORM7 for topic INLINEFORM8 respectively, where INLINEFORM9 models the semantic transformation of topic INLINEFORM10 as in users and INLINEFORM11 models the topic stance tendency. The latent topic stance is also modeled by INLINEFORM12 for each topic.
As for comments, we view them as short documents with authors only but without likers nor their own comments. Therefore we apply document composition on comments although here users are commenters (users who comment). It is noticed that the word embeddings INLINEFORM0 for the same word in the posts and comments are the same, but after being transformed to INLINEFORM1 in the document composition process shown in Figure FIGREF4 , they might become different because of their different engaged users. The output comment representation together with the commenter vector embedding INLINEFORM2 and topic vector embedding INLINEFORM3 are concatenated and a maximum pooling layer is added to select the most important feature for comments. Instead of requiring that the comment stance agree with the post, UTCNN simply extracts the most important features of the comment contents; they could be helpful, whether they show obvious agreement or disagreement. Therefore when combining comment information here, the maximum pooling layer is more appropriate than other pooling or merging layers. Indeed, we believe this is one reason for UTCNN's performance gains.
Finally, the pooled comment representation, together with user vector embedding INLINEFORM0 , topic vector embedding INLINEFORM1 , and document representation are fed to a fully connected network, and softmax is applied to yield the final stance label prediction for the post.
Experiment
We start with the experimental dataset and then describe the training process as well as the implementation of the baselines. We also implement several variations to reveal the effects of features: authors, likers, comment, and commenters. In the results section we compare our model with related work.
Dataset
We tested the proposed UTCNN on two different datasets: FBFans and CreateDebate. FBFans is a privately-owned, single-topic, Chinese, unbalanced, social media dataset, and CreateDebate is a public, multiple-topic, English, balanced, forum dataset. Results using these two datasets show the applicability and superiority for different topics, languages, data distributions, and platforms.
The FBFans dataset contains data from anti-nuclear-power Chinese Facebook fan groups from September 2013 to August 2014, including posts and their author and liker IDs. There are a total of 2,496 authors, 505,137 likers, 33,686 commenters, and 505,412 unique users. Two annotators were asked to take into account only the post content to label the stance of the posts in the whole dataset as supportive, neutral, or unsupportive (hereafter denoted as Sup, Neu, and Uns). Sup/Uns posts were those in support of or against anti-reconstruction; Neu posts were those evincing a neutral standpoint on the topic, or were irrelevant. Raw agreement between annotators is 0.91, indicating high agreement. Specifically, Cohen’s Kappa for Neu and not Neu labeling is 0.58 (moderate), and for Sup or Uns labeling is 0.84 (almost perfect). Posts with inconsistent labels were filtered out, and the development and testing sets were randomly selected from what was left. Posts in the development and testing sets involved at least one user who appeared in the training set. The number of posts for each stance is shown on the left-hand side of Table TABREF12 . About twenty percent of the posts were labeled with a stance, and the number of supportive (Sup) posts was much larger than that of the unsupportive (Uns) ones: this is thus highly skewed data, which complicates stance classification. On average, 161.1 users were involved in one post. The maximum was 23,297 and the minimum was one (the author). For comments, on average there were 3 comments per post. The maximum was 1,092 and the minimum was zero.
To test whether the assumption of this paper – that posts attract users who hold the same stance to like them – is reliable, we examine the likes from authors of different stances. Posts in the FBFans dataset are used for this analysis. We calculate the like statistics of each distinct author from these 32,595 posts. As the numbers of authors with the Sup, Neu and Uns stances are largely imbalanced, these numbers are normalized by the number of users of each stance. Table TABREF13 shows the results. Posts with stances (i.e., not neutral) attract users of the same stance. Neutral posts also attract both supportive and neutral users, as we observe for supportive posts, but neutral posts attract even more neutral likers. These results do suggest that users prefer posts of the same stance, or at least posts with no obvious stance that might annoy them when reading, and hence support the user modeling in our approach.
The CreateDebate dataset was collected from an English online debate forum discussing four topics: abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR). The posts are annotated as for (F) or against (A). Replies to posts in this dataset are also labeled with stance and hence use the same data format as posts. The labeling results are shown on the right-hand side of Table TABREF12 . We observe that this dataset is more balanced than the FBFans dataset. In addition, there are 977 unique users in the dataset. To compare with Hasan and Ng's work, we conducted five-fold cross-validation and present the results as the average over all folds BIBREF9 , BIBREF5 .
The FBFans dataset has more integrated functions than the CreateDebate dataset; thus our model can utilize all linguistic and extra-linguistic features. For the CreateDebate dataset, on the other hand, the like and comment features are not available (as there is a stance label for each reply, replies are evaluated as posts, as in other previous work), but we still implement our model using the content, author, and topic information.
Settings
In the UTCNN training process, cross-entropy was used as the loss function and AdaGrad as the optimizer. For the FBFans dataset, we learned 50-dimensional word embeddings on the whole dataset using GloVe BIBREF21 to capture the word semantics; for the CreateDebate dataset we used the publicly available English 50-dimensional word embeddings, also pre-trained using GloVe. These word embeddings were fixed during training. The learning rate was set to 0.03. All user and topic embeddings were randomly initialized in the range of [-0.1, 0.1]. Matrix embeddings for users and topics were sized at 250 ( INLINEFORM0 ); vector embeddings for users and topics were set to length 10.
We applied the LDA topic model BIBREF22 on the FBFans dataset to determine the latent topics with which to build topic embeddings, as there is only one general known topic: nuclear power plants. We learned 100 latent topics and assigned the top three topics for each post. For the CreateDebate dataset, which itself constitutes four topics, the topic labels for posts were used directly without additionally applying LDA.
For the FBFans data we report class-based f-scores as well as the macro-average f-score shown in equation EQREF19 : $$F_{\text{macro}} = \frac{2 \cdot P_{\text{avg}} \cdot R_{\text{avg}}}{P_{\text{avg}} + R_{\text{avg}}},$$ (Eq. 19)
where $P_{\text{avg}}$ and $R_{\text{avg}}$ are the average precision and recall over the three classes. We adopt the macro-average f-score as the evaluation metric for overall performance because (1) the experimental dataset is severely imbalanced, which is common for contentious issues; and (2) for stance classification, content in minor-class posts is usually more important for further applications. For the CreateDebate dataset, accuracy was adopted as the evaluation metric to compare the results with related work BIBREF7 , BIBREF9 , BIBREF12 .
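A small sketch of this metric, assuming the harmonic-mean-of-averaged-precision-and-recall definition stated above; the example numbers are made up.

```python
# Minimal sketch of the macro-average f-score used for FBFans: average the
# per-class precision and recall, then take their harmonic mean.
def macro_f(per_class_p, per_class_r):
    p_avg = sum(per_class_p) / len(per_class_p)
    r_avg = sum(per_class_r) / len(per_class_r)
    return 2 * p_avg * r_avg / (p_avg + r_avg)

# Example with three classes (Sup, Neu, Uns); numbers are illustrative only.
print(macro_f([0.7, 0.9, 0.4], [0.6, 0.95, 0.3]))
```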
Baselines
We pit our model against the following baselines: 1) SVM with unigram, bigram, and trigram features, which is a standard yet rather strong classifier for text features; 2) SVM with average word embedding, where a document is represented as a continuous representation by averaging the embeddings of the composite words; 3) SVM with average transformed word embeddings (the INLINEFORM0 in equation EQREF6 ), where a document is represented as a continuous representation by averaging the transformed embeddings of the composite words; 4) two mature deep learning models on text classification, CNN BIBREF3 and Recurrent Convolutional Neural Networks (RCNN) BIBREF0 , where the hyperparameters are based on their work; 5) the above SVM and deep learning models with comment information; 6) UTCNN without user information, representing a pure-text CNN model where we use the same user matrix and user embeddings INLINEFORM1 and INLINEFORM2 for each user; 7) UTCNN without the LDA model, representing how UTCNN works with a single-topic dataset; 8) UTCNN without comments, in which the model predicts the stance label given only user and topic information. All these models were trained on the training set, and parameters as well as the SVM kernel selections (linear or RBF) were fine-tuned on the development set. Also, we adopt oversampling on SVMs, CNN and RCNN because the FBFans dataset is highly imbalanced.
Results on FBFans Dataset
In Table TABREF22 we show the results of UTCNN and the baselines on the FBFans dataset. Here Majority yields good performance on Neu since FBFans is highly biased toward the neutral class. The SVM models perform well on Sup and Neu but poorly on Uns, showing that content information by itself is insufficient to predict stance labels, especially for the minor class. With the transformed word embedding feature, SVM achieves performance comparable to SVM with n-gram features. However, the much lower feature dimensionality of the transformed word embeddings makes SVM with word embeddings a more efficient choice for modeling large-scale social media datasets. The CNN and RCNN models perform slightly better than most of the SVM models, but content information alone is still insufficient to achieve good performance on the Uns posts. As for adding comment information to these models, since the commenters do not always hold the same stance as the author, simply adding comments and post contents together merely adds noise to the model.
Among all UTCNN variations, we find that user information is the most important, followed by topic and comment information. UTCNN without user information shows results similar to the SVMs: it does well for Sup and Neu but detects no Uns. Its best f-scores on both Sup and Neu among all methods show that, with enough training data, content-based models can perform well; at the same time, the lack of user information leaves too few clues for minor-class posts to either predict their stance directly or link them to other users and posts for improved performance. The 17.5% improvement when adding user information suggests that user information is especially useful when the dataset is highly imbalanced. All models that consider user information predict the minority class successfully. UTCNN without topic information works well but achieves lower performance than the full UTCNN model. The 4.9% performance gain brought by LDA shows that, even though the model is already satisfactory on a single-topic dataset, adding latent topics still benefits performance: even when discussing the same topic, we use different arguments and supporting evidence. Lastly, we get a 4.8% improvement when adding comment information, which achieves performance comparable to UTCNN without topic information; this shows that comments also benefit performance. For platforms where user IDs are pixelated or otherwise hidden, adding comments to a text model still improves performance. In its integration of user, content, and comment information, the full UTCNN produces the highest f-scores on the Sup, Neu, and Uns stances among models that predict the Uns class, and the highest macro-average f-score overall. This shows its ability to balance a biased dataset and supports our claim that UTCNN successfully bridges content with user, topic, and comment information for stance classification on social media text. Another merit of UTCNN is that it does not require balanced training data. This is supported by the fact that it outperforms the other models even though no oversampling technique is applied in the UTCNN-related experiments in this paper. Thus we can conclude that user information provides strong clues, and is still rich even in the minority class.
We also investigate the semantic difference when a user acts as an author/liker or a commenter. We evaluated a variation in which all embeddings from the same user were forced to be identical (this is the UTCNN shared user embedding setting in Table TABREF22 ). This setting yielded only a 2.5% improvement over the model without comments, which is not statistically significant. However, when separating author/liker and commenter embeddings (i.e., the UTCNN full model), we achieved much greater improvements (4.8%). We attribute this result to the tendency of users to use different wording for different roles (for instance author vs commenter). This is observed when the user, acting as an author, attempts to support her argument against nuclear power by citing improvements in solar power; when acting as a commenter, though, she interacts with post contents by criticizing past politicians who supported nuclear power or by arguing that the proposed evacuation plan in case of a nuclear accident is ridiculous. Based on this finding, in the final UTCNN setting we train two user matrix embeddings for one user: one for the author/liker role and the other for the commenter role.
Results on CreateDebate Dataset
Table TABREF24 shows the results on the CreateDebate dataset for UTCNN, the baselines we implemented for the FBFans dataset, and related work. We do not adopt oversampling for these models because the CreateDebate dataset is almost balanced. In previous work, integer linear programming (ILP) or linear-chain conditional random fields (CRFs) were proposed to integrate text features, author, ideology, and user-interaction constraints, where the text features are unigrams, bigrams, and POS dependencies; the author constraint tends to require that posts from the same author on the same topic hold the same stance; the ideology constraint aims to capture inferences between topics for the same author; and the user-interaction constraint models relationships among posts via user interactions such as replies BIBREF7 , BIBREF9 .
The SVM with n-gram or average word embedding features performs similarly to the majority baseline. However, with the transformed word embeddings, it achieves superior results. This shows that the learned user and topic embeddings really capture user and topic semantics. This finding is less obvious on the FBFans dataset, which might be due to the data skewness being unfavorable for SVM. As for CNN and RCNN, they perform slightly better than most SVMs, as we also found in Table TABREF22 for FBFans.
Compared to the ILP BIBREF7 and CRF BIBREF9 methods, the UTCNN user embeddings encode author and user-interaction constraints, where the ideology constraint is modeled by the topic embeddings and text features are modeled by the CNN. The significant improvement achieved by UTCNN suggests the latent representations are more effective than overt model constraints.
The PSL model BIBREF12 jointly labels author and post stance using probabilistic soft logic (PSL) BIBREF23, considering text features and reply links between authors and posts as in Hasan and Ng's work. Table TABREF24 reports the result of their best AD setting, which represents the full joint stance/disagreement collective model on posts and is hence the most relevant to UTCNN. In contrast to their model, the UTCNN user embeddings represent relationships between authors, but UTCNN does not use link information between posts. Although the PSL model has the advantage of jointly labeling the stances of authors and posts, its performance on posts is lower than that of the ILP or CRF models. UTCNN significantly outperforms these models on posts and has the potential to predict user stances through the generated user embeddings.
For the CreateDebate dataset, we also evaluate performance without topic embeddings or user embeddings; as replies in this dataset are treated as posts, the setting without comment embeddings is not applicable. Table TABREF24 shows the same findings as Table TABREF22: the 21% improvement in accuracy demonstrates that user information is the most vital. This finding also echoes the related work, where user constraints yield an 11.2% improvement in accuracy BIBREF7. Further considering topic information yields a 3.4% improvement, suggesting that knowing the subject of the debate provides useful information. In sum, Tables TABREF22 and TABREF24 show that UTCNN achieves promising performance regardless of topic, language, data distribution, and platform.
Conclusion
We have proposed UTCNN, a neural network model that incorporates user, topic, content, and comment information for stance classification on social media text. UTCNN learns embeddings for all users with a minimum level of activity, i.e., at least one post or one like. Topic information obtained from a topic model or from pre-defined labels further improves the UTCNN model. In addition, comment information provides additional clues for stance classification. We have shown that UTCNN achieves promising and balanced results. In the future we plan to explore the effectiveness of the UTCNN user embeddings for author stance classification.
Acknowledgements
This research was partially supported by the Ministry of Science and Technology, Taiwan, under contract MOST 104-2221-E-001-024-MY2.
What are the baselines?
SVM with unigram, bigram, and trigram features; SVM with average word embeddings; SVM with average transformed word embeddings; CNN; Recurrent Convolutional Neural Networks (RCNN); and SVM and deep learning models with comment information
Introduction
The Transformer architecture BIBREF0 for deep neural networks has quickly risen to prominence in NLP through its efficiency and performance, leading to improvements in the state of the art of Neural Machine Translation BIBREF1, BIBREF2, as well as inspiring other powerful general-purpose models like BERT BIBREF3 and GPT-2 BIBREF4. At the heart of the Transformer lie multi-head attention mechanisms: each word is represented by multiple different weighted averages of its relevant context. As suggested by recent works on interpreting attention head roles, separate attention heads may learn to look for various relationships between tokens BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9.
The attention distribution of each head is typically predicted using the softmax normalizing transform. As a result, all context words receive non-zero attention weight. Recent work on single attention architectures suggests that using sparse normalizing transforms in attention mechanisms, such as sparsemax (which can yield exactly zero probabilities for irrelevant words), may improve performance and interpretability BIBREF12, BIBREF13, BIBREF14. Qualitative analysis of attention heads BIBREF0 suggests that, depending on what phenomena they capture, heads tend to favor flatter or more peaked distributions.
Recent works have proposed sparse Transformers BIBREF10 and adaptive span Transformers BIBREF11. However, the "sparsity" of those models only limits the attention to a contiguous span of past tokens, while in this work we propose a highly adaptive Transformer model that is capable of attending to a sparse set of words that are not necessarily contiguous. Figure FIGREF1 shows the relationship of these methods with ours.
Our contributions are the following:
We introduce sparse attention into the Transformer architecture, showing that it eases interpretability and leads to slight accuracy gains.
We propose an adaptive version of sparse attention, where the shape of each attention head is learnable and can vary continuously and dynamically between the dense limit case of softmax and the sparse, piecewise-linear sparsemax case.
We make an extensive analysis of the added interpretability of these models, identifying both crisper examples of attention head behavior observed in previous work, as well as novel behaviors unraveled thanks to the sparsity and adaptivity of our proposed model.
Background ::: The Transformer
In NMT, the Transformer BIBREF0 is a sequence-to-sequence (seq2seq) model which maps an input sequence to an output sequence through hierarchical multi-head attention mechanisms, yielding a dynamic, context-dependent strategy for propagating information within and across sentences. It contrasts with previous seq2seq models, which usually rely either on costly gated recurrent operations BIBREF15, BIBREF16 or static convolutions BIBREF17.
Given $n$ query contexts and $m$ sequence items under consideration, attention mechanisms compute, for each query, a weighted representation of the items. The particular attention mechanism used in BIBREF0 is called scaled dot-product attention, and it is computed in the following way:
where $\mathbf {Q} \in \mathbb {R}^{n \times d}$ contains representations of the queries, $\mathbf {K}, \mathbf {V} \in \mathbb {R}^{m \times d}$ are the keys and values of the items attended over, and $d$ is the dimensionality of these representations. The $\mathbf {\pi }$ mapping normalizes row-wise using softmax, $\mathbf {\pi }(\mathbf {Z})_{ij} = \operatornamewithlimits{\mathsf {softmax}}(\mathbf {z}_i)_j$, where $\operatornamewithlimits{\mathsf {softmax}}(\mathbf {z})_j = \frac{\exp (z_j)}{\sum _{j^{\prime }} \exp (z_{j^{\prime }})}$.
In words, the keys are used to compute a relevance score between each item and query. Then, normalized attention weights are computed using softmax, and these are used to weight the values of each item at each query context.
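For concreteness, the following is a minimal NumPy sketch of the scaled dot-product attention just described; the function and variable names are ours, and batching and masking details of an actual Transformer implementation are omitted.

```python
import numpy as np

def softmax(z, axis=-1):
    # Subtract the max for numerical stability; every weight stays strictly positive.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q: (n, d) queries; K, V: (m, d) keys and values.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # relevance of each item to each query
    weights = softmax(scores, axis=-1)  # with softmax, all items get non-zero weight
    return weights @ V, weights
```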
However, for complex tasks, different parts of a sequence may be relevant in different ways, motivating multi-head attention in Transformers. This is simply the application of Equation DISPLAY_FORM7 in parallel $H$ times, each with a different, learned linear transformation that allows specialization:
In the Transformer, there are three separate multi-head attention mechanisms for distinct purposes:
Encoder self-attention: builds rich, layered representations of each input word, by attending on the entire input sentence.
Context attention: selects a representative weighted average of the encodings of the input words, at each time step of the decoder.
Decoder self-attention: attends over the partial output sentence fragment produced so far.
Together, these mechanisms enable the contextualized flow of information between the input sentence and the sequential decoder.
Background ::: Sparse Attention
The softmax mapping (Equation DISPLAY_FORM8) is elementwise proportional to $\exp $, therefore it can never assign a weight of exactly zero. Thus, unnecessary items are still taken into consideration to some extent. Since its output sums to one, this invariably means less weight is assigned to the relevant items, potentially harming performance and interpretability BIBREF18. This has motivated a line of research on learning networks with sparse mappings BIBREF19, BIBREF20, BIBREF21, BIBREF22. We focus on a recently-introduced flexible family of transformations, $\alpha $-entmax BIBREF23, BIBREF14, defined as:
where $\triangle ^d = \lbrace \mathbf {p}\in \mathbb {R}^d: \mathbf {p} \ge 0, \sum _{i} p_i = 1\rbrace $ is the probability simplex, and, for $\alpha \ge 1$, $\mathsf {H}^{\textsc {T}}_\alpha $ is the Tsallis continuous family of entropies BIBREF24:
This family contains the well-known Shannon and Gini entropies, corresponding to the cases $\alpha =1$ and $\alpha =2$, respectively.
Equation DISPLAY_FORM14 involves a convex optimization subproblem. Using the definition of $\mathsf {H}^{\textsc {T}}_\alpha $, the optimality conditions may be used to derive the following form for the solution (Appendix SECREF83):
where $[\cdot ]_+$ is the positive part (ReLU) function, $\mathbf {1}$ denotes the vector of all ones, and $\tau $ – which acts like a threshold – is the Lagrange multiplier corresponding to the $\sum _i p_i=1$ constraint.
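As an illustration of this thresholded form, the sketch below finds $\tau $ by simple bisection and evaluates the mapping for $\alpha > 1$; it is only a toy NumPy implementation under the stated solution form, not the exact (or fastest) algorithm of the cited works.

```python
import numpy as np

def alpha_entmax(z, alpha=1.5, n_iter=50):
    """Toy bisection sketch of the alpha-entmax mapping for alpha > 1."""
    u = (alpha - 1.0) * np.asarray(z, dtype=float)
    lo, hi = u.max() - 1.0, u.max()        # the threshold tau lies in this interval
    for _ in range(n_iter):
        tau = 0.5 * (lo + hi)
        p = np.maximum(u - tau, 0.0) ** (1.0 / (alpha - 1.0))
        if p.sum() >= 1.0:                 # too much mass -> raise the threshold
            lo = tau
        else:                              # too little mass -> lower the threshold
            hi = tau
    p = np.maximum(u - 0.5 * (lo + hi), 0.0) ** (1.0 / (alpha - 1.0))
    return p / p.sum()                     # small renormalization for residual error

# Larger alpha gives sparser outputs; alpha close to 1 approaches softmax.
print(alpha_entmax([1.0, 0.5, -1.0], alpha=1.5))
```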
Background ::: Sparse Attention ::: Properties of @!START@$\alpha $@!END@-entmax.
The appeal of $\alpha $-entmax for attention rests on the following properties. For $\alpha =1$ (i.e., when $\mathsf {H}^{\textsc {T}}_\alpha $ becomes the Shannon entropy), it exactly recovers the softmax mapping (We provide a short derivation in Appendix SECREF89.). For all $\alpha >1$ it permits sparse solutions, in stark contrast to softmax. In particular, for $\alpha =2$, it recovers the sparsemax mapping BIBREF19, which is piecewise linear. In-between, as $\alpha $ increases, the mapping continuously gets sparser as its curvature changes.
To compute the value of $\alpha $-entmax, one must find the threshold $\tau $ such that the r.h.s. in Equation DISPLAY_FORM16 sums to one. BIBREF23 propose a general bisection algorithm. BIBREF14 introduce a faster, exact algorithm for $\alpha =1.5$, and enable using $\mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}$ with fixed $\alpha $ within a neural network by showing that the $\alpha $-entmax Jacobian w.r.t. $\mathbf {z}$ for $\mathbf {p}^\star = \mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}(\mathbf {z})$ is
Our work furthers the study of $\alpha $-entmax by providing a derivation of the Jacobian w.r.t. the hyper-parameter $\alpha $ (Section SECREF3), thereby allowing the shape and sparsity of the mapping to be learned automatically. This is particularly appealing in the context of multi-head attention mechanisms, where we shall show in Section SECREF35 that different heads tend to learn different sparsity behaviors.
Adaptively Sparse Transformers with @!START@$\alpha $@!END@-entmax
We now propose a novel Transformer architecture wherein we simply replace softmax with $\alpha $-entmax in the attention heads. Concretely, we replace the row normalization $\mathbf {\pi }$ in Equation DISPLAY_FORM7 by
This change leads to sparse attention weights, as long as $\alpha >1$; in particular, $\alpha =1.5$ is a sensible starting point BIBREF14.
Adaptively Sparse Transformers with @!START@$\alpha $@!END@-entmax ::: Different @!START@$\alpha $@!END@ per head.
Unlike LSTM-based seq2seq models, where $\alpha $ can be more easily tuned by grid search, in a Transformer, there are many attention heads in multiple layers. Crucial to the power of such models, the different heads capture different linguistic phenomena, some of them isolating important words, others spreading out attention across phrases BIBREF0. This motivates using different, adaptive $\alpha $ values for each attention head, such that some heads may learn to be sparser, and others may become closer to softmax. We propose doing so by treating the $\alpha $ values as neural network parameters, optimized via stochastic gradients along with the other weights.
Adaptively Sparse Transformers with @!START@$\alpha $@!END@-entmax ::: Derivatives w.r.t. @!START@$\alpha $@!END@.
In order to optimize $\alpha $ automatically via gradient methods, we must compute the Jacobian of the entmax output w.r.t. $\alpha $. Since entmax is defined through an optimization problem, this is non-trivial and cannot be simply handled through automatic differentiation; it falls within the domain of argmin differentiation, an active research topic in optimization BIBREF25, BIBREF26.
One of our key contributions is the derivation of a closed-form expression for this Jacobian. The next proposition provides such an expression, enabling entmax layers with adaptive $\alpha $. To the best of our knowledge, ours is the first neural network module that can automatically, continuously vary in shape away from softmax and toward sparse mappings like sparsemax.
Proposition 1 Let $\mathbf {p}^\star = \mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}(\mathbf {z})$ be the solution of Equation DISPLAY_FORM14. Denote the distribution $\tilde{p}_i = \frac{(p_i^\star )^{2 - \alpha }}{\sum _j(p_j^\star )^{2-\alpha }}$ and let $h_i = -p^\star _i \log p^\star _i$. The $i$th component of the Jacobian $\mathbf {g} = \frac{\partial \mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}(\mathbf {z})}{\partial \alpha }$ is
The proof uses implicit function differentiation and is given in Appendix SECREF10.
Proposition UNKREF22 provides the remaining missing piece needed for training adaptively sparse Transformers. In the following section, we evaluate this strategy on neural machine translation, and analyze the behavior of the learned attention heads.
Experiments
We apply our adaptively sparse Transformers on four machine translation tasks. For comparison, a natural baseline is the standard Transformer architecture using the softmax transform in its multi-head attention mechanisms. We consider two other model variants in our experiments that make use of different normalizing transformations:
1.5-entmax: a Transformer with sparse entmax attention with fixed $\alpha =1.5$ for all heads. This is a novel model, since 1.5-entmax had only been proposed for RNN-based NMT models BIBREF14, but never in Transformers, where attention modules are not just one single component of the seq2seq model but rather an integral part of all of the model components.
$\alpha $-entmax: an adaptive Transformer with sparse entmax attention with a different, learned $\alpha _{i,j}^t$ for each head.
The adaptive model has an additional scalar parameter per attention head per layer for each of the three attention mechanisms (encoder self-attention, context attention, and decoder self-attention), i.e.,
and we set $\alpha _{i,j}^t = 1 + \operatornamewithlimits{\mathsf {sigmoid}}(a_{i,j}^t) \in ]1, 2[$. All or some of the $\alpha $ values can be tied if desired, but we keep them independent for analysis purposes.
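A minimal PyTorch sketch of this parameterization is given below; the module name and tensor shapes are illustrative, not taken from any released code.

```python
import torch
import torch.nn as nn

class AdaptiveAlpha(nn.Module):
    """One learnable shape parameter per attention head and layer (a sketch)."""
    def __init__(self, n_layers, n_heads):
        super().__init__()
        # Raw parameters a_{i,j}; alpha = 1 + sigmoid(a) keeps alpha in (1, 2).
        self.a = nn.Parameter(torch.zeros(n_layers, n_heads))

    def forward(self):
        return 1.0 + torch.sigmoid(self.a)

# Initializing a at zero (our choice for the sketch) starts every head at alpha = 1.5.
alphas = AdaptiveAlpha(n_layers=6, n_heads=8)()
print(alphas.shape)  # torch.Size([6, 8])
```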
Experiments ::: Datasets.
Our models were trained on 4 machine translation datasets of different training sizes:
[itemsep=.5ex,leftmargin=2ex]
IWSLT 2017 German $\rightarrow $ English BIBREF27: 200K sentence pairs.
KFTT Japanese $\rightarrow $ English BIBREF28: 300K sentence pairs.
WMT 2016 Romanian $\rightarrow $ English BIBREF29: 600K sentence pairs.
WMT 2014 English $\rightarrow $ German BIBREF30: 4.5M sentence pairs.
All of these datasets were preprocessed with byte-pair encoding BIBREF31, using joint segmentations of 32k merge operations.
Experiments ::: Training.
We follow the dimensions of the Transformer-Base model of BIBREF0: the number of layers is $L=6$ and the number of heads is $H=8$ in the encoder self-attention, the context attention, and the decoder self-attention. We use a mini-batch size of 8192 tokens and warm up the learning rate linearly until 20k steps, after which it decays according to an inverse square root schedule. All models were trained until convergence of validation accuracy, and evaluation was done every 10k steps for ro$\rightarrow $en and en$\rightarrow $de and every 5k steps for de$\rightarrow $en and ja$\rightarrow $en. The end-to-end computational overhead of our methods, when compared to standard softmax, is relatively small; in training tokens per second, the models using $\alpha $-entmax and $1.5$-entmax are, respectively, $75\%$ and $90\%$ the speed of the softmax model.
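The learning-rate schedule can be sketched as follows, assuming a linear warm-up to a peak value followed by inverse-square-root decay; the peak value and exact scaling constants here are placeholders, not the precise values of our implementation.

```python
def lr_schedule(step, warmup=20000, peak=1.0):
    """Linear warm-up until `warmup` steps, then inverse-square-root decay (a sketch)."""
    if step < warmup:
        return peak * step / warmup
    return peak * (warmup / step) ** 0.5

# The two branches agree at step == warmup, so the schedule is continuous.
print(lr_schedule(10000), lr_schedule(20000), lr_schedule(80000))
```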
Experiments ::: Results.
We report test set tokenized BLEU BIBREF32 results in Table TABREF27. We can see that replacing softmax by entmax does not hurt performance in any of the datasets; indeed, sparse attention Transformers tend to have slightly higher BLEU, but their sparsity leads to a better potential for analysis. In the next section, we make use of this potential by exploring the learned internal mechanics of the self-attention heads.
Analysis
We conduct an analysis for the higher-resource dataset WMT 2014 English $\rightarrow $ German of the attention in the sparse adaptive Transformer model ($\alpha $-entmax) at multiple levels: we analyze high-level statistics as well as individual head behavior. Moreover, we make a qualitative analysis of the interpretability capabilities of our models.
Analysis ::: High-Level Statistics ::: What kind of @!START@$\alpha $@!END@ values are learned?
Figure FIGREF37 shows the learning trajectories of the $\alpha $ parameters of a selected subset of heads. We generally observe a tendency for the randomly-initialized $\alpha $ parameters to decrease initially, suggesting that softmax-like behavior may be preferable while the model is still very uncertain. After around one thousand steps, some heads change direction and become sparser, perhaps as they become more confident and specialized. This shows that the initialization of $\alpha $ does not predetermine its sparsity level or the role the head will have throughout. In particular, head 8 in the encoder self-attention layer 2 first drops to around $\alpha =1.3$ before becoming one of the sparsest heads, with $\alpha \approx 2$.
The overall distribution of $\alpha $ values at convergence can be seen in Figure FIGREF38. We can observe that the encoder self-attention blocks learn to concentrate the $\alpha $ values in two modes: a very sparse one around $\alpha \rightarrow 2$, and a dense one between softmax and 1.5-entmax. However, the decoder self-attention and context attention only learn to distribute these parameters in a single mode. We show next that this is reflected in the average density of the attention weight vectors as well.
Analysis ::: High-Level Statistics ::: Attention weight density when translating.
For any $\alpha >1$, it would still be possible for the weight matrices in Equation DISPLAY_FORM9 to learn re-scalings so as to make attention sparser or denser. To visualize the impact of adaptive $\alpha $ values, we compare the empirical attention weight density (the average number of tokens receiving non-zero attention) within each module, against sparse Transformers with fixed $\alpha =1.5$.
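This density measure amounts to a one-line computation, sketched below; P holds the attention weights of one head, one row per query.

```python
import numpy as np

def attention_density(P):
    """Average number of tokens receiving non-zero attention weight per query."""
    return float((np.asarray(P) > 0).sum(axis=-1).mean())
```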
Figure FIGREF40 shows that, with fixed $\alpha =1.5$, heads tend to be sparse and similarly-distributed in all three attention modules. With learned $\alpha $, there are two notable changes: (i) a prominent mode corresponding to fully dense probabilities, showing that our models learn to combine sparse and dense attention, and (ii) a distinction between the encoder self-attention, whose background distribution tends toward extreme sparsity, and the other two modules, which exhibit more uniform background distributions. This suggests that perhaps entirely sparse Transformers are suboptimal.
The fact that the decoder seems to prefer denser attention distributions might be attributed to it being auto-regressive, only having access to past tokens and not the full sentence. We speculate that it might lose too much information if it assigned weights of zero to too many tokens in the self-attention, since there are fewer tokens to attend to in the first place.
Breaking this down into separate layers, Figure FIGREF41 shows the average (sorted) density of each head for each layer. We observe that $\alpha $-entmax is able to learn different sparsity patterns at each layer, leading to more variance in individual head behavior, to clearly-identified dense and sparse heads, and overall to different tendencies compared to the fixed case of $\alpha =1.5$.
Analysis ::: High-Level Statistics ::: Head diversity.
To measure the overall disagreement between attention heads, as a measure of head diversity, we use the following generalization of the Jensen-Shannon divergence:
where $\mathbf {p}_j$ is the vector of attention weights assigned by head $j$ to each word in the sequence, and $\mathsf {H}^\textsc {S}$ is the Shannon entropy, with its base adjusted to the dimension of $\mathbf {p}$ such that $JS \le 1$. We average this measure over the entire validation set. The higher this metric, the more the heads take on different roles in the model.
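A sketch of this diversity measure is shown below, assuming the standard multi-distribution generalization of the Jensen-Shannon divergence (entropy of the mean distribution minus the mean entropy), with the entropy base adjusted to the sequence length; the exact normalization used in our experiments may differ.

```python
import numpy as np

def head_diversity(P):
    """Generalized Jensen-Shannon divergence across heads (a sketch).

    P: array of shape (H, m); row j holds head j's attention weights over
    m sequence items (m >= 2). Entropies use log base m so the score is <= 1.
    """
    P = np.asarray(P, dtype=float)
    eps = 1e-12
    def entropy(p):
        return -(p * np.log(p + eps)).sum(-1) / np.log(P.shape[-1])
    return float(entropy(P.mean(axis=0)) - entropy(P).mean())
```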
Figure FIGREF44 shows that both sparse Transformer variants show more diversity than the traditional softmax one. Interestingly, diversity seems to peak in the middle layers of the encoder self-attention and context attention, while this is not the case for the decoder self-attention.
The statistics shown in this section can be found for the other language pairs in Appendix SECREF8.
Analysis ::: Identifying Head Specializations
Previous work pointed out some specific roles played by different heads in the softmax Transformer model BIBREF33, BIBREF5, BIBREF9. Identifying the specialization of a head can be done by observing the type of tokens or sequences that the head often assigns most of its attention weight; this is facilitated by sparsity.
Analysis ::: Identifying Head Specializations ::: Positional heads.
One particular type of head, as noted by BIBREF9, is the positional head. These heads tend to focus their attention on either the previous or next token in the sequence, thus obtaining representations of the neighborhood of the current time step. In Figure FIGREF47, we show attention plots for such heads, found for each of the studied models. The sparsity of our models allows these heads to be more confident in their representations, by assigning the whole probability distribution to a single token in the sequence. Concretely, we may measure a positional head's confidence as the average attention weight assigned to the previous token. The softmax model has three heads for position $-1$, with median confidence $93.5\%$. The $1.5$-entmax model also has three heads for this position, with median confidence $94.4\%$. The adaptive model has four heads, with median confidence $95.9\%$; the lowest-confidence head is dense, with $\alpha =1.18$, while the highest-confidence head is sparse ($\alpha =1.91$).
For position $+1$, the models each dedicate one head, with confidence around $95\%$, slightly higher for entmax. The adaptive model sets $\alpha =1.96$ for this head.
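The following sketch computes such a confidence score for an arbitrary fixed offset; it assumes a single head's square attention matrix for one sentence, and the function name is ours.

```python
import numpy as np

def positional_confidence(P, offset=-1):
    """Average attention weight a head assigns to the token at a fixed offset."""
    P = np.asarray(P)                      # (m, m) weights: rows are queries
    m = P.shape[0]
    rows = np.arange(max(0, -offset), min(m, m - offset))
    return float(P[rows, rows + offset].mean())
```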
Analysis ::: Identifying Head Specializations ::: BPE-merging head.
Due to the sparsity of our models, we are able to identify other head specializations, easily identifying which heads should be further analysed. In Figure FIGREF51 we show one such head where the $\alpha $ value is particularly high (in the encoder, layer 1, head 4 depicted in Figure FIGREF37). We found that this head most often looks at the current time step with high confidence, making it a positional head with offset 0. However, this head often spreads weight sparsely over 2-3 neighboring tokens, when the tokens are part of the same BPE cluster or hyphenated words. As this head is in the first layer, it provides a useful service to the higher layers by combining information evenly within some BPE clusters.
To quantify the BPE-merging capability of these heads, for each BPE cluster or cluster of hyphenated words we compute a score between 0 and 1 corresponding to the maximum attention mass assigned by any token to the rest of the tokens inside the cluster. No attention head in the softmax model obtains a score over $80\%$, while $1.5$-entmax and $\alpha $-entmax each have two such heads ($83.3\%$ and $85.6\%$ for $1.5$-entmax; $88.5\%$ and $89.8\%$ for $\alpha $-entmax).
Analysis ::: Identifying Head Specializations ::: Interrogation head.
On the other hand, in Figure FIGREF52 we show a head for which our adaptively sparse model chose an $\alpha $ close to 1, making it closer to softmax (also shown in encoder, layer 1, head 3 depicted in Figure FIGREF37). We observe that this head assigns a high probability to question marks at the end of the sentence in time steps where the current token is interrogative, thus making it an interrogation-detecting head. We also observe this type of head in the other models, which we also depict in Figure FIGREF52. The average attention weight placed on the question mark when the current token is an interrogative word is $98.5\%$ for softmax, $97.0\%$ for $1.5$-entmax, and $99.5\%$ for $\alpha $-entmax.
Furthermore, we can examine sentences where some tendentially sparse heads become less so, thus identifying sources of ambiguity where the head is less confident in its prediction. An example is shown in Figure FIGREF55 where sparsity in the same head differs for sentences of similar length.
Related Work ::: Sparse attention.
Prior work has developed sparse attention mechanisms, including applications to NMT BIBREF19, BIBREF12, BIBREF20, BIBREF22, BIBREF34. BIBREF14 introduced the entmax function this work builds upon. In their work, there is a single attention mechanism which is controlled by a fixed $\alpha $. In contrast, this is the first work to allow such attention mappings to dynamically adapt their curvature and sparsity, by automatically adjusting the continuous $\alpha $ parameter. We also provide the first results using sparse attention in a Transformer model.
Related Work ::: Fixed sparsity patterns.
Recent research improves the scalability of Transformer-like networks through static, fixed sparsity patterns BIBREF10, BIBREF35. Our adaptively-sparse Transformer can dynamically select a sparsity pattern that finds relevant words regardless of their position (e.g., Figure FIGREF52). Moreover, the two strategies could be combined. In a concurrent line of research, BIBREF11 propose an adaptive attention span for Transformer language models. While their work has each head learn a different contiguous span of context tokens to attend to, our work finds different sparsity patterns in the same span. Interestingly, some of their findings mirror ours – we found that attention heads in the last layers tend to be denser on average when compared to the ones in the first layers, while their work has found that lower layers tend to have a shorter attention span compared to higher layers.
Related Work ::: Transformer interpretability.
The original Transformer paper BIBREF0 shows attention visualizations, from which some speculation can be made of the roles the several attention heads have. BIBREF7 study the syntactic abilities of the Transformer self-attention, while BIBREF6 extract dependency relations from the attention weights. BIBREF8 find that the self-attentions in BERT BIBREF3 follow a sequence of processes that resembles a classical NLP pipeline. Regarding redundancy of heads, BIBREF9 develop a method that is able to prune heads of the multi-head attention module and make an empirical study of the role that each head has in self-attention (positional, syntactic and rare words). BIBREF36 also aim to reduce head redundancy by adding a regularization term to the loss that maximizes head disagreement and obtain improved results. While not considering Transformer attentions, BIBREF18 show that traditional attention mechanisms do not necessarily improve interpretability since softmax attention is vulnerable to an adversarial attack leading to wildly different model predictions for the same attention weights. Sparse attention may mitigate these issues; however, our work focuses mostly on a more mechanical aspect of interpretation by analyzing head behavior, rather than on explanations for predictions.
Conclusion and Future Work
We contribute a novel strategy for adaptively sparse attention, and, in particular, for adaptively sparse Transformers. We present the first empirical analysis of Transformers with sparse attention mappings (i.e., entmax), showing potential in both translation accuracy as well as in model interpretability.
In particular, we analyzed how the attention heads in the proposed adaptively sparse Transformer can specialize more and with higher confidence. Our adaptivity strategy relies only on gradient-based optimization, side-stepping costly per-head hyper-parameter searches. Further speed-ups are possible by leveraging more parallelism in the bisection algorithm for computing $\alpha $-entmax.
Finally, some of the automatically-learned behaviors of our adaptively sparse Transformers – for instance, the near-deterministic positional heads or the subword joining head – may provide new ideas for designing static variations of the Transformer.
Acknowledgments
This work was supported by the European Research Council (ERC StG DeepSPIN 758969), and by the Fundação para a Ciência e Tecnologia through contracts UID/EEA/50008/2019 and CMUPERI/TIC/0046/2014 (GoLocal). We are grateful to Ben Peters for the $\alpha $-entmax code and Erick Fonseca, Marcos Treviso, Pedro Martins, and Tsvetomila Mihaylova for insightful group discussion. We thank Mathieu Blondel for the idea to learn $\alpha $. We would also like to thank the anonymous reviewers for their helpful feedback.
Supplementary Material
Background ::: Regularized Fenchel-Young prediction functions
Definition 1 (BIBREF23)
Let $\Omega \colon \triangle ^d \rightarrow {\mathbb {R}}\cup \lbrace \infty \rbrace $ be a strictly convex regularization function. We define the prediction function $\mathbf {\pi }_{\Omega }$ as
Background ::: Characterizing the @!START@$\alpha $@!END@-entmax mapping
Lemma 1 (BIBREF14) For any $\mathbf {z}$, there exists a unique $\tau ^\star $ such that
Proof: From the definition of $\mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}$,
we may easily identify it with a regularized prediction function (Def. UNKREF81):
We first note that for all $\mathbf {p}\in \triangle ^d$,
From the constant invariance and scaling properties of $\mathbf {\pi }_{\Omega }$ BIBREF23,
Using BIBREF23, noting that $g^{\prime }(t) = t^{\alpha - 1}$ and $(g^{\prime })^{-1}(u) = u^{\frac{1}{\alpha -1}}$, yields
Since $\mathsf {H}^{\textsc {T}}_\alpha $ is strictly convex on the simplex, $\mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}$ has a unique solution $\mathbf {p}^\star $. Equation DISPLAY_FORM88 implicitly defines a one-to-one mapping between $\mathbf {p}^\star $ and $\tau ^\star $ as long as $\mathbf {p}^\star \in \triangle $, therefore $\tau ^\star $ is also unique.
Background ::: Connections to softmax and sparsemax
The Euclidean projection onto the simplex, sometimes referred to, in the context of neural attention, as sparsemax BIBREF19, is defined as
The solution can be characterized through the unique threshold $\tau $ such that $\sum _i \operatornamewithlimits{\mathsf {sparsemax}}(\mathbf {z})_i = 1$ and BIBREF38
Thus, each coordinate of the sparsemax solution is a piecewise-linear function. Visibly, this expression is recovered when setting $\alpha =2$ in the $\alpha $-entmax expression (Equation DISPLAY_FORM85); for other values of $\alpha $, the exponent induces curvature.
On the other hand, the well-known softmax is usually defined through the expression
which can be shown to be the unique solution of the optimization problem
where $\mathsf {H}^\textsc {S}(\mathbf {p}) = -\sum _i p_i \log p_i$ is the Shannon entropy. Indeed, setting the gradient to 0 yields the condition $\log p_i = z_i - \nu _i - \tau - 1$, where $\tau $ and $\nu > 0$ are Lagrange multipliers for the simplex constraints $\sum _i p_i = 1$ and $p_i \ge 0$, respectively. Since the l.h.s. is only finite for $p_i>0$, we must have $\nu _i=0$ for all $i$, by complementary slackness. Thus, the solution must have the form $p_i = \frac{\exp (z_i)}{Z}$, yielding Equation DISPLAY_FORM92.
Jacobian of @!START@$\alpha $@!END@-entmax w.r.t. the shape parameter @!START@$\alpha $@!END@: Proof of Proposition @!START@UID22@!END@
Recall that the entmax transformation is defined as:
where $\alpha \ge 1$ and $\mathsf {H}^{\textsc {T}}_{\alpha }$ is the Tsallis entropy,
and $\mathsf {H}^\textsc {S}(\mathbf {p}):= -\sum _j p_j \log p_j$ is the Shannon entropy.
In this section, we derive the Jacobian of $\operatornamewithlimits{\mathsf {entmax }}$ with respect to the scalar parameter $\alpha $.
Jacobian of @!START@$\alpha $@!END@-entmax w.r.t. the shape parameter @!START@$\alpha $@!END@: Proof of Proposition @!START@UID22@!END@ ::: General case of @!START@$\alpha >1$@!END@
From the KKT conditions associated with the optimization problem in Eq. DISPLAY_FORM85, we have that the solution $\mathbf {p}^{\star }$ has the following form, coordinate-wise:
where $\tau ^{\star }$ is a scalar Lagrange multiplier that ensures that $\mathbf {p}^{\star }$ normalizes to 1, i.e., it is defined implicitly by the condition:
For general values of $\alpha $, Eq. DISPLAY_FORM98 lacks a closed form solution. This makes the computation of the Jacobian
non-trivial. Fortunately, we can use the technique of implicit differentiation to obtain this Jacobian.
The Jacobian exists almost everywhere, and the expressions we derive yield a generalized Jacobian BIBREF37 at any non-differentiable points that may occur for certain ($\alpha $, $\mathbf {z}$) pairs. We begin by noting that $\frac{\partial p_i^{\star }}{\partial \alpha } = 0$ if $p_i^{\star } = 0$, because increasing $\alpha $ keeps sparse coordinates sparse. Therefore we need to worry only about coordinates that are in the support of $\mathbf {p}^\star $. We will assume hereafter that the $i$th coordinate of $\mathbf {p}^\star $ is non-zero. We have:
We can see that this Jacobian depends on $\frac{\partial \tau ^{\star }}{\partial \alpha }$, which we now compute using implicit differentiation.
Let $\mathcal {S} = \lbrace i: p^\star _i > 0 \rbrace $. By differentiating both sides of Eq. DISPLAY_FORM98, re-using some of the steps in Eq. DISPLAY_FORM101, and recalling Eq. DISPLAY_FORM97, we get
from which we obtain:
Finally, plugging Eq. DISPLAY_FORM103 into Eq. DISPLAY_FORM101, we get:
where we denote by
The distribution $\tilde{\mathbf {p}}(\alpha )$ can be interpreted as a “skewed” distribution obtained from $\mathbf {p}^{\star }$, which appears in the Jacobian of $\mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}(\mathbf {z})$ w.r.t. $\mathbf {z}$ as well BIBREF14.
Jacobian of @!START@$\alpha $@!END@-entmax w.r.t. the shape parameter @!START@$\alpha $@!END@: Proof of Proposition @!START@UID22@!END@ ::: Solving the indetermination for @!START@$\alpha =1$@!END@
We can write Eq. DISPLAY_FORM104 as
When $\alpha \rightarrow 1^+$, we have $\tilde{\mathbf {p}}(\alpha ) \rightarrow \mathbf {p}^{\star }$, which leads to a $\frac{0}{0}$ indetermination.
To solve this indetermination, we will need to apply L'Hôpital's rule twice. Let us first compute the derivative of $\tilde{p}_i(\alpha )$ with respect to $\alpha $. We have
therefore
Differentiating the numerator and denominator in Eq. DISPLAY_FORM107, we get:
with
and
When $\alpha \rightarrow 1^+$, $B$ becomes again a $\frac{0}{0}$ indetermination, which we can solve by applying again L'Hôpital's rule. Differentiating the numerator and denominator in Eq. DISPLAY_FORM112:
Finally, summing Eq. DISPLAY_FORM111 and Eq. DISPLAY_FORM113, we get
Jacobian of @!START@$\alpha $@!END@-entmax w.r.t. the shape parameter @!START@$\alpha $@!END@: Proof of Proposition @!START@UID22@!END@ ::: Summary
To sum up, we have the following expression for the Jacobian of $\mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}$ with respect to $\alpha $:
How does their model improve interpretability compared to softmax transformers?
the attention heads in the proposed adaptively sparse Transformer can specialize more and with higher confidence
Introduction
Speech-to-Text translation (ST) is essential for a wide range of scenarios: for example in emergency calls, where agents have to respond to urgent requests in a foreign language BIBREF0; or in online courses, where audiences and speakers use different languages BIBREF1. To tackle this problem, existing approaches can be categorized into the cascaded method BIBREF2, BIBREF3, where a machine translation (MT) model translates the outputs of an automatic speech recognition (ASR) system into the target language, and the end-to-end method BIBREF4, BIBREF5, where a single model learns the mapping from acoustic frames to the target word sequence in one step towards the final objective of interest. Although the cascaded model remains the dominant approach due to its better performance, the end-to-end method is becoming more and more popular because it has lower latency, avoiding inference with two models, and in theory avoids error propagation.
Since it is hard to obtain a large-scale ST dataset, multi-task learning BIBREF5, BIBREF6 and pre-training techniques BIBREF7 have been applied to end-to-end ST models to leverage large-scale ASR and MT datasets. A common practice is to pre-train two encoder-decoder models for ASR and MT respectively, and then initialize the ST model with the encoder of the ASR model and the decoder of the MT model. Subsequently, the ST model is optimized with multi-task learning by weighing the losses of ASR, MT, and ST. This approach, however, causes a large gap between pre-training and fine-tuning, which we summarize in three respects:
Subnet Waste: The ST system just reuses the ASR encoder and the MT decoder, while discards other pre-trained subnets, such as the MT encoder. Consequently, valuable semantic information captured by the MT encoder cannot be inherited by the final ST system.
Role Mismatch: The speech encoder plays different roles in pre-training and fine-tuning. The encoder is a pure acoustic model in pre-training, while it has to extract semantic and linguistic features additionally in fine-tuning, which significantly increases the learning difficulty.
Non-pre-trained Attention Module: Previous work BIBREF6 trains attention modules for ASR, MT and ST respectively, hence, the attention module of ST does not benefit from the pre-training.
To address these issues, we propose a Tandem Connectionist Encoding Network (TCEN), which is able to reuse all subnets in pre-training, keep the roles of subnets consistent, and pre-train the attention module. Concretely, the TCEN consists of three components: a speech encoder, a text encoder, and a target text decoder. Different from previous work that pre-trains an encoder-decoder based ASR model, we pre-train only an ASR encoder by optimizing the Connectionist Temporal Classification (CTC) BIBREF8 objective function. In this way, no additional ASR decoder is required, while the speech encoder keeps the ability to map acoustic features into the source-language space. Besides, the text encoder and decoder can be pre-trained on a large MT dataset. After that, we employ the commonly used multi-task learning method to jointly learn the ASR, MT, and ST tasks.
Compared to prior works, the encoder of TCEN is a concatenation of an ASR encoder and an MT encoder, and our model does not have an ASR decoder, so the subnet waste issue is solved. Furthermore, the two encoders work in tandem, disentangling acoustic feature extraction from linguistic feature extraction and ensuring role consistency between pre-training and fine-tuning. Moreover, we reuse the pre-trained MT attention module in ST, so we can leverage the alignment information learned in pre-training.
Since the text encoder consumes word embeddings of plausible texts in the MT task but uses speech encoder outputs in the ST task, another question is how to guarantee that the speech encoder outputs are consistent with the word embeddings. We further modify our model to achieve semantic consistency and length consistency. Specifically, (1) the projection matrix at the CTC classification layer for ASR is shared with the word embedding matrix, ensuring that they are mapped to the same latent space, and (2) the length of the speech encoder output is proportional to the number of input frames, so it is much longer than a natural sentence. To bridge the length gap, source sentences in MT are lengthened by adding word repetitions and blank tokens to mimic the CTC output sequences.
We conduct comprehensive experiments on the IWSLT18 speech translation benchmark BIBREF1, demonstrating the effectiveness of each component. Our model is significantly better than previous methods by 3.6 and 2.2 BLEU scores for the subword-level decoding and character-level decoding strategies, respectively.
Our contributions are three-folds: 1) we shed light on why previous ST models cannot sufficiently utilize the knowledge learned from the pre-training process; 2) we propose a new ST model, which alleviates shortcomings in existing methods; and 3) we empirically evaluate the proposed model on a large-scale public dataset.
Background ::: Problem Formulation
End-to-end speech translation aims to translate a piece of audio into a target-language translation in one step. The raw speech signals are usually converted to sequences of acoustic features, e.g. Mel filterbank features. Here, we define the speech feature sequence as $\mathbf {x} = (x_1, \cdots , x_{T_x})$. The transcription and translation sequences are denoted as $\mathbf {y^{s}} = (y_1^{s}, \cdots , y_{T_s}^{s})$ and $\mathbf {y^{t}} = (y_1^{t}, \cdots , y_{T_t}^{t})$, respectively. Each symbol in $\mathbf {y^{s}}$ or $\mathbf {y^{t}}$ is an integer index of the symbol in a vocabulary $V_{src}$ or $V_{trg}$ respectively (e.g. $y^s_i=k, k\in [0, |V_{src}|-1]$). In this work, we suppose that an ASR dataset, an MT dataset, and a ST dataset are available, denoted as $\mathcal {A} = \lbrace (\mathbf {x_i}, \mathbf {y^{s}_i})\rbrace _{i=0}^I$, $\mathcal {M} =\lbrace (\mathbf {y^{s}_j}, \mathbf {y^{t}_j})\rbrace _{j=0}^J$ and $ \mathcal {S} =\lbrace (\mathbf {x_l}, \mathbf {y^{t}_l})\rbrace _{l=0}^L$ respectively. Given a new piece of audio $\mathbf {x}$, our goal is to learn an end-to-end model to generate a translation sentence $\mathbf {y^{t}}$ without generating an intermediate result $\mathbf {y^{s}}$.
Background ::: Multi-Task Learning and Pre-training for ST
To leverage large scale ASR and MT data, multi-task learning and pre-training techniques are widely employed to improve the ST system. As shown in Figure FIGREF4, there are three popular multi-task strategies for ST, including 1) one-to-many setting, in which a speech encoder is shared between ASR and ST tasks; 2) many-to-one setting in which a decoder is shared between MT and ST tasks; and 3) many-to-many setting where both the encoder and decoder are shared.
A many-to-many multi-task model contains two encoders as well as two decoders. It can be jointly trained on ASR, MT, and ST tasks. As the attention module is task-specific, three attentions are defined.
Usually, the size of $\mathcal {A}$ and $\mathcal {M}$ is much larger than $\mathcal {S}$. Therefore, the common training practice is to pre-train the model on ASR and MT tasks and then fine-tune it with a multi-task learning manner. However, as aforementioned, this method suffers from subnet waste, role mismatch and non-pre-trained attention issues, which severely limits the end-to-end ST performance.
Our method
In this section, we first introduce the architecture of TCEN, which consists of two encoders connected in tandem, and one decoder with an attention module. Then we give the pre-training and fine-tuning strategy for TCEN. Finally, we propose our solutions for semantic and length inconsistency problems, which are caused by multi-task learning.
Our method ::: TCEN Architecture
Figure FIGREF5 sketches the overall architecture of TCEN, including a speech encoder $enc_s$, a text encoder $enc_t$ and a decoder $dec$ with an attention module $att$. During training, the $enc_s$ acts like an acoustic model which reads the input $\mathbf {x}$ to word or subword representations $\mathbf {h^s}$, then $enc_t$ learns high-level linguistic knowledge into hidden representations $\mathbf {h^t}$. Finally, the $dec$ defines a distribution probability over target words. The advantage of our architecture is that two encoders disentangle acoustic feature extraction and linguistic feature extraction, making sure that valuable knowledge learned from ASR and MT tasks can be effectively leveraged for ST training. Besides, every module in pre-training can be utilized in fine-tuning, alleviating the subnet waste problem.
Following BIBREF9, we use a CNN-BiLSTM architecture to build our model. Specifically, the input features $\mathbf {x}$ are organized as a sequence of feature vectors of length $T_x$. Then, $\mathbf {x}$ is passed into a stack of two convolutional layers followed by max-pooling:
where $\mathbf {v}^{(l-1)}$ is feature maps in last layer and $\mathbf {W}^{(l)}$ is the filter. The max-pooling layers downsample the sequence in length by a total factor of four. The down-sampled feature sequence is further fed into a stack of five bidirectional $d$-dimensional LSTM layers:
where $[;]$ denotes the vector concatenation. The final output representation from the speech encoder is denoted as $\mathbf {h^s}=(h^s_1, \cdots , h^s_{\frac{T_x}{4}})$, where $h_i^s \in \mathbb {R}^d$.
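A rough PyTorch sketch of this encoder is given below; the filter counts, the final projection that merges the two LSTM directions, and the exact pooling configuration are our assumptions rather than the precise ESPnet settings.

```python
import torch
import torch.nn as nn

class SpeechEncoder(nn.Module):
    """CNN-BiLSTM speech encoder sketch: two conv+pool blocks (4x time
    downsampling) followed by a stack of bidirectional LSTMs."""
    def __init__(self, feat_dim=83, hidden=1024, lstm_layers=5):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),   # halves time and frequency
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),   # total time downsampling: 4x
        )
        self.lstm = nn.LSTM(64 * (feat_dim // 4), hidden, num_layers=lstm_layers,
                            bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, hidden)  # merge forward/backward states

    def forward(self, x):                   # x: (batch, T_x, feat_dim)
        v = self.conv(x.unsqueeze(1))       # (batch, 64, T_x/4, feat_dim/4)
        v = v.permute(0, 2, 1, 3).flatten(2)  # (batch, T_x/4, 64 * feat_dim/4)
        h, _ = self.lstm(v)
        return self.proj(h)                 # h^s: (batch, T_x/4, hidden)
```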
The text encoder $enc_t$ consists of two bidirectional LSTM layers. In the ST task, $enc_t$ accepts the speech encoder output $\mathbf {h}^s$ as input. In MT, $enc_t$ consumes the word embedding representation $\mathbf {e^s}$ derived from $\mathbf {y^s}$, where each element $e^s_i$ is computed by choosing the $y_i^s$-th vector from the source embedding matrix $W_{E^s}$. The goal of $enc_t$ is to extract high-level linguistic features, such as syntactic or semantic features, from the lower-level subword representations $\mathbf {h}^s$ or $\mathbf {e}^s$. Since $\mathbf {h}^s$ and $\mathbf {e}^s$ belong to different latent spaces and have different lengths, there remain semantic and length inconsistency problems. We provide our solutions in Section SECREF21. The output sequence of $enc_t$ is denoted as $\mathbf {h}^t$.
The decoder is defined as two unidirectional LSTM layers with an additive attention $att$. It predicts target sequence $\mathbf {y^{t}}$ by estimating conditional probability $P(\mathbf {y^{t}}|\mathbf {x})$:
Here, $z_k$ is the hidden state of the decoder RNN at step $k$ and $c_k$ is a time-dependent context vector computed by the attention $att$.
Our method ::: Training Procedure
Following previous work, we split the training procedure to pre-training and fine-tuning stages. In pre-training stage, the speech encoder $enc_s$ is trained towards CTC objective using dataset $\mathcal {A}$, while the text encoder $enc_t$ and the decoder $dec$ are trained on MT dataset $\mathcal {M}$. In fine-tuning stage, we jointly train the model on ASR, MT, and ST tasks.
Our method ::: Training Procedure ::: Pre-training
To sufficiently utilize the large dataset $\mathcal {A}$ and $\mathcal {M}$, the model is pre-trained on CTC-based ASR task and MT task in the pre-training stage.
For ASR task, in order to get rid of the requirement for decoder and enable the $enc_s$ to generate subword representation, we leverage connectionist temporal classification (CTC) BIBREF8 loss to train the speech encoder.
Given an input $\mathbf {x}$, $enc_s$ emits a sequence of hidden vectors $\mathbf {h^s}$, then a softmax classification layer predicts a CTC path $\mathbf {\pi }$, where $\pi _t \in V_{src} \cup $ {`-'} is the observing label at particular RNN step $t$, and `-' is the blank token representing no observed labels:
where $W_{ctc} \in \mathbb {R}^{d \times (|V_{src}|+1)}$ is the weight matrix in the classification layer and $T$ is the total length of encoder RNN.
A legal CTC path $\mathbf {\pi }$ is a variation of the source transcription $\mathbf {y}^s$ by allowing occurrences of blank tokens and repetitions, as shown in Table TABREF14. For each transcription $\mathbf {y}$, there exist many legal CTC paths in length $T$. The CTC objective trains the model to maximize the probability of observing the golden sequence $\mathbf {y}^s$, which is calculated by summing the probabilities of all possible legal paths:
where $\Phi _T(y)$ is the set of all legal CTC paths for sequence $\mathbf {y}$ with length $T$. The loss can be easily computed using forward-backward algorithm. More details about CTC are provided in supplementary material.
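For illustration, the CTC objective can be set up with PyTorch's built-in loss, as sketched below; the shapes, vocabulary size, and the use of torch.nn.CTCLoss are illustrative and not necessarily how our ESPnet-based implementation computes the loss.

```python
import torch
import torch.nn as nn

# Sketch of the CTC objective on top of the speech encoder outputs.
# Indices 1..vocab_size are subword labels; index 0 is the blank token '-'.
vocab_size, hidden = 5000, 1024
ctc_proj = nn.Linear(hidden, vocab_size + 1)   # plays the role of W_ctc
ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)

h_s = torch.randn(100, 8, hidden)              # (T, batch, d) encoder outputs
log_probs = ctc_proj(h_s).log_softmax(dim=-1)  # per-step label distributions
targets = torch.randint(1, vocab_size + 1, (8, 20))
input_lengths = torch.full((8,), 100, dtype=torch.long)
target_lengths = torch.full((8,), 20, dtype=torch.long)

# The loss marginalizes over all legal CTC paths via the forward-backward algorithm.
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
```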
For MT task, we use the cross-entropy loss as the training objective. During training, $\mathbf {y^s}$ is converted to embedding vectors $\mathbf {e^s}$ through embedding layer $W_{E^s}$, then $enc_t$ consumes $\mathbf {e^s}$ and pass the output $\mathbf {h^t}$ to decoder. The objective function is defined as:
Our method ::: Training Procedure ::: Fine-tune
In fine-tune stage, we jointly update the model on ASR, MT, and ST tasks. The training for ASR and MT follows the same process as it was in pre-training stage.
For ST task, the $enc_s$ reads the input $\mathbf {x}$ and generates $\mathbf {h^s}$, then $enc_t$ learns high-level linguistic knowledge into $\mathbf {h^t}$. Finally, the $dec$ predicts the target sentence. The ST loss function is defined as:
Following the update strategy proposed by BIBREF11, we allocate a different training ratio $\alpha _i$ to each task. When switching between tasks, we randomly select a new task $i$ with probability $\frac{\alpha _i}{\sum _{j}\alpha _{j}}$.
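Task switching can be sketched as follows; the helper name is ours and the ratio values shown are illustrative (they match the fine-tuning configuration reported later in the experiments).

```python
import random

def sample_task(ratios):
    """Pick the next task with probability proportional to its training ratio."""
    tasks, weights = zip(*ratios.items())
    return random.choices(tasks, weights=weights, k=1)[0]

print(sample_task({"st": 0.6, "asr": 0.2, "mt": 0.2}))
```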
Our method ::: Subnet-Consistency
Our model keeps role consistency between pre-training and fine-tuning by connecting two encoders for ST task. However, this leads to some new problems: 1) The text encoder consumes $\mathbf {e^s}$ during MT training, while it accepts $\mathbf {h^s}$ during ST training. However, $\mathbf {e^s}$ and $\mathbf {h^s}$ may not follow the same distribution, resulting in the semantic inconsistency. 2) Besides, the length of $\mathbf {h^s}$ is not the same order of magnitude with the length of $\mathbf {e^s}$, resulting in the length inconsistency.
In response to the above two challenges, we propose two countermeasures: 1) we share weights between the CTC classification layer and the source-end word embedding layer during ASR and MT training, encouraging $\mathbf {e^s}$ and $\mathbf {h^s}$ to lie in the same space; 2) we feed the text encoder source sentences in the format of CTC paths, which are generated by a seq2seq model, making it more robust to long inputs.
Our method ::: Subnet-Consistency ::: Semantic Consistency
As shown in Figure FIGREF5, during multi-task training, two different hidden features will be fed into the text encoder $enc_t$: the embedding representation $\mathbf {e}^s$ in MT task, and the $enc_s$ output $\mathbf {h^s}$ in ST task. Without any regularization, they may belong to different latent spaces. Due to the space gap, the $enc_t$ has to compromise between two tasks, limiting its performance on individual tasks.
To bridge the space gap, our idea is to pull $\mathbf {h^s}$ into the latent space where $\mathbf {e}^s$ belong. Specifically, we share the weight $W_{ctc}$ in CTC classification layer with the source embedding weights $W_{E^s}$, which means $W_{ctc} = W_{E^s}$. In this way, when predicting the CTC path $\mathbf {\pi }$, the probability of observing the particular label $w_i \in V_{src}\cup ${`-'} at time step $t$, $p(\pi _t=w_i|\mathbf {x})$, is computed by normalizing the product of hidden vector $h_t^s$ and the $i$-th vector in $W_{E^s}$:
The loss function closes the distance between $h^s_t$ and the golden embedding vector, encouraging $\mathbf {h}^s$ to have the same distribution as $\mathbf {e}^s$.
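In code, this weight sharing amounts to tying the two parameter tensors, as in the PyTorch sketch below; the sizes are illustrative and the treatment of the blank row is our assumption.

```python
import torch.nn as nn

d, vocab = 1024, 5000
src_embedding = nn.Embedding(vocab + 1, d)      # W_{E^s}, with a row for the blank
ctc_proj = nn.Linear(d, vocab + 1, bias=False)  # W_ctc

# Tie the matrices so h^s is scored against the same vectors that the
# text encoder consumes as word embeddings (W_ctc = W_{E^s}).
ctc_proj.weight = src_embedding.weight
```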
Our method ::: Subnet-Consistency ::: Length Consistency
Another existing problem is length inconsistency. The length of the sequence $\mathbf {h^s}$ is proportional to the length of the input frame $\mathbf {x}$, which is much longer than the length of $\mathbf {e^s}$. To solve this problem, we train an RNN-based seq2seq model to transform normal source sentences to noisy sentences in CTC path format, and replace standard MT with denoising MT for multi-tasking.
Specifically, we first train a CTC ASR model based on dataset $\mathcal {A} = \lbrace (\mathbf {x}_i, \mathbf {y}^s_i)\rbrace _{i=0}^{I}$, and generate a CTC-path $\mathbf {\pi }_i$ for each audio $\mathbf {x}_i$ by greedy decoding. Then we define an operation $S(\cdot )$, which converts a CTC path $\mathbf {\pi }$ to a sequence of the unique tokens $\mathbf {u}$ and a sequence of repetition times for each token $\mathbf {l}$, denoted as $S(\mathbf {\pi }) = (\mathbf {u}, \mathbf {l})$. Notably, the operation is reversible, meaning that $S^{-1} (\mathbf {u}, \mathbf {l})=\mathbf {\pi }$. We use the example $\mathbf {\pi _1}$ in Table TABREF14 and show the corresponding $\mathbf {u}$ and $\mathbf {l}$ in Table TABREF24.
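The operation $S(\cdot )$ and its inverse are essentially run-length encoding and decoding, as in the following sketch; the token strings in the example are purely illustrative.

```python
from itertools import groupby

def S(pi):
    """Split a CTC path into its unique tokens u and their repetition counts l."""
    groups = [(tok, len(list(g))) for tok, g in groupby(pi)]
    u = [tok for tok, _ in groups]
    l = [count for _, count in groups]
    return u, l

def S_inv(u, l):
    """Reverse operation: expand each token by its count back into a CTC path."""
    return [tok for tok, count in zip(u, l) for _ in range(count)]

path = ['-', 'a', 'a', '-', '-', 'b']
u, l = S(path)              # u = ['-', 'a', '-', 'b'], l = [1, 2, 2, 1]
assert S_inv(u, l) == path  # S is reversible
```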
Then we build a dataset $\mathcal {P} = \lbrace (\mathbf {y^s}_i, \mathbf {u}_i, \mathbf {l}_i)\rbrace _{i=0}^{I}$ by decoding all the audio pieces in $\mathcal {A}$ and transform the resulting path by the operation $S(\cdot )$. After that, we train a seq2seq model, as shown in Figure FIGREF25, which takes $ \mathbf {y^s}_i$ as input and decodes $\mathbf {u}_i, \mathbf {l}_i$ as outputs. With the seq2seq model, a noisy MT dataset $\mathcal {M}^{\prime }=\lbrace (\mathbf {\pi }_l, \mathbf {y^t}_l)\rbrace _{l=0}^{L}$ is obtained by converting every source sentence $\mathbf {y^s}_i \in \mathcal {M}$ to $\mathbf {\pi _i}$, where $\mathbf {\pi }_i = S^{-1}(\mathbf {u}_i, \mathbf {l}_i)$. We did not use the standard seq2seq model which takes $\mathbf {y^s}$ as input and generates $\mathbf {\pi }$ directly, since there are too many blank tokens `-' in $\mathbf {\pi }$ and the model tends to generate a long sequence with only blank tokens. During MT training, we randomly sample text pairs from $\mathcal {M}^{\prime }$ and $\mathcal {M}$ according to a hyper-parameter $k$. After tuning on the validation set, about $30\%$ pairs are sampled from $\mathcal {M}^{\prime }$. In this way, the $enc_t$ is more robust toward the longer inputs given by the $enc_s$.
Experiments
We conduct experiments on the IWSLT18 speech translation task BIBREF1. Since IWSLT participators use different data pre-processing methods, we reproduce several competitive baselines based on the ESPnet BIBREF12 for a fair comparison.
Experiments ::: Dataset ::: Speech translation data:
The organizer provides a speech translation corpus extracting from the TED talk (ST-TED), which consists of raw English wave files, English transcriptions, and aligned German translations. The corpus contains 272 hours of English speech with 171k segments. We split 2k segments from the corpus as dev set and tst2010, tst2013, tst2014, tst2015 are used as test sets.
Speech recognition data: Aside from ST-TED, TED-LIUM2 corpus BIBREF13 is provided as speech recognition data, which contains 207 hours of English speech and 93k transcript sentences.
Text translation data: We use transcription and translation pairs in the ST-TED corpus and WIT3 as in-domain MT data, which contains 130k and 200k sentence pairs respectively. WMT2018 is used as out-of-domain training data which consists of 41M sentence pairs.
Data preprocessing: For speech data, the utterances are segmented into multiple frames with a 25 ms window size and a 10 ms step size. Then we extract 80-channel log-Mel filter bank and 3-dimensional pitch features using Kaldi BIBREF14, resulting in 83-dimensional input features. We normalize them by the mean and the standard deviation on the whole training set. The utterances with more than 3000 frames are discarded. The transcripts in ST-TED are in true-case with punctuation while in TED-LIUM2, transcripts are in lower-case and unpunctuated. Thus, we lowercase all the sentences and remove the punctuation to keep consistent. To increase the amount of training data, we perform speed perturbation on the raw signals with speed factors 0.9 and 1.1. For the text translation data, sentences longer than 80 words or shorter than 10 words are removed. Besides, we discard pairs whose length ratio between source and target sentence is smaller than 0.5 or larger than 2.0. Word tokenization is performed using the Moses scripts and both English and German words are in lower-case.
We use two different sets of vocabulary for our experiments. For the subword experiments, both English and German vocabularies are generated using sentencepiece BIBREF15 with a fixed size of 5k tokens. BIBREF9 show that increasing the vocabulary size is not helpful for the ST task. For the character experiments, both English and German sentences are represented at the character level.
For evaluation, we segment each audio file with the LIUM SpkDiarization tool BIBREF16 and then perform MWER segmentation with the RWTH toolkit BIBREF17. We use lowercase BLEU as the evaluation metric.
Experiments ::: Baseline Models and Implementation
We compare our method with the following baselines.
Vanilla ST baseline: The vanilla ST BIBREF9 has only a speech encoder and a decoder. It is trained from scratch on the ST-TED corpus.
Pre-training baselines: We conduct three pre-training baseline experiments: 1) encoder pre-training, in which the ST encoder is initialized from an ASR model; 2) decoder pre-training, in which the ST decoder is initialized from an MT model; and 3) encoder-decoder pre-training, where both the encoder and the decoder are pre-trained. The ASR model has the same architecture as the vanilla ST model and is trained on the mixture of the ST-TED and TED-LIUM2 corpora. The MT model has a text encoder and a decoder with the same architectures as those in TCEN. It is first trained on the WMT data (out-of-domain) and then fine-tuned on the in-domain data.
Multi-task baselines: We also conduct three multi-task baseline experiments covering the one-to-many, many-to-one, and many-to-many settings. In the first two settings, we train the model with $\alpha _{st}=0.75$ and $\alpha _{asr}=0.25$ or $\alpha _{mt}=0.25$. For the many-to-many setting, we use $\alpha _{st}=0.6$, $\alpha _{asr}=0.2$, and $\alpha _{mt}=0.2$. For the MT task, we use only in-domain data.
Many-to-many+pre-training: We train a many-to-many multi-task model in which the encoders and decoders are initialized from pre-trained ASR and MT models.

Triangle+pre-train: BIBREF18 DBLP:conf/naacl/AnastasopoulosC18 proposed a triangle multi-task strategy for speech translation. Their model addresses the subnet-waste issue by concatenating an ST decoder to an ASR encoder-decoder model. Notably, their ST decoder can consume representations from the speech encoder as well as from the ASR decoder. For a fair comparison, the speech encoder and the ASR decoder are initialized from the pre-trained ASR model, and the triangle model is then fine-tuned following their multi-task scheme.
All our baselines as well as TCEN are implemented based on ESPnet BIBREF12, and the RNN size is set to $d=1024$ for all models. We use a dropout of 0.3 for embeddings and encoders, and train using Adadelta with an initial learning rate of 1.0 for a maximum of 10 epochs.
For TCEN training, we set $\alpha _{asr}=0.2$ and $\alpha _{mt}=0.8$ in the pre-training stage, since the MT dataset is much larger than the ASR dataset. For fine-tuning, we use $\alpha _{st}=0.6$, $\alpha _{asr}=0.2$, and $\alpha _{mt}=0.2$, the same as in the `many-to-many' baseline.
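The weighted multi-task objective implied by these $\alpha $ values can be sketched as a simple weighted sum of the per-task losses (an illustration, not the toolkit's exact code):

def multitask_loss(loss_st, loss_asr, loss_mt,
                   alpha_st=0.6, alpha_asr=0.2, alpha_mt=0.2):
    # Weighted combination of the ST, ASR and MT losses; setting a weight
    # to zero recovers the simpler one-to-many / many-to-one settings.
    return alpha_st * loss_st + alpha_asr * loss_asr + alpha_mt * loss_mt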
For testing, we select the model with the best accuracy on the speech translation task on the dev set. At inference time, we use a beam size of 10, and the beam scores include length normalization with a weight of 0.2.
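One common way to implement such length normalization is to add a bonus proportional to the hypothesis length to the accumulated log-probability; the sketch below is illustrative and not necessarily ESPnet's exact formulation:

def beam_score(log_prob, hyp_len, length_weight=0.2):
    # Reward longer hypotheses so that beam search does not overly
    # favour short outputs; other toolkits divide by len**alpha instead.
    return log_prob + length_weight * hyp_len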
Experiments ::: Experimental Results
Table TABREF29 shows the results on the four test sets as well as the average performance. Our method significantly outperforms the strong `many-to-many+pretrain' baseline by 3.6 and 2.2 BLEU points, respectively, indicating that the proposed method is very effective and substantially improves translation quality. Both pre-training and multi-task learning improve translation quality, and the pre-training settings (2nd-4th rows) are more effective than the multi-task settings (5th-8th rows). We observe a performance degradation in the `triangle+pretrain' baseline: whereas our decoder receives higher-level syntactic and semantic linguistic knowledge extracted from the text encoder, their ASR decoder can only provide lower, word-level linguistic information. Moreover, since their model lacks a text encoder and the architecture of the ST decoder differs from that of the MT decoder, their model cannot utilize the large-scale MT data in all training stages. Interestingly, we find that the char-level models outperform the subword-level models in all settings, especially for the vanilla baseline. A similar phenomenon is observed by BIBREF6 berard2018end. A possible explanation is that learning the alignments between speech frames and subword units in another language is notoriously difficult. Our method brings larger gains in the subword setting, since our model is good at learning the text-to-text alignment, and the subword-level alignment is more helpful to translation quality.
Experiments ::: Discussion ::: Ablation Study
To better understand the contribution of each component, we perform an ablation study on the subword-level experiments; the results are shown in Table TABREF37. In the `-MT noise' setting, we do not add noise to the source sentences for MT. In the `-weight sharing' setting, we use different parameters in the CTC classification layer and the source embedding layer. These two experiments show that both weight sharing and using noisy MT input benefit the final translation quality. Performance degrades more in `-weight sharing', indicating that semantic consistency contributes more to our model. In the `-pretrain' experiment, we remove the pre-training stage and directly update the model on the three tasks, leading to a dramatic decrease in BLEU score and indicating that pre-training is an indispensable step for end-to-end ST.
Experiments ::: Discussion ::: Learning Curve
It is interesting to investigate why our method is superior to the baselines. We find that TCEN achieves a higher final result owing to a better starting point for fine-tuning. Figure FIGREF39 provides learning curves of subword accuracy on the validation set, where the x-axis denotes the fine-tuning training steps. The vanilla model starts at a low accuracy, because its networks are not pre-trained on the ASR and MT data. The trends of our model and `many-to-many+pretrain' are similar, but our model outperforms it by about five points throughout fine-tuning. This indicates that the gain comes from bridging the gap between pre-training and fine-tuning rather than from a better fine-tuning process.
Experiments ::: Discussion ::: Compared with a Cascaded System
Table TABREF29 compares our model with the end-to-end baselines; here, we compare it with cascaded systems. We build a cascaded system by combining the ASR model and the MT model used in the pre-training baseline. The word error rate (WER) of the ASR system and the BLEU score of the MT system are reported in the supplementary material. In addition to a simple combination of the ASR and MT systems, we also re-segment the ASR outputs before feeding them to the MT system, denoted as cascaded+re-seg. Specifically, we train a seq2seq model BIBREF19 on the MT dataset, where the source side is an unpunctuated sentence and the target side is the natural sentence. We then use this seq2seq model to add sentence boundaries and punctuation to the ASR outputs. Experimental results are shown in Table TABREF41. Our end-to-end model outperforms the simple cascaded model by over 2 BLEU points, and it achieves performance comparable to the cascaded model combined with a sentence re-segmentation model.
Related Work
Early works conduct speech translation in a pipeline manner BIBREF2, BIBREF20, where the ASR output lattices are fed into an MT system to generate target sentences. HMM BIBREF21, DenseNet BIBREF22, and TDNN BIBREF23 are commonly used ASR systems, while RNNs with attention BIBREF19 and the Transformer BIBREF10 are top choices for MT. To enhance the robustness of the NMT model towards ASR errors, BIBREF24 DBLP:conf/eacl/TsvetkovMD14 and BIBREF25 DBLP:conf/asru/ChenHHL17 propose to simulate the noise during training and inference.
To avoid error propagation and high-latency issues, recent works propose translating the acoustic speech into text in the target language without producing the source transcription BIBREF4. Since ST data is scarce, pre-training BIBREF7, multi-task learning BIBREF4, BIBREF6, curriculum learning BIBREF26, attention-passing BIBREF27, and knowledge distillation BIBREF28, BIBREF29 strategies have been explored to utilize ASR and MT data. Specifically, BIBREF5 DBLP:conf/interspeech/WeissCJWC17 show performance improvements by training the ST model jointly with the ASR and MT models. BIBREF6 berard2018end observe faster convergence and better results due to pre-training and multi-task learning on a larger dataset. BIBREF7 DBLP:conf/naacl/BansalKLLG19 show that pre-training a speech encoder on one language can improve ST quality on a different source language. All of them follow the traditional multi-task training strategies. BIBREF26 DBLP:journals/corr/abs-1802-06003 propose to use curriculum learning to improve ST performance on syntactically distant language pairs. To effectively leverage transcriptions in ST data, BIBREF18 DBLP:conf/naacl/AnastasopoulosC18 augment the multi-task model so that the target decoder receives information from the source decoder, and they show improvements on low-resource speech translation. Their model consumes only ASR and ST data; in contrast, our work fully utilizes the large-scale MT data to capture rich semantic knowledge. BIBREF30 DBLP:conf/icassp/JiaJMWCCALW19 use pre-trained MT and text-to-speech (TTS) synthesis models to convert weakly supervised data into ST pairs and demonstrate that an end-to-end ST model can be trained using only synthesised data.
Conclusion
This paper has investigated end-to-end methods for ST and discussed why there is a large gap between pre-training and fine-tuning in previous methods. To alleviate these issues, we have proposed a method that is capable of reusing every sub-net and keeping the role of each sub-net consistent between pre-training and fine-tuning. Empirical studies have demonstrated that our model significantly outperforms the baselines.
|
What is the attention module pretrained on?
|
The model is pre-trained on the CTC-based ASR task and the MT task in the pre-training stage.
| 4,656
|
qasper
|
8k
|
Introduction
Automatically answering questions, especially in the open-domain setting (i.e., where minimal or no contextual knowledge is explicitly provided), requires bringing to bear a considerable amount of background knowledge and reasoning abilities. For example, knowing the answers to the two questions in Figure FIGREF1 requires identifying a specific ISA relation (i.e., that cooking is a type of learned behavior) as well as recalling the definition of a concept (i.e., that global warming is defined as a worldwide increase in temperature). In the multiple-choice setting, which is the variety of question-answering (QA) that we focus on in this paper, there is also pragmatic reasoning involved in selecting optimal answer choices (e.g., while greenhouse effect might in some other context be a reasonable answer to the second question in Figure FIGREF1, global warming is a preferable candidate).
Recent successes in QA, driven largely by the creation of new resources BIBREF2, BIBREF3, BIBREF4, BIBREF5 and advances in model pre-training BIBREF6, BIBREF7, raise a natural question: do state-of-the-art multiple-choice QA (MCQA) models that excel at standard tasks really have basic knowledge and reasoning skills?
Most existing MCQA datasets are constructed through either expensive crowd-sourcing BIBREF8 or hand-engineering effort; in the former case, it is possible to collect large amounts of data at the cost of losing systematic control over the semantics of the target questions. Hence, doing a controlled experiment to answer such a question for QA is difficult given the lack of targeted challenge datasets.
Having definitive empirical evidence of model competence on any given phenomenon requires constructing a wide range of systematic tests. For example, in measuring competence of definitions, not only do we want to see that the model can handle individual questions such as Figure FIGREF1.1 inside of benchmark tasks, but that it can answer a wider range of questions that exhaustively cover a broad set of concepts and question perturbations (i.e., systematic adjustments to how the questions are constructed). The same applies to ISA reasoning; not only is it important to recognize in the question in Figure FIGREF1.1 that cooking is a learned behavior, but also that cooking is a general type of behavior or, through a few more inferential steps, a type of human activity.
In this paper, we look at systematically constructing such tests by exploiting the vast amounts of structured information contained in various types of expert knowledge such as knowledge graphs and lexical taxonomies. Our general methodology works as illustrated in Figure FIGREF1: given any MCQA model trained on a set of benchmark tasks, we systematically generate a set of synthetic dataset probes (i.e., MCQA renderings of the target information) from information in expert knowledge sources. We then use these probes to ask two empirical questions: 1) how well do models trained on benchmark tasks perform on these probing tasks and; 2) can such models be re-trained to master new challenges with minimal performance loss on their original tasks?
While our methodology is amenable to any knowledge source and set of models/benchmark tasks, we focus on probing state-of-the-art transformer models BIBREF7, BIBREF9 in the domain of science MCQA. For sources of expert knowledge, we use WordNet, a comprehensive lexical ontology, and other publicly available dictionary resources. We devise probes that measure model competence in definition and taxonomic knowledge in different settings (including hypernymy, hyponymy, and synonymy detection, and word sense disambiguation). This choice is motivated by the fact that the science domain is considered particularly challenging for QA BIBREF10, BIBREF11, BIBREF12, and existing science benchmarks are known to involve widespread use of such knowledge (see BIBREF1, BIBREF13 for analysis), which is also arguably fundamental to more complex forms of reasoning.
We show that accurately probing QA models via synthetic datasets is not straightforward, as unexpected artifacts can easily arise in such data. This motivates our carefully constructed baselines and close data inspection to ensure probe quality.
Our results confirm that transformer-based QA models have a remarkable ability to recognize certain types of knowledge captured in our probes—even without additional fine-tuning. Such models can even outperform strong task-specific models trained directly on our probing tasks (e.g., on definitions, our best model achieves 77% test accuracy without specialized training, as opposed to 51% for a task-specific LSTM-based model). We also show that the same models can be effectively re-fine-tuned on small samples (even 100 examples) of probe data, and that high performance on the probes tends to correlate with a smaller drop in the model's performance on the original QA task.
Our comprehensive assessment reveals several interesting nuances to the overall positive trend. For example, the performance of even the best QA models degrades substantially on our hyponym probes (by 8-15%) when going from 1-hop to 2-hop links. Further, the accuracy of even our best models on the WordNetQA probe drops by 14-44% under our cluster-based analysis, which assesses whether a model knows several facts about each individual concept, rather than just being good at answering isolated questions. State-of-the-art QA models thus have much room to improve even in some fundamental building blocks, namely definitions and taxonomic hierarchies, of more complex forms of reasoning.
Related Work
We follow recent work on constructing challenge datasets for probing neural models, which has primarily focused on the task of natural language inference (NLI) BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18. Most of this work looks at constructing data through adversarial generation methods, which have also been found useful for creating stronger models BIBREF19. There has also been work on using synthetic data of the type we consider in this paper BIBREF20, BIBREF21, BIBREF22. We closely follow the methodology of BIBREF22, who use hand-constructed linguistic fragments to probe NLI models and study model re-training using a variant of the inoculation by fine-tuning strategy of BIBREF23. In contrast, we focus on probing open-domain MCQA models (see BIBREF24 for a related study in the reading comprehension setting) as well as constructing data from much larger sources of structured knowledge.
Our main study focuses on probing the BERT model and fine-tuning approach of BIBREF7, and other variants thereof, which are all based on the transformer architecture of BIBREF25. Related to our efforts, there have been recent studies into the types of relational knowledge contained in large-scale knowledge models BIBREF26, BIBREF27, BIBREF28, which, similar to our work, probe models using structured knowledge sources. This prior work, however, primarily focuses on unearthing the knowledge contained in the underlying language models as is without further training, using simple (single token) cloze-style probing tasks and templates (similar to what we propose in Section SECREF3). In contrast, we focus on understanding the knowledge contained in language models after they have been trained for a QA end-task using benchmark datasets in which such knowledge is expected to be widespread. Further, our evaluation is done before and after these models are fine-tuned on our probe QA tasks, using a more complex set of QA templates and target inferences.
The use of lexical resources and knowledge graphs such as WordNet to construct datasets has a long history, and has recently appeared in work on adversarial attacks BIBREF14, BIBREF29 and general task construction BIBREF30, BIBREF31. In the area of MCQA, there is related work on constructing questions from tuples BIBREF32, BIBREF3, both of which involve standard crowd annotation to elicit question-answer pairs (see also BIBREF33, BIBREF34). In contrast to this work, we focus on generating data in an entirely automatic fashion, which obviates the need for expensive annotation and gives us the flexibility to construct much larger datasets that control a rich set of semantic aspects of the target questions.
Dataset Probes and Construction
Our probing methodology starts by constructing challenge datasets (Figure FIGREF1, yellow box) from a target set of knowledge resources. Each of our probing datasets consists of multiple-choice questions that include a question $\textbf {q}$ and a set of answer choices or candidates $\lbrace a_{1},...a_{N}\rbrace $. This section describes in detail the 5 different datasets we build, which are drawn from two sources of expert knowledge, namely WordNet BIBREF35 and the GNU Collaborative International Dictionary of English (GCIDE). We describe each resource in turn, and explain how the resulting dataset probes, which we call WordNetQA and DictionaryQA, are constructed.
For convenience, we will describe each source of expert knowledge as a directed, edge-labeled graph $G$. The nodes of this graph are $\mathcal {V} = \mathcal {C} \cup \mathcal {W} \cup \mathcal {S} \cup \mathcal {D}$, where $\mathcal {C}$ is a set of atomic concepts, $\mathcal {W}$ a set of words, $\mathcal {S}$ a set of sentences, and $\mathcal {D}$ a set of definitions (see Table TABREF4 for details for WordNet and GCIDE). Each edge of $G$ is directed from an atomic concept in $\mathcal {C}$ to another node in $V$, and is labeled with a relation, such as hypernym or isa$^\uparrow $, from a set of relations $\mathcal {R}$ (see Table TABREF4).
When defining our probe question templates, it will be useful to view $G$ as a set of (relation, source, target) triples $\mathcal {T} \subseteq \mathcal {R} \times \mathcal {C} \times \mathcal {V}$. Due to their origin in an expert knowledge source, such triples preserve semantic consistency. For instance, when the relation in a triple is def, the corresponding edge maps a concept in $\mathcal {C}$ to a definition in $\mathcal {D}$.
To construct probe datasets, we rely on two heuristic functions, defined below for each individual probe: $\textsc {gen}_{\mathcal {Q}}(\tau )$, which generates gold question-answer pairs $(\textbf {q},\textbf {a})$ from a set of triples $\tau \subseteq \mathcal {T}$ and question templates $\mathcal {Q}$, and $\textsc {distr}(\tau ^{\prime })$, which generates distractor answer choices $\lbrace a^{\prime }_{1},...a^{\prime }_{N-1} \rbrace $ based on another set of triples $\tau ^{\prime }$ (where usually $\tau \subset \tau ^{\prime }$). For brevity, we will use $\textsc {gen}(\tau )$ to denote $\textsc {gen}_{\mathcal {Q}}(\tau )$, leaving question templates $\mathcal {Q}$ implicit.
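A simplified, hypothetical rendering of $\textsc {gen}(\tau )$ is sketched below: each (relation, source, target) triple selects a question template for its relation, the source concept fills the template, and the target node serves as the gold answer.

def gen(triples, templates):
    # triples: iterable of (relation, source_concept, target_node)
    # templates: dict mapping a relation to a question template string
    for rel, src, tgt in triples:
        question = templates[rel].format(concept=src)
        gold_answer = tgt            # e.g. a WordNet gloss for the def relation
        yield question, gold_answer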
Dataset Probes and Construction ::: WordNetQA
WordNet is an English lexical database consisting of around 117k concepts, which are organized into groups of synsets that each contain a gloss (i.e., a definition of the target concept), a set of representative English words (called lemmas), and, in around 33k synsets, example sentences. In addition, many synsets have ISA links to other synsets that express complex taxonomic relations. Figure FIGREF6 shows an example and Table TABREF4 summarizes how we formulate WordNet as a set of triples $\mathcal {T}$ of various types. These triples together represent a directed, edge-labeled graph $G$. Our main motivation for using WordNet, as opposed to a resource such as ConceptNet BIBREF36, is the availability of glosses ($\mathcal {D}$) and example sentences ($\mathcal {S}$), which allows us to construct natural language questions that contextualize the types of concepts we want to probe.
Dataset Probes and Construction ::: WordNetQA ::: Example Generation @!START@$\textsc {gen}(\tau )$@!END@.
We build 4 individual datasets based on semantic relations native to WordNet (see BIBREF37): hypernymy (i.e., generalization or ISA reasoning up a taxonomy, ISA$^\uparrow $), hyponymy (ISA$^{\downarrow }$), synonymy, and definitions. To generate a set of questions in each case, we employ a number of rule templates $\mathcal {Q}$ that operate over tuples. A subset of such templates is shown in Table TABREF8. The templates were designed to mimic naturalistic questions we observed in our science benchmarks.
For example, suppose we wish to create a question $\textbf {q}$ about the definition of a target concept $c \in \mathcal {C}$. We first select a question template from $\mathcal {Q}$ that first introduces the concept $c$ and its lemma $l \in \mathcal {W}$ in context using the example sentence $s \in \mathcal {S}$, and then asks to identify the corresponding WordNet gloss $d \in \mathcal {D}$, which serves as the gold answer $\textbf {a}$. The same is done for ISA reasoning; each question about a hypernym/hyponym relation between two concepts $c \rightarrow ^{\uparrow /\downarrow } c^{\prime } \in \mathcal {T}_{i}$ (e.g., $\texttt {dog} \rightarrow ^{\uparrow /\downarrow } \texttt {animal/terrier}$) first introduces a context for $c$ and then asks for an answer that identifies $c^{\prime }$ (which is also provided with a gloss so as to contain all available context).
In the latter case, the rules $(\texttt {isa}^{r},c,c^{\prime }) \in \mathcal {T}_i$ in Table TABREF8 cover only direct ISA links from $c$ in direction $r \in \lbrace \uparrow ,\downarrow \rbrace $. In practice, for each $c$ and direction $r$, we construct tests that cover the set HOPS$(c,r)$ of all direct as well as derived ISA relations of $c$:
This allows us to evaluate the extent to which models are able to handle complex forms of reasoning that require several inferential steps or hops.
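The set $\textsc {hops}(c,r)$ can be computed by a breadth-first walk over the ISA links; the sketch below uses NLTK's WordNet interface, which is our own assumption rather than the authors' stated tooling.

from nltk.corpus import wordnet as wn

def hops(synset, direction="up", max_hops=None):
    # Collect all synsets reachable from `synset` via hypernym (up) or
    # hyponym (down) edges, optionally limited to `max_hops` levels.
    step = (lambda s: s.hypernyms()) if direction == "up" else (lambda s: s.hyponyms())
    frontier, found, depth = [synset], set(), 0
    while frontier and (max_hops is None or depth < max_hops):
        frontier = [t for s in frontier for t in step(s) if t not in found]
        found.update(frontier)
        depth += 1
    return found

# e.g. hops(wn.synset('dog.n.01'), 'up') includes canine.n.02, animal.n.01, ...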
Dataset Probes and Construction ::: WordNetQA ::: Distractor Generation: @!START@$\textsc {distr}(\tau ^{\prime })$@!END@.
An example of how distractors are generated is shown in Figure FIGREF6, which relies on similar principles as above. For each concept $c$, we choose 4 distractor answers that are close in the WordNet semantic space. For example, when constructing hypernymy tests for $c$ from the set hops$(c,\uparrow )$, we build distractors by drawing from $\textsc {hops}(c,\downarrow )$ (and vice versa), as well as from the $\ell $-deep sister family of $c$, defined as follows. The 1-deep sister family is simply $c$'s siblings or sisters, i.e., the other children $\tilde{c} \ne c$ of the parent node $c^{\prime }$ of $c$. For $\ell > 1$, the $\ell $-deep sister family also includes all descendants of each $\tilde{c}$ up to $\ell -1$ levels deep, denoted $\textsc {hops}_{\ell -1}(\tilde{c},\downarrow )$. Formally:
For definitions and synonyms we build distractors from all of these sets (with a similar restriction on the depth of sister distractors as noted above). In doing this, we can systematically investigate model performance on a wide range of distractor sets.
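Reusing the hops helper sketched above, the $\ell $-deep sister family could be gathered as follows (again a hedged illustration rather than the authors' code):

def sister_family(synset, depth):
    # Siblings of `synset` plus each sibling's descendants down to depth-1 levels.
    sisters = set()
    for parent in synset.hypernyms():
        for sibling in parent.hyponyms():
            if sibling != synset:
                sisters.add(sibling)
                if depth > 1:
                    sisters.update(hops(sibling, "down", max_hops=depth - 1))
    return sisters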
Dataset Probes and Construction ::: WordNetQA ::: Perturbations and Semantic Clusters
Based on how we generate data, for each concept $c$ (i.e., atomic WordNet synset) and probe type (i.e., definitions, hypernymy, etc.), we have a wide variety of questions related to $c$ that manipulate 1) the complexity of reasoning that is involved (e.g., the number of inferential hops) and; 2) the types of distractors (or distractor perturbations) that are employed. We call such sets semantic clusters. As we describe in the next section, semantic clusters allow us to devise new types of evaluation that reveal whether models have comprehensive and consistent knowledge of target concepts (e.g., evaluating whether a model can correctly answer several questions associated with a concept, as opposed to a few disjoint instances).
Details of the individual datasets are shown in Table TABREF12. From these sets, we follow BIBREF22 in allocating a maximum of 3k examples for training and reserve the rest for development and testing. Since we are interested in probing, having large held-out sets allows us to do detailed analysis and cluster-based evaluation.
Dataset Probes and Construction ::: DictionaryQA
The DictionaryQA dataset is created from the GCIDE dictionary, which is a comprehensive open-source English dictionary built largely from the Webster's Revised Unabridged Dictionary BIBREF38. Each entry consists of a word, its part-of-speech, its definition, and an optional example sentence (see Table TABREF14). Overall, 33k entries (out of a total of 155k) contain example sentences/usages. As with the WordNet probes, we focus on this subset so as to contextualize each word being probed. In contrast to WordNet, GCIDE does not have ISA relations or explicit synsets, so we take each unique entry to be a distinct sense. We then use the dictionary entries to create a probe that centers around word-sense disambiguation, as described below.
Dataset Probes and Construction ::: DictionaryQA ::: Example and Distractor Generation.
To generate gold questions and answers, we use the same generation templates for definitions exemplified in Table TABREF8 for WordNetQA. To generate distractors, we simply take alternative definitions for the target words that represent a different word sense (e.g., the alternative definitions of gift shown in Table TABREF14), as well as randomly chosen definitions if needed to create a 5-way multiple choice question. As above, we reserve a maximum of 3k examples for training. Since we have only 9k examples in total in this dataset (see WordSense in Table TABREF12), we also reserve 3k each for development and testing.
We note that initial attempts to build this dataset through standard random splitting gave rise to certain systematic biases that were exploited by the choice-only baseline models described in the next section, and hence inflated overall model scores. After several efforts at filtering we found that, among other factors, using definitions from entries without example sentences as distractors (e.g., the first two entries in Table TABREF14) had a surprising correlation with such biases. This suggests that possible biases involving differences between dictionary entries with and without examples can taint the resulting automatically generated MCQA dataset (for more discussion on the pitfalls involved with automatic dataset construction, see Section SECREF5).
Probing Methodology and Modeling
Given the probes above, we now can start to answer the empirical questions posed at the beginning. Our main focus is on looking at transformer-based MCQA models trained in the science domain (using the benchmarks shown in Table TABREF21). In this section, we provide details of MCQA and the target models, as well as several baselines that we use to sanity check our new datasets. To evaluate model competence, we look at a combination of model performance after science pre-training and after additional model fine-tuning using the lossless inoculation strategy of BIBREF22 (Section SECREF22). In Section SECREF24, we also discuss a cluster-level accuracy metric for measuring performance over semantic clusters.
Probing Methodology and Modeling ::: Task Definition and Modeling
Given a dataset $D =\lbrace (\textbf {q}^{(d)}, \lbrace a_{1}^{(d)},..., a_{N}^{(d)}\rbrace ) \rbrace _{d}^{\mid D \mid }$ consisting of pairs of question stems $\textbf {q}$ and answer choices $a_{i}$, the goal is to find the answer $a_{i^{*}}$ that correctly answers each $\textbf {q}$. Throughout this paper, we look at 5-way multiple-choice problems (i.e., where $N=5$).
Probing Methodology and Modeling ::: Task Definition and Modeling ::: Question+Answer Encoder.
To model this, our investigation centers around the use of the transformer-based BIBREF25 BERT encoder and fine-tuning approach of BIBREF7 (see also BIBREF6). For each question and individual answer pair $q^{(j)}_{a_{i}}$, we assume the following rendering of this input:
which is run through the pre-trained BERT encoder to generate a representation for $ q^{(j)}_{a_{i}}$ using the hidden state representation for CLS (i.e., the classifier token) $\textbf {c}_{i}$:
The probability of a given answer $p^{(j)}_{i}$ is then computed as $p^{(j)}_{i} \propto e^{\textbf {v}\cdot \textbf {c}^{(j)}_{i}}$, which uses an additional set of classification parameters $\textbf {v} \in \mathbb {R}^{H}$ that are optimized (along with the full transformer network) by taking the final loss of the probability of each correct answer $p_{i^{*}}$ over all answer choices:
We specifically use BERT-large uncased with whole-word masking, as well as the RoBERTa-large model from BIBREF9, which is a more robustly trained version of the original BERT model. Our system uses the implementations provided in AllenNLP BIBREF39 and Huggingface BIBREF40.
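A condensed sketch of this scoring scheme with the Huggingface transformers API is shown below; the input rendering and the classification vector $\textbf {v}$ follow the description above, but the code is an illustration, not the authors' released implementation.

import torch
from transformers import BertModel, BertTokenizer

name = "bert-large-uncased-whole-word-masking"
tokenizer = BertTokenizer.from_pretrained(name)
encoder = BertModel.from_pretrained(name)
v = torch.nn.Linear(encoder.config.hidden_size, 1)  # classification parameters

def choice_probabilities(question, choices):
    logits = []
    for answer in choices:
        inputs = tokenizer(question, answer, return_tensors="pt")
        cls_state = encoder(**inputs).last_hidden_state[:, 0]  # [CLS] representation
        logits.append(v(cls_state))
    return torch.softmax(torch.cat(logits, dim=-1), dim=-1)    # p_i over the N choices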
Probing Methodology and Modeling ::: Task Definition and Modeling ::: Baselines and Sanity Checks.
When creating synthetic datasets, it is important to ensure that systematic biases, or annotation artifacts BIBREF41, are not introduced into the resulting probes and that the target datasets are sufficiently challenging (or good, in the sense of BIBREF42). To test for this, we use several of the MCQA baseline models first introduced in BIBREF0, which take inspiration from the LSTM-based models used in BIBREF43 for NLI and various partial-input baselines based on these models.
Following the notation from BIBREF0, for any given sequence $s$ of tokens in $\lbrace q^{(j)}, a_{1}^{(j)},...,a_{N}^{(j)}\rbrace $ in $D$, an encoding of $s$ is given as $h_{s}^{(j)} = \textbf {BiLSTM}(\textsc {embed}(s)) \in \mathbb {R}^{|s| \times 2h}$ (where $h$ is the dimension of the hidden state in each directional network, and embed$(\cdot )$ is an embedding function that assigns token-level embeddings to each token in $s$). A contextual representation for each $s$ is then built by applying an element-wise max operation over $h_{s}$ as follows:
With these contextual representations, different baseline models can be constructed. For example, a Choice-Only model, which is a variant of the well-known hypothesis-only baseline used in NLI BIBREF46, scores each choice $c_{i}$ in the following way:
for $\textbf {W}^{T} \in \mathbb {R}^{2h}$ independently of the question and assigns a probability to each answer $p_{i}^{(j)} \propto e^{\alpha _{i}^{(j)}}$.
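A minimal PyTorch sketch of the Choice-Only baseline is given below (hyper-parameters are illustrative): the answer choice is embedded, encoded with a BiLSTM, max-pooled over time, and scored independently of the question.

import torch
import torch.nn as nn

class ChoiceOnly(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.scorer = nn.Linear(2 * hidden, 1)

    def forward(self, choice_tokens):            # (batch, length) token ids
        states, _ = self.encoder(self.embed(choice_tokens))
        pooled, _ = states.max(dim=1)             # element-wise max over time
        return self.scorer(pooled).squeeze(-1)    # one score per answer choice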
A slight variant of this model, the Choice-to-choice model, tries to single out a given answer choice relative to other choices by scoring all choice pairs $\alpha _{i,i^{\prime }}^{(j)} = \textsc {Att}(r^{(j)}_{c_{i}},r^{(j)}_{c_{i^{\prime }}}) \in \mathbb {R}$ using a learned attention mechanism Att and finding the choice with the minimal similarity to other options (for full details, see their original paper). In using these partial-input baselines, which we train directly on each target probe, we can check whether systematic biases related to answer choices were introduced into the data creation process.
A Question-to-choice model, in contrast, uses the contextual representations for each question and individual choice and an attention model Att to get a score $\alpha ^{(j)}_{q,i} = \textsc {Att}(r^{(j)}_{q},r^{(j)}_{c_{i}}) \in \mathbb {R}$ as above. Here we also experiment with using ESIM BIBREF47 to generate the contextual representations $r$, as well as a simpler VecSimilarity model that measures the average vector similarity between question and answer tokens: $\alpha ^{(j)}_{q,i} = \textsc {Sim}(\textsc {embed}(q^{(j)}),\textsc {embed}(c^{(j)}_{i}))$. In contrast to the models above, these sets of baselines are used to check for artifacts between questions and answers that are not captured in the partial-input baselines (see discussion in BIBREF49) and ensure that the overall MCQA tasks are sufficiently difficult for our transformer models.
Probing Methodology and Modeling ::: Inoculation and Pre-training
Using the various models introduced above, we train these models on benchmark tasks in the science domain and look at model performance on our probes with and without additional training on samples of probe data, building on the idea of inoculation from BIBREF23. Model inoculation is the idea of continuing to train models on new challenge tasks (in our cases, separately for each probe) using only a small amount of examples. Unlike in ordinary fine-tuning, the goal is not to learn an entirely re-purposed model, but to improve on (or vaccinate against) particular phenomena (e.g., our synthetic probes) that potentially deviate from a model's original training distribution (but that nonetheless might involve knowledge already contained in the model).
In the variant proposed in BIBREF22, for each pre-trained (science) model and architecture $M_{a}$ we continue training the model on $k$ new probe examples (with a maximum of $k=$ 3k) under a set of different hyper-parameter configurations $j \in \lbrace 1, ..., J\rbrace $ and identify, for each $k$, the model $M_{*}^{a,k}$ with the best aggregate performance $S$ on the original (orig) and new task:
As in BIBREF22, we found all models to be especially sensitive to different learning rates, and performed comprehensive hyper-parameter searches that also manipulate the number of iterations and random seeds used.
Using this methodology, we can see how much exposure to new data it takes for a given model to master a new task, and whether there are phenomena that stress particular models (e.g., lead to catastrophic forgetting of the original task). Given the restrictions on the number of fine-tuning examples, our assumption is that when models are able to maintain good performance on their original task during inoculation, the quickness with which they are able to learn the inoculated task provides evidence of prior competence, which is precisely what we aim to probe. To measure the loss of past performance, we define a model's inoculation cost as the difference in the performance of this model on its original task before and after inoculation.
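Schematically, the selection step can be sketched as follows, where the aggregate score $S$ is taken here as the sum of original-task and probe accuracy (one plausible choice; the paper's equation defines the exact aggregate):

def select_inoculated_models(results):
    # results: {k: [(config, orig_dev_score, probe_dev_score), ...]}
    best = {}
    for k, runs in results.items():
        best[k] = max(runs, key=lambda run: run[1] + run[2])  # aggregate S
    return best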
We pre-train on an aggregated training set of the benchmark science exams detailed in Table TABREF21, and created an aggregate development set of around 4k science questions for evaluating overall science performance and inoculation costs. To handle the mismatch between number of answer choices in these sets, we made all sets 5-way by adding empty answers as needed. We also experimented with a slight variant of inoculation, called add-some inoculation, which involves balancing the inoculation training sets with naturalistic science questions. We reserve the MCQL dataset in Table TABREF21 for this purpose, and experiment with balancing each probe example with a science example (x1 matching) and adding twice as many science questions (x2 matching, up to 3k) for each new example.
Probing Methodology and Modeling ::: Evaluating Model Competence
The standard way to evaluate our MCQA models is by looking at the overall accuracy of the correct answer prediction, or what we call instance-level accuracy (as in Table TABREF25). Given the nature of our data and the existence of semantic clusters as detailed in Section SECREF11 (i.e., sets of questions and answers under different distractor choices and inference complexity), we also measure a model's cluster-level (or strict cluster) accuracy, which requires correctly answering all questions in a cluster. Example semantic clusters are shown in Table TABREF30; in the first case, there are 6 ISA$^\uparrow $ questions (including perturbations) about the concept trouser.n.01 (e.g., involving knowing that trousers are a type of consumer good and garment/clothing), which a model must answer in order to receive full credit.
Our cluster-based analysis is motivated by the idea that if a model truly knows the meaning of a given concept, such as the concept of trousers, then it should be able to answer arbitrary questions about this concept without sensitivity to varied distractors. While our strict cluster metric is simplistic, it takes inspiration from work on visual QA BIBREF53, and allows us to evaluate how consistent and robust models are across our different probes, and to get insight into whether errors are concentrated on a small set of concepts or widespread across clusters.
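The strict cluster metric can be computed with a few lines of code (a sketch with an assumed input format): a cluster counts as correct only if every question tied to that concept is answered correctly.

from collections import defaultdict

def strict_cluster_accuracy(predictions):
    # predictions: iterable of (cluster_id, is_correct) pairs
    clusters = defaultdict(list)
    for cluster_id, is_correct in predictions:
        clusters[cluster_id].append(is_correct)
    return sum(all(flags) for flags in clusters.values()) / len(clusters)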
Results and Findings
In this section, we provide the results of the empirical questions first introduced in Figure FIGREF1, starting with the results of our baseline models.
Results and Findings ::: Are our Probes Sufficiently Challenging?
As shown in Table TABREF25, most of our partial-input baselines (i.e., Choice-Only and Choice-to-Choice models) failed to perform well on our dataset probes across a wide range of models, showing that such probes are generally immune from biases relating to how distractors were generated. As already discussed in Section SECREF13, however, initial versions of the DictionaryQA dataset had unforeseen biases partly related to whether distractors were sampled from entries without example sentences, which resulted in high Choice-Only-GloVe scores ranging around 56% accuracy before a filtering step was applied to remove these distractors.
We had similar issues with the hypernymy probe which, even after a filtering step that used our Choice-to-Choice-GloVe model, still led to high results for the BERT and RoBERTa choice-only models. Given that several attempts were made to entirely de-duplicate the different splits (both in terms of gold answers and distractor types), the source of these biases is not at all obvious, which shows how easy it is for unintended biases in expert knowledge to appear in the resulting datasets and underscores the importance of having rigorous baselines. We also note the large gap in some cases between the BERT and RoBERTa versus GloVe choice-only models, which highlights the need for having partial-input baselines that use the best available models.
Using a more conventional set of Task-Specific QA models (i.e., the LSTM-based Question-to-Choice models trained directly on the probes), we can see that results are not particularly strong on any of the datasets, suggesting that our probes are indeed sufficiently challenging and largely immune from overt artifacts. The poor performance of the VecSimilarity (which uses pre-trained Word2Vec embeddings without additional training) provides additional evidence that elementary lexical matching strategies are insufficient for solving any of the probing tasks.
Results and Findings ::: How well do pre-trained MCQA models do?
Science models that use non-transformer based encoders, such as the ESIM model with GloVe and ELMO, perform poorly across all probes, in many cases scoring near random chance, showing limits to how well they generalize from science to other tasks even with pre-trained GloVe and ELMO embeddings. In sharp contrast, the transformer models have mixed results, the most striking result being the RoBERTa models on the definitions and synonymy probes (achieving a test accuracy of 77% and 61%, respectively), which outperform several of the task-specific LSTM models trained directly on the probes. At first glance, this suggests that RoBERTa, which generally far outpaces even BERT across most probes, has high competence of definitions and synonyms even without explicit training on our new tasks.
Given the controlled nature of our probes, we can get a more detailed view of how well the science models are performing across different reasoning and distractor types, as shown in the first column of Figure FIGREF28 for ESIM and RoBERTa. The ESIM science model without training has uniformly poor performance across all categories, whereas the performance of RoBERTa is more varied. Across all datasets and number of hops (i.e., the rows in the heat maps), model performance for RoBERTa is consistently highest among examples with random distractors (i.e., the first column), and lowest in cases involving distractors that are closest in WordNet space (e.g., sister and ISA, or up/down, distractors of distance $k^{\prime }=1$). This is not surprising, given that, in the first case, random distractors are likely to be the easiest category (and the opposite for distractors close in space), but suggests that RoBERTa might only be getting the easiest cases correct.
Model performance also clearly degrades for hypernymy and hyponymy across all models as the number of hops $k$ increases (see red dashed boxes). For example, accuracy on problems that involve hyponym reasoning with sister distractors of distance $k^{\prime }=1$ (i.e., the second column) degrades from 47% to 15% when the number of hops $k$ increases from 1 to 4. This general tendency persists even after additional fine-tuning, as we discuss next, and gives evidence that models are limited in their capacity for certain types of multi-hop inferences.
As discussed by BIBREF26, the choice of generation templates can have a significant effect on model performance. The results so far should therefore be regarded as a lower bound on model competence. It is possible that model performance is high for definitions, for example, because the associated templates best align with the science training distribution (which we know little about). For this reason, the subsequent inoculation step is important—it gives the model an opportunity to learn about our target templates and couple this learned knowledge with its general knowledge acquired during pre-training and science training (which is, again, what we aim to probe).
Results and Findings ::: Can Models Be Effectively Inoculated?
Model performance after additional fine-tuning, or inoculation, is shown in the last 3 rows of Table TABREF25, along with learning curves shown in Figure FIGREF29 for a selection of probes and models. In the former case, the performance represents the model (and inoculation amount) with the highest aggregate performance over the old task and new probe. Here we again see the transformer-based models outperform non-transformer models, and that better models correlate with lower inoculation costs. For example, when inoculating on synonymy, the cost for ESIM is around 7% reduced accuracy on its original task, as opposed to $< 1$% and around 1% for BERT and RoBERTa, respectively. This shows the high capacity for transformer models to absorb new tasks with minimal costs, as also observed in BIBREF22 for NLI.
As shown in Figure FIGREF29, transformer models tend to learn most tasks fairly quickly while keeping constant scores on their original tasks (i.e., the flat dashed lines observed in plots 1-4), which gives evidence of high competence. In both cases, add-some inoculation proves to be a cheap and easy way to 1) improve scores on the probing tasks (i.e., the solid black and blue lines in plot 1) and; 2) minimize loss on science (e.g., the blue and black dashed lines in plots 2-4). The opposite is the case for ESIM (plots 5-6); models are generally unable to simultaneously learn individual probes without degrading on their original task, and adding more science data during inoculation confuses models on both tasks.
As shown in Figure FIGREF28, RoBERTa is able to significantly improve performance across most categories even after inoculation with a mere 100 examples (the middle plot), which again provides strong evidence of prior competence. As an example, RoBERTa improves on 2-hop hyponymy inference with random distractors by 18% (from 59% to 77%). After 3k examples, the model has high performance on virtually all categories (the same score increases from 59% to 87%); however, results still tend to degrade as a function of hop and distractor complexity, as discussed above.
Despite the high performance of our transformer models after inoculation, model performance on most probes (with the exception of Definitions) averages around 80% for our best models. This suggests that there is still considerable room for improvement, especially for synonymy and word sense, which is a topic that we discuss more in Section SECREF6.
Results and Findings ::: Are Models Consistent across Clusters?
Table TABREF32 shows cluster-level accuracies for the different WordNetQA probes. As with performance across the different inference/distractor categories, these results are mixed. For some probes, such as definitions, our best models appear to be rather robust; e.g., our RoBERTa model has a cluster accuracy of $75\%$, meaning that it can answer all questions perfectly for 75% of the target concepts and that errors are concentrated on a small minority (25%) of concepts. On synonymy and hypernymy, both BERT and RoBERTa appear robust on the majority of concepts, showing that errors are similarly concentrated. In contrast, our best model on hyponymy has an accuracy of 36%, meaning that its errors are spread across many concepts, thus suggesting less robustness.
Table TABREF30 shows a selection of semantic clusters involving ISA reasoning, as well as the model performance over different answers (shown symbolically) and perturbations. For example, in the second case, the cluster is based around the concept/synset oppose.v.06 and involves 4 inferences and a total of 24 questions (i.e., inferences with perturbations). Our weakest model, ESIM, answers only 5 out of 24 questions correctly, whereas RoBERTa gets 21/24. In the other cases, RoBERTa gets all clusters correct, whereas BERT and ESIM get none of them correct.
We emphasize that these results only provide a crude look into model consistency and robustness. Recalling again the details in Table TABREF12, probes differ in terms of average size of clusters. Hyponymy, in virtue of having many more questions per cluster, might simply be a much more difficult dataset. In addition, such a strict evaluation does not take into account potential errors inside of clusters, which is an important issue that we discuss in the next section. We leave addressing such issues and coming up with more insightful cluster-based metrics for future work.
Discussion and Conclusion
We presented several new challenge datasets and a novel methodology for automatically building such datasets from knowledge graphs and taxonomies. We used these to probe state-of-the-art open-domain QA models (centering around models based on variants of BERT). While our general methodology is amenable to any target knowledge resource or QA model/domain, we focus on probing definitions and ISA knowledge using open-source dictionaries and MCQA models trained in the science domain.
We find, consistent with recent probing studies BIBREF26, that transformer-based models have a remarkable ability to answer questions that involve complex forms of relational knowledge, both with and without explicit exposure to our new target tasks. In the latter case, a newer RoBERTa model trained only on benchmark science tasks is able to outperform several task-specific LSTM-based models trained directly on our probing data. When re-trained on small samples (e.g., 100 examples) of probing data using variations of the lossless inoculation strategy from BIBREF22, RoBERTa is able to master many aspects of our probes with virtually no performance loss on its original QA task.
These positive results suggest that transformer-based models, especially models additionally fine-tuned on small samples of synthetic data, can be used in place of task-specific models used for querying relational knowledge, as has already been done for targeted tasks such as word sense disambiguation BIBREF54. Since models seem to already contain considerable amounts of relational knowledge, our simple inoculation strategy, which tries to nudge models to bring out this knowledge explicitly, could serve as a cheaper alternative to recent attempts to build architectures that explicitly incorporate structured knowledge BIBREF55; we see many areas where our inoculation strategy could be improved for such purposes, including having more complex loss functions that manage old and new information, as well as using techniques that take into account network plasticity BIBREF56.
The main appeal of using automatically generated datasets is the ability to systematically manipulate and control the complexity of target questions, which allows for more controlled experimentation and new forms of evaluation. Despite the positive results described above, results that look directly at the effect of different types of distractors and the complexity of reasoning show that our best models, even after additional fine-tuning, struggle with certain categories of hard distractors and multi-hop inferences. For some probes, our cluster-based analysis also reveals that errors are widespread across concept clusters, suggesting that models are not always consistent and robust. These results, taken together with our findings about the vulnerability of synthetic datasets to systematic biases, suggest that there is much room for improvement and that the positive results should be taken with a grain of salt. Developing better ways to evaluate semantic clusters and model robustness would be a step in this direction.
We emphasize that using synthetic versus naturalistic QA data comes with important trade-offs. While we are able to generate large amounts of systematically controlled data at virtually no cost or need for manual annotation, it is much harder to validate the quality of such data at such a scale and such varying levels of complexity. Conversely, with benchmark QA datasets, it is much harder to perform the type of careful manipulations and cluster-based analyses we report here. While we assume that the expert knowledge we employ, in virtue of being hand-curated by human experts, is generally correct, we know that such resources are fallible and error-prone. Initial crowd-sourcing experiments that look at validating samples of our data show high agreement across probes and that human scores correlate with the model trends across the probe categories. More details of these studies are left for future work.
|
Is WordNet useful for taxonomic reasoning for this task?
|
Unanswerable
| 6,391
|
qasper
|
8k
|
Introduction
Over the past two decades, the emergence of social media has enabled the proliferation of traceable human behavior. The content posted by users can reflect who their friends are, what topics they are interested in, or which company they are working for. At the same time, users list a number of profile fields to define themselves to others. The utilization of such metadata has proven important in facilitating further developments of applications in advertising BIBREF0, personalization BIBREF1, and recommender systems BIBREF2. However, profile information can be limited, depending on the platform, or it is often deliberately omitted BIBREF3. To uncloak this information, a number of studies have utilized social media users' footprints to approximate their profiles.
This paper explores the potential of predicting a user's industry –the aggregate of enterprises in a particular field– by identifying industry-indicative text in social media. The accurate prediction of users' industry can have a big impact on targeted advertising, by minimizing wasted advertising BIBREF4 and by improving the personalized user experience. A number of studies in the social sciences have associated language use with social factors such as occupation, social class, education, and income BIBREF5, BIBREF6, BIBREF7, BIBREF8. An additional goal of this paper is to examine such findings, and in particular the link between language and occupational class, through a data-driven approach.
In addition, we explore how meaning changes depending on the occupational context. By leveraging word embeddings, we seek to quantify how, for example, cloud might mean a separate concept (e.g., condensed water vapor) in the text written by users that work in environmental jobs while it might be used differently by users in technology occupations (e.g., Internet-based computing).
Specifically, this paper makes four main contributions. First, we build a large, industry-annotated dataset that contains over 20,000 blog users. In addition to their posted text, we also link a number of user metadata including their gender, location, occupation, introduction and interests.
Second, we build content-based classifiers for the industry prediction task and study the effect of incorporating textual features from the users' profile metadata using various meta-classification techniques, significantly improving both the overall accuracy and the average per industry accuracy.
Next, after examining which words are indicative for each industry, we build vector-space representations of word meanings and calculate one deviation for each industry, illustrating how meaning is differentiated based on the users' industries. We qualitatively examine the resulting industry-informed semantic representations of words by listing the words per industry that are most similar to job related and general interest terms.
Finally, we rank the different industries based on the normalized relative frequencies of emotionally charged words (positive and negative) and, in addition, discover that, for both genders, these frequencies do not statistically significantly correlate with an industry's gender dominance ratio.
After discussing related work in Section SECREF2 , we present the dataset used in this study in Section SECREF3 . In Section SECREF4 we evaluate two feature selection methods and examine the industry inference problem using the text of the users' postings. We then augment our content-based classifier by building an ensemble that incorporates several metadata classifiers. We list the most industry indicative words and expose how each industrial semantic field varies with respect to a variety of terms in Section SECREF5 . We explore how the frequencies of emotionally charged words in each gender correlate with the industries and their respective gender dominance ratio and, finally, conclude in Section SECREF6 .
Related Work
Alongside the wide adoption of social media by the public, researchers have been leveraging the newly available data to create and refine models of users' behavior and profiles. There exists a myriad of research that analyzes language in order to profile social media users. Some studies sought to characterize users' personality BIBREF9, BIBREF10, while others sequenced the expressed emotions BIBREF11, studied mental disorders BIBREF12, and the progression of health conditions BIBREF13. At the same time, a number of researchers sought to predict the social media users' age and/or gender BIBREF14, BIBREF15, BIBREF16, while others targeted and analyzed the ethnicity, nationality, and race of the users BIBREF17, BIBREF18, BIBREF19. One of the profile fields that has drawn a great deal of attention is the location of a user. Among others, Hecht et al. Hecht11 predicted Twitter users' locations using machine learning on nationwide and state levels. Later, Han et al. Han14 identified location indicative words to predict the location of Twitter users down to the city level.
As a separate line of research, a number of studies have focused on discovering the political orientation of users BIBREF15 , BIBREF20 , BIBREF21 . Finally, Li et al. Li14a proposed a way to model major life events such as getting married, moving to a new place, or graduating. In a subsequent study, BIBREF22 described a weakly supervised information extraction method that was used in conjunction with social network information to identify the name of a user's spouse, the college they attended, and the company where they are employed.
The line of work most closely related to our research is the one concerned with understanding the relation between people's language and their industry. Previous research from the fields of psychology and economics has explored the potential for predicting one's occupation from the ability to use math and verbal symbols BIBREF23 and the relationship between job-types and demographics BIBREF24 . More recently, Huang et al. Huang15 used machine learning to classify Sina Weibo users into twelve different platform-defined occupational classes, highlighting the effect of homophily in user interactions. This work examined only users that had been verified by the Sina Weibo platform, introducing a potential bias in the resulting dataset. Finally, Preotiuc-Pietro et al. Preoctiuc15 predicted the occupational class of Twitter users using the Standard Occupational Classification (SOC) system, which groups the different jobs based on skill requirements. In that work, the data collection process was limited to only users that specifically mentioned their occupation in their self-description in a way that could be directly mapped to a SOC occupational class. The mapping between a substring of their self-description and a SOC occupational class was done manually. Because of the manual annotation step, their method was not scalable; moreover, because they identified the occupation class inside a user self-description, only a very small fraction of the Twitter users could be included (in their case, 5,191 users).
Both of these recent studies are based on micro-blogging platforms, which inherently restrict the number of characters that a post can have, and consequently the way that users can express themselves.
Moreover, both studies used off-the-shelf occupational taxonomies (rather than self-declared occupation categories), resulting in classes that are either too generic (e.g., media, welfare and electronic are three of the twelve Sina Weibo categories), or too intermixed (e.g., an assistant accountant is in a different class from an accountant in SOC). To address these limitations, we investigate the industry prediction task in a large blog corpus consisting of over 20K American users, 40K web-blogs, and 560K blog posts.
Dataset
We compile our industry-annotated dataset by identifying blogger profiles located in the U.S. using the profile finder on http://www.blogger.com, and scraping only those users who had completed the industry profile element.
For each of these bloggers, we retrieve all their blogs, and for each of these blogs we download the 21 most recent blog postings. We then clean these blog posts of HTML tags and tokenize them, and drop those bloggers whose cumulative textual content in their posts is less than 600 characters. Following these guidelines, we identified all the U.S. bloggers with completed industry information.
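The collection and filtering steps above can be summarized in a short sketch. This is a minimal illustration, not the code used in the study: the HTML-stripping regex, the toy tokenizer, and the function names are assumptions introduced here.

```python
import re
from html import unescape

MIN_CHARS = 600          # minimum cumulative text length per blogger
POSTS_PER_BLOG = 21      # most recent posts downloaded per blog

TAG_RE = re.compile(r"<[^>]+>")

def clean_post(html_text):
    """Strip HTML tags and collapse whitespace (illustrative; the paper's exact cleaner is unspecified)."""
    text = TAG_RE.sub(" ", unescape(html_text))
    return re.sub(r"\s+", " ", text).strip()

def tokenize(text):
    """Very simple word tokenizer used only for this sketch."""
    return re.findall(r"\w+", text.lower())

def keep_blogger(posts_html):
    """Return the cleaned posts for a blogger, or None if the cumulative text is too short."""
    cleaned = [clean_post(p) for p in posts_html]
    if sum(len(c) for c in cleaned) < MIN_CHARS:
        return None
    return cleaned

# usage: keep_blogger(["<p>Hello world</p>", "<p>Another post</p>"])
```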
Traditionally, standardized industry taxonomies organize economic activities into groups based on similar production processes, products or services, delivery systems, or behavior in financial markets. Under such assumptions, and regardless of their many similarities, a tomato farmer would be categorized into a distinct industry from a tobacco farmer. As demonstrated in Preotiuc-Pietro et al. Preoctiuc15, such groupings can cause unwarranted misclassifications.
The Blogger platform provides a total of 39 different industry options. Even though a completed industry value is an implicit text annotation, we acknowledge the same problem noted in previous studies: some categories are too broad, while others are very similar. To remedy this and following Guibert et al. Guibert71, who argued that the denominations used in a classification must reflect the purpose of the study, we group the different Blogger industries based on similar educational background and similar technical terminology. To do that, we exclude very general categories and merge conceptually similar ones. Examples of broad categories are the Education and the Student options: a teacher could be teaching in any concentration, while a student could be enrolled in any discipline. Examples of conceptually similar categories are the Investment Banking and the Banking options.
The final set of categories is shown in Table TABREF1 , along with the number of users in each category. The resulting dataset consists of 22,880 users, 41,094 blogs, and 561,003 posts. Table TABREF2 presents additional statistics of our dataset.
Text-based Industry Modeling
After collecting our dataset, we split it into three sets: a train set, a development set, and a test set. The sizes of these sets are 17,880, 2,500, and 2,500 users, respectively, with users randomly assigned to these sets. In all the experiments that follow, we evaluate our classifiers by training them on the train set, configure the parameters and measure performance on the development set, and finally report the prediction accuracy and results on the test set. Note that all the experiments are performed at user level, i.e., all the data for one user is compiled into one instance in our data sets.
To measure the performance of our classifiers, we use the prediction accuracy. However, as shown in Table TABREF1 , the available data is skewed across categories, which could lead to somewhat distorted accuracy numbers depending on how well a model learns to predict the most populous classes. Moreover, accuracy alone does not provide a great deal of insight into the individual performance per industry, which is one of the main objectives in this study. Therefore, in our results below, we report: (1) micro-accuracy, calculated as the percentage of correctly classified instances out of all the instances in the development (test) data; and (2) macro-accuracy, calculated as the average of the per-category accuracies, where the per-category accuracy is the percentage of correctly classified instances out of the instances belonging to one category in the development (test) data.
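For concreteness, the two metrics can be computed as follows; this is a minimal sketch assuming the gold and predicted industry labels are plain Python lists, and is not taken from the study itself.

```python
from collections import defaultdict

def micro_accuracy(gold, pred):
    """Fraction of correctly classified instances overall."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def macro_accuracy(gold, pred):
    """Average of per-category accuracies (each category weighted equally)."""
    correct, total = defaultdict(int), defaultdict(int)
    for g, p in zip(gold, pred):
        total[g] += 1
        correct[g] += int(g == p)
    return sum(correct[c] / total[c] for c in total) / len(total)

# e.g. micro_accuracy(["Tech", "Law", "Tech"], ["Tech", "Tech", "Tech"]) == 2/3
```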
Leveraging Blog Content
In this section, we assess the effectiveness of using solely textual features obtained from the users' postings to predict their industry.
The industry prediction baseline, Majority, is set by finding the most frequent class in our training set and predicting that class for every instance in the respective development or test set.
After excluding all the words that are not used by at least three separate users in our training set, we build our AllWords model by counting the frequencies of all the remaining words and training a multinomial Naive Bayes classifier. As seen in Figure FIGREF3 , we can far exceed the Majority baseline performance by incorporating basic language signals into machine learning algorithms (a 173% relative improvement).
We additionally explore the potential of improving our text classification task by applying a number of feature ranking methods and selecting varying proportions of top-ranked features in an attempt to exclude noisy features. We start by ranking the different features, w, according to their Information Gain Ratio score (IGR) with respect to every industry, i, and training our classifier using different proportions of the top features.
Even though we find that using the top 95% of all the features already exceeds the performance of the All Words model on the development data, we further experiment with ranking our features with a more aggressive formula that heavily promotes the features that are tightly associated with any industry category. We refer to this newly introduced ranking method, computed for every word in our training set, as the Aggressive Feature Ranking (AFR).
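The overall pipeline (word-count features, feature ranking, keeping a top proportion, multinomial Naive Bayes) can be sketched as below. Note that this is only an approximation for illustration: scikit-learn's mutual-information scorer stands in for the IGR and AFR formulas, which are not reproduced here, and the toy documents and labels are invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectPercentile, mutual_info_classif
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

pipeline = Pipeline([
    # In the real setting, words used by fewer than three users would be dropped first
    # (e.g., via a min_df-style filter applied at the user level).
    ("counts", CountVectorizer()),
    # Rank features and keep the top 90% (a stand-in for the IGR/AFR rankings).
    ("select", SelectPercentile(mutual_info_classif, percentile=90)),
    ("nb", MultinomialNB()),
])

docs = ["closing escrow on the new listing today",
        "deployed the new api release to production",
        "showing the open house to prospective buyers",
        "debugging the production cluster all night"]
labels = ["Real Estate", "Technology", "Real Estate", "Technology"]
pipeline.fit(docs, labels)
print(pipeline.predict(["staging another open house this weekend"]))
```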
In Figure FIGREF3 we illustrate the performance of all four methods in our industry prediction task on the development data. Note that for each method, we provide both the micro-accuracy and the macro-accuracy (average per-class accuracy). The Majority and All Words methods apply to all the features; therefore, they are represented as straight lines in the figure. The IGR and AFR methods are applied to varying subsets of the features using a 5% step.
Our experiments demonstrate that the word choices users make in their posts correlate with their industry. The first observation from Figure FIGREF3 is that the two accuracy metrics move together: as micro-accuracy increases, so does macro-accuracy. Secondly, the best result on the development set is achieved by using the top 90% of the features with the AFR method. Lastly, the improvements of the IGR and AFR feature selections are not substantially better in comparison to All Words (at most a 5% improvement between All Words and AFR), which suggests that only a few noisy features exist and most of the words play some role in shaping the “language" of an industry.
As a final evaluation, we apply to the test data the classifier found to work best on the development data (AFR feature selection, top 90% of the features), obtaining a micro-accuracy of 0.534 and a macro-accuracy of 0.477.
Leveraging User Metadata
Together with the industry information and the most recent postings of each blogger, we also download a number of accompanying profile elements. Using these additional elements, we explore the potential of incorporating users' metadata in our classifiers.
Table TABREF7 shows the different user metadata we consider together with their coverage percentage (not all users provide a value for all of the profile elements). With the exception of the gender field, the remaining metadata elements shown in Table TABREF7 are completed by the users as a freely editable text field. This introduces a considerable amount of noise in the set of possible metadata values. Examples of noise in the occupation field include values such as “Retired”, “I work.”, or “momma” which are not necessarily informative for our industry prediction task.
To examine whether the metadata fields can help in the prediction of a user's industry, we build classifiers using the different metadata elements. For each metadata element that has a textual value, we use all the words in the training set for that field as features. The only two exceptions are the state field, which is encoded as one feature that can take one out of 50 different values representing the 50 U.S. states; and the gender field, which is encoded as a feature with a distinct value for each user gender option: undefined, male, or female.
As shown in Table TABREF9 , we build four different classifiers using the multinomial NB algorithm: Occu (which uses the words found in the occupation profile element), Intro (introduction), Inter (interests), and Gloc (combined gender, city, state).
In general, all the metadata classifiers perform better than our majority baseline (micro-accuracy of 18.88%). For the Gloc classifier, this result is in alignment with previous studies BIBREF24 . However, the only metadata classifier that outperforms the content classifier is the Occu classifier, which despite missing and noisy occupation values exceeds the content classifier's performance by an absolute 3.2%.
To investigate the promise of combining the five different classifiers we have built so far, we calculate their inter-prediction agreement using Fleiss's Kappa BIBREF25 , as well as the lower prediction bounds using the double fault measure BIBREF26 . The Kappa values, presented in the lower left side of Table TABREF10 , express the classification agreement for categorical items, in this case the users' industry. Lower values, especially values below 30%, mean smaller agreement. Since all five classifiers have better-than-baseline accuracy, this low agreement suggests that their predictions could potentially be combined to achieve a better accumulated result.
Moreover, the double fault measure values, which are presented in the top-right hand side of Table TABREF10 , express the proportion of test cases for which both of the two respective classifiers make false predictions, essentially providing the lowest error bound for the pairwise ensemble classifier performance. The lower those numbers are, the greater the accuracy potential of any meta-classification scheme that combines those classifiers. Once again, the low double fault measure values suggest potential gain from a combination of the base classifiers into an ensemble of models.
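A minimal sketch of the two pairwise diagnostics follows, assuming each classifier's predictions and the gold labels are aligned Python lists. The Fleiss kappa implementation follows the standard formula, applied here to a pair of classifiers treated as 'raters'; it is illustrative rather than the exact computation used in the study.

```python
from collections import Counter

def fleiss_kappa(rating_lists):
    """Fleiss' kappa for a list of raters' categorical predictions (here, two classifiers)."""
    n_raters = len(rating_lists)
    items = list(zip(*rating_lists))               # one tuple of ratings per item
    p_bar, category_totals = 0.0, Counter()
    for item in items:
        counts = Counter(item)
        category_totals.update(counts)
        p_bar += (sum(c * c for c in counts.values()) - n_raters) / (n_raters * (n_raters - 1))
    p_bar /= len(items)
    total = n_raters * len(items)
    p_e = sum((v / total) ** 2 for v in category_totals.values())
    return 1.0 if p_e == 1.0 else (p_bar - p_e) / (1.0 - p_e)

def double_fault(pred_a, pred_b, gold):
    """Proportion of instances both classifiers get wrong: a lower bound on pairwise-ensemble error."""
    both_wrong = sum(a != g and b != g for a, b, g in zip(pred_a, pred_b, gold))
    return both_wrong / len(gold)

# e.g. fleiss_kappa([["Tech", "Law", "Tech"], ["Tech", "Tech", "Law"]])
```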
After establishing the promise of creating an ensemble of classifiers, we implement two meta-classification approaches. First, we combine our classifiers using feature concatenation (early fusion). Starting with our content-based classifier (Text), we successively add the features derived from each metadata element. The results, both micro- and macro-accuracy, are presented in Table TABREF12 . Even though all four feature concatenation ensembles outperform the content-based classifier on the development set, they fail to outperform the Occu classifier.
Second, we explore the potential of using stacked generalization (late fusion) BIBREF27 . The base classifiers, referred to as L0 classifiers, are trained on different folds of the training set and used to predict the class of the remaining instances. Those predictions are then used together with the true labels of the training instances to train a second classifier, referred to as the L1 classifier; this L1 classifier is used to produce the final prediction on both the development data and the test data. Traditionally, stacking uses different machine learning algorithms on the same training data. However, in our case, we use the same algorithm (multinomial NB) on heterogeneous data (i.e., different types of data such as content, occupation, introduction, interests, gender, city and state) in order to exploit all available sources of information.
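A compact sketch of this late-fusion scheme with scikit-learn is shown below; it assumes each data view (content, occupation, introduction, ...) has already been turned into its own feature matrix, and the helper name and fold handling are assumptions made for illustration.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import MultinomialNB

def stack(views_train, y_train, views_test, n_folds=10):
    """Late fusion: out-of-fold L0 predictions become the features of an L1 classifier.

    `views_train` / `views_test` are lists of feature matrices, one per data view
    (content, occupation, introduction, interests, ...), aligned on the same users.
    """
    train_meta, test_meta = [], []
    for X_tr, X_te in zip(views_train, views_test):
        clf = MultinomialNB()
        # Out-of-fold class-probability predictions on the training set (L0 stage).
        train_meta.append(cross_val_predict(clf, X_tr, y_train, cv=n_folds, method="predict_proba"))
        clf.fit(X_tr, y_train)
        test_meta.append(clf.predict_proba(X_te))
    # The L1 classifier is trained on the concatenated L0 probabilities.
    l1 = MultinomialNB().fit(np.hstack(train_meta), y_train)
    return l1.predict(np.hstack(test_meta))
```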
The ensemble learning results on the development set are shown in Table TABREF12 . We notice a consistent improvement in both metrics when adding more classifiers to our ensemble, except for the Gloc classifier, which slightly reduces the performance. The best result is achieved using an ensemble of the Text, Occu, Intro, and Inter L0 classifiers; the respective performance on the test set is a micro-accuracy of 0.643 and a macro-accuracy of 0.564. Finally, we present in Figure FIGREF11 the prediction accuracy of the final classifier for each of the different industries in our test dataset. Evidently, some industries are easier to predict than others. For example, while the Real Estate and Religion industries achieve accuracy figures above 80%, other industries, such as the Banking industry, are predicted correctly less than 17% of the time. Anecdotal evidence drawn from the examination of the confusion matrix does not suggest any strong association of the Banking class with any other. The misclassifications are roughly uniform across all other classes, suggesting that the users in the Banking industry use language in a non-distinguishing way.
Qualitative Analysis
In this section, we provide a qualitative analysis of the language of the different industries.
Top-Ranked Words
To conduct a qualitative exploration of which words indicate the industry of a user, Table TABREF14 shows the three top-ranking content words for the different industries using the AFR method.
Not surprisingly, the top ranked words align well with what we would intuitively expect for each industry. Even though most of these words are potentially used by many users regardless of their industry in our dataset, they are still distinguished by the AFR method because of the different frequencies of these words in the text of each industry.
Industry-specific Word Similarities
Next, we examine how the meaning of a word is shaped by the context in which it is uttered. In particular, we qualitatively investigate how the speakers' industry affects meaning by learning vector-space representations of words that take into account such contextual information. To achieve this, we apply the contextualized word embeddings proposed by Bamman et al. Bamman14, which are based on an extension of the “skip-gram" language model BIBREF28 .
In addition to learning a global representation for each word, these contextualized embeddings compute one deviation from the common word embedding representation for each contextual variable, in this case, an industry option. These deviations capture the terms' meaning variations (shifts in the k-dimensional space of the representations, with k fixed across our experiments) in the text of the different industries; however, all the embeddings lie in the same vector space, which allows comparisons to one another.
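The composition and query steps can be sketched as follows, assuming a trained global embedding matrix and one deviation matrix per industry are already available; the variable names and shapes are illustrative, and the sketch omits the joint training procedure of the contextualized skip-gram model.

```python
import numpy as np

def industry_vector(word, industry, base, deviations, vocab):
    """Contextualized representation: global embedding plus the industry-specific deviation."""
    idx = vocab[word]
    return base[idx] + deviations[industry][idx]

def most_similar(query, industry, base, deviations, vocab, topn=5):
    """Words whose industry-contextualized vectors are closest (cosine) to the query word's."""
    q = industry_vector(query, industry, base, deviations, vocab)
    contextual = base + deviations[industry]                 # every word shifted for this industry
    norms = np.linalg.norm(contextual, axis=1) * np.linalg.norm(q)
    sims = contextual @ q / np.clip(norms, 1e-12, None)
    order = np.argsort(-sims)
    inv_vocab = {i: w for w, i in vocab.items()}
    return [(inv_vocab[i], float(sims[i])) for i in order if inv_vocab[i] != query][:topn]

# assumed shapes: base is (|V|, k); deviations["Technology"] is (|V|, k); vocab maps word -> row index
```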
Using the word representations learned for each industry, we present in Table TABREF16 the terms in the Technology and the Tourism industries that have the highest cosine similarity with a job-related word, customers. Similarly, Table TABREF17 shows the words in the Environment and the Tourism industries that are closest in meaning to a general interest word, food. More examples are given in the Appendix SECREF8 .
The terms that rank highest in each industry are noticeably different. For example, as seen in Table TABREF17 , while food in the Environment industry is similar to nutritionally and locally, in the Tourism industry the same word relates more to terms such as delicious and pastries. These results not only emphasize the existing differences in how people in different industries perceive certain terms, but they also demonstrate that those differences can effectively be captured in the resulting word embeddings.
Emotional Orientation per Industry and Gender
As a final analysis, we explore how words that are emotionally charged relate to different industries. To quantify the emotional orientation of a text, we use the Positive Emotion and Negative Emotion categories in the Linguistic Inquiry and Word Count (LIWC) dictionary BIBREF29 . The LIWC dictionary contains lists of words that have been shown to correlate with the psychological states of people that use them; for example, the Positive Emotion category contains words such as “happy,” “pretty,” and “good.”
For the text of all the users in each industry we measure the frequencies of Positive Emotion and Negative Emotion words normalized by the text's length. Table TABREF20 presents the industries' ranking for both categories of words based on their relative frequencies in the text of each industry.
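A minimal sketch of this measurement is given below; it assumes the LIWC Positive Emotion and Negative Emotion word lists are available as Python sets (the dictionary itself is proprietary, and its wildcard-stem matching is not reproduced here).

```python
def emotion_frequencies(user_tokens_by_industry, positive_words, negative_words):
    """Relative frequency of Positive/Negative Emotion words per industry, normalized by token count."""
    scores = {}
    for industry, token_lists in user_tokens_by_industry.items():
        tokens = [t for toks in token_lists for t in toks]
        n = max(len(tokens), 1)
        scores[industry] = {
            "positive": sum(t in positive_words for t in tokens) / n,
            "negative": sum(t in negative_words for t in tokens) / n,
        }
    return scores

# ranking industries by positive-word frequency:
# sorted(scores, key=lambda ind: scores[ind]["positive"], reverse=True)
```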
We further perform a per-gender breakdown, where we once again calculate the proportion of emotionally charged words in each industry, but separately for each gender. We find that the industry rankings of the relative frequencies of emotionally charged words for the two genders are statistically significantly correlated, which suggests that regardless of their gender, users use positive (or negative) words with a relative frequency that correlates with their industry. (In other words, even if, e.g., Fashion has a larger number of women users, both men and women working in Fashion will tend to use more positive words than the corresponding gender in another industry with a larger number of men users, such as Automotive.)
Finally, motivated by previous findings of correlations between job satisfaction and gender dominance in the workplace BIBREF30 , we explore the relationship between the usage of Positive Emotion and Negative Emotion words and the gender dominance in an industry. Although we find that there are substantial gender imbalances in each industry (Appendix SECREF9 ), we did not find any statistically significant correlation between the gender dominance ratio in the different industries and the usage of positive (or negative) emotional words in either gender in our dataset.
Conclusion
In this paper, we examined the task of predicting a social media user's industry. We introduced an annotated dataset of over 20,000 blog users and applied a content-based classifier in conjunction with two feature selection methods for an overall accuracy of up to 0.534, which represents a large improvement over the majority class baseline of 0.188.
We also demonstrated how the user metadata can be incorporated in our classifiers. Although concatenation of features drawn both from blog content and profile elements did not yield any clear improvements over the best individual classifiers, we found that stacking improves the prediction accuracy to an overall accuracy of 0.643, as measured on our test dataset. A more in-depth analysis showed that not all industries are equally easy to predict: while industries such as Real Estate and Religion are clearly distinguishable with accuracy figures over 0.80, others such as Banking are much harder to predict.
Finally, we presented a qualitative analysis to provide some insights into the language of different industries, which highlighted differences in the top-ranked words in each industry, word semantic similarities, and the relative frequency of emotionally charged words.
Acknowledgments
This material is based in part upon work supported by the National Science Foundation (#1344257) and by the John Templeton Foundation (#48503). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the John Templeton Foundation.
Additional Examples of Word Similarities
Introduction
Assembling training corpora of annotated natural language examples in specialized domains such as biomedicine poses considerable challenges. Experts with the requisite domain knowledge to perform high-quality annotation tend to be expensive, while lay annotators may not have the necessary knowledge to provide high-quality annotations. A practical approach for collecting a sufficiently large corpus would be to use crowdsourcing platforms like Amazon Mechanical Turk (MTurk). However, crowd workers in general are likely to provide noisy annotations BIBREF0 , BIBREF1 , BIBREF2 , an issue exacerbated by the technical nature of specialized content. Some of this noise may reflect worker quality and can be modeled BIBREF0 , BIBREF1 , BIBREF3 , BIBREF4 , but for some instances lay people may simply lack the domain knowledge to provide useful annotation.
In this paper we report experiments on the EBM-NLP corpus comprising crowdsourced annotations of medical literature BIBREF5 . We operationalize the concept of annotation difficulty and show how it can be exploited during training to improve information extraction models. We then obtain expert annotations for the abstracts predicted to be most difficult, as well as for a similar number of randomly selected abstracts. The annotation of highly specialized data and the use of lay and expert annotators allow us to examine the following key questions related to lay and expert annotations in specialized domains:
Can we predict item difficulty? We define a training instance as difficult if lay annotators and an automated model (or domain experts) disagree on its labeling. We show that difficulty can be predicted, and that it is distinct from inter-annotator agreement. Further, such predictions can be used during training to improve information extraction models.
Are there systematic differences between expert and lay annotations? We observe decidedly lower agreement between lay workers as compared to domain experts. Lay annotations have high precision but low recall with respect to expert annotations in the new data that we collected. More generally, we expect lay annotations to be lower quality, which may translate to lower precision, recall, or both, compared to expert annotations. Can one rely solely on lay annotations? Reasonable models can be trained using lay annotations alone, but similar performance can be achieved using markedly less expert data. This suggests that the optimal ratio of expert to crowd annotations for specialized tasks will depend on the cost and availability of domain experts. Expert annotations are preferable whenever their collection is practical. But in real-world settings, a combination of expert and lay annotations is better than using lay data alone.
Does it matter what data is annotated by experts? We demonstrate that a system trained on combined data achieves better predictive performance when experts annotate difficult examples rather than instances selected i.i.d. at random.
Our contributions in this work are summarized as follows. We define an annotation difficulty prediction task and show how it is related to, but distinct from, inter-worker agreement. We introduce a new model for difficulty prediction combining learned representations induced via a pre-trained `universal' sentence encoder BIBREF6 , and a sentence encoder learned from scratch for this task. We show that predicting annotation difficulty can be used to improve task routing and model performance for a biomedical information extraction task. Our results open up a new direction for ensuring corpus quality. We believe that item difficulty prediction will likely be useful in other, non-specialized tasks as well, and that the most effective data collection in specialized domains requires research addressing the fundamental questions we examine here.
Related Work
Crowdsourcing annotation is now a well-studied problem BIBREF7 , BIBREF0 , BIBREF1 , BIBREF2 . Due to the noise inherent in such annotations, there have also been considerable efforts to develop aggregation models that minimize noise BIBREF0 , BIBREF1 , BIBREF3 , BIBREF4 .
There are also several surveys of crowdsourcing in biomedicine specifically BIBREF8 , BIBREF9 , BIBREF10 . Some work in this space has contrasted model performance achieved using expert vs. crowd annotated training data BIBREF11 , BIBREF12 , BIBREF13 . Dumitrache et al. Dumitrache:2018:CGT:3232718.3152889 concluded that performance is similar under these supervision types, finding no clear advantage from using expert annotators. This differs from our findings, perhaps owing to differences in design. The experts we used already hold advanced medical degrees, for instance, while those in prior work were medical students. Furthermore, the task considered here would appear to be of greater difficulty: even a system trained on $\sim $ 5k instances performs reasonably, but far from perfect. By contrast, in some of the prior work where experts and crowd annotations were deemed equivalent, a classifier trained on 300 examples can achieve very high accuracy BIBREF12 .
More relevant to this paper, prior work has investigated methods for `task routing' in active learning scenarios in which supervision is provided by heterogeneous labelers with varying levels of expertise BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF14 . The related question of whether effort is better spent collecting additional annotations for already labeled (but potentially noisily so) examples or novel instances has also been addressed BIBREF18 . What distinguishes the work here is our focus on providing an operational definition of instance difficulty, showing that this can be predicted, and then using this to inform task routing.
Application Domain
Our specific application concerns annotating abstracts of articles that describe the conduct and results of randomized controlled trials (RCTs). Experimentation in this domain has become easy with the recent release of the EBM-NLP BIBREF5 corpus, which includes a reasonably large training dataset annotated via crowdsourcing, and a modest test set labeled by individuals with advanced medical training. More specifically, the training set comprises 4,741 medical article abstracts with crowdsourced annotations indicating snippets (sequences) that describe the Participants (p), Interventions (i), and Outcome (o) elements of the respective RCT, and the test set is composed of 191 abstracts with p, i, o sequence annotations from three medical experts.
Table 1 shows an example of difficult and easy examples according to our definition of difficulty. The underlined text demarcates the (consensus) reference label provided by domain experts. In the difficult examples, crowd workers marked text distinct from these reference annotations; whereas in the easy cases they reproduced them with reasonable fidelity. The difficult sentences usually exhibit complicated structure and feature jargon.
An abstract may contain some `easy' and some `difficult' sentences. We thus perform our analysis at the sentence level. We split abstracts into sentences using spaCy. We excluded sentences that comprise fewer than two tokens, as these are likely an artifact of errors in sentence splitting. In total, this resulted in 57,505 and 2,428 sentences in the train and test set abstracts, respectively.
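A minimal sketch of this preprocessing step is shown below; the specific spaCy model name is an assumption, and any English pipeline with sentence segmentation would do.

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # assumed model; any English pipeline with a sentencizer works

def split_sentences(abstract_text, min_tokens=2):
    """Split an abstract into sentences and drop fragments shorter than `min_tokens` tokens."""
    doc = nlp(abstract_text)
    return [sent.text.strip() for sent in doc.sents if len(sent) >= min_tokens]
```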
Quantifying Task Difficulty
The test set includes annotations from both crowd workers and domain experts. We treat the latter as ground truth and then define the difficulty of sentences in terms of the observed agreement between expert and lay annotators. Formally, for annotation task $t$ and instance $i$ :
$$\text{Difficulty}_{ti} = \frac{\sum _{j=1}^{n} f(\text{label}_{ij}, y_i)}{n}$$ (Eq. 3)
where $f$ is a scoring function that measures the quality of the label from worker $j$ for sentence $i$ , as compared to a ground truth annotation, $y_i$ . The difficulty score of sentence $i$ is taken as an average over the scores for all $n$ lay workers. We use Spearman's correlation coefficient as the scoring function. Specifically, for each sentence we create two vectors comprising counts of how many times each token was annotated by crowd and expert workers, respectively, and calculate the correlation between these. Sentences with no labels are treated as maximally easy; those with only either crowd worker or expert label(s) are assumed maximally difficult.
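The per-sentence scoring can be sketched as follows, assuming binary token-level annotation vectors for each crowd worker and for the reference; the exact mapping from the averaged correlation to the final [0, 1] difficulty value is not fully specified in the text, so the normalization used here is illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

def sentence_difficulty(worker_token_labels, expert_token_labels):
    """Difficulty of one sentence from token-level annotation vectors.

    `worker_token_labels`: list of 0/1 numpy arrays, one per crowd worker (1 = token annotated).
    `expert_token_labels`: 0/1 numpy array for the reference annotation.
    Returns a score in [0, 1]; higher means more disagreement with the reference.
    """
    if expert_token_labels.sum() == 0 and all(w.sum() == 0 for w in worker_token_labels):
        return 0.0                                   # nothing annotated by anyone: maximally easy
    if expert_token_labels.sum() == 0 or all(w.sum() == 0 for w in worker_token_labels):
        return 1.0                                   # only one side annotated: maximally difficult
    corrs = []
    for w in worker_token_labels:
        rho, _ = spearmanr(w, expert_token_labels)
        corrs.append(0.0 if np.isnan(rho) else rho)
    mean_corr = float(np.mean(corrs))                # agreement in [-1, 1]
    return (1.0 - mean_corr) / 2.0                   # illustrative map to [0, 1]: low agreement -> high difficulty
```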
The training set contains only crowdsourced annotations. To label the training data, we use a 10-fold cross-validation-like setting. We iteratively retrain the LSTM-CRF-Pattern sequence tagger of Patel et al. patel2018syntactic on 9 folds of the training data and use the trained model to predict labels for the 10th. In this way we obtain predictions on the full training set. We then use the predicted spans as proxy `ground truth' annotations to calculate the difficulty score of sentences as described above; we normalize these to the [ $0, 1$ ] interval. We validate this approximation by comparing the proxy scores against reference scores over the test set; the Pearson's correlation coefficients are 0.57 for Population, 0.71 for Intervention and 0.68 for Outcome.
There exist many sentences that contain neither manual nor predicted annotations. We treat these as maximally easy sentences (with difficulty scores of 0). Such sentences comprise 51%, 42% and 36% for Population, Interventions and Outcomes data respectively, indicating that it is easier to identify sentences that have no Population spans, but harder to identify sentences that have no Interventions or Outcomes spans. This is intuitive as descriptions of the latter two tend to be more technical and dense with medical jargon.
We show the distribution of the automatically labeled scores for sentences that do contain spans in Figure 1 . The mean of the Population (p) sentence scores is significantly lower than that for other types of sentences (i and o), again indicating that they are easier on average to annotate. This aligns with a previous finding that annotating Interventions and Outcomes is more difficult than annotating Participants BIBREF5 .
Many sentences contain spans tagged by the LSTM-CRF-Pattern model, but missed by all crowd workers, resulting in a maximally difficult score (1). Inspection of such sentences revealed that some are truly difficult examples, but others are tagging model errors. In either case, such sentences have confused workers and/or the model, and so we retain them all as `difficult' sentences.
Content describing the p, i and o, respectively, is quite different. As such, one sentence usually contains (at most) only one of these three content types. We thus treat difficulty prediction for the respective label types as separate tasks.
Difficulty is not Worker Agreement
Our definition of difficulty is derived from agreement between expert and crowd annotations for the test data, and agreement between a predictive model and crowd annotations in the training data. It is reasonable to ask if these measures are related to inter-annotator agreement, a metric often used in language technology research to identify ambiguous or difficult items. Here we explicitly verify that our definition of difficulty only weakly correlates with inter-annotator agreement.
We calculate inter-worker agreement between crowd and expert annotators using Spearman's correlation coefficient. As shown in Table 2 , the average agreement between domain experts is considerably higher than the agreement between crowd workers for all three label types. This is a clear indication that the crowd annotations are noisier.
Furthermore, we compare the correlation between inter-annotator agreement and difficulty scores in the training data. Given that the majority of sentences do not contain a PICO span, we only include in these calculations those that contain a reference label. Pearson's $r$ values are 0.34, 0.30 and 0.31 for p, i and o, respectively, confirming that inter-worker agreement and our proposed difficulty score are quite distinct.
Predicting Annotation Difficulty
We treat difficulty prediction as a regression problem, and propose and evaluate neural model variants for the task. We first train RNN BIBREF19 and CNN BIBREF20 models.
We also use the universal sentence encoder (USE) BIBREF6 to induce sentence representations, and train a model using these as features. Following BIBREF6 , we then experiment with an ensemble model that combines the `universal' and task-specific representations to predict annotation difficulty. We expect these universal embeddings to capture general, high-level semantics, and the task specific representations to capture more granular information. Figure 2 depicts the model architecture. Sentences are fed into both the universal sentence encoder and, separately, a task specific neural encoder, yielding two representations. We concatenate these and pass the combined vector to the regression layer.
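A minimal sketch of such an ensemble regressor is given below, assuming the universal sentence encoder vectors are precomputed and kept fixed; the GRU encoder, layer sizes, and the sigmoid output are illustrative choices rather than the exact architecture and hyperparameters of the paper.

```python
import torch
import torch.nn as nn

class DifficultyRegressor(nn.Module):
    """Combine a frozen 'universal' sentence vector with a task-specific RNN encoding (illustrative sizes)."""

    def __init__(self, vocab_size, emb_dim=300, hidden=128, use_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden + use_dim, 1)

    def forward(self, token_ids, use_vectors):
        # token_ids: (batch, seq_len); use_vectors: (batch, use_dim), precomputed and not updated
        _, h = self.rnn(self.embed(token_ids))             # h: (2, batch, hidden), one state per direction
        task_repr = torch.cat([h[0], h[1]], dim=-1)        # concatenate the two directions
        combined = torch.cat([task_repr, use_vectors], dim=-1)
        return torch.sigmoid(self.out(combined)).squeeze(-1)   # difficulty score in [0, 1]

# training would minimize e.g. nn.MSELoss() between predictions and the proxy difficulty scores
```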
Experimental Setup and Results
We trained models for each label type separately. Word embeddings were initialized to 300d GloVe vectors BIBREF21 trained on common crawl data; these are fine-tuned during training. We used the Adam optimizer BIBREF22 with learning rate and decay set to 0.001 and 0.99, respectively. We used batch sizes of 16.
We used the large version of the universal sentence encoder with a transformer BIBREF23 . We did not update the pretrained sentence encoder parameters during training. All hyperparameters for all models (including hidden layers, hidden sizes, and dropout) were tuned using Vizier BIBREF24 via 10-fold cross validation on the training set, maximizing F1.
As a baseline, we also trained a linear Support-Vector Regression BIBREF25 model on $n$ -gram features ( $n$ ranges from 1 to 3).
Table 3 reports Pearson correlation coefficients between the predictions of each of the neural models and the ground truth difficulty scores. Rows 1-4 correspond to individual models, and row 5 reports the ensemble performance. Columns correspond to label type. All neural models outperform the baseline SVR model, which yields the lowest correlations: the neural models' Pearson's correlation coefficients range from 0.550 to 0.622.
The RNN model realizes the strongest performance among the stand-alone (non-ensemble) models, outperforming variants that exploit CNN and USE representations. Combining the RNN and USE further improves results. We hypothesize that this is due to complementary sentence information encoded in universal representations.
For all models, correlations for Intervention and Outcomes are higher than for Population, which is expected given the difficulty distributions in Figure 1 . In these, the sentences are more uniformly distributed, with a fair number of difficult and easier sentences. By contrast, in Population there are a greater number of easy sentences and considerably fewer difficult sentences, which makes the difficulty ranking task particularly challenging.
Better IE with Difficulty Prediction
We next present experiments in which we attempt to use the predicted difficulty during training to improve models for information extraction of descriptions of Population, Interventions and Outcomes from medical article abstracts. We investigate two uses: (1) simply removing the most difficult sentences from the training set, and, (2) re-weighting the most difficult sentences.
We again use LSTM-CRF-Pattern as the base model and experiment on the EBM-NLP corpus BIBREF5 . It is trained on either (1) the training set with difficult sentences removed, or (2) the full training set but with instances re-weighted in proportion to their predicted difficulty score. Following BIBREF5 , we use the Adam optimizer with a learning rate of 0.001, decay 0.9, batch size 20 and dropout 0.5. We use pretrained 200d GloVe vectors BIBREF21 to initialize word embeddings, and use 100d hidden char representations. Each word is thus represented with 300 dimensions in total. The hidden size is 100 for the LSTM in the character representation component, and 200 for the LSTM in the information extraction component. We train for 15 epochs, saving parameters that achieve the best F1 score on a nested development set.
Removing Difficult Examples
We first evaluate changes in performance induced by training the sequence labeling model using less data by removing difficult sentences prior to training. The hypothesis here is that these difficult instances are likely to introduce more noise than signal. We used a cross-fold approach to predict sentence difficulties, training on 9/10ths of the data and scoring the remaining 1/10th at a time. We then sorted sentences by predicted difficulty scores, and experimented with removing increasing numbers of these (in order of difficulty) prior to training the LSTM-CRF-Pattern model.
Figure 3 shows the results achieved by the LSTM-CRF-Pattern model after discarding increasing amounts of the training data: the $x$ and $y$ axes correspond to the percentage of data removed and F1 scores, respectively. We contrast removing sentences predicted to be difficult with removing them (a) randomly (i.i.d.), and (b) in inverse order of predicted inter-annotator agreement. The agreement prediction model is trained in exactly the same way as the difficulty prediction model, simply replacing the difficulty score with the annotation agreement. F1 scores actually improve (marginally) when we remove the most difficult sentences, up until we drop 4% of the data for Population and Interventions, and 6% for Outcomes. Removing training points i.i.d. at random degrades performance, as expected. Removing sentences in order of disagreement has a similar effect to removing them by difficulty score when a small amount of data is removed, but the F1 scores drop much faster when more data is removed. These findings indicate that sentences predicted to be difficult are indeed noisy, to the extent that they do not seem to provide the model useful signal.
Re-weighting by Difficulty
We showed above that removing a small number of the most difficult sentences does not harm, and in fact modestly improves, medical IE model performance. However, using the available data we are unable to test if this will be useful in practice, as we would need additional data to determine how many difficult sentences should be dropped.
We instead explore an alternative, practical means of exploiting difficulty predictions: we re-weight sentences during training inversely to their predicted difficulty. Formally, we weight sentence $i$ with difficulty scores above $\tau $ according to: $1-a\cdot (d_i-\tau )/(1-\tau )$ , where $d_i$ is the difficulty score for sentence $i$ , and $a$ is a parameter codifying the minimum weight value. We set $\tau $ to 0.8 so as to only re-weight sentences with difficulty in the top 20th percentile, and we set $a$ to 0.5. The re-weighting is equivalent to down-sampling the difficult sentences. LSTM-CRF-Pattern is our base model.
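The weighting function translates directly into code; the thresholding behavior below τ is an assumption consistent with the description (only sentences above the threshold are down-weighted):

```python
def sentence_weight(difficulty, tau=0.8, a=0.5):
    """Training weight for a sentence given its predicted difficulty score in [0, 1].

    Sentences at or below the difficulty threshold keep full weight; above it the weight
    decays linearly from 1 down to 1 - a (i.e., 0.5 at difficulty 1.0 with the defaults).
    """
    if difficulty <= tau:
        return 1.0
    return 1.0 - a * (difficulty - tau) / (1.0 - tau)

# e.g. sentence_weight(0.9) == 0.75, sentence_weight(1.0) == 0.5
```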
Table 4 reports the precision, recall and F1 achieved both with and without sentence re-weighting. Re-weighting improves all metrics modestly but consistently. All F1 differences are statistically significant under a sign test ( $p<0.01$ ). The model with best precision is different for Patient, Intervention and Outcome labels. However re-weighting by difficulty does consistently yield the best recall for all three extraction types, with the most notable improvement for i and o, where recall improved by 10 percentage points. This performance increase translated to improvements in F1 across all types, as compared to the base model and to re-weighting by agreement.
Involving Expert Annotators
The preceding experiments demonstrate that re-weighting difficult sentences annotated by the crowd generally improves the extraction models. Presumably the performance is influenced by the annotation quality.
We now examine the possibility that the higher quality and more consistent annotations of domain experts on the difficult instances will benefit the extraction model. This simulates an annotation strategy in which we route difficult instances to domain experts and easier ones to crowd annotators. We also contrast the value of difficult data to that of an i.i.d. random sample of the same size, both annotated by experts.
Expert annotations of Random and Difficult Instances
We have experts re-annotate a subset of the most difficult instances and the same number of random instances. As collecting annotations from experts is slow and expensive, we only re-annotate the difficult instances for the interventions extraction task. We re-annotate the abstracts that cover the sentences with predicted difficulty scores in the top 5 percentile. We rank the abstracts from the training set by the count of difficult sentences, and re-annotate the abstracts that contain the most difficult sentences. Constrained by time and budget, we select only 2000 abstracts for re-annotation; 1000 of these are top-ranked, and 1000 are randomly sampled. This re-annotation cost $3,000. We have released the new annotation data at: https://github.com/bepnye/EBM-NLP.
Following BIBREF5 , we recruited five medical experts via Upwork with advanced medical training and strong technical reading/writing skills. The expert annotators were asked to read the entire abstract and highlight, using the BRAT toolkit BIBREF26 , all spans describing medical Interventions. Each abstract is annotated by only one expert. We examined 30 re-annotated abstracts to ensure the annotation quality before hiring each annotator.
Table 5 presents the results of the LSTM-CRF-Pattern model trained on the re-annotated difficult subset and the random subset. The first two rows show the results for models trained with expert annotations. The model trained on random data has a slightly better F1 than that trained on the same amount of difficult data. The model trained on random data has higher precision but lower recall.
Rows 3 and 4 list the results for models trained on the same data but with crowd annotation. Models trained with expert-annotated data are clearly superior to those trained with crowd labels with respect to F1, indicating that the experts produced higher quality annotations. For crowdsourced annotations, training the model with data sampled at i.i.d. random achieves 2% higher F1 than when difficult instances are used. When expert annotations are used, this difference is less than 1%. This trend in performance may be explained by differences in annotation quality: the randomly sampled set was more consistently annotated by both experts and crowd because the difficult set is harder. However, in both cases expert annotations are better, with a bigger difference between the expert and crowd models on the difficult set.
The last row is the model trained on all 5k abstracts with crowd annotations. Its F1 score is lower than that of either expert model trained on only 20% of the data, suggesting that expert annotations should be collected whenever possible. Again, the crowd model trained on the complete data has higher precision than the expert models, but its recall is much lower.
Routing To Experts or Crowd
So far each system was trained on one type of data, labeled either by the crowd or by experts. We now examine the performance of a system trained on data that is routed to either expert or crowd annotators depending on its predicted difficulty. Given the results presented so far, mixing annotators may be beneficial given their respective trade-offs of precision and recall. We use the expert annotations for an abstract when they exist, and the crowd annotations otherwise. The results are presented in Table 6 .
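The routing itself reduces to a simple merge, sketched below under the assumption that annotations are stored in dictionaries keyed by abstract id:

```python
def merge_annotations(crowd, expert):
    """Prefer expert annotations when available; fall back to crowd labels otherwise.

    `crowd` and `expert` map abstract ids to their annotations; `expert` covers only
    the re-annotated abstracts (the difficult and/or random subsets).
    """
    return {abstract_id: expert.get(abstract_id, labels) for abstract_id, labels in crowd.items()}

# e.g. merge_annotations({"a1": "crowd_spans", "a2": "crowd_spans"}, {"a1": "expert_spans"})
```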
Rows 1 and 2 repeat the performance of the models trained on the difficult subset and the random subset with expert annotations only, respectively. The third row is the model trained by combining the difficult and random subsets with expert annotations. There are around 250 abstracts in the overlap of these two sets, so a total of 1.75k abstracts is used for training the D+R model. Rows 4 to 6 are the models trained on all 5k abstracts with mixed annotations, where Other means the rest of the abstracts with crowd annotation only.
The results show that adding more training data with crowd annotation still improves F1 by at least 1 point in all three extraction tasks. The improvement when the difficult subset with expert annotations is mixed with the remaining crowd annotation is 3.5 F1 points, much larger than when a random set of expert annotations is added. The model trained with the re-annotated difficult subset (D+Other) also outperforms the model with the re-annotated random subset (R+Other) by 2 points in F1. The model trained with both the re-annotated difficult and random subsets (D+R+Other), however, achieves only marginally higher F1 than the model trained with the re-annotated difficult subset (D+Other). In sum, the results clearly indicate that mixing expert and crowd annotations leads to better models than using solely crowd data, and better than using expert data alone. More importantly, there is a greater gain in performance when instances are routed according to difficulty, as compared to randomly selecting the data for expert annotation. These findings align with our motivating hypothesis that annotation quality for difficult instances is important for final model performance. They also indicate that mixing annotations from experts and the crowd could be an effective way to achieve acceptable model performance given a limited budget.
How Many Expert Annotations?
We established that crowd annotations are still useful in supplementing expert annotations for medical IE. Obtaining expert annotations for the one thousand most difficult instances greatly improved model performance. However, the choice of how many difficult instances to annotate was uninformed. Here we check whether less expert data would have yielded similar gains. Future work will need to address how best to choose this parameter for a routing system.
We simulate a routing scenario in which we send consecutive batches of the most difficult examples to the experts for annotation. We track changes in performance as we increase the number of most-difficult-articles sent to domain experts. As shown in Figure 4 , adding expert annotations for difficult articles consistently increases F1 scores. The performance gain is mostly from increased recall; the precision changes only a bit with higher quality annotation. This observation implies that crowd workers often fail to mark target tokens, but do not tend to produce large numbers of false positives. We suspect such failures to identify relevant spans/tokens are due to insufficient domain knowledge possessed by crowd workers.
The F1 score achieved after re-annotating the 600 most-difficult articles reaches 68.1%, which is close to the performance when re-annotating 1000 random articles. This demonstrates the effectiveness of recognizing difficult instances. The trend when we use up all expert data is still upward, so adding even more expert data is likely to further improve performance. Unfortunately we exhausted our budget and were not able to obtain additional expert annotations. It is likely that as the size of the expert annotations increases, the value of crowd annotations will diminish. This investigation is left for future work.
Conclusions
We have introduced the task of predicting annotation difficulty for biomedical information extraction (IE). We trained neural models using different learned representations to score texts in terms of their difficulty. Results from all models were strong with Pearson’s correlation coefficients higher than 0.45 in almost all evaluations, indicating the feasibility of this task. An ensemble model combining universal and task specific feature sentence vectors yielded the best results.
Experiments on biomedical IE tasks show that removing up to $\sim $ 10% of the sentences predicted to be most difficult did not decrease model performance, and that re-weighting sentences inversely to their difficulty score during training improves predictive performance. Simulations in which difficult examples are routed to experts and other instances to crowd annotators yield the best results, outperforming the strategy of randomly selecting data for expert annotation, and substantially improving upon the approach of relying exclusively on crowd annotations. In future work, routing strategies based on instance difficulty could be further investigated for budget-quality trade-offs.
Acknowledgements
This work has been partially supported by NSF grant 1748771. Wallace was supported in part by NIH/NLM R01LM012086.
Introduction
The Transformer architecture BIBREF0 for deep neural networks has quickly risen to prominence in NLP through its efficiency and performance, leading to improvements in the state of the art of Neural Machine Translation BIBREF1, BIBREF2, as well as inspiring other powerful general-purpose models like BERT BIBREF3 and GPT-2 BIBREF4. At the heart of the Transformer lie multi-head attention mechanisms: each word is represented by multiple different weighted averages of its relevant context. As suggested by recent works on interpreting attention head roles, separate attention heads may learn to look for various relationships between tokens BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9.
The attention distribution of each head is typically predicted using the softmax normalizing transform. As a result, all context words have non-zero attention weight. Recent work on single attention architectures suggests that using sparse normalizing transforms in attention mechanisms, such as sparsemax (which can yield exactly zero probabilities for irrelevant words), may improve performance and interpretability BIBREF12, BIBREF13, BIBREF14. Qualitative analysis of attention heads BIBREF0 suggests that, depending on what phenomena they capture, heads tend to favor flatter or more peaked distributions.
Recent works have proposed sparse Transformers BIBREF10 and adaptive span Transformers BIBREF11. However, the “sparsity" of those models only limits the attention to a contiguous span of past tokens, while in this work we propose a highly adaptive Transformer model that is capable of attending to a sparse set of words that are not necessarily contiguous. Figure FIGREF1 shows the relationship of these methods with ours.
Our contributions are the following:
We introduce sparse attention into the Transformer architecture, showing that it eases interpretability and leads to slight accuracy gains.
We propose an adaptive version of sparse attention, where the shape of each attention head is learnable and can vary continuously and dynamically between the dense limit case of softmax and the sparse, piecewise-linear sparsemax case.
We make an extensive analysis of the added interpretability of these models, identifying both crisper examples of attention head behavior observed in previous work, as well as novel behaviors unraveled thanks to the sparsity and adaptivity of our proposed model.
Background ::: The Transformer
In NMT, the Transformer BIBREF0 is a sequence-to-sequence (seq2seq) model which maps an input sequence to an output sequence through hierarchical multi-head attention mechanisms, yielding a dynamic, context-dependent strategy for propagating information within and across sentences. It contrasts with previous seq2seq models, which usually rely either on costly gated recurrent operations BIBREF15, BIBREF16 or static convolutions BIBREF17.
Given $n$ query contexts and $m$ sequence items under consideration, attention mechanisms compute, for each query, a weighted representation of the items. The particular attention mechanism used in BIBREF0 is called scaled dot-product attention, and it is computed in the following way:

$$\operatorname{Att}(\mathbf {Q}, \mathbf {K}, \mathbf {V}) = \mathbf {\pi }\!\left(\frac{\mathbf {Q}\mathbf {K}^\top }{\sqrt{d}}\right)\mathbf {V},$$
where $\mathbf {Q} \in \mathbb {R}^{n \times d}$ contains representations of the queries, $\mathbf {K}, \mathbf {V} \in \mathbb {R}^{m \times d}$ are the keys and values of the items attended over, and $d$ is the dimensionality of these representations. The $\mathbf {\pi }$ mapping normalizes row-wise using softmax, $\mathbf {\pi }(\mathbf {Z})_{ij} = \operatornamewithlimits{\mathsf {softmax}}(\mathbf {z}_i)_j$, where

$$\operatornamewithlimits{\mathsf {softmax}}(\mathbf {z})_j = \frac{\exp (z_j)}{\sum _{j^{\prime }} \exp (z_{j^{\prime }})}.$$
In words, the keys are used to compute a relevance score between each item and query. Then, normalized attention weights are computed using softmax, and these are used to weight the values of each item at each query context.
However, for complex tasks, different parts of a sequence may be relevant in different ways, motivating multi-head attention in Transformers. This is simply the application of the scaled dot-product attention above in parallel $H$ times, each with a different, learned linear transformation that allows specialization:

$$\operatorname{Head}_h(\mathbf {Q}, \mathbf {K}, \mathbf {V}) = \operatorname{Att}(\mathbf {Q}\mathbf {W}_h^Q, \mathbf {K}\mathbf {W}_h^K, \mathbf {V}\mathbf {W}_h^V).$$
In the Transformer, there are three separate multi-head attention mechanisms for distinct purposes:
Encoder self-attention: builds rich, layered representations of each input word, by attending on the entire input sentence.
Context attention: selects a representative weighted average of the encodings of the input words, at each time step of the decoder.
Decoder self-attention: attends over the partial output sentence fragment produced so far.
Together, these mechanisms enable the contextualized flow of information between the input sentence and the sequential decoder.
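A minimal numpy sketch of the scaled dot-product attention above, with the row-wise normalizer kept as a pluggable argument (anticipating the sparse replacements discussed next), is given below; it is illustrative and single-head only.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax (dense: every weight is strictly positive)."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V, normalizer=softmax):
    """Scaled dot-product attention; `normalizer` maps score rows to probability distributions."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # (n_queries, m_items) relevance scores
    weights = normalizer(scores)         # one distribution over the items per query
    return weights @ V                   # weighted average of the values

# Q: (n, d), K and V: (m, d); swapping `normalizer` for an entmax mapping yields sparse weights.
```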
Background ::: Sparse Attention
The softmax mapping (defined above) is elementwise proportional to $\exp $, therefore it can never assign a weight of exactly zero. Thus, unnecessary items are still taken into consideration to some extent. Since its output sums to one, this invariably means less weight is assigned to the relevant items, potentially harming performance and interpretability BIBREF18. This has motivated a line of research on learning networks with sparse mappings BIBREF19, BIBREF20, BIBREF21, BIBREF22. We focus on a recently-introduced flexible family of transformations, $\alpha $-entmax BIBREF23, BIBREF14, defined as:

$$\mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}(\mathbf {z}) := \operatornamewithlimits{arg\,max}_{\mathbf {p}\in \triangle ^d} \; \mathbf {p}^\top \mathbf {z} + \mathsf {H}^{\textsc {T}}_\alpha (\mathbf {p}),$$
where $\triangle ^d := \lbrace \mathbf {p}\in \mathbb {R}^d : \mathbf {p} \ge \mathbf {0}, \sum _{i} p_i = 1\rbrace $ is the probability simplex, and, for $\alpha \ge 1$, $\mathsf {H}^{\textsc {T}}_\alpha $ is the Tsallis continuous family of entropies BIBREF24:

$$\mathsf {H}^{\textsc {T}}_\alpha (\mathbf {p}) := \begin{cases} \frac{1}{\alpha (\alpha -1)}\sum _j \big (p_j - p_j^\alpha \big ), & \alpha \ne 1,\\ -\sum _j p_j \log p_j, & \alpha = 1. \end{cases}$$
This family contains the well-known Shannon and Gini entropies, corresponding to the cases $\alpha =1$ and $\alpha =2$, respectively.
The entmax definition above involves a convex optimization subproblem. Using the definition of $\mathsf {H}^{\textsc {T}}_\alpha $, the optimality conditions may be used to derive the following form for the solution (Appendix SECREF83):

$$\mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}(\mathbf {z}) = \big [(\alpha - 1)\mathbf {z} - \tau \mathbf {1}\big ]_+^{1/(\alpha -1)},$$
where $[\cdot ]_+$ is the positive part (ReLU) function, $\mathbf {1}$ denotes the vector of all ones, and $\tau $ – which acts like a threshold – is the Lagrange multiplier corresponding to the $\sum _i p_i=1$ constraint.
Background ::: Sparse Attention ::: Properties of $\alpha $-entmax.
The appeal of $\alpha $-entmax for attention rests on the following properties. For $\alpha =1$ (i.e., when $\mathsf {H}^{\textsc {T}}_\alpha $ becomes the Shannon entropy), it exactly recovers the softmax mapping (We provide a short derivation in Appendix SECREF89.). For all $\alpha >1$ it permits sparse solutions, in stark contrast to softmax. In particular, for $\alpha =2$, it recovers the sparsemax mapping BIBREF19, which is piecewise linear. In-between, as $\alpha $ increases, the mapping continuously gets sparser as its curvature changes.
To compute the value of $\alpha $-entmax, one must find the threshold $\tau $ such that the r.h.s. in Equation DISPLAY_FORM16 sums to one. BIBREF23 propose a general bisection algorithm. BIBREF14 introduce a faster, exact algorithm for $\alpha =1.5$, and enable using $\mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}$ with fixed $\alpha $ within a neural network by showing that the $\alpha $-entmax Jacobian w.r.t. $\mathbf {z}$ for $\mathbf {p}^\star = \mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}(\mathbf {z})$ is
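For illustration, a minimal NumPy sketch of the bisection approach follows; it searches for the threshold $\tau $ such that the entries $[(\alpha -1)z_i - \tau ]_+^{1/(\alpha -1)}$ sum to one, and is a simplified stand-in for the exact, faster algorithm used in practice.

import numpy as np

def entmax_bisect(z, alpha=1.5, n_iter=50):
    # Bisection on the threshold tau of Equation DISPLAY_FORM16 (alpha > 1).
    z = (alpha - 1.0) * np.asarray(z, dtype=float)
    # The map tau -> sum_i [z_i - tau]_+^(1/(alpha-1)) is decreasing, with
    # value >= 1 at z.max() - 1 and value 0 at z.max(), so bisect in between.
    lo, hi = z.max() - 1.0, z.max()
    for _ in range(n_iter):
        tau = (lo + hi) / 2.0
        p = np.clip(z - tau, 0.0, None) ** (1.0 / (alpha - 1.0))
        if p.sum() < 1.0:
            hi = tau
        else:
            lo = tau
    p = np.clip(z - (lo + hi) / 2.0, 0.0, None) ** (1.0 / (alpha - 1.0))
    return p / p.sum()  # renormalize the small residual bisection error

print(entmax_bisect(np.array([1.0, 0.5, -1.0]), alpha=1.5))  # sparse output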
Our work furthers the study of $\alpha $-entmax by providing a derivation of the Jacobian w.r.t. the hyper-parameter $\alpha $ (Section SECREF3), thereby allowing the shape and sparsity of the mapping to be learned automatically. This is particularly appealing in the context of multi-head attention mechanisms, where we shall show in Section SECREF35 that different heads tend to learn different sparsity behaviors.
Adaptively Sparse Transformers with $\alpha $-entmax
We now propose a novel Transformer architecture wherein we simply replace softmax with $\alpha $-entmax in the attention heads. Concretely, we replace the row normalization $\mathbf {\pi }$ in Equation DISPLAY_FORM7 by
This change leads to sparse attention weights, as long as $\alpha >1$; in particular, $\alpha =1.5$ is a sensible starting point BIBREF14.
Adaptively Sparse Transformers with $\alpha $-entmax ::: Different $\alpha $ per head.
Unlike LSTM-based seq2seq models, where $\alpha $ can be more easily tuned by grid search, in a Transformer, there are many attention heads in multiple layers. Crucial to the power of such models, the different heads capture different linguistic phenomena, some of them isolating important words, others spreading out attention across phrases BIBREF0. This motivates using different, adaptive $\alpha $ values for each attention head, such that some heads may learn to be sparser, and others may become closer to softmax. We propose doing so by treating the $\alpha $ values as neural network parameters, optimized via stochastic gradients along with the other weights.
Adaptively Sparse Transformers with $\alpha $-entmax ::: Derivatives w.r.t. $\alpha $.
In order to optimize $\alpha $ automatically via gradient methods, we must compute the Jacobian of the entmax output w.r.t. $\alpha $. Since entmax is defined through an optimization problem, this is non-trivial and cannot be simply handled through automatic differentiation; it falls within the domain of argmin differentiation, an active research topic in optimization BIBREF25, BIBREF26.
One of our key contributions is the derivation of a closed-form expression for this Jacobian. The next proposition provides such an expression, enabling entmax layers with adaptive $\alpha $. To the best of our knowledge, ours is the first neural network module that can automatically, continuously vary in shape away from softmax and toward sparse mappings like sparsemax.
Proposition 1 Let $\mathbf {p}^\star = \mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}(\mathbf {z})$ be the solution of Equation DISPLAY_FORM14. Denote the distribution $\tilde{p}_i = \frac{(p_i^\star )^{2 - \alpha }}{\sum _j(p_j^\star )^{2-\alpha }}$ and let $h_i = -p^\star _i \log p^\star _i$. The $i$th component of the Jacobian $\mathbf {g} = \frac{\partial \mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}(\mathbf {z})}{\partial \alpha }$ is
The proof uses implicit function differentiation and is given in Appendix SECREF10.
Proposition UNKREF22 provides the remaining missing piece needed for training adaptively sparse Transformers. In the following section, we evaluate this strategy on neural machine translation, and analyze the behavior of the learned attention heads.
Experiments
We apply our adaptively sparse Transformers on four machine translation tasks. For comparison, a natural baseline is the standard Transformer architecture using the softmax transform in its multi-head attention mechanisms. We consider two other model variants in our experiments that make use of different normalizing transformations:
1.5-entmax: a Transformer with sparse entmax attention with fixed $\alpha =1.5$ for all heads. This is a novel model, since 1.5-entmax had only been proposed for RNN-based NMT models BIBREF14, but never in Transformers, where attention modules are not just one single component of the seq2seq model but rather an integral part of all of the model components.
$\alpha $-entmax: an adaptive Transformer with sparse entmax attention with a different, learned $\alpha _{i,j}^t$ for each head.
The adaptive model has an additional scalar parameter per attention head per layer for each of the three attention mechanisms (encoder self-attention, context attention, and decoder self-attention), i.e.,
and we set $\alpha _{i,j}^t = 1 + \operatornamewithlimits{\mathsf {sigmoid}}(a_{i,j}^t) \in ]1, 2[$. All or some of the $\alpha $ values can be tied if desired, but we keep them independent for analysis purposes.
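A minimal sketch of this parameterization is given below. The zero initialization (giving $\alpha =1.5$) is for illustration only, since in our experiments the $\alpha $ parameters are randomly initialized, and the module only exposes the per-head $\alpha $ values; it assumes an $\alpha $-entmax implementation with the Jacobian of Proposition 1 is available downstream.

import torch
import torch.nn as nn

class AdaptiveAlpha(nn.Module):
    # One unconstrained scalar a per attention head; alpha = 1 + sigmoid(a)
    # lies in the open interval (1, 2) and is trained by stochastic gradients
    # together with the other network parameters.
    def __init__(self, n_heads):
        super().__init__()
        self.a = nn.Parameter(torch.zeros(n_heads))  # illustrative init: alpha = 1.5

    def forward(self):
        return 1.0 + torch.sigmoid(self.a)

alphas = AdaptiveAlpha(n_heads=8)()
print(alphas)  # tensor of 8 per-head alpha values in (1, 2)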
Experiments ::: Datasets.
Our models were trained on 4 machine translation datasets of different training sizes:
IWSLT 2017 German $\rightarrow $ English BIBREF27: 200K sentence pairs.
KFTT Japanese $\rightarrow $ English BIBREF28: 300K sentence pairs.
WMT 2016 Romanian $\rightarrow $ English BIBREF29: 600K sentence pairs.
WMT 2014 English $\rightarrow $ German BIBREF30: 4.5M sentence pairs.
All of these datasets were preprocessed with byte-pair encoding BIBREF31, using joint segmentations of 32k merge operations.
Experiments ::: Training.
We follow the dimensions of the Transformer-Base model of BIBREF0: the number of layers is $L=6$ and the number of heads is $H=8$ in the encoder self-attention, the context attention, and the decoder self-attention. We use a mini-batch size of 8192 tokens and warm up the learning rate linearly until 20k steps, after which it decays according to an inverse square root schedule. All models were trained until convergence of validation accuracy, and evaluation was done every 10k steps for ro$\rightarrow $en and en$\rightarrow $de and every 5k steps for de$\rightarrow $en and ja$\rightarrow $en. The end-to-end computational overhead of our methods, when compared to standard softmax, is relatively small; in training tokens per second, the models using $\alpha $-entmax and $1.5$-entmax are, respectively, $75\%$ and $90\%$ the speed of the softmax model.
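As a sketch, the learning rate schedule described above can be written as follows; the base learning rate value is illustrative rather than taken from our configuration.

def learning_rate(step, base_lr=7e-4, warmup_steps=20_000):
    # Linear warm-up for the first 20k steps, then inverse-square-root decay.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (warmup_steps / step) ** 0.5

for s in (1_000, 20_000, 80_000):
    print(s, learning_rate(s))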
Experiments ::: Results.
We report test set tokenized BLEU BIBREF32 results in Table TABREF27. We can see that replacing softmax by entmax does not hurt performance in any of the datasets; indeed, sparse attention Transformers tend to have slightly higher BLEU, but their sparsity leads to a better potential for analysis. In the next section, we make use of this potential by exploring the learned internal mechanics of the self-attention heads.
Analysis
We conduct an analysis for the higher-resource dataset WMT 2014 English $\rightarrow $ German of the attention in the sparse adaptive Transformer model ($\alpha $-entmax) at multiple levels: we analyze high-level statistics as well as individual head behavior. Moreover, we make a qualitative analysis of the interpretability capabilities of our models.
Analysis ::: High-Level Statistics ::: What kind of $\alpha $ values are learned?
Figure FIGREF37 shows the learning trajectories of the $\alpha $ parameters of a selected subset of heads. We generally observe a tendency for the randomly-initialized $\alpha $ parameters to decrease initially, suggesting that softmax-like behavior may be preferable while the model is still very uncertain. After around one thousand steps, some heads change direction and become sparser, perhaps as they become more confident and specialized. This shows that the initialization of $\alpha $ does not predetermine its sparsity level or the role the head will have throughout. In particular, head 8 in the encoder self-attention layer 2 first drops to around $\alpha =1.3$ before becoming one of the sparsest heads, with $\alpha \approx 2$.
The overall distribution of $\alpha $ values at convergence can be seen in Figure FIGREF38. We can observe that the encoder self-attention blocks learn to concentrate the $\alpha $ values in two modes: a very sparse one around $\alpha \rightarrow 2$, and a dense one between softmax and 1.5-entmax. However, the decoder self-attention and context attention only learn to distribute these parameters in a single mode. We show next that this is reflected in the average density of attention weight vectors as well.
Analysis ::: High-Level Statistics ::: Attention weight density when translating.
For any $\alpha >1$, it would still be possible for the weight matrices in Equation DISPLAY_FORM9 to learn re-scalings so as to make attention sparser or denser. To visualize the impact of adaptive $\alpha $ values, we compare the empirical attention weight density (the average number of tokens receiving non-zero attention) within each module, against sparse Transformers with fixed $\alpha =1.5$.
Figure FIGREF40 shows that, with fixed $\alpha =1.5$, heads tend to be sparse and similarly-distributed in all three attention modules. With learned $\alpha $, there are two notable changes: (i) a prominent mode corresponding to fully dense probabilities, showing that our models learn to combine sparse and dense attention, and (ii) a distinction between the encoder self-attention – whose background distribution tends toward extreme sparsity – and the other two modules, which exhibit more uniform background distributions. This suggests that perhaps entirely sparse Transformers are suboptimal.
The fact that the decoder seems to prefer denser attention distributions might be attributed to it being auto-regressive, only having access to past tokens and not the full sentence. We speculate that it might lose too much information if it assigned weights of zero to too many tokens in the self-attention, since there are fewer tokens to attend to in the first place.
Breaking this down into separate layers, Figure FIGREF41 shows the average (sorted) density of each head for each layer. We observe that $\alpha $-entmax is able to learn different sparsity patterns at each layer, leading to more variance in individual head behavior, to clearly-identified dense and sparse heads, and overall to different tendencies compared to the fixed case of $\alpha =1.5$.
Analysis ::: High-Level Statistics ::: Head diversity.
To measure the overall disagreement between attention heads, as a measure of head diversity, we use the following generalization of the Jensen-Shannon divergence:
where $\mathbf {p}_j$ is the vector of attention weights assigned by head $j$ to each word in the sequence, and $\mathsf {H}^\textsc {S}$ is the Shannon entropy, base-adjusted based on the dimension of $\mathbf {p}$ such that $JS \le 1$. We average this measure over the entire validation set. The higher this metric is, the more the heads are taking different roles in the model.
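A minimal NumPy sketch of this disagreement measure, assuming the entropies are computed in base $d$ (the dimension of $\mathbf {p}$) so that the score lies in $[0, 1]$, is given below.

import numpy as np

def shannon_entropy(p, base):
    p = p[p > 0]
    return -(p * np.log(p)).sum() / np.log(base)

def head_disagreement(P):
    # P: (H, d) array with one attention distribution per head over d tokens.
    H, d = P.shape
    mean_p = P.mean(axis=0)
    return shannon_entropy(mean_p, d) - np.mean([shannon_entropy(p, d) for p in P])

P = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.1, 0.9]])
print(head_disagreement(P))  # larger when heads attend to different tokens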
Figure FIGREF44 shows that both sparse Transformer variants show more diversity than the traditional softmax one. Interestingly, diversity seems to peak in the middle layers of the encoder self-attention and context attention, while this is not the case for the decoder self-attention.
The statistics shown in this section can be found for the other language pairs in Appendix SECREF8.
Analysis ::: Identifying Head Specializations
Previous work pointed out some specific roles played by different heads in the softmax Transformer model BIBREF33, BIBREF5, BIBREF9. Identifying the specialization of a head can be done by observing the type of tokens or sequences that the head often assigns most of its attention weight; this is facilitated by sparsity.
Analysis ::: Identifying Head Specializations ::: Positional heads.
One particular type of head, as noted by BIBREF9, is the positional head. These heads tend to focus their attention on either the previous or next token in the sequence, thus obtaining representations of the neighborhood of the current time step. In Figure FIGREF47, we show attention plots for such heads, found for each of the studied models. The sparsity of our models allows these heads to be more confident in their representations, by assigning the whole probability distribution to a single token in the sequence. Concretely, we may measure a positional head's confidence as the average attention weight assigned to the previous token. The softmax model has three heads for position $-1$, with median confidence $93.5\%$. The $1.5$-entmax model also has three heads for this position, with median confidence $94.4\%$. The adaptive model has four heads, with median confidence $95.9\%$; the lowest-confidence head is dense, with $\alpha =1.18$, while the highest-confidence head is sparse ($\alpha =1.91$).
For position $+1$, the models each dedicate one head, with confidence around $95\%$, slightly higher for entmax. The adaptive model sets $\alpha =1.96$ for this head.
Analysis ::: Identifying Head Specializations ::: BPE-merging head.
Due to the sparsity of our models, we are able to identify other head specializations, easily identifying which heads should be further analysed. In Figure FIGREF51 we show one such head where the $\alpha $ value is particularly high (in the encoder, layer 1, head 4 depicted in Figure FIGREF37). We found that this head most often looks at the current time step with high confidence, making it a positional head with offset 0. However, this head often spreads weight sparsely over 2-3 neighboring tokens, when the tokens are part of the same BPE cluster or hyphenated words. As this head is in the first layer, it provides a useful service to the higher layers by combining information evenly within some BPE clusters.
To quantify the BPE-merging capability of these heads, we computed, for each BPE cluster or cluster of hyphenated words, a score between 0 and 1 corresponding to the maximum attention mass assigned by any token to the rest of the tokens inside the cluster. No attention head in the softmax model obtains a score over $80\%$, while for $1.5$-entmax and $\alpha $-entmax there are two such heads each ($83.3\%$ and $85.6\%$ for $1.5$-entmax, and $88.5\%$ and $89.8\%$ for $\alpha $-entmax).
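The following sketch shows one way this score could be computed from a head's attention matrix; the cluster is given as a list of token positions, and the attention-matrix layout (queries as rows) is an assumption of this illustration.

import numpy as np

def bpe_merge_score(attn, cluster):
    # attn: (seq_len, seq_len) attention weights of a single head.
    # cluster: token positions forming one BPE or hyphenated-word cluster.
    best = 0.0
    for t in cluster:
        others = [j for j in cluster if j != t]
        best = max(best, float(attn[t, others].sum()))
    return best  # maximum attention mass any token assigns within its cluster

attn = np.array([[0.2, 0.8, 0.0],
                 [0.5, 0.5, 0.0],
                 [0.0, 0.0, 1.0]])
print(bpe_merge_score(attn, cluster=[0, 1]))  # 0.8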
Analysis ::: Identifying Head Specializations ::: Interrogation head.
On the other hand, in Figure FIGREF52 we show a head for which our adaptively sparse model chose an $\alpha $ close to 1, making it closer to softmax (also shown in encoder, layer 1, head 3 depicted in Figure FIGREF37). We observe that this head assigns a high probability to question marks at the end of the sentence in time steps where the current token is interrogative, thus making it an interrogation-detecting head. We also observe this type of heads in the other models, which we also depict in Figure FIGREF52. The average attention weight placed on the question mark when the current token is an interrogative word is $98.5\%$ for softmax, $97.0\%$ for $1.5$-entmax, and $99.5\%$ for $\alpha $-entmax.
Furthermore, we can examine sentences where some tendentially sparse heads become less so, thus identifying sources of ambiguity where the head is less confident in its prediction. An example is shown in Figure FIGREF55 where sparsity in the same head differs for sentences of similar length.
Related Work ::: Sparse attention.
Prior work has developed sparse attention mechanisms, including applications to NMT BIBREF19, BIBREF12, BIBREF20, BIBREF22, BIBREF34. BIBREF14 introduced the entmax function this work builds upon. In their work, there is a single attention mechanism which is controlled by a fixed $\alpha $. In contrast, this is the first work to allow such attention mappings to dynamically adapt their curvature and sparsity, by automatically adjusting the continuous $\alpha $ parameter. We also provide the first results using sparse attention in a Transformer model.
Related Work ::: Fixed sparsity patterns.
Recent research improves the scalability of Transformer-like networks through static, fixed sparsity patterns BIBREF10, BIBREF35. Our adaptively-sparse Transformer can dynamically select a sparsity pattern that finds relevant words regardless of their position (e.g., Figure FIGREF52). Moreover, the two strategies could be combined. In a concurrent line of research, BIBREF11 propose an adaptive attention span for Transformer language models. While their work has each head learn a different contiguous span of context tokens to attend to, our work finds different sparsity patterns in the same span. Interestingly, some of their findings mirror ours – we found that attention heads in the last layers tend to be denser on average when compared to the ones in the first layers, while their work has found that lower layers tend to have a shorter attention span compared to higher layers.
Related Work ::: Transformer interpretability.
The original Transformer paper BIBREF0 shows attention visualizations, from which some speculation can be made of the roles the several attention heads have. BIBREF7 study the syntactic abilities of the Transformer self-attention, while BIBREF6 extract dependency relations from the attention weights. BIBREF8 find that the self-attentions in BERT BIBREF3 follow a sequence of processes that resembles a classical NLP pipeline. Regarding redundancy of heads, BIBREF9 develop a method that is able to prune heads of the multi-head attention module and make an empirical study of the role that each head has in self-attention (positional, syntactic and rare words). BIBREF36 also aim to reduce head redundancy by adding a regularization term to the loss that maximizes head disagreement and obtain improved results. While not considering Transformer attentions, BIBREF18 show that traditional attention mechanisms do not necessarily improve interpretability since softmax attention is vulnerable to an adversarial attack leading to wildly different model predictions for the same attention weights. Sparse attention may mitigate these issues; however, our work focuses mostly on a more mechanical aspect of interpretation by analyzing head behavior, rather than on explanations for predictions.
Conclusion and Future Work
We contribute a novel strategy for adaptively sparse attention, and, in particular, for adaptively sparse Transformers. We present the first empirical analysis of Transformers with sparse attention mappings (i.e., entmax), showing potential in both translation accuracy as well as in model interpretability.
In particular, we analyzed how the attention heads in the proposed adaptively sparse Transformer can specialize more and with higher confidence. Our adaptivity strategy relies only on gradient-based optimization, side-stepping costly per-head hyper-parameter searches. Further speed-ups are possible by leveraging more parallelism in the bisection algorithm for computing $\alpha $-entmax.
Finally, some of the automatically-learned behaviors of our adaptively sparse Transformers – for instance, the near-deterministic positional heads or the subword joining head – may provide new ideas for designing static variations of the Transformer.
Acknowledgments
This work was supported by the European Research Council (ERC StG DeepSPIN 758969), and by the Fundação para a Ciência e Tecnologia through contracts UID/EEA/50008/2019 and CMUPERI/TIC/0046/2014 (GoLocal). We are grateful to Ben Peters for the $\alpha $-entmax code and Erick Fonseca, Marcos Treviso, Pedro Martins, and Tsvetomila Mihaylova for insightful group discussion. We thank Mathieu Blondel for the idea to learn $\alpha $. We would also like to thank the anonymous reviewers for their helpful feedback.
Supplementary Material
Background ::: Regularized Fenchel-Young prediction functions
Definition 1 (BIBREF23)
Let $\Omega \colon \triangle ^d \rightarrow {\mathbb {R}}\cup \lbrace \infty \rbrace $ be a strictly convex regularization function. We define the prediction function $\mathbf {\pi }_{\Omega }$ as
Background ::: Characterizing the $\alpha $-entmax mapping
Lemma 1 (BIBREF14) For any $\mathbf {z}$, there exists a unique $\tau ^\star $ such that
Proof: From the definition of $\mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}$,
we may easily identify it with a regularized prediction function (Def. UNKREF81):
We first note that for all $\mathbf {p}\in \triangle ^d$,
From the constant invariance and scaling properties of $\mathbf {\pi }_{\Omega }$ BIBREF23,
Using BIBREF23, noting that $g^{\prime }(t) = t^{\alpha - 1}$ and $(g^{\prime })^{-1}(u) = u^{\frac{1}{\alpha -1}}$, yields
Since $\mathsf {H}^{\textsc {T}}_\alpha $ is strictly convex on the simplex, $\mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}$ has a unique solution $\mathbf {p}^\star $. Equation DISPLAY_FORM88 implicitly defines a one-to-one mapping between $\mathbf {p}^\star $ and $\tau ^\star $ as long as $\mathbf {p}^\star \in \triangle $, therefore $\tau ^\star $ is also unique.
Background ::: Connections to softmax and sparsemax
The Euclidean projection onto the simplex, sometimes referred to, in the context of neural attention, as sparsemax BIBREF19, is defined as
The solution can be characterized through the unique threshold $\tau $ such that $\sum _i \operatornamewithlimits{\mathsf {sparsemax}}(\mathbf {z})_i = 1$ and BIBREF38
Thus, each coordinate of the sparsemax solution is a piecewise-linear function. Visibly, this expression is recovered when setting $\alpha =2$ in the $\alpha $-entmax expression (Equation DISPLAY_FORM85); for other values of $\alpha $, the exponent induces curvature.
On the other hand, the well-known softmax is usually defined through the expression
which can be shown to be the unique solution of the optimization problem
where $\mathsf {H}^\textsc {S}(\mathbf {p}) = -\sum _i p_i \log p_i$ is the Shannon entropy. Indeed, setting the gradient to 0 yields the condition $\log p_i = z_i - \nu _i - \tau - 1$, where $\tau $ and $\mathbf {\nu } \ge 0$ are Lagrange multipliers for the simplex constraints $\sum _i p_i = 1$ and $p_i \ge 0$, respectively. Since the l.h.s. is only finite for $p_i>0$, we must have $\nu _i=0$ for all $i$, by complementary slackness. Thus, the solution must have the form $p_i = \frac{\exp (z_i)}{Z}$, yielding Equation DISPLAY_FORM92.
Jacobian of $\alpha $-entmax w.r.t. the shape parameter $\alpha $: Proof of Proposition UID22
Recall that the entmax transformation is defined as:
where $\alpha \ge 1$ and $\mathsf {H}^{\textsc {T}}_{\alpha }$ is the Tsallis entropy,
and $\mathsf {H}^\textsc {S}(\mathbf {p}):= -\sum _j p_j \log p_j$ is the Shannon entropy.
In this section, we derive the Jacobian of $\operatornamewithlimits{\mathsf {entmax }}$ with respect to the scalar parameter $\alpha $.
Jacobian of $\alpha $-entmax w.r.t. the shape parameter $\alpha $: Proof of Proposition UID22 ::: General case of $\alpha >1$
From the KKT conditions associated with the optimization problem in Eq. DISPLAY_FORM85, we have that the solution $\mathbf {p}^{\star }$ has the following form, coordinate-wise:
where $\tau ^{\star }$ is a scalar Lagrange multiplier that ensures that $\mathbf {p}^{\star }$ normalizes to 1, i.e., it is defined implicitly by the condition:
For general values of $\alpha $, Eq. DISPLAY_FORM98 lacks a closed form solution. This makes the computation of the Jacobian
non-trivial. Fortunately, we can use the technique of implicit differentiation to obtain this Jacobian.
The Jacobian exists almost everywhere, and the expressions we derive yield a generalized Jacobian BIBREF37 at any non-differentiable points that may occur for certain ($\alpha $, $\mathbf {z}$) pairs. We begin by noting that $\frac{\partial p_i^{\star }}{\partial \alpha } = 0$ if $p_i^{\star } = 0$, because increasing $\alpha $ keeps sparse coordinates sparse. Therefore we need to worry only about coordinates that are in the support of $\mathbf {p}^\star $. We will assume hereafter that the $i$th coordinate of $\mathbf {p}^\star $ is non-zero. We have:
We can see that this Jacobian depends on $\frac{\partial \tau ^{\star }}{\partial \alpha }$, which we now compute using implicit differentiation.
Let $\mathcal {S} = \lbrace i: p^\star _i > 0 \rbrace $. By differentiating both sides of Eq. DISPLAY_FORM98, re-using some of the steps in Eq. DISPLAY_FORM101, and recalling Eq. DISPLAY_FORM97, we get
from which we obtain:
Finally, plugging Eq. DISPLAY_FORM103 into Eq. DISPLAY_FORM101, we get:
where we denote by
The distribution $\tilde{\mathbf {p}}(\alpha )$ can be interpreted as a “skewed” distribution obtained from $\mathbf {p}^{\star }$, which appears in the Jacobian of $\mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}(\mathbf {z})$ w.r.t. $\mathbf {z}$ as well BIBREF14.
Jacobian of $\alpha $-entmax w.r.t. the shape parameter $\alpha $: Proof of Proposition UID22 ::: Solving the indetermination for $\alpha =1$
We can write Eq. DISPLAY_FORM104 as
When $\alpha \rightarrow 1^+$, we have $\tilde{\mathbf {p}}(\alpha ) \rightarrow \mathbf {p}^{\star }$, which leads to a $\frac{0}{0}$ indetermination.
To solve this indetermination, we will need to apply L'Hôpital's rule twice. Let us first compute the derivative of $\tilde{p}_i(\alpha )$ with respect to $\alpha $. We have
therefore
Differentiating the numerator and denominator in Eq. DISPLAY_FORM107, we get:
with
and
When $\alpha \rightarrow 1^+$, $B$ becomes again a $\frac{0}{0}$ indetermination, which we can solve by applying again L'Hôpital's rule. Differentiating the numerator and denominator in Eq. DISPLAY_FORM112:
Finally, summing Eq. DISPLAY_FORM111 and Eq. DISPLAY_FORM113, we get
Jacobian of $\alpha $-entmax w.r.t. the shape parameter $\alpha $: Proof of Proposition UID22 ::: Summary
To sum up, we have the following expression for the Jacobian of $\mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}$ with respect to $\alpha $:
Introduction
Recently, deep neural networks have been widely employed in various recognition tasks. Increasing the depth of a neural network is an effective way to improve performance, and convolutional neural networks (CNNs) have benefited from it in visual recognition tasks BIBREF0. Deeper long short-term memory (LSTM) recurrent neural networks (RNNs) are also applied to the large vocabulary continuous speech recognition (LVCSR) task, because LSTM networks have shown better performance than fully-connected feed-forward deep neural networks BIBREF1, BIBREF2, BIBREF3, BIBREF4.
Training a neural network becomes more challenging as it goes deeper. A conceptual tool called the linear classifier probe has been introduced to better understand the dynamics inside a neural network BIBREF5. The discriminating features of the linear classifier are the hidden units of an intermediate layer. For deep neural networks, it is observed that a deeper layer's accuracy is lower than that of shallower layers. The tool therefore visualizes the difficulty of training deep neural models.
Layer-wise pre-training is a successful method for training very deep neural networks BIBREF6. Convergence becomes harder as the number of layers increases, even when the model is initialized with Xavier initialization or its variants BIBREF7, BIBREF8. However, a deeper network initialized with a trained shallower network can converge well.
As LVCSR training datasets grow larger, training with only one GPU inevitably becomes too time-consuming. Therefore, parallel training with multiple GPUs is more suitable for an LVCSR system. Mini-batch stochastic gradient descent (SGD) is the most popular method in neural network training. Asynchronous SGD is a successful effort at parallel training based on it BIBREF9, BIBREF10; it can speed up training many times over without decreasing accuracy. Synchronous SGD is another effective effort, in which the parameter server waits for every worker to finish its computation and send its local model, and then sends the updated model back to all workers BIBREF11. Synchronous SGD converges well in parallel training with data parallelism, and is also easy to implement.
In order to further improve the performance of deep neural networks with parallel training, several methods have been proposed. The model averaging method achieves linear speedup, as the final model is averaged from the parameters of the local models on different workers BIBREF12, BIBREF13, but accuracy decreases compared with single-GPU training. The blockwise model-updating filter (BMUF) provides another almost-linear speedup approach with multiple GPUs on the basis of model averaging; it can achieve improved or non-degraded recognition performance compared with mini-batch SGD on a single GPU BIBREF14.
Moving average (MA) approaches have also been proposed for parallel training. It has been demonstrated that the moving average of the parameters obtained by SGD performs as well as the parameters that minimize the empirical cost, and the moving-average parameters can be used as their estimator if the training data is large enough BIBREF15. One-pass learning has then been proposed, which combines a learning rate schedule with averaged SGD using a moving average BIBREF16. Exponential moving average (EMA) has been proposed as a non-interference method BIBREF17: the EMA model is not broadcast to workers to update their local models, and it serves as the final model of the entire training process. The EMA method is utilized with model averaging and BMUF to further decrease the character error rate (CER), and it is also easy to implement in existing parallel training systems.
Frame stacking can also speed up training BIBREF18. A super frame is stacked from several regular frames and contains their combined information. Thus, the network can see multiple frames at a time, as the super frame is the new input. Frame stacking can also lead to faster decoding.
A streaming voice search service needs to display intermediate recognition results while users are still speaking. As a result, the system must fulfill a high real-time requirement, and we prefer a unidirectional LSTM network to a bidirectional one. A high real-time requirement means a low real-time factor (RTF), but the RTF of a deep LSTM model is inevitably higher. The dilemma between recognition accuracy and the real-time requirement is an obstacle to deploying deep LSTM networks. A deep model outperforms because it contains more knowledge, but it is also cumbersome. Its knowledge can, however, be distilled into a shallow model BIBREF19, which provides an effective way to employ the deep model in a real-time system.
In this paper, we explore an entire deep LSTM RNN training framework and employ it in a real-time application. Deep learning systems benefit greatly from a large quantity of labeled training data. Our first and basic speech recognition system is trained on 17000 hours of the Shenma voice search dataset, a generic dataset sampled from diverse aspects of search queries. Speech recognition requirements also arise in specific scenarios, such as map and navigation tasks. Labeled data is expensive, and training a new model on a new large dataset from scratch costs a lot of time. Thus, it is natural to transfer the knowledge of the basic model to the new scenario's model. Transfer learning requires less data and less training time than full training. In this paper, we also introduce a novel transfer learning strategy with segmental minimum Bayes-risk (sMBR). As a result, transfer training with only 1000 hours of data matches the performance of full training with 7300 hours of data.
Our deep LSTM training framework for LVCSR is presented in Section 2. Section 3 describes how the very deep models are applied in real-world applications, and how the model is transferred to another task. The framework is analyzed and discussed in Section 4, followed by the conclusion in Section 5.
Layer-wise Training with Soft Target and Hard Target
Gradient-based optimization of a deep LSTM network with random initialization easily gets stuck in a poor solution. Xavier initialization can partially solve this problem BIBREF7, so it is the regular initialization method throughout our training procedure. However, it does not work well when used to initialize a very deep model directly, because of vanishing or exploding gradients. Instead, layer-wise pre-training is an effective way to train the weights of a very deep architecture BIBREF6, BIBREF20. In the layer-wise pre-training procedure, a one-layer LSTM model is first trained with normalized initialization. Then the first layer of a two-layer LSTM model is initialized by the trained one-layer model, and its second layer is regularly initialized. In this way, a deep architecture is trained layer by layer, and it can converge well.
In conventional layer-wise pre-training, only the parameters of the shallower network are transferred to the deeper one, and the learning targets are still the alignments generated by the HMM-GMM system. The targets are vectors in which only one state's probability is one and the others' are zero. They are known as hard targets, and they carry limited knowledge as only one state is active. In contrast, the knowledge of the shallower network should also be transferred to the deeper one. This knowledge is typically obtained from the softmax layer of the existing model, so each state has a probability rather than only zero or one; this is called the soft target. As a result, the deeper network, the student network, learns both parameters and knowledge from the shallower one, the teacher network. When training the student network from the teacher network, the final alignment in our layer-wise training phase is the combination of the hard target and the soft target. The final alignment provides varied knowledge, transferred from the teacher network and extracted from the true labels. If only the soft target is learned, the student network performs no better than the teacher network, but it can outperform the teacher network as it also learns the true labels.
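A minimal sketch of the combined loss used in this phase is given below: cross-entropy against the hard alignment plus cross-entropy against the teacher's softmax outputs, with equal weights as in our framework. The helper name is illustrative.

import torch.nn.functional as F

def layerwise_loss(student_logits, hard_labels, teacher_probs, w_soft=0.5):
    # student_logits: (N, C); hard_labels: (N,) HMM-GMM alignment states;
    # teacher_probs: (N, C) softmax outputs of the shallower teacher network.
    log_p = F.log_softmax(student_logits, dim=-1)
    hard_loss = F.nll_loss(log_p, hard_labels)               # hard-target term
    soft_loss = -(teacher_probs * log_p).sum(dim=-1).mean()  # soft-target term
    return (1.0 - w_soft) * hard_loss + w_soft * soft_loss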
The deeper network reaches the level of the original network in less time than a network trained from scratch, as a period of low performance is skipped. Therefore, training with hard and soft targets is a time-saving method. For a large training dataset, training on the whole dataset still takes too much time. A network first trained with only a small part of the dataset can be grown deeper as well, so the training time is reduced rapidly. When the network is deep enough, it is then trained on the entire dataset for further improvement. There is no gap in accuracy between these two approaches, but the latter saves much time.
Differential Saturation Check
The objects of a conventional saturation check are the gradients and the cell activations BIBREF4. Gradients are clipped to the range [-5, 5], while the cell activations are clipped to the range [-50, 50]. Apart from these, the differentials of the recurrent layers are also limited. If the differentials go beyond their range, the corresponding back-propagation is skipped, whereas if the gradients or cell activations go beyond their bounds, the values are set to the boundary values. Differentials that are too large or too small easily lead to vanishing gradients and indicate the failure of that propagation. As a result, the parameters are not updated for that step, and training proceeds with the next propagation.
Sequence Discriminative Training
Cross-entropy (CE) is widely used in speech recognition training systems as a frame-wise discriminative training criterion. However, it is not ideally suited to speech recognition, because speech recognition training is a sequential learning problem. In contrast, sequence discriminative training criteria have been shown to further improve the performance of neural networks first trained with cross-entropy BIBREF21, BIBREF22, BIBREF23. We choose state-level minimum Bayes risk (sMBR) BIBREF21 from among the proposed sequence discriminative criteria, such as maximum mutual information (MMI) BIBREF24 and minimum phone error (MPE) BIBREF25. MPE and sMBR are designed to minimize the expected error at different label granularities, while CE aims to minimize the expected frame error and MMI aims to minimize the expected sentence error. sMBR focuses on state-level information.
A frame-level accurate model is first trained with the CE loss function, and the sMBR loss function is then utilized for further training to obtain sequence-level accuracy. Only a part of the training dataset is needed in the sMBR training phase, on the basis of CE training over the whole dataset.
Parallel Training
It has been demonstrated that training with a larger dataset can improve recognition accuracy. However, a larger dataset means more training samples and more model parameters. Therefore, parallel training with multiple GPUs is essential, and it makes use of data parallelism BIBREF9. The entire training data is partitioned into several non-overlapping splits, which are distributed to different GPUs. Each GPU trains on one split of the training dataset locally. All GPUs synchronize their local models with the model averaging method after each mini-batch optimization BIBREF12, BIBREF13.
The model averaging method achieves linear speedup in the training phase, but recognition accuracy decreases compared with single-GPU training. The block-wise model updating filter (BMUF) is another successful parallel-training effort with linear speedup; it can achieve no degradation of recognition accuracy with multiple GPUs BIBREF14. In the model averaging method, the aggregated model INLINEFORM0 is computed and broadcast to all GPUs. On this basis, BMUF proposes a novel model updating strategy: INLINEFORM1 INLINEFORM2
where INLINEFORM0 denotes the model update and INLINEFORM1 is the global-model update. There are two parameters in BMUF: the block momentum INLINEFORM2 and the block learning rate INLINEFORM3. Then, the global model is updated as INLINEFORM4
Consequently, INLINEFORM0 is broadcast to all GPUs to initialize their local models, instead of INLINEFORM1 as in the model averaging method.
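Since the update formulas above are given only symbolically, the following is a hedged sketch of one BMUF step: w_avg is the averaged model from all workers, delta the accumulated block update, eta the block momentum, and zeta the block learning rate. The Nesterov-style broadcast variant shown here is one of the options described in the cited BMUF paper rather than a detail stated in this work.

import numpy as np

def bmuf_step(w_global, w_avg, delta, eta=0.9, zeta=1.0):
    # w_avg: model averaged over all workers for the current block.
    g = w_avg - w_global                  # aggregated model update of this block
    delta = eta * delta + zeta * g        # block momentum + block learning rate
    w_global = w_global + delta           # updated global model
    w_broadcast = w_global + eta * delta  # model sent back to initialize workers
    return w_global, w_broadcast, delta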
Averaged SGD is proposed to further accelerate the convergence speed of SGD. Averaged SGD leverages the moving average (MA) INLINEFORM0 as the estimator of INLINEFORM1 BIBREF15 : INLINEFORM2
where INLINEFORM0 is computed by model averaging or BMUF. It has been shown that INLINEFORM1 converges well to INLINEFORM2 with a large enough training dataset in single-GPU training. It can be considered a non-interference strategy: INLINEFORM3 does not participate in the main optimization process and only takes effect after the end of the entire optimization. However, in the parallel training implementation, each INLINEFORM4 is computed by model averaging and BMUF from multiple models, and the moving-average model INLINEFORM5 does not converge as well as in single-GPU training.
Model-averaging-based methods are employed in parallel training on large-scale datasets because of their faster convergence, and especially the no-degradation implementation of BMUF. However, combining model-averaging-based methods with the moving average does not meet the expectation of further enhancing performance; it is presented as INLINEFORM0
The weight of each INLINEFORM0 is equal in the moving average method, regardless of temporal order. But INLINEFORM1 closer to the end of training achieves higher accuracy in the model-averaging-based approach, and thus it should have a larger proportion in the final INLINEFORM2. As a result, an exponential moving average (EMA) is appropriate, in which the weight of each older parameter decreases exponentially, never reaching zero. After the model-averaging-based methods, the EMA parameters are updated recursively as INLINEFORM3
Here INLINEFORM0 represents the degree of weight decrease and is called the exponential updating rate. EMA is also a non-interference training strategy that is easy to implement, as the updated model is not broadcast. Therefore, there is no need to add an extra learning-rate updating approach, and EMA can be appended to the existing training procedure directly.
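A minimal sketch of the recursive EMA update follows; the exponential updating rate value is illustrative, and the EMA copy is kept aside rather than broadcast back to the workers.

def ema_update(ema_params, current_params, gamma=0.99):
    # gamma is the exponential updating rate; older parameters decay as gamma^k.
    return [gamma * e + (1.0 - gamma) * c
            for e, c in zip(ema_params, current_params)]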
Deployment
There is a high real-time requirement in real-world applications, especially in online voice search systems. Shenma voice search is one of the most popular mobile search engines in China, and it is a streaming service in which intermediate recognition results are displayed while users are still speaking. A unidirectional LSTM network is applied, rather than a bidirectional one, because it is well suited to real-time streaming speech recognition.
Distillation
It has been demonstrated that a deep neural network architecture can achieve improvements in LVCSR. However, it also leads to much more computation and a higher RTF, so that the recognition result cannot be displayed in real time. A deeper neural network contains more knowledge, but it is also cumbersome; the knowledge is key to improving performance. If the knowledge can be transferred from the cumbersome model to a small model, the recognition ability can also be transferred to the small model. Transferring knowledge to a small model is called distillation BIBREF19. The small model can perform as well as the cumbersome model after distillation. This provides a way to utilize a high-performance but high-RTF model in a real-time system. The class probabilities produced by the cumbersome model are regarded as soft targets, and the generalization ability of the cumbersome model is transferred to the small model through them. Distillation transfers the model's knowledge, so there is no need to use the hard target, which differs from the layer-wise training method.
The 9-layer unidirectional LSTM model achieves outstanding performance, but it is too computationally expensive to deploy in a real-time recognition system. In order to ensure the real-time behavior of the system, the number of layers needs to be reduced. The shallower network can learn the knowledge of the deeper network with distillation. We find that the RTF of a 2-layer network is acceptable, so the knowledge is distilled from the well-trained 9-layer model to a 2-layer model. Table TABREF16 shows that distillation from 9 layers to 2 layers brings a relative RTF decrease of 53%, while CER increases by only 5%. The knowledge of the deep network is almost entirely transferred with distillation: it brings a promising RTF reduction while only a little of the deep network's knowledge is lost. Moreover, the CER of the 2-layer distilled LSTM decreases by a relative 14% compared with the 2-layer regularly trained LSTM.
Transfer Learning with sMBR
For a specific scenario, a model trained with data recorded from that scenario adapts better than a model trained on the generic scenario. But it takes too much time to train a model from scratch if a well-trained model for the generic scenario already exists. Moreover, labeling a large quantity of training data in the new scenario is both costly and time-consuming. If a model transfer-trained with a smaller dataset can obtain recognition accuracy similar to a model directly trained with a larger dataset, transfer learning is no doubt more practical. Since the specific scenario is a subset of the generic scenario, some knowledge can be shared between them. Besides, the generic scenario consists of various conditions, so its model has greater robustness. As a result, not only shared knowledge but also robustness can be transferred from the model of the generic scenario to the model of the specific one.
As the model well trained on the generic scenario already achieves good performance in frame-level classification, only sequence discriminative training is additionally required to adapt the new model to the specific scenario. Moreover, it does not need alignments from the HMM-GMM system, which also saves a large amount of time in preparing alignments.
Training Data
A large quantity of labeled data is needed to train a more accurate acoustic model. We collected 17000 hours of labeled data from Shenma voice search, which is one of the most popular mobile search engines in China. The dataset is created from anonymous online users' search queries in Mandarin, and all audio files have a sampling rate of 16 kHz, recorded by mobile phones. This dataset consists of many different conditions, such as diverse noise (even low signal-to-noise ratio), babble, dialects, accents, hesitation, and so on.
In Amap, which is one of the most popular web mapping and navigation services in China, users can search for locations and navigate to them through voice search. To present the performance of transfer learning with sequence discriminative training, the model trained on Shenma voice search, which is the generic scenario, transfers its knowledge to the Amap voice search model. 7300 hours of labeled data were collected in a similar way to the Shenma voice search data collection.
The two datasets are divided into training, validation, and test sets separately, and their sizes are shown in Table TABREF10. The three sets are split according to speakers, in order to avoid utterances of the same speaker appearing in multiple sets simultaneously. The test sets of Shenma and Amap voice search are called Shenma Test and Amap Test.
Experimental setup
LSTM RNNs outperform conventional RNNs for speech recognition systems, especially deep LSTM RNNs, because they model long-range dependencies of temporal sequences more accurately BIBREF26, BIBREF23. Shenma and Amap voice search are streaming services in which intermediate recognition results are displayed while users are still speaking. For online recognition in real time, we therefore prefer a unidirectional LSTM model to a bidirectional one. Thus, the training system is unidirectional LSTM-based.
A 26-dimensional filter bank and a 2-dimensional pitch feature are extracted for each frame and concatenated with their first- and second-order differences as the final input of the network. The super frames are stacked from 3 frames without overlapping. The architecture we train consists of two LSTM layers with sigmoid activation functions, followed by a fully-connected layer. The output layer is a softmax layer with 11088 hidden Markov model (HMM) tied states as output classes, and the loss function is cross-entropy (CE). The performance metric of the system in Mandarin is reported as the character error rate (CER). The frame-level ground-truth alignment is obtained by a GMM-HMM system. Mini-batch SGD is utilized with the momentum trick, and the network is trained for a total of 4 epochs. The block learning rate and block momentum of BMUF are set to 1 and 0.9. A 5-gram language model is leveraged in the decoder, and the vocabulary size is as large as 760000. Differentials of the recurrent layers are limited to the range [-10000, 10000], while gradients are clipped to the range [-5, 5] and cell activations to the range [-50, 50]. After training with the CE loss, the sMBR loss is employed to further improve performance.
It has been shown that BMUF outperforms the traditional model averaging method, and it is utilized in the synchronization phase. After synchronizing with BMUF, the EMA method further updates the model in a non-interference way. The training system is deployed on an MPI-based HPC cluster with 8 GPUs. Each GPU processes a non-overlapping subset split from the entire large-scale dataset in parallel.
Local models from distributed workers synchronize with each other in a decentralized way. In the traditional model averaging and BMUF methods, a parameter server waits for all workers to send their local models, aggregates them, and sends the updated model back to all workers. The workers' computing resources are wasted until the parameter server finishes aggregation. A decentralized method makes full use of computing resources, and we employ the MPI-based Mesh AllReduce method. It uses a mesh topology, as shown in Figure FIGREF12. There is no centralized parameter server, and peer-to-peer communication is used to transmit local models between workers. The local model INLINEFORM0 of the INLINEFORM1 -th worker in a cluster of INLINEFORM2 workers is split into INLINEFORM3 pieces INLINEFORM4 , which are sent to the corresponding workers. In the aggregation phase, the INLINEFORM5 -th worker computes the INLINEFORM6 splits of model INLINEFORM7 and sends the updated model INLINEFORM8 back to the workers. As a result, all workers participate in aggregation and no computing resource is wasted. This significantly improves training efficiency when the neural network model is very large. The EMA model is also updated additionally, but it is not broadcast.
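The following is a simplified single-process simulation of the Mesh AllReduce aggregation described above: each worker owns one slice index, averages that slice across all local models, and the averaged slices are gathered back into a full model. A real implementation would use MPI communication instead of in-memory lists.

import numpy as np

def mesh_allreduce(local_models):
    k = len(local_models)                       # number of workers
    slices = [np.array_split(m, k) for m in local_models]
    # Worker j aggregates the j-th slice of every worker's model ...
    reduced = [np.mean([slices[i][j] for i in range(k)], axis=0) for j in range(k)]
    # ... and the averaged slices are sent back and concatenated on each worker.
    return np.concatenate(reduced)

models = [np.full(6, float(w)) for w in range(4)]  # 4 workers, toy 6-dim "models"
print(mesh_allreduce(models))                      # averaged model: all 1.5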
Results
In order to evaluate our system, several sets of experiments were performed. The Shenma test set, including about 9000 samples, and the Amap test set, including about 7000 samples, contain various real-world conditions. They simulate the majority of user scenarios and can evaluate the performance of a trained model well. First, we show the results of models trained with the EMA method. Second, for real-world applications, the very deep LSTM is distilled to a shallow one for a lower RTF. The Amap model also needs to be trained for map and navigation scenarios, and the performance of transfer learning from Shenma voice search to Amap voice search is presented as well.
Layer-wise Training
In layer-wise training, the deeper model learns both parameters and knowledge from the shallower model. The deeper model is initialized by the shallower one, and its alignment is the combination of the hard target and the soft target of the shallower one. The two targets have the same weight in our framework. The teacher model is trained with CE. For each layer-wise trained CE model, a corresponding sMBR model is also trained, as sMBR can achieve additional improvement. In our framework, 1000 hours of data are randomly selected from the total dataset for sMBR training. There is no obvious performance enhancement when the size of the sMBR training dataset increases.
For a very deep unidirectional LSTM initialized with the Xavier initialization algorithm, the 6-layer model converges well, but there is no further improvement when increasing the number of layers. Therefore, the first 6 layers of the 7-layer model are initialized by the 6-layer model, and the soft target is provided by the 6-layer model. Deeper LSTMs are then trained in the same way. It should be noted that the teacher model of the 9-layer model is the 8-layer model trained by sMBR, while the other teacher models are CE models. As shown in Table TABREF15, the layer-wise trained models always perform better than the models with Xavier initialization as the model gets deep. Therefore, for the last layer's training, we choose the 8-layer sMBR model as the teacher model instead of the CE model. A comparison between the 6-layer and 9-layer sMBR models shows that 3 additional layers of layer-wise training bring a relative 12.6% decrease in CER. It is also notable that the average CER of the sMBR models with different numbers of layers decreases by approximately 0.73% absolute compared with the CE models, so the improvement from sequence discriminative learning is promising.
Transfer Learning
The 2-layer distilled model of Shenma voice search has shown impressive performance on the Shenma Test, and we call it the Shenma model. It is trained for the generic search scenario, but it is less adapted to a specific scenario like Amap voice search. Training with a very large dataset using the CE loss can be regarded as improving frame-level recognition accuracy, and sMBR with a smaller dataset further improves accuracy through sequence discriminative training. If a robust model for the generic scenario has been trained, there is no need to train a model with a very large dataset; sequence discriminative training with a smaller dataset is enough. Therefore, on the basis of the Shenma model, it is sufficient to train a new Amap model with a small dataset using sMBR. As shown in Table TABREF18, the Shenma model presents the worst performance among the three methods, since it was not trained for the Amap scenario. The 2-layer Shenma model further trained with sMBR achieves an approximately 8.1% relative reduction compared with the 2-layer regularly trained Amap model. Both sMBR training sets contain the same 1000 hours of data. As a result, with the Shenma model, only about 14% of the data achieves a lower CER, which leads to great time and cost savings with less labeled data. Besides, transfer learning with sMBR does not use the alignment from the HMM-GMM system, so it also saves a huge amount of time.
Conclusion
We have presented a whole deep unidirectional LSTM parallel training system for LVCSR. The recognition performance improves as the network goes deeper. Distillation makes it possible for the deep LSTM model to transfer its knowledge to a shallow model with little loss. The model can be distilled to a 2-layer model with very low RTF, so that immediate recognition results can be displayed. As a result, its CER decreases by a relative 14% compared with the 2-layer regularly trained model. In addition, transfer learning with sMBR is also proposed. If a strong model has been well trained on the generic scenario, only 14% of the training dataset size is needed to train a more accurate acoustic model for a specific scenario. Our future work includes 1) finding more effective methods to reduce the CER by increasing the number of layers; 2) applying this training framework to Connectionist Temporal Classification (CTC) and attention-based neural networks.
Introduction
The task of document quality assessment is to automatically assess a document according to some predefined inventory of quality labels. This can take many forms, including essay scoring (quality = language quality, coherence, and relevance to a topic), job application filtering (quality = suitability for role + visual/presentational quality of the application), or answer selection in community question answering (quality = actionability + relevance of the answer to the question). In the case of this paper, we focus on document quality assessment in two contexts: Wikipedia document quality classification, and whether a paper submitted to a conference was accepted or not.
Automatic quality assessment has obvious benefits in terms of time savings and tractability in contexts where the volume of documents is large. In the case of dynamic documents (possibly with multiple authors), such as in the case of Wikipedia, it is particularly pertinent, as any edit potentially has implications for the quality label of that document (and around 10 English Wikipedia documents are edited per second). Furthermore, when the quality assessment task is decentralized (as in the case of Wikipedia and academic paper assessment), quality criteria are often applied inconsistently by different people, where an automatic document quality assessment system could potentially reduce inconsistencies and enable immediate author feedback.
Current studies on document quality assessment mainly focus on textual features. For example, BIBREF0 examine features such as the article length and the number of headings to predict the quality class of a Wikipedia article. In contrast to these studies, in this paper, we propose to combine text features with visual features, based on a visual rendering of the document. Figure 1 illustrates our intuition, relative to Wikipedia articles. Without being able to read the text, we can tell that the article in Figure 1 has higher quality than Figure 1 , as it has a detailed infobox, extensive references, and a variety of images. Based on this intuition, we aim to answer the following question: can we achieve better accuracy on document quality assessment by complementing textual features with visual features?
Our visual model is based on fine-tuning an Inception V3 model BIBREF1 over visual renderings of documents, while our textual model is based on a hierarchical biLSTM. We further combine the two into a joint model. We perform experiments on two datasets: a Wikipedia dataset novel to this paper, and an arXiv dataset provided by BIBREF2 split into three sub-parts based on subject category. Experimental results on the visual renderings of documents show that implicit quality indicators, such as images and visual layout, can be captured by an image classifier, at a level comparable to a text classifier. When we combine the two models, we achieve state-of-the-art results over 3/4 of our datasets.
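As a sketch of how the two sources could be joined (the fusion detail is an assumption here; this section only states that the two models are combined into a joint model), one option is to concatenate the document representation from the hierarchical biLSTM with the image features from the fine-tuned Inception V3 and classify from the joint vector:

import torch
import torch.nn as nn

class JointQualityClassifier(nn.Module):
    def __init__(self, visual_dim, textual_dim, n_classes):
        super().__init__()
        # visual_dim: size of the Inception V3 feature vector for the rendered page.
        # textual_dim: size of the hierarchical biLSTM document representation.
        self.classifier = nn.Linear(visual_dim + textual_dim, n_classes)

    def forward(self, visual_feats, textual_feats):
        joint = torch.cat([visual_feats, textual_feats], dim=-1)
        return self.classifier(joint)  # class scores; softmax applied in the loss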
This paper makes the following contributions:
All code and data associated with this research will be released on publication.
Related Work
A variety of approaches have been proposed for document quality assessment across different domains: Wikipedia article quality assessment, academic paper rating, content quality assessment in community question answering (cQA), and essay scoring. Among these approaches, some use hand-crafted features while others use neural networks to learn features from documents. For each domain, we first briefly describe feature-based approaches and then review neural network-based approaches.

Wikipedia article quality assessment: Quality assessment of Wikipedia articles is a task that assigns a quality class label to a given Wikipedia article, mirroring the quality assessment process that the Wikipedia community carries out manually. Many approaches have been proposed that use features from the article itself, meta-data features (e.g., the editors, and Wikipedia article revision history), or a combination of the two. Article-internal features capture information such as whether an article is properly organized, with supporting evidence, and with appropriate terminology. For example, BIBREF3 use writing styles represented by binarized character trigram features to identify featured articles. BIBREF4 and BIBREF0 explore the number of headings, images, and references in the article. BIBREF5 use nine readability scores, such as the percentage of difficult words in the document, to measure the quality of the article. Meta-data features, which are indirect indicators of article quality, are usually extracted from revision history, and the interaction between editors and articles. For example, one heuristic that has been proposed is that higher-quality articles have more edits BIBREF6 , BIBREF7 . BIBREF8 use the percentage of registered editors and the total number of editors of an article. Article–editor dependencies have also been explored. For example, BIBREF9 use the authority of editors to measure the quality of Wikipedia articles, where the authority of editors is determined by the articles they edit.

Deep learning approaches to predicting Wikipedia article quality have also been proposed. For example, BIBREF10 use a version of doc2vec BIBREF11 to represent articles, and feed the document embeddings into a four hidden layer neural network. BIBREF12 first obtain sentence representations by averaging words within a sentence, and then apply a biLSTM BIBREF13 to learn a document-level representation, which is combined with hand-crafted features as side information. BIBREF14 exploit two stacked biLSTMs to learn document representations.
Academic paper rating: Academic paper rating is a relatively new task in NLP/AI, with the basic formulation being to automatically predict whether to accept or reject a paper. BIBREF2 explore hand-crafted features, such as the length of the title, whether specific words (such as outperform, state-of-the-art, and novel) appear in the abstract, and an embedded representation of the abstract as input to different downstream learners, such as logistic regression, decision tree, and random forest. BIBREF15 exploit a modularized hierarchical convolutional neural network (CNN), where each paper section is treated as a module. For each paper section, they train an attention-based CNN, and an attentive pooling layer is applied to the concatenated representation of each section, which is then fed into a softmax layer.
Content quality assessment in cQA: Automatic quality assessment in cQA is the task of determining whether an answer is of high quality, selected as the best answer, or ranked higher than other answers. To measure answer content quality in cQA, researchers have exploited various features from different sources, such as the answer content itself, the answerer's profile, interactions among users, and usage of the content. The most common feature used is the answer length BIBREF16 , BIBREF17 , with other features including: syntactic and semantic features, such as readability scores BIBREF18 ; similarity between the question and the answer at lexical, syntactic, and semantic levels BIBREF18 , BIBREF19 , BIBREF20 ; or user data (e.g., a user's status points or the number of answers written by the user). There have also been approaches using neural networks. For example, BIBREF21 combine CNN-learned representations with hand-crafted features to predict answer quality. BIBREF22 use a 2-dimensional CNN to learn the semantic relevance of an answer to the question, and apply an LSTM to the answer sequence to model thread context. BIBREF23 and BIBREF24 model the problem similarly to machine translation quality estimation, treating answers as competing translation hypotheses and the question as the reference translation, and apply neural machine translation to the problem.

Essay scoring: Automated essay scoring is the task of assigning a score to an essay, usually in the context of assessing the language ability of a language learner. The quality of an essay is affected by the following four primary dimensions: topic relevance, organization and coherence, word usage and sentence complexity, and grammar and mechanics. To measure whether an essay is relevant to its “prompt” (the description of the essay topic), lexical and semantic overlap is commonly used BIBREF25 , BIBREF26 . BIBREF27 explore word features, such as the number of verb formation errors, average word frequency, and average word length, to measure word usage and lexical complexity. BIBREF28 use sentence structure features to measure sentence variety. The effects of grammatical and mechanic errors on the quality of an essay are measured via word and part-of-speech $n$ -gram features and “mechanics” features BIBREF29 (e.g., spelling, capitalization, and punctuation), respectively. BIBREF30 , BIBREF31 , and BIBREF32 use an LSTM to obtain an essay representation, which is used as the basis for classification. Similarly, BIBREF33 utilize a CNN to obtain sentence representation and an LSTM to obtain essay representation, with an attention layer at both the sentence and essay levels.
The Proposed Joint Model
We treat document quality assessment as a classification problem, i.e., given a document, we predict its quality class (e.g., whether an academic paper should be accepted or rejected). The proposed model is a joint model that integrates visual features learned through Inception V3 with textual features learned through a biLSTM. In this section, we present the details of the visual and textual embeddings, and finally describe how we combine the two. We return to discuss hyper-parameter settings and the experimental configuration in the Experiments section.
Visual Embedding Learning
A wide range of models have been proposed to tackle the image classification task, such as VGG BIBREF34 , ResNet BIBREF35 , Inception V3 BIBREF1 , and Xception BIBREF36 . However, to the best of our knowledge, there is no existing work that has proposed to use visual renderings of documents to assess document quality. In this paper, we use Inception V3 pretrained on ImageNet (“Inception” hereafter) to obtain visual embeddings of documents, noting that any image classifier could be applied to our task. The input to Inception is a visual rendering (screenshot) of a document, and the output is a visual embedding, which we will later integrate with our textual embedding.
Based on the observation that it is difficult to decide what types of convolution to apply to each layer (such as 3 $\times $ 3 or 5 $\times $ 5), the basic Inception model applies multiple convolution filters in parallel and concatenates the resulting features, which are fed into the next layer. This has the benefit of capturing both local features through smaller convolutions and abstracted features through larger convolutions. Inception is a hybrid of multiple Inception models of different architectures. To reduce computational cost, Inception also modifies the basic model by applying a 1 $\times $ 1 convolution to the input and factorizing larger convolutions into smaller ones.
Textual Embedding Learning
We adopt a bi-directional LSTM model to generate textual embeddings for document quality assessment, following the method of BIBREF12 (“biLSTM” hereafter). The input to biLSTM is a textual document, and the output is a textual embedding, which will later integrate with the visual embedding.
For biLSTM, each word is represented as a word embedding BIBREF37 , and an average-pooling layer is applied to the word embeddings to obtain the sentence embedding, which is fed into a bi-directional LSTM to generate the document embedding from the sentence embeddings. Then a max-pooling layer is applied to select the most salient features from the component sentences.
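As an illustration only, the sketch below shows one way this textual encoder could be assembled in Keras; the framework choice and the exact tensor shapes are our assumptions, not necessarily those of the original implementation. Words are averaged into sentence vectors, passed through a biLSTM, and max-pooled into a document embedding.

import tensorflow as tf
from tensorflow.keras import layers

def build_textual_encoder(vocab_size, n_sentences=100, n_words=40, emb_dim=50):
    # Word indices -> word embeddings -> average-pool into sentence vectors
    # -> sentence-level biLSTM -> max-pool into a 512-d document embedding.
    doc_input = layers.Input(shape=(n_sentences, n_words), dtype="int32")
    words = layers.Embedding(vocab_size, emb_dim)(doc_input)                # (B, S, W, 50)
    sentences = layers.Lambda(lambda t: tf.reduce_mean(t, axis=2))(words)   # (B, S, 50)
    states = layers.Bidirectional(layers.LSTM(256, return_sequences=True))(sentences)
    document = layers.GlobalMaxPooling1D()(states)                          # (B, 512)
    return tf.keras.Model(doc_input, document, name="textual_encoder")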
The Joint Model
The proposed joint model (“Joint” hereafter) combines the visual and textual embeddings (output of Inception and biLSTM) via a simple feed-forward layer and softmax over the document label set, as shown in Figure 2 . We optimize our model based on cross-entropy loss.
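A minimal sketch of the fusion step is given below, assuming the 512-dimensional textual and 2,048-dimensional visual embeddings described later in the Experiments section; the encoder-building functions are hypothetical names and the snippet is illustrative rather than the authors' code.

import tensorflow as tf
from tensorflow.keras import layers

def build_joint_model(textual_encoder, visual_encoder, n_classes=6):
    # Concatenate the two embeddings (512 + 2,048 = 2,560 dimensions),
    # apply dropout, and classify with a softmax layer over the label set.
    joint = layers.Concatenate()([textual_encoder.output, visual_encoder.output])
    joint = layers.Dropout(0.5)(joint)
    output = layers.Dense(n_classes, activation="softmax")(joint)
    model = tf.keras.Model([textual_encoder.input, visual_encoder.input], output)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="categorical_crossentropy",   # cross-entropy loss, as described above
                  metrics=["accuracy"])
    return model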
Experiments
In this section, we first describe the two datasets used in our experiments: (1) Wikipedia, and (2) arXiv. Then, we report the experimental details and results.
Datasets
The Wikipedia dataset consists of articles from English Wikipedia, with quality class labels assigned by the Wikipedia community. Wikipedia articles are labelled with one of six quality classes, in descending order of quality: Featured Article (“FA”), Good Article (“GA”), B-class Article (“B”), C-class Article (“C”), Start Article (“Start”), and Stub Article (“Stub”). A description of the criteria associated with the different classes can be found in the Wikipedia grading scheme page. The quality class of a Wikipedia article is assigned by Wikipedia reviewers or any registered user, who can discuss through the article's talk page to reach consensus. We constructed the dataset by first crawling all articles from each quality class repository, e.g., we get FA articles by crawling pages from the FA repository: https://en.wikipedia.org/wiki/Category:Featured_articles. This resulted in around 5K FA, 28K GA, 212K B, 533K C, 2.6M Start, and 3.2M Stub articles.
We randomly sampled 5,000 articles from each quality class and removed all redirect pages, resulting in a dataset of 29,794 articles. As the wikitext contained in each document contains markup relating to the document category such as {Featured Article} or {geo-stub}, which reveals the label, we remove such information. We additionally randomly partitioned this dataset into training, development, and test splits based on a ratio of 8:1:1. Details of the dataset are summarized in Table 1 .
We generate a visual representation of each document via a 1,000 $\times $ 2,000-pixel screenshot of the article via a PhantomJS script over the rendered version of the article, ensuring that the screenshot and wikitext versions of the article are the same version. Any direct indicators of document quality (such as the FA indicator, which is a bronze star icon in the top right corner of the webpage) are removed from the screenshot.
The arXiv dataset BIBREF2 consists of three subsets of academic articles under the arXiv repository of Computer Science (cs), from the three subject areas of: Artificial Intelligence (cs.ai), Computation and Language (cs.cl), and Machine Learning (cs.lg). In line with the original dataset formulation BIBREF2 , a paper is considered to have been accepted (i.e. is positively labeled) if it matches a paper in the DBLP database or is otherwise accepted by any of the following conferences: ACL, EMNLP, NAACL, EACL, TACL, NIPS, ICML, ICLR, or AAAI. Failing this, it is considered to be rejected (noting that some of the papers may not have been submitted to one of these conferences). The median numbers of pages for papers in cs.ai, cs.cl, and cs.lg are 11, 10, and 12, respectively. To make sure each page in the PDF file has the same size in the screenshot, we crop the PDF file of a paper to the first 12 pages, and pad the PDF file with blank pages if it has fewer than 12 pages, using the PyPDF2 Python package. We then use ImageMagick to convert the 12-page PDF file to a single 1,000 $\times $ 2,000 pixel screenshot. Table 2 details this dataset, where the “Accepted” column denotes the percentage of positive instances (accepted papers) in each subset.
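The cropping and padding step might look roughly like the sketch below; it assumes a recent PyPDF2 release (class and method names differ across versions), and the file paths are placeholders. The padded PDF can then be rendered with ImageMagick (e.g., a convert call) to obtain the screenshot.

from PyPDF2 import PdfReader, PdfWriter

def crop_or_pad_pdf(src_path, dst_path, n_pages=12):
    # Keep only the first 12 pages; pad shorter papers with blank pages
    # so that every rendered screenshot covers the same number of pages.
    reader = PdfReader(src_path)
    writer = PdfWriter()
    for page in reader.pages[:n_pages]:
        writer.add_page(page)
    width = reader.pages[0].mediabox.width
    height = reader.pages[0].mediabox.height
    while len(writer.pages) < n_pages:
        writer.add_blank_page(width=width, height=height)
    with open(dst_path, "wb") as f:
        writer.write(f)

crop_or_pad_pdf("paper.pdf", "paper_12_pages.pdf")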
Experimental Setting
As discussed above, our model has two main components — biLSTM and Inception— which generate textual and visual representations, respectively. For the biLSTM component, the documents are preprocessed as described in BIBREF12 , where an article is divided into sentences and tokenized using NLTK BIBREF38 . Words appearing more than 20 times are retained when building the vocabulary. All other words are replaced by the special UNK token. We use the pre-trained GloVe BIBREF39 50-dimensional word embeddings to represent words. For words not in GloVe, word embeddings are randomly initialized based on sampling from a uniform distribution $U(-1, 1)$ . All word embeddings are updated in the training process. We set the LSTM hidden layer size to 256. The concatenation of the forward and backward LSTMs thus gives us 512 dimensions for the document embedding. A dropout layer is applied at the sentence and document level, respectively, with a probability of 0.5.
For Inception, we adopt data augmentation techniques in the training with a “nearest” filling mode, a zoom range of 0.1, a width shift range of 0.1, and a height shift range of 0.1. As the original screenshots have the size of 1,000 $\times $ 2,000 pixels, they are resized to 500 $\times $ 500 to feed into Inception, where the input shape is (500, 500, 3). A dropout layer is applied with a probability of 0.5. Then, a GlobalAveragePooling2D layer is applied, which produces a 2,048 dimensional representation.
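Assuming a Keras implementation (suggested by the layer names above, but still our assumption), the visual component could be set up roughly as follows; the augmentation settings mirror the ones just listed.

import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Data augmentation with the settings described above.
augmenter = ImageDataGenerator(fill_mode="nearest", zoom_range=0.1,
                               width_shift_range=0.1, height_shift_range=0.1,
                               rescale=1.0 / 255)

# Inception V3 pretrained on ImageNet, fine-tuned on 500 x 500 screenshots.
base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                         input_shape=(500, 500, 3))
features = layers.GlobalAveragePooling2D()(base.output)    # 2,048-d visual embedding
features = layers.Dropout(0.5)(features)
output = layers.Dense(6, activation="softmax")(features)   # six Wikipedia quality classes
visual_model = tf.keras.Model(base.input, output)
visual_model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                     loss="categorical_crossentropy", metrics=["accuracy"])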
For the Joint model, we get a representation of 2,560 dimensions by concatenating the 512 dimensional representation from the biLSTM with the 2,048 dimensional representation from Inception. The dropout layer is applied to the two components with a probability of 0.5. For biLSTM, we use a mini-batch size of 128 and a learning rate of 0.001. For both Inception and joint model, we use a mini-batch size of 16 and a learning rate of 0.0001. All hyper-parameters were set empirically over the development data, and the models were optimized using the Adam optimizer BIBREF40 .
In the training phase, the weights in Inception are initialized by parameters pretrained on ImageNet, and the weights in biLSTM are randomly initialized (except for the word embeddings). We train each model for 50 epochs. However, to prevent overfitting, we adopt early stopping, where we stop training the model if the performance on the development set does not improve for 20 epochs. For evaluation, we use (micro-)accuracy, following previous studies BIBREF5 , BIBREF2 .
Baseline Approaches
We compare our models against the following five baselines:
Majority: the model labels all test samples with the majority class of the training data.
Benchmark: a benchmark method from the literature. In the case of Wikipedia, this is BIBREF5 , who use structural features and readability scores as features to build a random forest classifier; for arXiv, this is BIBREF2 , who use hand-crafted features, such as the number of references and TF-IDF weighted bag-of-words in abstract, to build a classifier based on the best of logistic regression, multi-layer perceptron, and AdaBoost.
Doc2Vec: use doc2vec BIBREF11 to learn document embeddings with a dimension of 500, and a 4-layer feed-forward classification model on top of this, with 2000, 1000, 500, and 200 dimensions, respectively.
biLSTM: first derive a sentence representation by averaging across words in a sentence, then feed the sentence representations into a biLSTM and a max-pooling layer over the output sequence to learn a document-level representation with a dimension of 512, which is used to predict document quality.
Inception $_{\text{fixed}}$ : the frozen Inception model, where only parameters in the last layer are fine-tuned during training.
The hyper-parameters of Benchmark, Doc2Vec, and biLSTM are based on the corresponding papers except that: (1) we fine-tune the feed forward layer of Doc2Vec on the development set and train the model 300 epochs on Wikipedia and 50 epochs on arXiv; (2) we do not use hand-crafted features for biLSTM as we want the baselines to be comparable to our models, and the main focus of this paper is not to explore the effects of hand-crafted features (e.g., see BIBREF12 ).
Experimental Results
Table 3 shows the performance of the different models over our two datasets, in the form of the average accuracy on the test set (along with the standard deviation) over 10 runs, with different random initializations.
On Wikipedia, we observe that the performance of biLSTM, Inception, and Joint is much better than that of all four baselines. Inception achieves 2.9% higher accuracy than biLSTM. The performance of Joint achieves an accuracy of 59.4%, which is 5.3% higher than using textual features alone (biLSTM) and 2.4% higher than using visual features alone (Inception). Based on a one-tailed Wilcoxon signed-rank test, the performance of Joint is statistically significant ( $p<0.05$ ). This shows that the textual and visual features complement each other, achieving state-of-the-art results in combination.
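For reference, a one-tailed Wilcoxon signed-rank test over paired per-run accuracies can be computed with SciPy as sketched below; the accuracy values shown are placeholders, not the actual experimental numbers.

from scipy.stats import wilcoxon

# Hypothetical per-run test accuracies for the 10 runs (placeholders only).
joint_accs  = [0.596, 0.591, 0.594, 0.598, 0.593, 0.595, 0.590, 0.597, 0.592, 0.594]
bilstm_accs = [0.543, 0.540, 0.542, 0.539, 0.544, 0.541, 0.538, 0.542, 0.540, 0.543]

# One-tailed test: is Joint significantly better than biLSTM?
statistic, p_value = wilcoxon(joint_accs, bilstm_accs, alternative="greater")
print(f"W = {statistic}, p = {p_value:.4f}")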
For arXiv, baseline methods Majority, Benchmark, and Inception $_{\text{fixed}}$ outperform biLSTM over cs.ai, in large part because of the class imbalance in this dataset (90% of papers are rejected). Surprisingly, Inception $_{\text{fixed}}$ is better than Majority and Benchmark over the arXiv cs.lg subset, which verifies the usefulness of visual features, even when only the last layer is fine-tuned. Table 3 also shows that Inception and biLSTM achieve similar performance on arXiv, showing that textual and visual representations are equally discriminative: Inception and biLSTM are indistinguishable over cs.cl; biLSTM achieves 1.8% higher accuracy over cs.lg, while Inception achieves 1.3% higher accuracy over cs.ai. Once again, the Joint model achieves the highest accuracy on cs.ai and cs.cl by combining textual and visual representations (at a level of statistical significance for cs.ai). This, again, confirms that textual and visual features complement each other, and together they achieve state-of-the-art results. On arXiv cs.lg, Joint achieves a 0.6% higher accuracy than Inception by combining visual features and textual features, but biLSTM achieves the highest accuracy. One characteristic of cs.lg documents is that they tend to contain more equations than the other two arXiv datasets, and preliminary analysis suggests that the biLSTM is picking up on a correlation between the volume/style of mathematical presentation and the quality of the document.
Analysis
In this section, we first analyze the performance of Inception and Joint. We also analyze the performance of different models on different quality classes. The high-level representations learned by different models are also visualized and discussed. As the Wikipedia test set is larger and more balanced than that of arXiv, our analysis will focus on Wikipedia.
Inception
To better understand the performance of Inception, we generated the gradient-based class activation map BIBREF41 , by maximizing the outputs of each class in the penultimate layer, as shown in Figure 3 . From the FA and GA panels of Figure 3 , we can see that Inception identifies the two most important regions (one at the top corresponding to the table of contents, and the other at the bottom, capturing both document length and references) that contribute to the FA class prediction, and a region in the upper half of the image that contributes to the GA class prediction (capturing the length of the article body). From the B and C panels of Figure 3 , we can see that the most important regions in terms of B and C class prediction capture images (down the left and right of the page, in the case of B and C), and document length/references. From the Start and Stub panels of Figure 3 , we can see that Inception finds that images in the top right corner are the strongest predictor of Start class prediction, and (the lack of) images/the link bar down the left side of the document are the most important for Stub class prediction.
Joint
Table 4 shows the confusion matrix of Joint on Wikipedia. We can see that more than 50% of documents for each quality class are correctly classified, except for the C class where more documents are misclassified into B. Analysis shows that when misclassified, documents are usually misclassified into adjacent quality classes, which can be explained by the Wikipedia grading scheme, where the criteria for adjacent quality classes are more similar.
We also provide a breakdown of precision (“ $\mathcal {P}$ ”), recall (“ $\mathcal {R}$ ”), and F1 score (“ $\mathcal {F}_{\beta =1}$ ”) for biLSTM, Inception, and Joint across the quality classes in Table 5 . We can see that Joint achieves the highest accuracy in 11 out of 18 cases. It is also worth noting that all models achieve higher scores for FA, GA, and Stub articles than for B, C, and Start articles. This can be explained in part by the fact that FA and GA articles must pass an official review based on structured criteria, and in part by the fact that Stub articles are usually very short, which is discriminative for Inception and Joint. All models perform worst on the B and C quality classes. It is difficult to differentiate B articles from C articles even for Wikipedia contributors. As evidence of this, when we crawled a new dataset including talk pages with quality class votes from Wikipedia contributors, we found that among articles with three or more quality labels, over 20% of B and C articles have inconsistent votes from Wikipedia contributors, whereas for FA and GA articles the number is only 0.7%.
We further visualize the learned document representations of biLSTM, Inception, and Joint in the form of a t-SNE plot BIBREF42 in Figure 4 . The degree of separation between Start and Stub achieved by Inception is much greater than for biLSTM, with the separation between Start and Stub achieved by Joint being the clearest among the three models. Inception and Joint are better than biLSTM at separating Start and C. Joint achieves slightly better performance than Inception in separating GA and FA. We can also see that it is difficult for all models to separate B and C, which is consistent with the findings of Tables 4 and 5 .
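Such a visualization can be reproduced along the lines of the sketch below, where the embeddings and labels are random placeholders standing in for the learned document representations (e.g., the 512-d biLSTM vectors).

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Placeholder document embeddings and quality-class labels.
embeddings = np.random.rand(300, 512)
labels = np.random.randint(0, 6, size=300)   # six quality classes

coords = TSNE(n_components=2, random_state=0).fit_transform(embeddings)
plt.scatter(coords[:, 0], coords[:, 1], c=labels, cmap="tab10", s=10)
plt.title("t-SNE of learned document representations")
plt.show()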
Conclusions
We proposed to use visual renderings of documents to capture implicit document quality indicators, such as font choices, images, and visual layout, which are not captured in textual content. We applied neural network models to capture visual features given visual renderings of documents. Experimental results show that we achieve a 2.9% higher accuracy than state-of-the-art approaches based on textual features over Wikipedia, and performance competitive with or surpassing state-of-the-art approaches over arXiv. We further proposed a joint model, combining textual and visual representations, to predict the quality of a document. Experimental results show that our joint model outperforms the visual-only model in all cases, and the text-only model on Wikipedia and two subsets of arXiv. These results underline the feasibility of assessing document quality via visual features, and the complementarity of visual and textual document representations for quality assessment.
|
How large is their data set?
|
a sample of 29,794 wikipedia articles and 2,794 arXiv papers
| 4,187
|
qasper
|
8k
|
Introduction
The use of RNNs in the field of Statistical Machine Translation (SMT) has revolutionised the approaches to automated translation. As opposed to traditional shallow SMT models, which require a lot of memory to run, these neural translation models require only a small fraction of that memory, about 5% BIBREF0 . Also, neural translation models are optimized such that every module is trained to jointly improve translation quality. With that being said, one of the main downsides of neural translation models is the heavy corpus requirement in order to ensure learning of deeper contexts. This is where the application of these encoder decoder architectures in translation to and/or from morphologically rich languages takes a severe hit.
For any language pair, the efficiency of an MT system depends on two major factors: the availability and size of parallel corpus used for training and the syntactic divergence between the two languages i.e morphological richness, word order differences, grammatical structure etc. BIBREF0 . The main differences between the languages stem from the fact that languages similar to English are predominantly fusional languages whereas many of the morphologically rich languages are agglutinative in nature. The nature of morphologically rich languages being structurally and semantically discordant from languages like English adds to the difficulty of SMT involving such languages.
In morphologically rich languages, any suffix can be added to any verb or noun to simply mean one specific thing about that particular word that the suffix commonly represents (agglutination). This means that there exist a lot of inflectional forms of the same noun and verb base words, conveying similar notions. For example, in Tamil, there are at least 30,000 inflectional forms of any given verb and about 5,000 inflectional forms for any noun. The merged words carry information about part of speech (POS) tags, tense, plurality and so forth that are important for analyzing text for Machine Translation (MT). Not only are these hidden meanings not captured, the corresponding root words are trained as different units, thereby increasing the complexity of developing such MT systems BIBREF1 .
To add to the complexities of being a morphologically rich language, there are several factors unique to Tamil that make translation very difficult. The availability of parallel corpus for Tamil is very scarce. Most of the other models in the field of English–Tamil MT have made use of their own translation corpora that were manually created for the purposes of research. Most of these corpora are not available online for use.
Another issue specific to Tamil is the addition of suffix characters included to the words in the language for smoothness in pronunciation. These characters are of many different types; there is a unique suffix for each and every consonant in the language. These suffixes degrade the performance of MT because the same words with different such pronunciation-based suffixes will be taken as different words in training.
Also to take into consideration is the existence of two different forms of the language being used. Traditionally defined Tamil and its pronunciations aren't acoustically pleasing to use. There's no linguistic flow between syllables and its usage in verbal communication is time consuming. Therefore, there exists two forms of the language, the written form, rigid in structure and syntax, and the spoken form, in which the flow and pace of the language is given priority over syntax and correctness of spelling. This divide leads to the corpus having 2 different versions of the language that increase the vocabulary even with the same words. This can be evidently seen in the corpus between the sentences used in the Bible, which is in traditional Tamil and sentences from movie subtitles, being in spoken Tamil format.
To account for such difficulties, a trade-off between domain specificity and size of the corpus is integral in building an English–Tamil neural MT system.
Corpus
The corpus selected for this experiment was a combination of different corpora from various domains. The major part of the corpus was made up by the EnTam v2 corpus BIBREF2 . This corpus contained sentences taken from parallel news articles, English and Tamil bible corpus and movie subtitles. It also comprised of a tourism corpus that was obtained from TDIL (Technology Development for Indian Languages) and a corpus created from Tamil novels and short stories from AU-KBC, Anna university. The complete corpus consisted of 197,792 sentences. Fig. FIGREF20 shows the skinny shift and heatmap representations of the relativity between the sentences in terms of their sentence lengths.
An extra monolingual Tamil corpus, collated from various online sources was used for the word2vec embedding of the Tamil target language to enhance the richness of context of the word vectors. It was also used to create the language model for the phrase-based SMT model. This corpus contained 567,772 sentences and was self-collected by combining hundreds of ancient Tamil scriptures, novels and poems by accessing the websites of popular online ebook libraries in Python using the urllib package. Since the sources had Tamil text in different encodings, the encoding scheme was standardized to be UTF-8 for the entirety of the monolingual and parallel corpora using the chardet package. The corpora were cleaned for any stray special characters, unnecessary html tags and website URLs.
Word2Vec
The word embeddings of the source and target language sentences are used as initial vectors of the model to improve contextualization. The skip gram model of the word2vec algorithm optimizes the vectors by accounting for the average log probability of context words given a source word: $\frac{1}{T}\sum _{t=1}^{T} \sum _{-k \le j \le k, j \ne 0} \log p(w_{t+j} \mid w_t)$

where $k$ is the context window taken for the vectorization, $w_t$ refers to the $t$-th word of the corpus and $T$ is the size of the training corpus in terms of the number of words. Here, the probability $p(w_{t+j} \mid w_t)$ is computed as a hierarchical softmax of the product of the transpose of the output vector of $w_{t+j}$ and the input vector of $w_t$ for each and every pair over the entire vocabulary. The processes of negative sampling and subsampling of frequent words that were used in the original model aren't used in this experiment BIBREF3 .
For the process of creating semantically meaningful word embeddings, a monolingual corpus of 569,772 Tamil sentences was used. This gave the vectors more contextual richness due to the increased size of the corpus as opposed to using just the bilingual corpus' target side sentences BIBREF3 .
In the experiment, the word2vec model was trained using a vector size of 100 to ensure that the bulk of the limited memory of the GPU will be used for the neural attention translation model. It has been shown that any size over that of 150 used for word vectorization gives similar results and that a size of 100 performs close to the model with 150-sized word vectors BIBREF7 . A standard size of 5 was used as window size and the model was trained over 7 worker threads simultaneously. A batch size of 50 words was used for training. The negative sampling was set at 1 as it is the nature of morphologically rich languages to have a lot of important words that don't occur more than once in the corpus. The gensim word2vec toolkit was used to implement this word embedding process BIBREF8 .
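A sketch of this training call with the gensim API is shown below; the toy corpus is a placeholder for the tokenized monolingual Tamil corpus, and the keyword vector_size is called size in older gensim releases.

from gensim.models import Word2Vec

# Placeholder corpus: in practice, the tokenized monolingual Tamil sentences.
sentences = [["tok1", "tok2", "tok3"], ["tok2", "tok4", "tok5"]]

model = Word2Vec(sentences=sentences,
                 vector_size=100,   # 100-dimensional word vectors
                 window=5,          # context window size
                 sg=1,              # skip-gram variant
                 negative=1,        # negative sampling set at 1
                 workers=7,         # 7 worker threads
                 batch_words=50)    # batch size of 50 words
model.save("tamil_word2vec.model")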
Neural Translation Model
The model used for translation is the one implemented by Bahdanau et al. Bahdanau2014. A bidirectional LSTM encoder first takes the source sentence and encodes it into a context vector which acts as input for the decoder. The decoder is attention-based, where the hidden states of the decoder get as input the weighted sum of all the hidden layer outputs of the encoder, along with the output of the previous hidden layer and the previously decoded word. This provides a contextual reference into the source language sentence BIBREF4 .
Neural Machine Translation models directly compute the probability of the target language sentence given the source language sentence, word by word for every time step. The model with a basic decoder without the attention module computes the log probability of target sentence given source sentence as the sum of log probabilities of every word given every word before that. The attention-based model, on the other hand, calculates: $\log p(y \mid x) = \sum _{j=1}^{m} \log p(y_j \mid y_{<j}, x, c, s_j)$

where $m$ is the number of words in the target sentence, $y$ is the target sentence, $x$ is the source sentence, $c$ is the fixed length output vector of the encoder and $s_j$ is the weighted sum of all the hidden layer outputs of the encoder at every time step. Both the encoder's output context vector and the weighted sum (known as attention vector) help to improve the quality of translation by enabling selective source sentence lookup.
The decoder LSTM computes: $p(y_t \mid y_{<t}, x) = g(y_{t-1}, s_t, c_t)$

where the probability is computed as a function of the decoder's output in the previous time step $y_{t-1}$ , the hidden layer vector of the decoder in the current timestep $s_t$ and the context vector from the attention mechanism $c_t$ . The context vector $c_t$ for time step $t$ is computed as a weighted sum of the output of the entire sentence using a weight parameter $\alpha $ : $c_t = \sum _{i=1}^{n} \alpha _{ti} h_i$

where $n$ is the number of tokens in the source sentence, $h_i$ refers to the value of the hidden layer of the encoder at time step $i$ , and $\alpha _{ti}$ is the alignment parameter. This parameter is calculated by means of a feed forward neural network to ensure that the alignment model is free from the difficulties of contextualization of long sentences into a single vector. The feed forward network is trained along with the neural translation model to jointly improve the performance of the translation. Mathematically, $\alpha _{ti} = \frac{\exp (e_{ti})}{\sum _{k=1}^{n} \exp (e_{tk})}$ and $e_{ti} = a(s_{t-1}, h_i)$

where $\alpha _{ti}$ is the softmax output of the result of the feedforward network, $s_{t-1}$ is the hidden state value of the decoder at timestep $t-1$ and $h_i$ is the encoder's hidden layer annotation at timestep $i$ . A concatenation of the forward and the reverse hidden layer parameters of the encoder is used at each step to compute the weights $\alpha _{ti}$ for the attention mechanism. This is done to enable an overall context of the sentence, as opposed to a context of only all the previous words of the sentence for every word in consideration. Fig. FIGREF12 is the general architecture of the neural translation model without the Bidirectional LSTM encoder.
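A toy NumPy sketch of this additive attention computation is given below; the dimensions and random parameters are purely illustrative.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(s_prev, H, W_a, U_a, v_a):
    # s_prev: previous decoder state (d,); H: encoder annotations (n, d).
    # e_i = v_a . tanh(W_a s_prev + U_a h_i); alpha = softmax(e); c = sum_i alpha_i h_i
    scores = np.array([v_a @ np.tanh(W_a @ s_prev + U_a @ h) for h in H])
    alpha = softmax(scores)
    context = alpha @ H
    return context, alpha

rng = np.random.default_rng(0)
d, n = 4, 6
context, alpha = attention_context(rng.normal(size=d), rng.normal(size=(n, d)),
                                   rng.normal(size=(d, d)), rng.normal(size=(d, d)),
                                   rng.normal(size=d))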
A global attention mechanism is preferred over local attention because the differences in the structures of the languages cannot be mapped efficiently to enable lookup into the right parts of the source sentence. Using a local attention mechanism with a monotonic context lookup, where the region around the $i$-th source word is looked up for the prediction of the $i$-th target word, is impractical because of the structural discordance between the English and Tamil sentences (see Figs. FIGREF37 and FIGREF44 ). The use of gaussian and other such distributions to facilitate local attention would also be inefficient because of the existence of various forms of translations for the same source sentence, involving morphological and structural variations that don't stay uniform through the entire corpus BIBREF5 .
The No Peepholes (NP) variant of the LSTM cell, formulated in Greff et al. greff2015lstm is used in this experiment as it proved to give the best results amongst all the variants of an LSTM cell. It is specified by means of a gated mechanism designed to ensure that the vanishing gradient problem is prevented. LSTM maintains its hidden layer in two components, the cell vector $c_t$ and the actual hidden layer output vector $h_t$ . The cell vector is ensured to never reach zero by means of a weighted sum of the previous layer's cell vector $c_{t-1}$ regulated by the forget gate $f_t$ and an activation of the weighted sum of the input $x_t$ in the current timestep $t$ and the previous timestep's hidden layer output vector $h_{t-1}$ . The combination is similarly regulated by the input gate $i_t$ . The hidden layer output is determined as an activation of the cell gate, regulated by the output gate $o_t$ . The interplay between these two vectors ( $c_t$ and $h_t$ ) at every timestep ensures that the problem of vanishing gradients doesn't occur. The three gates are also formed as a sigmoid of the weighted sum of the previous hidden layer output $h_{t-1}$ and the input in the current timestep $x_t$ . The output generated out of the LSTM's hidden layer is specified as a weighted softmax over the hidden layer output $h_t$ . The learnable parameters of an LSTM cell are all the weights $W$ and the biases $b$ :

$i_t = \sigma (W_i x_t + R_i h_{t-1} + b_i)$
$f_t = \sigma (W_f x_t + R_f h_{t-1} + b_f)$
$o_t = \sigma (W_o x_t + R_o h_{t-1} + b_o)$
$c_t = f_t \odot c_{t-1} + i_t \odot \tanh (W_c x_t + R_c h_{t-1} + b_c)$
$h_t = o_t \odot \tanh (c_t)$
The LSTM specified by equations 7 through 11 is the one used for the decoder of the model. The encoder uses a bidirectional RNN LSTM cell in which there are two hidden layer components $\overrightarrow{h_t}$ and $\overleftarrow{h_t}$ that contribute to the output $h_t$ of each time step $t$ . Both the components have their own sets of LSTM equations in such a way that $\overrightarrow{h_t}$ for every timestep is computed from the first timestep till the last token is reached and $\overleftarrow{h_t}$ is computed from the last timestep backwards until the first token is reached. All the five vectors of the two components are exactly the same as the LSTM equations specified, with one variation in the computation of the result: $h_t = [\overrightarrow{h_t} ; \overleftarrow{h_t}]$
Morphological Segmentation
The morphological segmentation used is a semi-supervised extension to the generative probabilistic model of maximizing the probability of a INLINEFORM0 prefix,root,postfix INLINEFORM1 recursive split up of words based on an exhaustive combination of all possible morphemes. The details of this model are specified and extensively studied in Kohonen et al. kohonen2010semi. The model parameters INLINEFORM2 include the morph type count, morph token count of training data, the morph strings and their counts. The model is trained by maximizing the Maximum A Posteriori (MAP) probability using Bayes' rule: DISPLAYFORM0
where INLINEFORM0 refers to every word in the training lexicon. The prior INLINEFORM1 is estimated using the Minimum Description Length(MDL) principle. The likelihood INLINEFORM2 is estimated as: DISPLAYFORM0
where INLINEFORM0 refers to the intermediate analyses and INLINEFORM1 refers to the INLINEFORM2 morpheme of word INLINEFORM3 .
An extension to the Viterbi algorithm is used for the decoding step based on exhaustive mapping of morphemes. To account for over-segmentation and under-segmentation issues associated with unsupervised morphological segmentation, extra parameters ( INLINEFORM0 ) and ( INLINEFORM1 ) are used with the cost function INLINEFORM2 DISPLAYFORM0
where INLINEFORM0 is the likelihood of the cost function, INLINEFORM1 describes the likelihood of contribution of the annotated dataset to the cost function and INLINEFORM2 is the likelihood of the labeled data. A decrease in the value of INLINEFORM3 will cause smaller segments and vice versa. INLINEFORM4 takes care of size discrepancies due to reduced availability of annotated corpus as compared to the training corpus BIBREF2 , BIBREF6 .
The Python extension to the morphological segmentation tool morfessor 2.0 was used for this experiment to perform the segmentation. The annotation data for Tamil language collated and released by Anoop Kunchukkutan in the Indic NLP Library was used as the semi-supervised input to the model BIBREF9 , BIBREF6 .
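The segmentation step can be driven from Python roughly as sketched below, assuming the morfessor 2.0 API; the corpus and annotation file paths are placeholders, and exact method names should be checked against the installed version.

import morfessor

io = morfessor.MorfessorIO()

# Unannotated Tamil word list plus a small annotated set (semi-supervised setup).
train_data = list(io.read_corpus_file("tamil_corpus.txt"))
annotations = io.read_annotations_file("tamil_annotations.txt")

model = morfessor.BaselineModel()
model.load_data(train_data)
model.set_annotations(annotations)
model.train_batch()

segments, cost = model.viterbi_segment("placeholder_tamil_word")
print(segments)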
Experiment
The complexities of neural machine translation of morphologically rich languages were studied with respect to English to Tamil machine translation using the RNN LSTM Bi-directional encoder attention decoder architecture. To compare with a baseline system, a phrase based SMT system was implemented using the same corpus. The Factored SMT model with source-side preprocessing by Kumar et al. kumar2014improving was used as a reference for the translation between these language pairs. Also, an additional 569,772 monolingual Tamil sentences were used for the language model of the SMT system. The model used could be split up into various modules as expanded in Fig. FIGREF17 .
Bucketing
The input source and target language sentences used for training were taken and divided into bucketed pairs of sentences of a fixed number of sizes. This relationship was determined by examining the distribution of words in the corpus primarily to minimize the number of PAD tokens in the sentence. The heat map of the number of words in the English–Tamil sentence pairs of the corpus revealed that the distribution is centered around the 10–20 words region. Therefore, more buckets in that region were applied as there would be enough number of examples in each of these bucket pairs for the model to learn about the sentences in each and every bucket. The exact scheme used for the RNNSearch models is specified by Fig. FIGREF21 . The bucketing scheme for the RNNMorph model, involving morphs instead of words, was a simple shifted scheme of the one used in Fig. FIGREF21 , where every target sentence bucket count was increased uniformly by 5.
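The bucketing itself amounts to assigning each sentence pair to the smallest bucket that fits both sides, as in the illustrative sketch below; the bucket boundaries shown are placeholders, not the exact scheme of Fig. FIGREF21 .

# Each bucket is (max_source_length, max_target_length); values are illustrative.
buckets = [(10, 15), (15, 20), (20, 25), (30, 40), (50, 60)]

def assign_bucket(src_len, tgt_len):
    # Return the index of the smallest bucket that fits the pair; padding fills the rest.
    for i, (src_max, tgt_max) in enumerate(buckets):
        if src_len <= src_max and tgt_len <= tgt_max:
            return i
    return None   # pairs longer than the largest bucket are dropped

pairs = [(["the", "cat", "sat"], ["tok1", "tok2"])]   # placeholder sentence pairs
bucketed = [[] for _ in buckets]
for src, tgt in pairs:
    idx = assign_bucket(len(src), len(tgt))
    if idx is not None:
        bucketed[idx].append((src, tgt))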
Model Details
Due to various computational constraints and lack of availability of comprehensive corpora, the vocabularies for English and Tamil languages for the RNNSearch model were restricted to 60,000 out of 67,768 and 150,000 out of 340,325 respectively. The vocabulary of the languages for the RNNMorph didn't have to be restricted and the actual number of words in the corpus i.e. 67,768 words for English and 41,906 words for Tamil could be accommodated into the training. Words not in the vocabulary from the test set input and output were replaced with the universal $\langle $ UNK $\rangle $ token, symbolizing an unknown word. The LSTM hidden layer size, the training batch size, and the vocabulary sizes of the languages, together, acted as a bottleneck. The model was run on a 2GB NVIDIA GeForce GT 650M card with 384 cores and the memory allotment was constrained to the limits of the GPU. Therefore, after repeated experimentation, it was determined that with a batch size of 16, the maximum hidden layer size possible was 500, which was the size used. Attempts to reduce the batch size resulted in poor convergence, and so the parameters were set to center around the batch size of 16. The models used were of 4 layers of LSTM hidden units in the bidirectional encoder and attention decoder.
The model used a Stochastic Gradient Descent (SGD) optimization algorithm with a sampled softmax loss of 512 samples per example to handle the large vocabulary size of the target language BIBREF10 . The model was trained with a learning rate of 1.0 and a decay rate of 0.5 enforced manually. Gradient clipping based on a global norm of 5.0 was carried out to prevent gradients exploding and going to unrecoverable values tending towards infinity. The model described is the one used in the Tensorflow BIBREF11 seq2seq library.
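The optimization settings above can be expressed, for illustration, as the following TensorFlow 2 style training step; the original experiments used the older Tensorflow seq2seq library, so this is only a modern sketch of the same ideas.

import tensorflow as tf

optimizer = tf.keras.optimizers.SGD(learning_rate=1.0)   # decayed manually during training

def train_step(model, loss_fn, inputs, targets, clip_norm=5.0):
    # One SGD step with gradient clipping by global norm, as described above.
    with tf.GradientTape() as tape:
        loss = loss_fn(targets, model(inputs, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    grads, _ = tf.clip_by_global_norm(grads, clip_norm)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss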
Results and Discussion
The BLEU metric parameters (modified 1-gram, 2-gram, 3-gram and 4-gram precision values) and human evaluation metrics of adequacy, fluency and relative ranking values were used to evaluate the performance of the models.
BLEU Evaluation
The BLEU scores obtained using the various models used in the experiment are tabulated in Table TABREF25 .
The BLEU metric computes the BLEU unigram, bigram, trigram and BLEU-4 modified precision values, each micro-averaged over the test set sentences BIBREF7 . It was observed, as expected, that the performance of the phrase-based SMT model was inferior to that of the RNNSearch model. The baseline RNNSearch system was further refined by using word2vec vectors to embed semantic understanding, as observed with the slight increase in the BLEU scores. Fig. FIGREF26 plots the BLEU scores as a line graph for visualization of the improvement in performance. Also, the 4-gram BLEU scores for the various models were plotted as a bar graph in Fig. FIGREF26
Due to the agglutinative and morphologically rich nature of the target language i.e. Tamil, the use of morphological segmentation to split the words into morphemes further improved the BLEU precision values in the RNNMorph model. One of the reasons for the large extent of increase in the BLEU score could be attributed to the overall increase in the number of word units per sentence. Since the BLEU score computes micro-average precision scores, an increase in both the numerator and denominator of the precision scores is apparent with an increase in the number of tokens due to morphological segmentation of the target language. Thus, the numeric extent of the increase of accuracy might not efficiently describe the improvement in performance of the translation.
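For illustration, modified n-gram BLEU scores of this kind can be computed with NLTK as sketched below; the token lists are placeholders and the NLTK scorer is not necessarily the exact implementation used for Table TABREF25 .

from nltk.translate.bleu_score import corpus_bleu

# Placeholder data: one reference list per test sentence, plus the hypotheses.
references = [[["tok1", "tok2", "tok3", "tok4"]], [["tok5", "tok6", "tok7"]]]
hypotheses = [["tok1", "tok2", "tok3", "tok4"], ["tok5", "tok6", "tok8"]]

bleu_1 = corpus_bleu(references, hypotheses, weights=(1.0, 0, 0, 0))
bleu_2 = corpus_bleu(references, hypotheses, weights=(0.5, 0.5, 0, 0))
bleu_4 = corpus_bleu(references, hypotheses, weights=(0.25, 0.25, 0.25, 0.25))
print(bleu_1, bleu_2, bleu_4)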
Human Evaluation
To ensure that the increase in BLEU score correlated with an actual increase in translation performance, human evaluation metrics like adequacy, fluency and ranking values (between RNNSearch and RNNMorph outputs) were estimated in Table TABREF30 . A group of 50 native speakers who were well-versed in both English and Tamil acted as annotators for the evaluation. A collection of about 100 sentences was taken from the test set results for comparison. This set included a randomized selection of the translation results to ensure the objectivity of evaluation. Fluency and adequacy results for the RNNMorph results are tabulated. The adequacy rating was calculated on a 5-point scale of how much of the meaning is conveyed by the translation (All, Most, Much, Little, None). The fluency rating was calculated based on grammatical correctness on a 5-point scale of (Flawless, Good, Non-native, Disfluent, Incomprehensible). For the comparison process, the RNNMorph and the RNNSearch + Word2Vec models’ sentence level translations were individually ranked between each other, permitting the two translations to have ties in the ranking. The intra-annotator values were computed for these metrics and the scores are shown in Table TABREF32 BIBREF12 , BIBREF13 .
The human evaluation Kappa co-efficient results are calculated with respect to: $\kappa = \frac{P(A) - P(E)}{1 - P(E)}$
It was observed that the ranking Kappa co-efficient for intra-annotator ranking of the RNNMorph model was at 0.573, higher that the 0.410 of the RNNSearch+Word2Vec model, implying that the annotators found the RNNMorph model to produce better results when compared to the RNNSearch + Word2Vec model.
Model Parameters
The learning rate decay through the training process of the RNNMorph model is showcased in the graph in Fig. FIGREF34 . This process was done manually, where the learning rate was decayed after the end of specific epochs based on an observed stagnation in perplexity. The RNNMorph model achieved saturation of perplexities much earlier through the epochs than the RNNSearch + Word2Vec model. This conforms to the expected outcome, as the morphological segmentation has reduced the vocabulary size of the target language from 340,325 words to a mere 41,906 morphs.
The error function used was the sampled SoftMax loss to ensure a large target vocabulary could be accommodated BIBREF10 . A zoomed inset graph (Fig. FIGREF35 ) has been used to visualize the values of the error function for the RNNSearch + Word2Vec and RNNMorph models with 4 hidden layers. It can be seen that the RNNMorph model is consistently better in terms of the perplexity values through the time steps.
Attention Vectors
In order to further demonstrate the quality of the RNNMorph model, the attention vectors of both the RNNSearch with Word2Vec embedding and RNNMorph models are compared for several good translations in Figs. FIGREF37 and FIGREF44 . It is observed that the reduction in vocabulary size has improved the source sentence lookup by quite an extent. Each cell in the heatmap displays the magnitude of the attention layer weight $\alpha _{ij}$ for the $i$-th Tamil word and the $j$-th English word in the respective sentences. The intensity of black corresponds to the magnitude of the cell $\alpha _{ij}$ . Also, the attention vectors of the RNNSearch model with Word2Vec embeddings tend to attend to the $\langle $ EOS $\rangle $ token in the middle of the sentence leading to incomplete translations. This could be due to the fact that only 44% of the Tamil vocabulary and 74% of the English vocabulary is taken for training in this model, as opposed to 100% of English and Tamil words in the RNNMorph model.
Target vocabulary size
A very large target vocabulary is an inadvertent consequence of the morphological richness of the Tamil language. This creates a potential restriction on the accuracy of the model as many inflectional forms of the same word are trained as independent units. One of the advantages of morphological segmentation of Tamil text is that the target vocabulary size decreased from 340,325 to a mere 41,906. This reduction helps improve the performance of the translation as the occurrence of unknown tokens was reduced compared to the RNNSearch model. This morphologically segmented vocabulary is divided into a collection of morphological roots and inflections as individual units.
Repetitions
Some of the translations of the RNNMorph model have repetitions of the same phrases (Fig. FIGREF53 ), whereas such repetitions occur much less frequently in the RNNSearch predictions. Such translations would make for good results if the repetitions weren't present and all parts of the sentence occur just once. These repetitions might be due to the increase in the general sequence length of the target sentences because of the morphological segmentation. While it is true the target vocabulary size has decreased due to morphological segmentation, the RNNMorph has more input units (morphs) per sentence, which makes it more demanding of the LSTM's memory units and the feed forward network of the attention model. Additionally, this behavior could also be attributed to the errors in the semi-supervised morphological segmentation due to the complexities of the Tamil language and the extent of the corpus.
Model Outputs
The translation outputs of the RNNSearch + Word2Vec and Morph2Vec models for the same input sentences from the test set demonstrate the effectiveness of using a morphological segmentation tool and how the morphemes have changed the sentence to be more grammatically sound. It is also observed (from Fig. FIGREF55 ) that most of the translation sentences of the Morph2Vec model have no $\langle $ UNK $\rangle $ tokens. They exist in the predictions mostly only due to a word in the English test sentence not present in the source vocabulary.
Related Work
Professors CN Krishnan, Sobha et al developed a machine-aided-translation (MAT) system similar to the Anusaakara English Hindi MT system, using a small corpus and very few transfer rules, available at AU-KBC website BIBREF14 . Balajapally et al. balajapally2006multilingual developed an example based machine translation (EBMT) system with 700000 sentences for English to {Tamil, Kannada, Hindi} transliterated text BIBREF15 , BIBREF16 . Renganathan renganathan2002interactive developed a rule based MT system for English and Tamil using grammar rules for the language pair. Vetrivel et al. vetrivel2010english used HMMs to align and translate English and Tamil parallel sentences to build an SMT system. Irvine et al. irvine2013combining tried to combine parallel and similar corpora to improve the performance of English to Tamil SMT amongst other languages. Kasthuri et al. kasthuri2014rule used a rule based MT system using transfer lexicon and morphological analysis tools. Anglabharathi was developed at IIT Kanpur, a system translating English to a collection of Indian languages including Tamil using CFG like structures to create a pseudo target to convert to Indian languages BIBREF17 , BIBREF18 . A variety of hybrid approaches have also been used for English–Tamil MT in combinations of rule based (transfer methods), interlingua representations BIBREF19 , BIBREF20 , BIBREF21 . The use of Statistical Machine Translation took over the English–Tamil MT system research because of its desirable properties of language independence, better generalization features and a reduced requirement of linguistic expertise BIBREF1 , BIBREF22 , BIBREF23 . Various enhancement techniques external to the MT system have also been proposed to improve the performance of translation using morphological pre and post processing techniques BIBREF24 , BIBREF25 , BIBREF26 .
The use of RNN Encoder Decoder models in machine translation has shown good results in languages with similar grammatical structure. Deep MT systems have been performing better than the other shallow SMT models recently, with the availability of computational resources and hardware making it feasible to train such models. The first of these models came in 2014, with Cho et al SecondOneByCho. The model used was the RNN LSTM encoder decoder model, where the context vector output of the encoder (run for every word in the sentence) is fed to every decoder unit along with the previous word output until $\langle $ EOS $\rangle $ is reached. This model was used to score translation results of another MT system. Sutskever et al. sutskever2014sequence created a similar encoder decoder model with the decoder getting the context vector only for the first word of the target language sentence. After that, only the decoded target outputs act as inputs to the various time steps of the decoder. One major drawback of these models is the size of the context vector of the encoder being static in nature. The same sized vector was expected to represent sentences of arbitrary length, which was impractical when it came to very long sentences.
The next breakthrough came from Bahdanau et al. Bahdanau2014 where variable length word vectors were used and instead of just the context vector, a weighted sum of the inputs is given for the decoder. This enabled selective lookup to the source sentence during decoding and is known as the attention mechanism BIBREF27 . The attention mechanism was further analysed by Luong et al. luong2015effective where they made a distinction between global and local attention by means of AER scores of the attention vectors. A Gaussian distribution and a monotonic lookup were used to facilitate the corresponding local source sentence look-up.
Conclusion
Thus, it is seen that the use of morphological segmentation on a morphologically rich language before translation helps with the performance of the translation in multiple ways. Thus, machine translation involving morphologically rich languages should ideally be carried out only after morphological segmentation. If the translation has to be carried out between two morphologically rich languages, then both the languages' sentences should be individually segmented based on morphology. This is because while it is true that they are both morphologically rich languages, the schemes that the languages use for the process of agglutination might be different, in which case a mapping between the units would be difficult without the segmentation.
One drawback of morphological segmentation is the increase in complexity of the model due to an increase in the average sentence lengths. This cannot be avoided as it is essential to enable a correspondence between the sentences of the two languages when one of them is a simple fusional language. Even with the increase in the average sentence length, the attention models that have been developed to ensure correctness of translation of long sequences can be put to good use when involving morphologically rich languages. Another point to note here is that morphologically rich languages like Tamil generally have lesser number of words per sentence than languages like English due to the inherent property of agglutination.
Future Work
The model implemented in this paper only includes source-side morphological segmentation and does not include a target side morphological agglutination to give back the output in words rather than morphemes. In order to implement an end-to-end translation system for morphologically rich languages, a morphological generator is essential because the output units of the translation cannot be morphemes.
The same model implemented can be further enhanced by means of a better corpus that can generalize over more than just domain specific source sentences. Also, the use of a better GPU would result in a better allocation of the hidden layer sizes and the batch sizes thereby possibly increasing the scope and accuracy of learning of the translation model.
Although not directly related to machine translation, the novel encoder-decoder architecture proposed by Rocktaschel et al. rocktaschel2015reasoning for Natural Language Inference (NLI) could also be applied here. Their model fuses inferences from each individual word, summarizing information at every step, thereby linking the hidden state of the encoder with that of the decoder by means of a weighted sum trained for optimization.
Acknowledgements
I would like to thank Dr. M. Anand Kumar, Assistant Professor, Amrita Vishwa Vidyapeetham for his continuous support and guidance. I would also like to thank Dr. Arvindan, Professor, SSN College Of Engineering for his inputs and suggestions.
Q: How were the human judgements assembled?
A: 50 human annotators ranked a random sample of 100 translations by Adequacy, Fluency and overall ranking on a 5-point scale.
Introduction
Neural Machine Translation (NMT) has shown its effectiveness in translation tasks, with NMT systems performing best in recent machine translation campaigns BIBREF0 , BIBREF1 . Compared to phrase-based Statistical Machine Translation (SMT), which is basically an ensemble of different features trained and tuned separately, NMT directly models the translation relationship between source and target sentences. Unlike SMT, NMT does not require much linguistic information or large monolingual data to achieve good performance.
An NMT system consists of an encoder, which recursively reads and represents the whole source sentence as a context vector, and a recurrent decoder, which takes the context vector and its previous state to predict the next target word. It is then trained in an end-to-end fashion to learn parameters that maximize the likelihood between the outputs and the references. Recently, attention-based NMT has been featured in most state-of-the-art systems. First introduced by BIBREF2 , the attention mechanism is integrated into the decoder side as feedforward layers. It allows the NMT system to decide which source words should take part in predicting the next target word, and it improves NMT significantly. Nevertheless, since the attention mechanism is specific to a particular source sentence and the target word under consideration, it is also specific to particular language pairs.
Some recent work has focused on extending the NMT framework to multilingual scenarios. By training such a network using parallel corpora in a number of different languages, NMT could benefit from additional information embedded in a common semantic space across languages. Basically, the proposed NMT systems are required to employ multiple encoders or multiple decoders to deal with multilinguality. Furthermore, in order to avoid the tight dependency of the attention mechanism on specific language pairs, they also need to modify their architecture to combine either the encoders or the attention layers. These modifications are specific to the purpose of the tasks as well. Thus, those multilingual NMT systems are more complicated, have many more free parameters to learn, and are more difficult to train in a standard fashion compared to the original NMT.
In this paper, we introduce a unified approach to seamlessly extend the original NMT to multilingual settings. Our approach allows us to integrate any language on either side of the encoder-decoder architecture with only one encoder and one decoder for all the languages involved. Moreover, it is not necessary to make any network modification to enable the attention mechanism in our NMT systems. We then apply our proposed framework in two demanding scenarios: under-resourced translation and zero-resourced translation. The results show that bringing multilinguality to NMT helps to improve individual translations. With some insightful analyses of the results, we set our goal toward a fully multilingual NMT framework.
The paper starts with a detailed introduction to attention-based NMT. In Section SECREF3 , related work about multi-task NMT is reviewed. Section SECREF5 describes our proposed approach and thorough comparisons to the related work. It is followed by a section evaluating our systems in the two aforementioned scenarios, in which different strategies have been employed under a unified approach (Section SECREF4 ). Finally, the paper ends with conclusions and future work.
Neural Machine Translation: Background
An NMT system consists of an encoder, which automatically encodes the characteristics of a source sentence into fixed-length context vectors, and a decoder, which recursively combines the produced context vectors with the previous target word to generate the most probable word from a target vocabulary.
More specifically, a bidirectional recurrent encoder reads every word INLINEFORM0 of a source sentence INLINEFORM1 and encodes a representation INLINEFORM2 of the sentence into a fixed-length vector INLINEFORM3 concatenated from those of the forward and backward directions: INLINEFORM4
Here INLINEFORM0 is the one-hot vector of the word INLINEFORM1 and INLINEFORM2 is the word embedding matrix which is shared across the source words. INLINEFORM3 is the recurrent unit computing the current hidden state of the encoder based on the previous hidden state. INLINEFORM4 is then called an annotation vector, which encodes the source sentence up to the time INLINEFORM5 from both forward and backward directions. Recurrent units in NMT can be a simple recurrent neural network unit (RNN), a Long Short-Term Memory unit (LSTM) BIBREF3 or a Gated Recurrent Unit (GRU) BIBREF4
Similar to the encoder, the recurrent decoder generates one target word INLINEFORM0 to form a translated target sentence INLINEFORM1 in the end. At the time INLINEFORM2 , it takes the previous hidden state of the decoder INLINEFORM3 , the previous embedded word representation INLINEFORM4 and a time-specific context vector INLINEFORM5 as inputs to calculate the current hidden state INLINEFORM6 : INLINEFORM7
Again, INLINEFORM0 is the recurrent activation function of the decoder and INLINEFORM1 is the shared word embedding matrix of the target sentences. The context vector INLINEFORM2 is calculated based on the annotation vectors from the encoder. Before feeding the annotation vectors into the decoder, an attention mechanism is set up in between, in order to choose which annotation vectors should contribute to the predicting decision of the next target word. Intuitively, a relevance between the previous target word and the annotation vectors can be used to form some attention scenario. There exist several ways to calculate the relevance, as shown in BIBREF5 , but what we describe here follows the method proposed in BIBREF2 : DISPLAYFORM0
In BIBREF2 , this attention mechanism, originally called an alignment model, is a simple feedforward network whose first layer is a learnable layer parameterized by INLINEFORM0 , INLINEFORM1 and INLINEFORM2 . The relevance scores INLINEFORM3 are then normalized into attention weights INLINEFORM4 , and the context vector INLINEFORM5 is calculated as the weighted sum of all annotation vectors INLINEFORM6 . Depending on how much attention the target word at time INLINEFORM7 puts on the source states INLINEFORM8 , a soft alignment is learned. Employed this way, word alignment is not a latent variable but a parametrized function, making the alignment model differentiable. Thus, it can be trained together with the whole architecture using backpropagation.
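As an illustration, the additive alignment model and the resulting context vector can be sketched in NumPy as follows; the parameter names (W_a, U_a, v_a) are chosen for exposition and are not tied to a particular implementation.

```python
import numpy as np

def additive_attention(prev_dec_state, annotations, W_a, U_a, v_a):
    """Bahdanau-style alignment model and context vector (sketch).

    prev_dec_state: (H_dec,)   previous decoder hidden state
    annotations:    (S, H_enc) encoder annotation vectors
    W_a: (H_dec, A), U_a: (H_enc, A), v_a: (A,) learnable parameters
    """
    # One relevance score per source position, via a small feedforward layer
    scores = np.tanh(prev_dec_state @ W_a + annotations @ U_a) @ v_a   # (S,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # normalized attention weights
    context = weights @ annotations           # weighted sum of annotation vectors
    return context, weights
```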
One of the most severe problems of NMT is the handling of rare words, which either are not in the short lists of the vocabularies, i.e. out-of-vocabulary (OOV) words, or do not appear in the training set at all. In BIBREF6 , the rare target words are copied from their aligned source words after the translation. This heuristic works well for OOV words and named entities but is unable to translate unseen words. In BIBREF7 , the proposed NMT models have been shown not only to be effective at reducing vocabulary sizes but also to have the ability to generate unseen words. This is achieved by segmenting the rare words into subword units and translating them. State-of-the-art translation systems essentially employ subword NMT BIBREF7 .
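For concreteness, the core of the subword approach of BIBREF7 , Byte-Pair Encoding, can be sketched as follows; this is a simplified illustration rather than the actual subword preprocessing code used with Nematus.

```python
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    """Learn BPE merge operations from a {word: frequency} dict (sketch)."""
    vocab = {tuple(word) + ('</w>',): f for word, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq          # count adjacent symbol pairs
        if not pairs:
            break
        best = max(pairs, key=pairs.get)       # most frequent pair becomes a merge
        merges.append(best)
        merged_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            merged_vocab[tuple(out)] = freq
        vocab = merged_vocab
    return merges
```

Rare words are then segmented into the learned subword units, so the decoder can compose unseen words from known pieces.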
Universal Encoder and Decoder for Multilingual Neural Machine Translation
While the majority of previous research has focused on improving the performance of NMT on individual language pairs with individual NMT systems, recent work has started investigating potential ways to conduct translation involving multiple languages using a single NMT system. A possible reason for these efforts lies in the unique architecture of NMT. Unlike SMT, NMT consists of separate neural networks for the source and target sides, i.e. the encoder and decoder, respectively. This allows these components to map a sentence in any language to a representation in an embedding space which is believed to share common semantics among the source languages involved. From that shared space, the decoder, with some implicit or explicit relevant constraints, could transform the representation into a concrete sentence in any desired language. In this section, we review some related work on this matter. We then describe a unified approach toward a universal attention-based NMT scheme. Our approach does not require any architecture modification, and it can be trained to learn a minimal number of parameters compared to the other work.
Related Work
By extending the solution of sequence-to-sequence modeling using encoder-decoder architectures to multi-task learning, Luong2016 managed to achieve better performance on some INLINEFORM0 tasks such as translation, parsing and image captioning compared to individual tasks. Specifically in translation, the work utilizes multiple encoders to translate from multiple languages, and multiple decoders to translate into multiple languages. In this view of multilingual translation, each language on the source or target side is modeled by one encoder or decoder, depending on the side of the translation. Due to the natural diversity between two tasks in that multi-task learning scenario, e.g. translation and parsing, it could not feature the attention mechanism, although attention has proven its effectiveness in NMT. There exist two proposed directions for multilingual translation scenarios that do leverage the attention mechanism. The first is the work of BIBREF8 , which introduces a one-to-many multilingual NMT system that translates from one source language into multiple target languages. Having one source language, the attention mechanism is handed over to the corresponding decoder, and the objective function is changed to adapt to multilingual settings. At test time, the parameters specific to a desired language pair are used to perform the translation.
Firat2016 proposed another approach which genuinely delivers attention-based NMT to multilingual translation. As in BIBREF9 , their approach utilizes one encoder per source language and one decoder per target language for many-to-many translation tasks. Instead of a quadratic number of independent attention layers, however, one single attention mechanism is integrated into their NMT, performing an affine transformation between the hidden layers of INLINEFORM0 source languages and those of INLINEFORM1 target languages. Their architecture has to be changed to accommodate such a complicated shared attention mechanism.
In a separate effort to achieve multilingual NMT, the work of Zoph2016 leverages available parallel data from other language pairs to help reduce possible ambiguities in the translation process into a single target language. They employed multi-source attention-based NMT in a way that requires only one attention mechanism despite having multiple encoders. To achieve this, the outputs of the encoders are combined before being fed to the attention layer. They implemented two types of encoder combination: one adds a non-linear layer on the concatenation of the encoders' hidden states; the other uses a variant of LSTM taking the respective gate values from the individual LSTM units of the encoders. As a result, the combined hidden states contain information from both encoders, thus encoding the common semantics of the two source languages.
Universal Encoder and Decoder
Inspired by multi-source NMT, where additional parallel data in several languages are expected to benefit single translations, we aim to develop an NMT-based approach toward a universal framework for multilingual translation. Our solution features two treatments: 1) coding the words in different languages as different words in language-mixed vocabularies, and 2) forcing the NMT to translate a representation of the source sentence into a sentence in the desired target language.
Language-specific Coding. When the encoder of an NMT system considers words across languages as different words, with a well-chosen architecture it is expected to learn a good representation of the source words in an embedding space in which words carrying similar meaning are closer to each other than those that are semantically different. This should hold true when the words have the same or similar surface form, such as (@de@Obama; @en@Obama) or (@de@Projektion; @en@projection). This should also hold true when the words have the same or similar meaning across languages, such as (@en@car; @en@automobile) or (@de@Flussufer; @en@bank). Our encoder then acts similarly to the one of the multi-source approach BIBREF10 , collecting additional information from other sources for better translations, but with a much simpler embedding function. Unlike them, we need only one encoder, so we can reduce the number of parameters to learn. Furthermore, we neither need to change the network architecture nor depend on which recurrent unit (GRU, LSTM or simple RNN) is currently used in the encoder.
We can apply the same trick to the target sentences and thus enable the many-to-many translation capability of our NMT system. Similar to multi-target translation BIBREF8 , we further exploit the correlation in semantics of those target sentences across different languages. The main difference between our approach and the work of BIBREF8 is that we need only one decoder for all target languages. Given one encoder for multiple source languages and one decoder for multiple target languages, it is trivial to incorporate the attention mechanism as in the case of a regular NMT for single-language translation. In training, the attention layers are directed to learn relevant alignments between words in a specific language pair and forward the produced context vector to the decoder. We now rely entirely on the network to learn good alignments between the source and target sides. In fact, given more information, our system is able to form good alignments.
In comparison to other research that performs complete multi-task learning, e.g. the work of BIBREF9 or the approach proposed by BIBREF11 , our method is able to accommodate the attention layers seamlessly and easily. It also draws a clear distinction from those works in terms of the complexity of the whole network: considerably fewer parameters to learn, thus reducing overfitting, with a conventional attention mechanism and a standard training procedure.
Target Forcing. While language-specific coding allows us to implement a multilingual attention-based NMT, there are two issues to consider before training the network. The first is that the number of rare words increases in proportion to the number of languages involved. This can be addressed by applying a rare-word treatment method with appropriate awareness of the vocabularies' size. The second is more problematic: the level of ambiguity in the translation process definitely increases due to the additional introduction of words having the same or similar meaning across languages on both the source and target sides. We deal with this problem by explicitly forcing the attention and translation in the direction that we prefer, expecting this information to limit the ambiguity to the scope of one language instead of all target languages. We realize this idea by adding, at the beginning and at the end of every source sentence, a special symbol indicating the language it should be translated into. For example, in a multilingual NMT, when a source sentence is in German and the target language is English, the original sentence (already language-specific coded) is:
@de@darum @de@geht @de@es @de@in @de@meinem @de@Vortrag
Now when we force it to be translated into English, the target-forced sentence becomes:
<E> @de@darum @de@geht @de@es @de@in @de@meinem @de@Vortrag <E>
Due to the nature of the recurrent units used in the encoder and decoder, in training, those symbols encourage the network to learn the translation of the following target words for a particular language pair. At test time, the information about the target language that we provide helps to limit the translation candidates, hence forming the translation in the desired language.
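Both preprocessing steps can be realized as simple string transformations; the sketch below (with illustrative function names) reproduces the German-to-English example above.

```python
def language_specific_coding(sentence, lang):
    """Prefix every token with its language code, e.g. 'darum geht' -> '@de@darum @de@geht'."""
    return ' '.join('@{}@{}'.format(lang, tok) for tok in sentence.split())

def target_forcing(coded_sentence, target_symbol):
    """Wrap the coded source sentence with the target-language symbol, e.g. <E> ... <E>."""
    return '<{0}> {1} <{0}>'.format(target_symbol, coded_sentence)

# The German-to-English example from the text:
src = language_specific_coding('darum geht es in meinem Vortrag', 'de')
print(target_forcing(src, 'E'))
# -> <E> @de@darum @de@geht @de@es @de@in @de@meinem @de@Vortrag <E>
```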
Figure FIGREF8 illustrates the essence of our approach. With two steps in the preprocessing phase, namely language-specific coding and target forcing, we are able to employ multilingual attention-based NMT without any special treatment in training such a standard architecture. Our encoder and attention-enabled decoder can be seen as an encoder and decoder shared across languages, i.e. a universal encoder and decoder. The flexibility of our approach allows us to integrate any language into the source or target side. As we will see in Section SECREF4 , it has proven to be extremely helpful not only in low-resourced scenarios but also in the translation of well-resourced language pairs, as it provides a novel way to make use of large monolingual corpora in NMT.
Evaluation
In this section, we describe the evaluation of our proposed approach in comparison with strong NMT baselines in two scenarios: the translation of an under-resourced language pair and the translation of a language pair for which no parallel data exists at all.
Experimental Settings
Training Data. We choose WIT3's TED corpus BIBREF12 as the basis of our experiments, since it might be the only high-quality parallel data available for many low-resourced language pairs. TED is also multilingual in the sense that it includes a large number of talks which are commonly translated into many languages. In addition, we use a much larger corpus provided freely by the WMT organizers when we evaluate the impact of our approach in a real machine translation campaign. It includes the parallel corpus extracted from the digital corpus of the European Parliament (EPPS), the News Commentary corpus (NC) and the web-crawled parallel data (CommonCrawl). While the number of sentences in popular TED corpora varies from 13 thousand to 17 thousand, the total number of sentences in that larger corpus is approximately 3 million.
Neural Machine Translation Setup. All experiments have been conducted using the NMT framework Nematus. Following the work of Sennrich2016a, subword segmentation is handled in the preprocessing phase using Byte-Pair Encoding (BPE). Except where stated otherwise in some experiments, we set the number of BPE merge operations to 39500 on the joint source and target data. When training all NMT systems, we remove sentence pairs exceeding 50 words in length and shuffle the pairs inside every minibatch. Our short-list vocabularies contain the 40,000 most frequent words, while the others are considered rare words and handled by subword translation. We use a 1024-cell GRU layer and 1000-dimensional embeddings, with dropout at every layer: probability 0.2 in the embedding and hidden layers and 0.1 in the input and output layers. We trained our systems using gradient descent optimization with Adadelta BIBREF13 on minibatches of size 80, and the gradient is rescaled whenever its norm exceeds 1.0. All trainings last approximately seven days if the early-stopping condition is not reached. At regular intervals, an external evaluation script computing BLEU BIBREF14 is run on a development set to decide the early-stopping condition. This evaluation script is also used to choose the model achieving the best BLEU on the development set, rather than the maximal log-likelihood between the translations and target sentences during training. In translation, the framework produces INLINEFORM0 -best candidates, and we then use beam search with a beam size of 12 to get the best translation.
Under-resourced Translation
First, we consider the translation of an under-resourced language pair. Here a small portion of the large English-German parallel corpus is used to simulate the scenario where we do not have much parallel data for translating English text into German. We perform language-specific coding on both the source and target sides. By accommodating the German monolingual data as an additional input (German INLINEFORM0 German), which we call the mix-source approach, we can enrich the training data in a simple, natural way. Given this under-resourced situation, it helps our NMT obtain a better representation of the source side and hence learn the translation relationship better. Including monolingual data in this way might also improve the translation of some rare word types such as named entities. Furthermore, as the ultimate goal of our work, we would like to investigate the advantages of multilinguality in NMT. We incorporate a similar portion of the French-German parallel corpus into the English-German one. As discussed in Section SECREF5 , this is expected to help reduce the ambiguity in translation for one language pair, since it utilizes the semantic context provided by the other source language. We name this mix-multi-source.
Table TABREF16 summarizes the performance of our systems measured in BLEU on two test sets, tst2013 and tst2014. Compared to the baseline NMT system, which is trained solely on the TED English-German data, our mix-source system achieves a considerable improvement of 2.6 BLEU points on tst2013 and 2.1 BLEU points on tst2014. Adding French data to the source side and the corresponding German data to the target side in our mix-multi-source system also gains 2.2 and 1.6 BLEU points on tst2013 and tst2014, respectively. We observe a larger improvement from our mix-source system than from our mix-multi-source system. We speculate that the reason is that the mix-source encoder utilizes exactly the same information shared between the two languages, while the mix-multi-source encoder receives and processes similar, but not necessarily identical, information in the other language. We might validate this hypothesis by comparing two systems trained on a common English-German-French corpus of TED; we leave this for future work.
As expected, Figure FIGREF19 shows how words in different languages can be close in the shared space after being learned to translate into a common language. We extract the word embeddings from the encoder of the mix-multi-source system (En,Fr INLINEFORM0 De,De) after training, remove the language-specific codes (@en@ and @fr@) and project the word vectors into 2D space using t-SNE BIBREF15 .
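A minimal sketch of this analysis step, using scikit-learn's t-SNE and assuming access to the trained encoder's embedding matrix together with its language-coded vocabulary:

```python
import numpy as np
from sklearn.manifold import TSNE

def project_shared_embeddings(coded_vocab, emb_matrix):
    """Project encoder word embeddings into 2-D after stripping language codes (sketch).

    coded_vocab: list of language-coded words, e.g. ['@en@car', '@fr@voiture', ...]
    emb_matrix:  (V, d) word embedding matrix taken from the trained encoder
    """
    words = [w.replace('@en@', '').replace('@fr@', '') for w in coded_vocab]
    coords = TSNE(n_components=2).fit_transform(np.asarray(emb_matrix))
    return list(zip(words, coords))   # (word, 2-D coordinate) pairs for plotting
```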
Using large monolingual data in NMT.
A standard NMT system employs parallel data only. While good parallel corpora are limited in number, obtaining monolingual data for an arbitrary language is trivial. To make use of a German monolingual corpus in an English INLINEFORM0 German NMT system, sennrich2016b built a separate German INLINEFORM1 English NMT system using the same parallel corpus and then used it to translate the German monolingual corpus back into English, forming a synthetic parallel corpus. gulcehre2015 trained a separate RNN-based language model to score the monolingual corpus and integrated it into the NMT system through shallow or deep fusion. Both methods require training separate systems, possibly with different hyperparameters for each. Conversely, by applying the mix-source method to the big monolingual data, we need to train only one network. We mix the TED parallel corpus and the substantial monolingual corpus (EPPS+NC+CommonCrawl) and train a mix-source NMT system on those data.
The first result is not encouraging: the performance is even worse than the baseline NMT trained on the small parallel data only. Not using the same information on the source side, as discussed in the case of the mix-multi-source strategy, could explain the degraded performance of such a system. But we believe that the magnitude and imbalance of the corpus are the main reasons. The data contain nearly four million sentences, but only around twenty thousand of them (0.5%) are genuine parallel data. As a quick remedy, after obtaining the model trained on that big data, we continue training on the real parallel corpus for some more epochs. When this adaptation is applied, our system brings an improvement of +1.52 BLEU on tst2013 and +1.06 BLEU on tst2014 (Table TABREF21 ).
Zero-resourced Translation
Among low-resourced scenarios, the zero-resourced translation task is an extreme case: it is one of the most difficult situations, in which there is no parallel data between the translating language pair at all. To the best of our knowledge, no work on using NMT for zero-resourced translation tasks has been published up to now. In this section, we extend our strategies using the proposed multilingual NMT approach as a first attempt at this extreme situation.
We employ language-specific coding and target forcing in a strategy called bridge. Unlike the strategies used in the under-resourced translation task, bridge is an entirely many-to-many multilingual NMT. Simulating a zero-resourced German INLINEFORM0 French translation task given the available German-English and English-French parallel corpora, after applying language-specific coding and target forcing to each corpus, we mix those data with English-English data as a “bridge” creating a connection between German and French. We also propose a variant of this strategy in which we additionally incorporate French-French data, which we call universal.
We evaluate the bridge and universal systems on two German INLINEFORM0 French test sets. They are compared to a direct system, which is an NMT trained on German INLINEFORM1 French data, and to a pivot system, which essentially consists of two separate NMTs trained to translate from German to English and from English to French. The direct system would not exist in a real zero-resourced situation; we refer to it as the perfect system for comparison purposes only. In the case of the pivot system, to generate a French translation from a German sentence, we first translate it into English, and the output sentence is then fed to the English INLINEFORM2 French NMT system to obtain the French translation. Since more than two languages are involved in those systems, we increase the number of BPE merge operations proportionally in order to reduce the number of rare words in such systems. We do not expect our proposed systems to perform well with this primitive way of building direct translation connections, since this is an essentially difficult task. We report the performance of those systems in Table TABREF23 .
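For clarity, the pivot baseline simply chains two separately trained systems; a minimal sketch (with an illustrative translate() callable standing in for beam-search decoding) is:

```python
def pivot_translate(de_sentence, de_en_system, en_fr_system, translate):
    """German -> French via English pivoting (sketch).

    translate(system, sentence) is assumed to return the 1-best translation
    produced by the given NMT system.
    """
    en_hypothesis = translate(de_en_system, de_sentence)   # German -> English
    return translate(en_fr_system, en_hypothesis)          # English -> French
```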
Unsurprisingly, both the bridge and universal systems perform worse than the pivot one. We consider two possible reasons:
Our target forcing mechanism is fairly primitive. Since the process is applied after language-specific coding, the target forcing symbol is the same for all source sentences in every language. Thus, the forcing strength might not be enough to guide the decision of the next words. Once the very first word is translated into a word in the wrong language, the following words tend to be translated into that wrong language as well. Table TABREF24 shows some statistics of the translated words and sentences in the wrong language.
Balancing of the training corpus. Although it is not as severe as in the case of the mix-source system trained on large monolingual data, the limited number of sentences in the target language can affect training. The difference of 1.07 BLEU points between bridge and universal might support this assumption, as we added more target data (French) in the universal strategy, thus reducing the imbalance in training.
These issues will be addressed in our future work toward fully multilingual attention-based NMT.
Conclusion and Future Work
In this paper, we present our first attempts at building a multilingual Neural Machine Translation framework. By treating words in different languages as different words and forcing the attention and translation toward the desired target language, we are able to employ attention-enabled NMT as a multilingual translation system. Our proposed approach alleviates the need for complicated architecture re-design when accommodating the attention mechanism. In addition, the number of free parameters to learn in our network does not go beyond the magnitude of a single NMT system. With its universality, our approach has shown its effectiveness in an under-resourced translation task with considerable improvements. In addition, the approach has achieved interesting and promising results when applied to a translation task for which there is no direct parallel corpus between the source and target languages.
Nevertheless, there are issues that we will continue to address in future work. More balanced data would be helpful for this framework, and the mechanism for forcing the NMT system toward the right target language could be improved. We could also conduct more detailed analyses of the various strategies under the framework to show its universality.
Q: Do they test their framework performance on commonly used language pairs, such as English-to-German?
A: Yes
Introduction
In practice, it is often difficult and costly to annotate sufficient training data for diverse application domains on-the-fly. We may have sufficient labeled data in an existing domain (called the source domain), but very few or no labeled data in a new domain (called the target domain). This issue has motivated research on cross-domain sentiment classification, where knowledge in the source domain is transferred to the target domain in order to alleviate the required labeling effort.
One key challenge of domain adaptation is that data in the source and target domains are drawn from different distributions. Thus, adaptation performance will decline with an increase in distribution difference. Specifically, in sentiment analysis, reviews of different products have different vocabulary. For instance, restaurant reviews would contain opinion words such as “tender”, “tasty”, or “undercooked”, while movie reviews would contain “thrilling”, “horrific”, or “hilarious”. The intersection between these two sets of opinion words could be small, which makes domain adaptation difficult.
Several techniques have been proposed for addressing the problem of domain shifting. The aim is to bridge the source and target domains by learning domain-invariant feature representations so that a classifier trained on a source domain can be adapted to another target domain. In cross-domain sentiment classification, many works BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 utilize a key intuition that domain-specific features could be aligned with the help of domain-invariant features (pivot features). For instance, “hilarious” and “tasty” could be aligned as both of them are relevant to “good”.
Despite their promising results, these works share two major limitations. First, they highly depend on the heuristic selection of pivot features, which may be sensitive to different applications. Thus the learned new representations may not effectively reduce the domain difference. Furthermore, these works only utilize the unlabeled target data for representation learning while the sentiment classifier was solely trained on the source domain. There have not been many studies on exploiting unlabeled target data for refining the classifier, even though it may contain beneficial information. How to effectively leverage unlabeled target data still remains an important challenge for domain adaptation.
In this work, we argue that the information from unlabeled target data is beneficial for domain adaptation and we propose a novel Domain Adaptive Semi-supervised learning framework (DAS) to better exploit it. Our main intuition is to treat the problem as a semi-supervised learning task by considering target instances as unlabeled data, assuming the domain distance can be effectively reduced through domain-invariant representation learning. Specifically, the proposed approach jointly performs feature adaptation and semi-supervised learning in a multi-task learning setting. For feature adaptation, it explicitly minimizes the distance between the encoded representations of the two domains. On this basis, two semi-supervised regularizations – entropy minimization and self-ensemble bootstrapping – are jointly employed to exploit unlabeled target data for classifier refinement.
We evaluate our method rigorously under multiple experimental settings by taking label distribution and corpus size into consideration. The results show that our model is able to obtain significant improvements over strong baselines. We also demonstrate through a series of analysis that the proposed method benefits greatly from incorporating unlabeled target data via semi-supervised learning, which is consistent with our motivation. Our datasets and source code can be obtained from https://github.com/ruidan/DAS.
Related Work
Domain Adaptation: The majority of feature adaptation methods for sentiment analysis rely on a key intuition that even though certain opinion words are completely distinct for each domain, they can be aligned if they have high correlation with some domain-invariant opinion words (pivot words) such as “excellent” or “terrible”. Blitzer et al. ( BIBREF0 ) proposed a method based on structural correspondence learning (SCL), which uses pivot feature prediction to induce a projected feature space that works well for both the source and the target domains. The pivot words are selected in a way to cover common domain-invariant opinion words. Subsequent research aims to better align the domain-specific words BIBREF1 , BIBREF5 , BIBREF3 such that the domain discrepancy could be reduced. More recently, Yu and Jiang ( BIBREF4 ) borrow the idea of pivot feature prediction from SCL and extend it to a neural network-based solution with auxiliary tasks. In their experiment, substantial improvement over SCL has been observed due to the use of real-valued word embeddings. Unsupervised representation learning with deep neural networks (DNN) such as denoising autoencoders has also been explored for feature adaptation BIBREF6 , BIBREF7 , BIBREF8 . It has been shown that DNNs could learn transferable representations that disentangle the underlying factors of variation behind data samples.
Although the aforementioned methods aim to reduce the domain discrepancy, they do not explicitly minimize the distance between distributions, and some of them rely heavily on the selection of pivot features. In our method, we formally construct an objective for this purpose. Similar ideas have been explored in many computer vision problems, where the representations of the underlying domains are encouraged to be similar through explicit objectives BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 such as maximum mean discrepancy (MMD) BIBREF14 . In NLP tasks, Li et al. ( BIBREF15 ) and Chen et al. ( BIBREF16 ) both proposed using an adversarial training framework for reducing domain difference. In their models, a sub-network is added as a domain discriminator while deep features are learned to confuse the discriminator. The feature adaptation component in our model shares similar intuition with MMD and adversarial training. We will show a detailed comparison with them in our experiments.
Semi-supervised Learning: We attempt to treat domain adaptation as a semi-supervised learning task by considering the target instances as unlabeled data. Some efforts have been initiated on transfer learning from unlabeled data BIBREF17 , BIBREF18 , BIBREF19 . In our model, we reduce the domain discrepancy by feature adaptation, and thereafter adopt semi-supervised learning techniques to learn from unlabeled data. Primarily motivated by BIBREF20 and BIBREF21 , we employ entropy minimization and self-ensemble bootstrapping as regularizations to incorporate unlabeled data. Our experimental results show that both methods are effective when jointly trained with the feature adaptation objective, which is consistent with our motivation.
Notations and Model Overview
We conduct most of our experiments under an unsupervised domain adaptation setting, where we have no labeled data from the target domain. Consider two sets INLINEFORM0 and INLINEFORM1 . INLINEFORM2 is from the source domain with INLINEFORM3 labeled examples, where INLINEFORM4 is a one-hot vector representation of sentiment label and INLINEFORM5 denotes the number of classes. INLINEFORM6 is from the target domain with INLINEFORM7 unlabeled examples. INLINEFORM8 denotes the total number of training documents including both labeled and unlabeled. We aim to learn a sentiment classifier from INLINEFORM13 and INLINEFORM14 such that the classifier would work well on the target domain. We also present some results under a setting where we assume that a small number of labeled target examples are available (see Figure FIGREF27 ).
For the proposed model, we denote INLINEFORM0 parameterized by INLINEFORM1 as a neural-based feature encoder that maps documents from both domains to a shared feature space, and INLINEFORM2 parameterized by INLINEFORM3 as a fully connected layer with softmax activation serving as the sentiment classifier. We aim to learn feature representations that are domain-invariant and at the same time discriminative on both domains, thus we simultaneously consider three factors in our objective: (1) minimize the classification error on the labeled source examples; (2) minimize the domain discrepancy; and (3) leverage unlabeled data via semi-supervised learning.
Suppose we already have the encoded features of documents INLINEFORM0 (see Section SECREF10 ), the objective function for purpose (1) is thus the cross entropy loss on the labeled source examples DISPLAYFORM0
where INLINEFORM0 denotes the predicted label distribution. In the following subsections, we will explain how to perform feature adaptation and domain adaptive semi-supervised learning in details for purpose (2) and (3) respectively.
Feature Adaptation
Unlike prior works BIBREF0 , BIBREF4 , our method does not attempt to align domain-specific words through pivot words. In our preliminary experiments, we found that word embeddings pre-trained on a large corpus are able to adequately capture this information. As we will later show in our experiments, even without adaptation, a naive neural network classifier with pre-trained word embeddings can already achieve reasonably good results.
We attempt to explicitly minimize the distance between the source and target feature representations ( INLINEFORM0 and INLINEFORM1 ). A few methods from the literature can be applied, such as Maximum Mean Discrepancy (MMD) BIBREF14 or adversarial training BIBREF15 , BIBREF16 . The main idea of MMD is to estimate the distance between two distributions as the distance between the sample means of the projected embeddings in a Hilbert space. MMD is implicitly computed through a characteristic kernel, which is used to ensure that the sample mean is injective, leading to the MMD being zero if and only if the distributions are identical. In our implementation, we skip the mapping procedure induced by a characteristic kernel to simplify computation and learning. We simply estimate the distribution distance as the distance between the sample means in the current embedding space. Although this approximation cannot preserve all statistical features of the underlying distributions, we find that it performs comparably to MMD on our problem. The following equations formally describe the feature adaptation loss INLINEFORM2 : DISPLAYFORM0
INLINEFORM0 normalization is applied to the mean representations INLINEFORM1 and INLINEFORM2 , rescaling the vectors such that all entries sum to 1. We adopt a symmetric version of KL divergence BIBREF12 as the distance function: given two distribution vectors INLINEFORM3 and INLINEFORM4 , the distance is the sum of the KL divergences computed in both directions.
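A minimal NumPy sketch of this adaptation loss, assuming mini-batches of non-negative encoded features (e.g. ReLU activations) so that the normalized means can be treated as distribution vectors:

```python
import numpy as np

def feature_adaptation_loss(source_feats, target_feats, eps=1e-8):
    """Symmetric-KL distance between mean source and target representations (sketch).

    source_feats, target_feats: (n, d) encoded mini-batch features.
    """
    g_s = source_feats.mean(axis=0)
    g_t = target_feats.mean(axis=0)
    g_s = g_s / (g_s.sum() + eps)   # rescale so all entries sum to 1
    g_t = g_t / (g_t.sum() + eps)
    kl_st = np.sum(g_s * np.log((g_s + eps) / (g_t + eps)))
    kl_ts = np.sum(g_t * np.log((g_t + eps) / (g_s + eps)))
    return kl_st + kl_ts            # symmetric KL divergence
```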
Domain Adaptive Semi-supervised Learning (DAS)
We attempt to exploit the information in target data through semi-supervised learning objectives, which are jointly trained with INLINEFORM0 and INLINEFORM1 . Normally, to incorporate target data, we can minimize the cross entropy loss between the true label distributions INLINEFORM2 and the predicted label distributions INLINEFORM3 over target samples. The challenge here is that INLINEFORM4 is unknown, and thus we attempt to estimate it via semi-supervised learning. We use entropy minimization and bootstrapping for this purpose. We will later show in our experiments that both methods are effective, and jointly employing them overall yields the best results.
Entropy Minimization: In this method, INLINEFORM0 is estimated as the predicted label distribution INLINEFORM1 , which is a function of INLINEFORM2 and INLINEFORM3 . The loss can thus be written as DISPLAYFORM0
Assume the domain discrepancy can be effectively reduced through feature adaptation, by minimizing the entropy penalty, training of the classifier is influenced by the unlabeled target data and will generally maximize the margins between the target examples and the decision boundaries, increasing the prediction confidence on the target domain.
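The entropy penalty can be sketched as follows; target_probs is an illustrative name for the classifier's softmax outputs on an unlabeled target mini-batch.

```python
import numpy as np

def entropy_minimization_loss(target_probs, eps=1e-8):
    """Average prediction entropy on unlabeled target examples (sketch).

    target_probs: (n, C) softmax outputs of the sentiment classifier.
    """
    per_example = -np.sum(target_probs * np.log(target_probs + eps), axis=1)
    return per_example.mean()   # low entropy = confident, large-margin predictions
```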
Self-ensemble Bootstrapping: Another way to estimate INLINEFORM0 corresponds to bootstrapping. The idea is to estimate the unknown labels as the predictions of the model learned in the previous round of training. Bootstrapping has been explored for domain adaptation in previous works BIBREF18 , BIBREF19 . However, in their methods, domain discrepancy was not explicitly minimized via feature adaptation. Applying bootstrapping or other semi-supervised learning techniques in this case may worsen the results, as the classifier can perform quite badly on the target data.
Algorithm 1: Pseudocode for training DAS.
Input: labeled source examples, unlabeled source and target examples; ensembling momentum; weight ramp-up function.
for each training epoch:
    for each minibatch of labeled source, unlabeled source and unlabeled target examples:
        compute the source classification loss on the labeled source minibatch
        compute the feature adaptation loss between the source and target minibatches
        compute the entropy minimization loss on the target minibatch
        compute the bootstrapping loss on all documents in the minibatch
        combine the losses with their respective weights and update the network parameters
    compute the network predictions for the current epoch
    update the ensemble predictions as a momentum-weighted average of the previous ensemble and the current predictions, and convert them to one-hot estimated labels
Inspired by the ensembling method proposed in BIBREF21 , we estimate INLINEFORM0 by forming ensemble predictions of labels during training, using the outputs on different training epochs. The loss is formulated as follows: DISPLAYFORM0
where INLINEFORM0 denotes the estimated labels computed on the ensemble predictions from different epochs. The loss is applied on all documents. It serves for bootstrapping on the unlabeled target data, and it also serves as a regularization that encourages the network predictions to be consistent in different training epochs. INLINEFORM1 is jointly trained with INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 . Algorithm SECREF6 illustrates the overall training process of the proposed domain adaptive semi-supervised learning (DAS) framework.
In Algorithm SECREF6 , INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are weights balancing the effects of INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 respectively. INLINEFORM6 and INLINEFORM7 are constant hyper-parameters. We set INLINEFORM8 as a Gaussian curve to ramp up the weight from 0 to INLINEFORM9 . This is to ensure that the ramp-up of the bootstrapping loss component is slow enough at the beginning of training. After each training epoch, we compute INLINEFORM10 , which denotes the predictions made by the network in the current epoch, and then the ensemble prediction INLINEFORM11 is updated as a weighted average of the outputs from previous epochs and the current epoch, with recent epochs having larger weight. To generate the estimated labels INLINEFORM12 , INLINEFORM13 is converted to a one-hot vector in which the entry with the maximum value is set to one and the other entries are set to zero. Self-ensemble bootstrapping is a generalized version of the bootstrapping methods that only use the outputs from the previous round of training BIBREF18 , BIBREF19 . The ensemble prediction is likely to be closer to the correct, unknown labels of the target data.
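A minimal sketch of the per-epoch ensemble update and the ramp-up weight; the momentum value and the exact Gaussian ramp-up curve below are illustrative assumptions (the curve follows the form used in BIBREF21 ), not necessarily the paper's exact settings.

```python
import numpy as np

def update_ensemble(Z, z_epoch, alpha=0.5):
    """Momentum-weighted accumulation of per-epoch predictions (sketch).

    Z:       (n, C) running ensemble predictions over all documents
    z_epoch: (n, C) predictions made by the network in the current epoch
    alpha:   ensembling momentum (illustrative value)
    """
    Z = alpha * Z + (1.0 - alpha) * z_epoch
    y_hat = np.zeros_like(Z)
    y_hat[np.arange(len(Z)), Z.argmax(axis=1)] = 1.0   # one-hot estimated labels
    return Z, y_hat

def rampup_weight(epoch, max_epochs, lambda3):
    """Gaussian ramp-up of the bootstrapping-loss weight from 0 to lambda3 (assumed curve)."""
    t = min(epoch / float(max_epochs), 1.0)
    return lambda3 * np.exp(-5.0 * (1.0 - t) ** 2)
```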
CNN Encoder Implementation
We have left the feature encoder INLINEFORM0 unspecified, for which, a few options can be considered. In our implementation, we adopt a one-layer CNN structure from previous works BIBREF22 , BIBREF4 , as it has been demonstrated to work well for sentiment classification tasks. Given a review document INLINEFORM1 consisting of INLINEFORM2 words, we begin by associating each word with a continuous word embedding BIBREF23 INLINEFORM3 from an embedding matrix INLINEFORM4 , where INLINEFORM5 is the vocabulary size and INLINEFORM6 is the embedding dimension. INLINEFORM7 is jointly updated with other network parameters during training. Given a window of dense word embeddings INLINEFORM8 , the convolution layer first concatenates these vectors to form a vector INLINEFORM9 of length INLINEFORM10 and then the output vector is computed by Equation ( EQREF11 ): DISPLAYFORM0
INLINEFORM0 , INLINEFORM1 is the parameter set of the encoder INLINEFORM2 and is shared across all windows of the sequence. INLINEFORM3 is an element-wise non-linear activation function. The convolution operation can capture local contextual dependencies of the input sequence and the extracted feature vectors are similar to INLINEFORM4 -grams. After the convolution operation is applied to the whole sequence, we obtain a list of hidden vectors INLINEFORM5 . A max-over-time pooling layer is applied to obtain the final vector representation INLINEFORM6 of the input document.
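The encoder can be sketched in NumPy as follows; parameter shapes are written out explicitly, and the function is a simplified illustration rather than the actual implementation.

```python
import numpy as np

def cnn_encode(doc_ids, E, W, b, window=3):
    """One-layer CNN encoder with max-over-time pooling (sketch).

    doc_ids: list of word indices of one document (length >= window)
    E:       (V, d) word embedding matrix
    W:       (window * d, c) convolution weights; b: (c,) bias
    Returns a (c,)-dimensional document representation.
    """
    embedded = E[doc_ids]                                   # (n, d) word embeddings
    hidden = []
    for i in range(len(doc_ids) - window + 1):
        x = embedded[i:i + window].reshape(-1)              # concatenated window vector
        hidden.append(np.maximum(0.0, x @ W + b))           # ReLU convolution output
    return np.max(np.stack(hidden), axis=0)                 # max-over-time pooling
```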
Datasets and Experimental Settings
Existing benchmark datasets such as the Amazon benchmark BIBREF0 typically remove reviews with neutral labels in both domains. This is problematic, as the label information of the target domain is not accessible in an unsupervised domain adaptation setting. Furthermore, removing neutral instances may bias the dataset favorably toward max-margin-based algorithms like ours, since the resulting dataset has all uncertain labels removed, leaving only high-confidence examples. Therefore, we construct new datasets ourselves. The results on the original Amazon benchmark are qualitatively similar, and we present them in Appendix SECREF6 for completeness, since most previous works reported results on it.
Small-scale datasets: Our new dataset was derived from the large-scale Amazon datasets released by McAuley et al. ( BIBREF24 ). It contains four domains: Book (BK), Electronics (E), Beauty (BT), and Music (M). Each domain contains two datasets. Set 1 contains 6000 instances with exactly balanced class labels, and set 2 contains 6000 instances that are randomly sampled from the large dataset, preserving the original label distribution, which we believe better reflects the label distribution in real life. The examples in these two sets do not overlap. Detailed statistics of the generated datasets are given in Table TABREF9 .
In all our experiments on the small-scale datasets, we use set 1 of the source domain as the only source with sentiment label information during training, and we evaluate the trained model on set 1 of the target domain. Since we cannot control the label distribution of unlabeled data during training, we consider two different settings:
Setting (1): Only set 1 of the target domain is used as the unlabeled set. This tells us how the method performs in a condition when the target domain has a close-to-balanced label distribution. As we also evaluate on set 1 of the target domain, this is also considered as a transductive setting.
Setting (2): Set 2 from both the source and target domains are used as unlabeled sets. Since set 2 is directly sampled from millions of reviews, it better reflects real-life sentiment distribution.
Large-scale datasets: We further conduct experiments on four much larger datasets: IMDB (I), Yelp2014 (Y), Cell Phone (C), and Baby (B). IMDB and Yelp2014 were previously used in BIBREF25 , BIBREF26 . Cell phone and Baby are from the large-scale Amazon dataset BIBREF24 , BIBREF27 . Detailed statistics are summarized in Table TABREF9 . We keep all reviews in the original datasets and consider a transductive setting where all target examples are used for both training (without label information) and evaluation. We perform sampling to balance the classes of labeled source data in each minibatch INLINEFORM3 during training.
Selection of Development Set
Ideally, the development set should be drawn from the same distribution as the test set. However, under the unsupervised domain adaptation setting, we do not have any labeled target data at training phase which could be used as development set. In all of our experiments, for each pair of domains, we instead sample 1000 examples from the training set of the source domain as development set. We train the network for a fixed number of epochs, and the model with the minimum classification error on this development set is saved for evaluation. This approach works well on most of the problems since the target domain is supposed to behave like the source domain if the domain difference is effectively reduced.
Another problem is how to select the values for hyper-parameters. If we tune INLINEFORM0 and INLINEFORM1 directly on the development set from the source domain, most likely both of them will be set to 0, as unlabeled target data is not helpful for improving in-domain accuracy of the source domain. Other neural network models also have the same problem for hyper-parameter tuning. Therefore, our strategy is to use the development set from the target domain to optimize INLINEFORM2 and INLINEFORM3 for one problem (e.g., we only do this on E INLINEFORM4 BK), and fix their values on the other problems. This setting assumes that we have at least two labeled domains such that we can optimize the hyper-parameters, and then we fix them for other new unlabeled domains to transfer to.
Training Details and Hyper-parameters
We initialize word embeddings using the 300-dimensional GloVe vectors supplied by Pennington et al. ( BIBREF28 ), which were trained on 840 billion tokens from the Common Crawl. For each pair of domains, the vocabulary consists of the 10000 most frequent words. Words in the vocabulary that are not present in the pre-trained embeddings are randomly initialized.
We set hyper-parameters of the CNN encoder following previous works BIBREF22 , BIBREF4 without specific tuning on our datasets. The window size is set to 3 and the size of the hidden layer is set to 300. The nonlinear activation function is Relu. For regularization, we also follow their settings and employ dropout with probability set to 0.5 on INLINEFORM0 before feeding it to the output layer INLINEFORM1 , and constrain the INLINEFORM2 -norm of the weight vector INLINEFORM3 , setting its max norm to 3.
On the small-scale datasets and the Amazon benchmark, INLINEFORM0 and INLINEFORM1 are set to 200 and 1, respectively, tuned on the development set of task E INLINEFORM2 BK under setting 1. On the large-scale datasets, INLINEFORM3 and INLINEFORM4 are set to 500 and 0.2, respectively, tuned on I INLINEFORM5 Y. We use a Gaussian curve INLINEFORM6 to ramp up the weight of the bootstrapping loss INLINEFORM7 from 0 to INLINEFORM8 , where INLINEFORM9 denotes the maximum number of training epochs. We train for 30 epochs in all experiments. We set INLINEFORM10 to 3 and INLINEFORM11 to 0.5 for all experiments.
The batch size is set to 50 on the small-scale datasets and the Amazon benchmark. We increase the batch size to 250 on the large-scale datasets to reduce the number of iterations. RMSProp optimizer with learning rate set to 0.0005 is used for all experiments.
Models for Comparison
We compare with the following baselines:
(1) Naive: A non-domain-adaptive baseline with bag-of-words representations and SVM classifier trained on the source domain.
(2) mSDA BIBREF7 : This is the state-of-the-art method based on discrete input features. Top 1000 bag-of-words features are kept as pivot features. We set the number of stacked layers to 3 and the corruption probability to 0.5.
(3) NaiveNN: This is a non-domain-adaptive CNN trained on source domain, which is a variant of our model by setting INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 to zeros.
(4) AuxNN BIBREF4 : This is a neural model that exploits auxiliary tasks, which has achieved state-of-the-art results on cross-domain sentiment classification. The sentence encoder used in this model is the same as ours.
(5) ADAN BIBREF16 : This method exploits adversarial training to reduce representation difference between domains. The original paper uses a simple feedforward network as encoder. For fair comparison, we replace it with our CNN-based encoder. We train 5 iterations on the discriminator per iteration on the encoder and sentiment classifier as suggested in their paper.
(6) MMD: MMD has been widely used for minimizing domain discrepancy on images. In those works BIBREF9 , BIBREF13 , variants of deep CNNs are used for encoding images, and the MMDs of multiple layers are jointly minimized. In NLP, adding more layers of CNNs may not be very helpful, so those models from image-related tasks cannot be directly applied to our problem. To compare with an MMD-based method, we train a model that jointly minimizes the classification loss INLINEFORM0 on the source domain and the MMD between INLINEFORM1 and INLINEFORM2 . For computing MMD, we use a Gaussian RBF, which is a common choice of characteristic kernel.
In addition to the above baselines, we also show results of different variants of our model. DAS as shown in Algorithm SECREF6 denotes our full model. DAS-EM denotes the model with only entropy minimization for semi-supervised learning (set INLINEFORM0 ). DAS-SE denotes the model with only self-ensemble bootstrapping for semi-supervised learning (set INLINEFORM1 ). FANN (feature-adaptation neural network) denotes the model without semi-supervised learning performed (set both INLINEFORM2 and INLINEFORM3 to zeros).
Main Results
Figure FIGREF17 shows the comparison of adaptation results (see Appendix SECREF7 for the exact numbers). We report classification accuracy on the small-scale dataset. For the large-scale dataset, macro-F1 is used instead, since the label distribution in the test set is extremely unbalanced. Key observations are summarized as follows. (1) Both DAS-EM and DAS-SE perform better in most cases compared with ADAN, MMD, and FANN, in which only feature adaptation is performed. This demonstrates the effectiveness of the proposed domain adaptive semi-supervised learning framework. DAS-EM is more effective than DAS-SE in most cases, and the full model DAS with both techniques jointly employed has the best overall performance. (2) When comparing the two settings on the small-scale dataset, all domain-adaptive methods generally perform better under setting 1. In setting 1, the target examples are balanced across classes, which can provide more diverse opinion-related features. However, when considering unsupervised domain adaptation, we should not presume the label distribution of the unlabeled data. Thus, it is necessary to conduct experiments using datasets that reflect real-life sentiment distributions, as we did with setting 2 and the large-scale dataset. Unfortunately, this is ignored by most previous works. (3) Word embeddings are very helpful, as we can see that even NaiveNN can substantially outperform mSDA on most tasks.
To see the effect of semi-supervised learning alone, we also conduct experiments by setting INLINEFORM0 to eliminate the effect of feature adaptation. Both entropy minimization and bootstrapping perform very badly in this setting. Entropy minimization gives almost random predictions with accuracy below 0.4, and the results of bootstrapping are also much lower compared to NaiveNN. This suggests that the feature adaptation component is essential. Without it, the learned target representations are less meaningful and discriminative. Applying semi-supervised learning in this case is likely to worsen the results.
Further Analysis
In Figure FIGREF23, we show the change in accuracy with respect to the percentage of unlabeled data used for training on three particular problems under setting 1. The value at INLINEFORM0 denotes the accuracy of NaiveNN, which does not utilize any target data. For DAS, we observe a nonlinear increasing trend where the accuracy quickly improves at the beginning and then gradually stabilizes. For the other methods, this trend is less obvious, and adding more unlabeled data sometimes even worsens the results. This finding again suggests that the proposed approach can better exploit the information in unlabeled data.
We also conduct experiments under a setting with a small number of labeled target examples available. Figure FIGREF27 shows the change of accuracy with respect to the number of labeled target examples added for training. We can observe that DAS is still more effective under this setting, while the performance differences to other methods gradually decrease with the increasing number of labeled target examples.
CNN Filter Analysis
In this subsection, we aim to better understand DAS by analyzing sentiment-related CNN filters. To do that, 1) we first select the CNN filters most related to predicting each sentiment label (positive, negative, neutral). These filters can be identified from the learned weights INLINEFORM0 of the output layer INLINEFORM1 ; a higher weight indicates stronger relatedness. 2) Recall that in our implementation, each CNN filter has a window size of 3 with ReLU activation. We can thus represent each selected filter as a ranked list of trigrams with the highest activation values.
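The following sketch illustrates this two-step procedure; the array names (W_out, activations, trigrams) are assumptions for illustration and do not correspond to variables in our implementation.

```python
# Select the filters most related to a sentiment label via the output-layer
# weights, then represent each filter by its top-activating trigrams.
import numpy as np

def top_filters_for_label(W_out, label_idx, k=10):
    # W_out: [num_filters, num_labels]; larger weight => stronger relatedness
    return np.argsort(-W_out[:, label_idx])[:k]

def top_trigrams_for_filter(activations, trigrams, filter_idx, k=5):
    # activations: [num_trigrams, num_filters] ReLU activations of each
    # window-size-3 filter on every trigram in the corpus
    order = np.argsort(-activations[:, filter_idx])[:k]
    return [trigrams[i] for i in order]
```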
We analyze the CNN filters learned by NaiveNN, FANN, and DAS on task E INLINEFORM0 BT under setting 1. We focus on E INLINEFORM1 BT for study because electronics and beauty are very different domains and each of them has a diverse set of domain-specific sentiment expressions. For each method, we identify the top 10 most related filters for each sentiment label, and extract the top trigrams of each selected filter on both the source and target domains. Since labeled source examples are used for training, we find that the filters learned by all three methods capture similar expressions on the source domain, containing both domain-invariant and domain-specific trigrams. On the target domain, DAS captures more target-specific expressions compared to the other two methods. Due to space limitations, we only present a small subset of positive-sentiment-related filters in Table TABREF34 . The complete results are provided in Appendix SECREF8 . From Table TABREF34 , we can observe that the filters learned by NaiveNN are almost unable to capture target-specific sentiment expressions, while FANN is able to capture limited target-specific words such as “clean” and “scent”. The filters learned by DAS are more domain-adaptive, capturing diverse sentiment expressions in the target domain.
Conclusion
In this work, we propose DAS, a novel framework that jointly performs feature adaptation and semi-supervised learning. We have demonstrated through multiple experiments that DAS can better leverage unlabeled data, and achieve substantial improvements over baseline methods. We have also shown that feature adaptation is an essential component, without which, semi-supervised learning is not able to function properly. The proposed framework could be potentially adapted to other domain adaptation tasks, which is the focus of our future studies.
Results on Amazon Benchmark
Most previous works BIBREF0, BIBREF1, BIBREF6, BIBREF7, BIBREF29 carried out experiments on the Amazon benchmark released by Blitzer et al. (BIBREF0). The dataset contains 4 different domains: Book (B), DVDs (D), Electronics (E), and Kitchen (K). Following their experimental settings, we consider the binary classification task of predicting whether a review on the target domain is positive or negative. Each domain consists of 1000 positive and 1000 negative reviews. We also allow 4000 unlabeled reviews to be used for both the source and the target domains, in which the positive and negative reviews are balanced as well, following the settings of previous works. We construct 12 cross-domain sentiment classification tasks and split the labeled data in each domain into a training set of 1600 reviews and a test set of 400 reviews. The classifier is trained on the training set of the source domain and is evaluated on the test set of the target domain. The comparison results are shown in Table TABREF37.
Numerical Results of Figure
Due to space limitations, we only show results as figures in the paper. All numbers used for plotting Figure FIGREF17 are presented in Table TABREF38. We can observe that DAS-EM, DAS-SE, and DAS all achieve substantial improvements over the baseline methods under the different settings.
CNN Filter Analysis Full Results
As mentioned in Section SECREF36, we conduct the CNN filter analysis on NaiveNN, FANN, and DAS. For each method, we identify the top 10 most related filters for the positive, negative, and neutral sentiment labels respectively, and then represent each selected filter as a ranked list of the trigrams with the highest activation values on it. Tables TABREF39, TABREF40, and TABREF41 in the following pages illustrate the trigrams from the target domain (beauty) captured by the selected filters learned on E INLINEFORM0 BT for each method.
We can observe that compared to NaiveNN and FANN, DAS is able to capture a more diverse set of relevant sentiment expressions on the target domain for each sentiment label. This observation is consistent with our motivation. Since NaiveNN, FANN and other baseline methods solely train the sentiment classifier on the source domain, the learned encoder is not able to produce discriminative features on the target domain. DAS addresses this problem by refining the classifier on the target domain with semi-supervised learning, and the overall objective forces the encoder to learn feature representations that are not only domain-invariant but also discriminative on both domains.
Question: What are the source and target domains?
Answer: Book, electronics, beauty, music, IMDB, Yelp, cell phone, baby, DVDs, kitchen
Introduction
Rendering natural language descriptions from structured data is required in a wide variety of commercial applications such as generating descriptions of products, hotels, furniture, etc., from a corresponding table of facts about the entity. Such a table typically contains {field, value} pairs where the field is a property of the entity (e.g., color) and the value is a set of possible assignments to this property (e.g., color = red). Another example of this is the recently introduced task of generating one line biography descriptions from a given Wikipedia infobox BIBREF0 . The Wikipedia infobox serves as a table of facts about a person and the first sentence from the corresponding article serves as a one line description of the person. Figure FIGREF2 illustrates an example input infobox which contains fields such as Born, Residence, Nationality, Fields, Institutions and Alma Mater. Each field further contains some words (e.g., particle physics, many-body theory, etc.). The corresponding description is coherent with the information contained in the infobox.
Note that the number of fields in the infobox and the ordering of the fields within the infobox varies from person to person. Given the large size (700K examples) and heterogeneous nature of the dataset which contains biographies of people from different backgrounds (sports, politics, arts, etc.), it is hard to come up with simple rule-based templates for generating natural language descriptions from infoboxes, thereby making a case for data-driven models. Based on the recent success of data-driven neural models for various other NLG tasks BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , one simple choice is to treat the infobox as a sequence of {field, value} pairs and use a standard seq2seq model for this task. However, such a model is too generic and does not exploit the specific characteristics of this task as explained below. First, note that while generating such descriptions from structured data, a human keeps track of information at two levels. Specifically, at a macro level, she would first decide which field to mention next and then at a micro level decide which of the values in the field needs to be mentioned next. For example, she first decides that at the current step, the field occupation needs attention and then decides which is the next appropriate occupation to attend to from the set of occupations (actor, director, producer, etc.). To enable this, we use a bifocal attention mechanism which computes an attention over fields at a macro level and over values at a micro level. We then fuse these attention weights such that the attention weight for a field also influences the attention over the values within it. Finally, we feed a fused context vector to the decoder which contains both field level and word level information. Note that such two-level attention mechanisms BIBREF6 , BIBREF7 , BIBREF8 have been used in the context of unstructured data (as opposed to structured data in our case), where at a macro level one needs to pay attention to sentences and at a micro level to words in the sentences.
Next, we observe that while rendering the output, once the model pays attention to a field (say, occupation) it needs to stay on this field for a few timesteps (till all the occupations are produced in the output). We refer to this as the stay on behavior. Further, we note that once the tokens of a field are referred to, they are usually not referred to later. For example, once all the occupations have been listed in the output we will never visit the occupation field again because there is nothing left to say about it. We refer to this as the never look back behavior. To model the stay on behaviour, we introduce a forget (or remember) gate which acts as a signal to decide when to forget the current field (or equivalently to decide till when to remember the current field). To model the never look back behaviour we introduce a gated orthogonalization mechanism which ensures that once a field is forgotten, subsequent field context vectors fed to the decoder are orthogonal to (or different from) the previous field context vectors.
We experiment with the WikiBio dataset BIBREF0 which contains around 700K {infobox, description} pairs and has a vocabulary of around 400K words. We show that the proposed model gives a relative improvement of 21% and 20% as compared to current state of the art models BIBREF0 , BIBREF9 on this dataset. The proposed model also gives a relative improvement of 10% as compared to the basic seq2seq model. Further, we introduce new datasets for French and German on the same lines as the English WikiBio dataset. Even on these two datasets, our model outperforms the state of the art methods mentioned above.
Related work
Natural Language Generation has always been of interest to the research community and has received a lot of attention in the past. The approaches for NLG range from (i) rule based approaches (e.g., BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 ) (ii) modular statistical approaches which divide the process into three phases (planning, selection and surface realization) and use data driven approaches for one or more of these phases BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 (iii) hybrid approaches which rely on a combination of handcrafted rules and corpus statistics BIBREF20 , BIBREF21 , BIBREF22 and (iv) the more recent neural network based models BIBREF1 .
Neural models for NLG have been proposed in the context of various tasks such as machine translation BIBREF1 , document summarization BIBREF2 , BIBREF4 , paraphrase generation BIBREF23 , image captioning BIBREF24 , video summarization BIBREF25 , query based document summarization BIBREF5 and so on. Most of these models are data hungry and are trained on large amounts of data. On the other hand, NLG from structured data has largely been studied in the context of small datasets such as WeatherGov BIBREF26 , RoboCup BIBREF27 , NFL Recaps BIBREF15 , Prodigy-Meteo BIBREF28 and TUNA Challenge BIBREF29 . Recently weather16 proposed RNN/LSTM based neural encoder-decoder models with attention for WeatherGov and RoboCup datasets.
Unlike the datasets mentioned above, the biography dataset introduced by lebret2016neural is larger (700K {table, descriptions} pairs) and has a much larger vocabulary (400K words as opposed to around 350 or fewer words in the above datasets). Further, unlike the feed-forward neural network based model proposed by BIBREF0 we use a sequence to sequence model and introduce components to address the peculiar characteristics of the task. Specifically, we introduce neural components to address the need for attention at two levels and to address the stay on and never look back behaviour required by the decoder. KiddonZC16 have explored the use of checklists to track previously visited ingredients while generating recipes from ingredients. Note that two-level attention mechanisms have also been used in the context of summarization BIBREF6 , document classification BIBREF7 , dialog systems BIBREF8 , etc. However, these works deal with unstructured data (sentences at the higher level and words at a lower level) as opposed to structured data in our case.
Proposed model
As input we are given an infobox INLINEFORM0 , which is a set of pairs INLINEFORM1 where INLINEFORM2 corresponds to field names and INLINEFORM3 is the sequence of corresponding values and INLINEFORM4 is the total number of fields in INLINEFORM5 . For example, INLINEFORM6 could be one such pair in this set. Given such an input, the task is to generate a description INLINEFORM7 containing INLINEFORM8 words. A simple solution is to treat the infobox as a sequence of fields followed by the values corresponding to the field in the order of their appearance in the infobox. For example, the infobox could be flattened to produce the following input sequence (the words in bold are field names which act as delimiters)
[Name] John Doe [Birth_Date] 19 March 1981 [Nationality] Indian .....
The problem can then be cast as a seq2seq generation problem and can be modeled using a standard neural architecture comprising of three components (i) an input encoder (using GRU/LSTM cells), (ii) an attention mechanism to attend to important values in the input sequence at each time step and (iii) a decoder to decode the output one word at a time (again, using GRU/LSTM cells). However, this standard model is too generic and does not exploit the specific characteristics of this task. We propose additional components, viz., (i) a fused bifocal attention mechanism which operates on fields (macro) and values (micro) and (ii) a gated orthogonalization mechanism to model stay on and never look back behavior.
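As an illustration of the flattening step described above, the following minimal sketch (the helper name flatten_infobox is hypothetical) converts a list of {field, value} pairs into the delimited token sequence shown earlier.

```python
# Flatten an infobox -- a list of (field, values) pairs -- into a single token
# sequence, with bracketed field names acting as delimiters.
def flatten_infobox(infobox):
    tokens = []
    for field, values in infobox:
        tokens.append(f"[{field}]")
        tokens.extend(values)
    return tokens

# Example:
# flatten_infobox([("Name", ["John", "Doe"]),
#                  ("Birth_Date", ["19", "March", "1981"]),
#                  ("Nationality", ["Indian"])])
# -> ['[Name]', 'John', 'Doe', '[Birth_Date]', '19', 'March', '1981',
#     '[Nationality]', 'Indian']
```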
Fused Bifocal Attention Mechanism
Intuitively, when a human writes a description from a table she keeps track of information at two levels. At the macro level, it is important to decide which is the appropriate field to attend to next and at a micro level (i.e., within a field) it is important to know which values to attend to next. To capture this behavior, we use a bifocal attention mechanism as described below.
Macro Attention: Consider the INLINEFORM0 -th field INLINEFORM1 which has values INLINEFORM2 . Let INLINEFORM3 be the representation of this field in the infobox. This representation can either be (i) the word embedding of the field name or (ii) some function INLINEFORM4 of the values in the field or (iii) a concatenation of (i) and (ii). The function INLINEFORM5 could simply be the sum or average of the embeddings of the values in the field. Alternately, this function could be a GRU (or LSTM) which treats these values within a field as a sequence and computes the field representation as the final representation of this sequence (i.e., the representation of the last time-step). We found that bidirectional GRU is a better choice for INLINEFORM6 and concatenating the embedding of the field name with this GRU representation works best. Further, using a bidirectional GRU cell to take contextual information from neighboring fields also helps (these are the orange colored cells in the top-left block in Figure FIGREF3 with macro attention). Given these representations INLINEFORM7 for all the INLINEFORM8 fields we compute an attention over the fields (macro level). DISPLAYFORM0
where INLINEFORM0 is the state of the decoder at time step INLINEFORM1 . INLINEFORM2 and INLINEFORM3 are parameters, INLINEFORM4 is the total number of fields in the input, INLINEFORM5 is the macro (field level) context vector at the INLINEFORM6 -th time step of the decoder.
Micro Attention: Let INLINEFORM0 be the representation of the INLINEFORM1 -th value in a given field. This representation could again either be (i) simply the embedding of this value or (ii) a contextual representation computed using a function INLINEFORM2 which also considers the other values in the field. For example, if INLINEFORM3 are the values in a field, then these values can be treated as a sequence and the representation of the INLINEFORM4 -th value can be computed using a bidirectional GRU over this sequence. Once again, we found that using a bi-GRU works better than simply using the embedding of the value. Once we have such a representation computed for all values across all the fields, we compute the attention over these values (micro level) as shown below: DISPLAYFORM0
where INLINEFORM0 is the state of the decoder at time step INLINEFORM1 . INLINEFORM2 and INLINEFORM3 are parameters, INLINEFORM4 is the total number of values across all the fields.
Fused Attention: Intuitively, the attention weights assigned to a field should have an influence on all the values belonging to the particular field. To ensure this, we reweigh the micro level attention weights based on the corresponding macro level attention weights. In other words, we fuse the attention weights at the two levels as: DISPLAYFORM0
where INLINEFORM0 is the field corresponding to the INLINEFORM1 -th value, INLINEFORM2 is the macro level context vector.
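The following PyTorch-style sketch illustrates the overall flow of fused bifocal attention under simplifying assumptions: dot-product scoring stands in for the learned scoring functions described above, and all tensor names are invented for illustration.

```python
# Macro attention over fields, micro attention over values, and fusion by
# reweighing each value's weight with the macro weight of its parent field.
import torch
import torch.nn.functional as F

def bifocal_attention(dec_state, field_reps, value_reps, value2field):
    # dec_state: [d]; field_reps: [F, d]; value_reps: [V, d]
    # value2field: LongTensor [V] mapping each value to the index of its field
    macro = F.softmax(field_reps @ dec_state, dim=0)   # field-level weights [F]
    micro = F.softmax(value_reps @ dec_state, dim=0)   # value-level weights [V]
    fused = micro * macro[value2field]                 # reweigh by parent field
    fused = fused / fused.sum()                        # renormalize
    macro_ctx = macro @ field_reps                     # field-level context vector
    micro_ctx = fused @ value_reps                     # value-level context vector
    return macro_ctx, micro_ctx, macro, fused
```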
Gated Orthogonalization for Modeling Stay-On and Never Look Back behaviour
We now describe a series of choices made to model stay-on and never look back behavior. We first begin with the stay-on property which essentially implies that if we have paid attention to the field INLINEFORM0 at timestep INLINEFORM1 then we are likely to pay attention to the same field for a few more time steps. For example, if we are focusing on the occupation field at this timestep then we are likely to focus on it for the next few timesteps till all relevant values in this field have been included in the generated description. In other words, we want to remember the field context vector INLINEFORM2 for a few timesteps. One way of ensuring this is to use a remember (or forget) gate as given below which remembers the previous context vector when required and forgets it when it is time to move on from that field. DISPLAYFORM0
where INLINEFORM0 are parameters to be learned. The job of the forget gate is to ensure that INLINEFORM1 is similar to INLINEFORM2 when required (i.e., by learning INLINEFORM3 when we want to continue focusing on the same field) and different when it is time to move on (by learning that INLINEFORM4 ).
Next, the never look back property implies that once we have moved away from a field we are unlikely to pay attention to it again. For example, once we have rendered all the occupations in the generated description there is no need to return back to the occupation field. In other words, once we have moved on ( INLINEFORM0 ), we want the successive field context vectors INLINEFORM1 to be very different from the previous field vectors INLINEFORM2 . One way of ensuring this is to orthogonalize successive field vectors using DISPLAYFORM0
where INLINEFORM0 is the dot product between vectors INLINEFORM1 and INLINEFORM2 . The above equation essentially subtracts the component of INLINEFORM3 along INLINEFORM4 . INLINEFORM5 is a learned parameter which controls the degree of orthogonalization thereby allowing a soft orthogonalization (i.e., the entire component along INLINEFORM6 is not subtracted but only a fraction of it). The above equation only ensures that INLINEFORM7 is soft-orthogonal to INLINEFORM8 . Alternately, we could pass the sequence of context vectors, INLINEFORM9 generated so far through a GRU cell. The state of this GRU cell at each time step would thus be aware of the history of the field vectors till that timestep. Now instead of orthogonalizing INLINEFORM10 to INLINEFORM11 we could orthogonalize INLINEFORM12 to the hidden state of this GRU at time-step INLINEFORM13 . In practice, we found this to work better as it accounts for all the field vectors in the history instead of only the previous field vector.
In summary, Equation provides a mechanism for remembering the current field vector when appropriate (thus capturing stay-on behavior) using a remember gate. On the other hand, Equation EQREF10 explicitly ensures that the field vector is very different (soft-orthogonal) from the previous field vectors once it is time to move on (thus capturing never look back behavior). The value of INLINEFORM0 computed in Equation EQREF10 is then used in Equation . The INLINEFORM1 (macro) thus obtained is then concatenated with INLINEFORM2 (micro) and fed to the decoder (see Fig. FIGREF3 )
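The following PyTorch-style sketch illustrates how the remember gate and the soft orthogonalization could fit together; the gate parameterization, the shapes, and the variable names are assumptions for illustration (in particular, history_state is assumed to be the state of the GRU run over past field context vectors, as described above).

```python
# Remember gate (stay-on) followed by soft orthogonalization against a running
# summary of past field vectors (never-look-back).
import torch

def gated_orthogonalize(c_t, c_prev, history_state, W_g, b_g, gamma):
    # c_t: current macro context [d]; c_prev: previous (gated) context [d]
    # history_state: GRU summary of all previous field vectors [d]
    # W_g: [d, 2d], b_g: [d]; gamma: learned scalar controlling orthogonalization
    g = torch.sigmoid(W_g @ torch.cat([c_t, c_prev]) + b_g)  # remember gate in (0, 1)
    c_t = g * c_prev + (1 - g) * c_t                         # keep the old field when g ~ 1
    proj = (c_t @ history_state) / (history_state @ history_state + 1e-8)
    c_t = c_t - gamma * proj * history_state                 # subtract component along history
    return c_t
```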
Experimental setup
We now describe our experimental setup:
Datasets
We use the WikiBio dataset introduced by lebret2016neural. It consists of INLINEFORM0 biography articles from English Wikipedia. A biography article corresponds to a person (sportsman, politician, historical figure, actor, etc.). Each Wikipedia article has an accompanying infobox which serves as the structured input and the task is to generate the first sentence of the article (which typically is a one-line description of the person). We used the same train, valid and test sets which were made publicly available by lebret2016neural.
We also introduce two new biography datasets, one in French and one in German. These datasets were created and pre-processed using the same procedure as outlined in lebret2016neural. Specifically, we extracted the infoboxes and the first sentence from the corresponding Wikipedia article. As with the English dataset, we split the French and German datasets randomly into train (80%), test (10%) and valid (10%). The French and German datasets extracted by us have been made publicly available. The number of examples was 170K and 50K and the vocabulary size was 297K and 143K for French and German respectively. Although in this work we focus only on generating descriptions in one language, we hope that these datasets will also be useful for developing models which jointly learn to generate descriptions from structured data in multiple languages.
Models compared
We compare with the following models:
1. BIBREF0 : This is a conditional language model which uses a feed-forward neural network to predict the next word in the description conditioned on local characteristics (i.e., words within a field) and global characteristics (i.e., overall structure of the infobox).
2. BIBREF9 : This model was proposed in the context of the WeatherGov and RoboCup datasets which have a much smaller vocabulary. They use an improved attention model with additional regularizer terms which influence the weights assigned to the fields.
3. Basic Seq2Seq: This is the vanilla encode-attend-decode model BIBREF1 . Further, to deal with the large vocabulary ( INLINEFORM0 400K words) we use a copying mechanism as a post-processing step. Specifically, we identify the time steps at which the decoder produces unknown words (denoted by the special symbol UNK). For each such time step, we look at the attention weights on the input words and replace the UNK word by that input word which has received maximum attention at this timestep. This process is similar to the one described in BIBREF30 . Even lebret2016neural have a copying mechanism tightly integrated with their model.
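A minimal sketch of this post-processing step is given below; the helper name replace_unks and the attention-matrix layout are assumptions for illustration.

```python
# Replace every UNK produced by the decoder with the input word that received
# the highest attention weight at that decoding step.
def replace_unks(output_tokens, input_tokens, attention, unk="UNK"):
    # attention: [T_out][T_in] attention weights from each output step to input positions
    fixed = []
    for t, tok in enumerate(output_tokens):
        if tok == unk:
            src_pos = max(range(len(input_tokens)), key=lambda i: attention[t][i])
            fixed.append(input_tokens[src_pos])
        else:
            fixed.append(tok)
    return fixed
```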
Hyperparameter tuning
We tuned the hyperparameters of all the models using a validation set. As mentioned earlier, we used a bidirectional GRU cell as the function INLINEFORM0 for computing the representation of the fields and the values (see Section SECREF4 ). For all the models, we experimented with GRU state sizes of 128, 256 and 512. The total number of unique words in the corpus is around 400K (this includes the words in the infobox and the descriptions). Of these, we retained only the top 20K words in our vocabulary (same as BIBREF0 ). We initialized the embeddings of these words with 300-dimensional GloVe embeddings BIBREF31 . We used Adam BIBREF32 with a learning rate of INLINEFORM1 , INLINEFORM2 and INLINEFORM3 . We trained the model for a maximum of 20 epochs and used early stopping with the patience set to 5 epochs.
Results and Discussions
We now discuss the results of our experiments.
Comparison of different models
Following lebret2016neural, we used BLEU-4, NIST-4 and ROUGE-4 as the evaluation metrics. We first make a few observations based on the results on the English dataset (Table TABREF15 ). The basic seq2seq model, as well as the model proposed by weather16, perform better than the model proposed by lebret2016neural. Our final model with bifocal attention and gated orthogonalization gives the best performance and does 10% (relative) better than the closest baseline (basic seq2seq) and 21% (relative) better than the current state of the art method BIBREF0 . In Table TABREF16 , we show some qualitative examples of the output generated by different models.
Human Evaluations
To make a qualitative assessment of the generated sentences, we conducted a human study on a sample of 500 infoboxes sampled from the English dataset. The annotators for this task were undergraduate and graduate students. For each of these infoboxes, we generated summaries using the basic seq2seq model and our final model with bifocal attention and gated orthogonalization. For each description and for each model, we asked three annotators to rank the output of the systems based on (i) adequacy (i.e., does it capture relevant information from the infobox), (ii) fluency (i.e., grammar) and (iii) relative preference (i.e., which of the two outputs would be preferred). Overall, the average fluency/adequacy (on a scale of 5) was INLINEFORM0 for the basic seq2seq model and INLINEFORM1 for our model.
The results from Table TABREF17 suggest that, in general, the gated orthogonalization model performs better than the basic seq2seq model. Additionally, annotators were asked to verify whether the generated summaries look natural (i.e., as if they were generated by humans). In 423 out of 500 cases, the annotators said “Yes”, suggesting that the gated orthogonalization model indeed produces good descriptions.
Performance on different languages
The results on the French and German datasets are summarized in Tables TABREF20 and TABREF20 respectively. Note that the code of BIBREF0 is not publicly available, hence we could not report numbers for French and German using their model. We observe that our final model gives the best performance, though the bifocal attention model performs poorly compared to the basic seq2seq model on French. However, the overall scores for French and German are much lower than those for English. There could be multiple reasons for this. First, the amount of training data in these two languages is smaller than that in English. Specifically, the amount of training data available in French (German) is only INLINEFORM0 ( INLINEFORM1 )% of that available for English. Second, on average the descriptions in French and German are longer than those in English (EN: INLINEFORM2 words, FR: INLINEFORM3 words and DE: INLINEFORM4 words). Finally, a manual inspection across the three languages suggests that the English descriptions have a more consistent structure than the French descriptions. For example, most English descriptions start with the name followed by the date of birth, but this is not the case in French. However, this is only a qualitative observation and it is hard to quantify this characteristic of the French and German datasets.
Visualizing Attention Weights
If the proposed model indeed works well, then we should see attention weights that are consistent with the stay on and never look back behavior. To verify this, we plotted the attention weights in cases where the model with gated orthogonalization does better than the model with only bifocal attention. Figure FIGREF21 shows the attention weights corresponding to the infobox in Figure FIGREF25. Notice that the model without gated orthogonalization has attention on both the name field and the article title while rendering the name. The model with gated orthogonalization, on the other hand, stays on the name field for as long as required, but then moves on and never returns to it (as expected).
Due to lack of space, we do not show similar plots for French and German but we would like to mention that, in general, the differences between the attention weights learned by the model with and without gated orthogonalization were more pronounced for the French/German dataset than the English dataset. This is in agreement with the results reported in Table TABREF20 and TABREF20 where the improvements given by gated orthogonalization are more for French/German than for English.
Out of domain results
What if the model sees a different INLINEFORM0 of person at test time? For example, what if the training data does not contain any sportspersons but at test time we encounter the infobox of a sportsperson? This is the same as seeing out-of-domain data at test time. Such a situation is quite expected in the products domain, where new products with new features (fields) get frequently added to the catalog. We were interested in three questions here. First, we wanted to see if testing the model on out-of-domain data indeed leads to a drop in performance. For this, we compared the performance of our best model in two scenarios: (i) trained on data from all domains (including the target domain) and tested on the target domain (sports, arts), and (ii) trained on data from all domains except the target domain and tested on the target domain. Comparing rows 1 and 2 of Table TABREF32, we observed a significant drop in performance. Note that the numbers for the sports domain in row 1 are much better than those for the arts domain because roughly 40% of the WikiBio training data contains sportspersons.
Next, we wanted to see if we can use a small amount of data from the target domain to fine tune a model trained on the out of domain data. We observe that even with very small amounts of target domain data the performance starts improving significantly (see rows 3 and 4 of Table TABREF32 ). Note that if we train a model from scratch with only limited data from the target domain instead of fine-tuning a model trained on a different source domain then the performance is very poor. In particular, training a model from scratch with 10K training instances we get a BLEU score of INLINEFORM0 and INLINEFORM1 for arts and sports respectively. Finally, even though the actual words used for describing a sportsperson (footballer, cricketer, etc.) would be very different from the words used to describe an artist (actor, musician, etc.) they might share many fields (for example, date of birth, occupation, etc.). As seen in Figure FIGREF28 (attention weights corresponding to the infobox in Figure FIGREF27 ), the model predicts the attention weights correctly for common fields (such as occupation) but it is unable to use the right vocabulary to describe the occupation (since it has not seen such words frequently in the training data). However, once we fine tune the model with limited data from the target domain we see that it picks up the new vocabulary and produces a correct description of the occupation.
Conclusion
We present a model for generating natural language descriptions from structured data. To address specific characteristics of the problem we propose neural components for fused bifocal attention and gated orthogonalization to address stay on and never look back behavior while decoding. Our final model outperforms an existing state of the art model on a large scale WikiBio dataset by 21%. We also introduce datasets for French and German and demonstrate that our model gives state of the art results on these datasets. Finally, we perform experiments with an out-of-domain model and show that if such a model is fine-tuned with small amounts of in domain data then it can give an improved performance on the target domain.
Given the multilingual nature of the new datasets, as future work, we would like to build models which can jointly learn to generate natural language descriptions from structured data in multiple languages. One idea is to replace the concepts in the input infobox by Wikidata concept ids which are language agnostic. A large amount of input vocabulary could thus be shared across languages thereby facilitating joint learning.
Acknowledgements
We thank Google for supporting Preksha Nema through their Google India Ph.D. Fellowship program. We also thank Microsoft Research India for supporting Shreyas Shetty through their generous travel grant for attending the conference.
Question: Do they use pretrained embeddings?
Answer: Yes
Introduction
“Ché saetta previsa vien più lenta.” (“For an arrow foreseen arrives more slowly.”)
– Dante Alighieri, Divina Commedia, Paradiso
Antisocial behavior is a persistent problem plaguing online conversation platforms; it is both widespread BIBREF0 and potentially damaging to mental and emotional health BIBREF1, BIBREF2. The strain this phenomenon puts on community maintainers has sparked recent interest in computational approaches for assisting human moderators.
Prior work in this direction has largely focused on post-hoc identification of various kinds of antisocial behavior, including hate speech BIBREF3, BIBREF4, harassment BIBREF5, personal attacks BIBREF6, and general toxicity BIBREF7. The fact that these approaches only identify antisocial content after the fact limits their practicality as tools for assisting pre-emptive moderation in conversational domains.
Addressing this limitation requires forecasting the future derailment of a conversation based on early warning signs, giving the moderators time to potentially intervene before any harm is done (BIBREF8 BIBREF8, BIBREF9 BIBREF9, see BIBREF10 BIBREF10 for a discussion). Such a goal recognizes derailment as emerging from the development of the conversation, and belongs to the broader area of conversational forecasting, which includes future-prediction tasks such as predicting the eventual length of a conversation BIBREF11, whether a persuasion attempt will eventually succeed BIBREF12, BIBREF13, BIBREF14, whether team discussions will eventually lead to an increase in performance BIBREF15, or whether ongoing counseling conversations will eventually be perceived as helpful BIBREF16.
Approaching such conversational forecasting problems, however, requires overcoming several inherent modeling challenges. First, conversations are dynamic and their outcome might depend on how subsequent comments interact with each other. Consider the example in Figure FIGREF2: while no individual comment is outright offensive, a human reader can sense a tension emerging from their succession (e.g., dismissive answers to repeated questioning). Thus a forecasting model needs to capture not only the content of each individual comment, but also the relations between comments. Previous work has largely relied on hand-crafted features to capture such relations—e.g., similarity between comments BIBREF16, BIBREF12 or conversation structure BIBREF17, BIBREF18—, though neural attention architectures have also recently shown promise BIBREF19.
The second modeling challenge stems from the fact that conversations have an unknown horizon: they can be of varying lengths, and the to-be-forecasted event can occur at any time. So when is it a good time to make a forecast? Prior work has largely proposed two solutions, both resulting in important practical limitations. One solution is to assume (unrealistic) prior knowledge of when the to-be-forecasted event takes place and extract features up to that point BIBREF20, BIBREF8. Another compromising solution is to extract features from a fixed-length window, often at the start of the conversation BIBREF21, BIBREF15, BIBREF16, BIBREF9. Choosing a catch-all window-size is however impractical: short windows will miss information in comments they do not encompass (e.g., a window of only two comments would miss the chain of repeated questioning in comments 3 through 6 of Figure FIGREF2), while longer windows risk missing the to-be-forecasted event altogether if it occurs before the end of the window, which would prevent early detection.
In this work we introduce a model for forecasting conversational events that overcomes both these inherent challenges by processing comments, and their relations, as they happen (i.e., in an online fashion). Our main insight is that models with these properties already exist, albeit geared toward generation rather than prediction: recent work in context-aware dialog generation (or “chatbots”) has proposed sequential neural models that make effective use of the intra-conversational dynamics BIBREF22, BIBREF23, BIBREF24, while concomitantly being able to process the conversation as it develops (see BIBREF25 for a survey).
In order for these systems to perform well in the generative domain they need to be trained on massive amounts of (unlabeled) conversational data. The main difficulty in directly adapting these models to the supervised domain of conversational forecasting is the relative scarcity of labeled data: for most forecasting tasks, at most a few thousands labeled examples are available, insufficient for the notoriously data-hungry sequential neural models.
To overcome this difficulty, we propose to decouple the objective of learning a neural representation of conversational dynamics from the objective of predicting future events. The former can be pre-trained on large amounts of unsupervised data, similarly to how chatbots are trained. The latter can piggy-back on the resulting representation after fine-tuning it for classification using relatively small labeled data. While similar pre-train-then-fine-tune approaches have recently achieved state-of-the-art performance in a number of NLP tasks—including natural language inference, question answering, and commonsense reasoning (discussed in Section SECREF2)—to the best of our knowledge this is the first attempt at applying this paradigm to conversational forecasting.
To test the effectiveness of this new architecture in forecasting derailment of online conversations, we develop and distribute two new datasets. The first triples in size the highly curated `Conversations Gone Awry' dataset BIBREF9, where civil-starting Wikipedia Talk Page conversations are crowd-labeled according to whether they eventually lead to personal attacks; the second relies on in-the-wild moderation of the popular subreddit ChangeMyView, where the aim is to forecast whether a discussion will later be subject to moderator action due to “rude or hostile” behavior. In both datasets, our model outperforms existing fixed-window approaches, as well as simpler sequential baselines that cannot account for inter-comment relations. Furthermore, by virtue of its online processing of the conversation, our system can provide substantial prior notice of upcoming derailment, triggering on average 3 comments (or 3 hours) before an overtly toxic comment is posted.
To summarize, in this work we:
introduce the first model for forecasting conversational events that can capture the dynamics of a conversation as it develops;
build two diverse datasets (one entirely new, one extending prior work) for the task of forecasting derailment of online conversations;
compare the performance of our model against the current state-of-the-art, and evaluate its ability to provide early warning signs.
Our work is motivated by the goal of assisting human moderators of online communities by preemptively signaling at-risk conversations that might deserve their attention. However, we caution that any automated systems might encode or even amplify the biases existing in the training data BIBREF26, BIBREF27, BIBREF28, so a public-facing implementation would need to be exhaustively scrutinized for such biases BIBREF29.
Further Related Work
Antisocial behavior. Antisocial behavior online comes in many forms, including harassment BIBREF30, cyberbullying BIBREF31, and general aggression BIBREF32. Prior work has sought to understand different aspects of such behavior, including its effect on the communities where it happens BIBREF33, BIBREF34, the actors involved BIBREF35, BIBREF36, BIBREF37, BIBREF38 and connections to the outside world BIBREF39.
Post-hoc classification of conversations. There is a rich body of prior work on classifying the outcome of a conversation after it has concluded, or classifying conversational events after they happened. Many examples exist, but some more closely related to our present work include identifying the winner of a debate BIBREF40, BIBREF41, BIBREF42, identifying successful negotiations BIBREF21, BIBREF43, as well as detecting whether deception BIBREF44, BIBREF45, BIBREF46 or disagreement BIBREF47, BIBREF48, BIBREF49, BIBREF50, BIBREF51 has occurred.
Our goal is different because we wish to forecast conversational events before they happen and while the conversation is still ongoing (potentially allowing for interventions). Note that some post-hoc tasks can also be re-framed as forecasting tasks (assuming the existence of necessary labels); for instance, predicting whether an ongoing conversation will eventually spark disagreement BIBREF18, rather than detecting already-existing disagreement.
Conversational forecasting. As described in Section SECREF1, prior work on forecasting conversational outcomes and events has largely relied on hand-crafted features to capture aspects of conversational dynamics. Example feature sets include statistical measures based on similarity between utterances BIBREF16, sentiment imbalance BIBREF20, flow of ideas BIBREF20, increase in hostility BIBREF8, reply rate BIBREF11 and graph representations of conversations BIBREF52, BIBREF17. By contrast, we aim to automatically learn neural representations of conversational dynamics through pre-training.
Such hand-crafted features are typically extracted from fixed-length windows of the conversation, leaving unaddressed the problem of unknown horizon. While some work has trained multiple models for different window-lengths BIBREF8, BIBREF18, they consider these models to be independent and, as such, do not address the issue of aggregating them into a single forecast (i.e., deciding at what point to make a prediction). We implement a simple sliding windows solution as a baseline (Section SECREF5).
Pre-training for NLP. The use of pre-training for natural language tasks has been growing in popularity after recent breakthroughs demonstrating improved performance on a wide array of benchmark tasks BIBREF53, BIBREF54. Existing work has generally used a language modeling objective as the pre-training objective; examples include next-word prediction BIBREF55, sentence autoencoding, BIBREF56, and machine translation BIBREF57. BERT BIBREF58 introduces a variation on this in which the goal is to predict the next sentence in a document given the current sentence. Our pre-training objective is similar in spirit, but operates at a conversation level, rather than a document level. We hence view our objective as conversational modeling rather than (only) language modeling. Furthermore, while BERT's sentence prediction objective is framed as a multiple-choice task, our objective is framed as a generative task.
Derailment Datasets
We consider two datasets, representing related but slightly different forecasting tasks. The first dataset is an expanded version of the annotated Wikipedia conversations dataset from BIBREF9. This dataset uses carefully-controlled crowdsourced labels, strictly filtered to ensure the conversations are civil up to the moment of a personal attack. This is a useful property for the purposes of model analysis, and hence we focus on this as our primary dataset. However, we are conscious of the possibility that these strict labels may not fully capture the kind of behavior that moderators care about in practice. We therefore introduce a secondary dataset, constructed from the subreddit ChangeMyView (CMV) that does not use post-hoc annotations. Instead, the prediction task is to forecast whether the conversation will be subject to moderator action in the future.
Wikipedia data. BIBREF9's `Conversations Gone Awry' dataset consists of 1,270 conversations that took place between Wikipedia editors on publicly accessible talk pages. The conversations are sourced from the WikiConv dataset BIBREF59 and labeled by crowdworkers as either containing a personal attack from within (i.e., hostile behavior by one user in the conversation directed towards another) or remaining civil throughout.
A series of controls are implemented to prevent models from picking up on trivial correlations. To prevent models from capturing topic-specific information (e.g., political conversations are more likely to derail), each attack-containing conversation is paired with a clean conversation from the same talk page, where the talk page serves as a proxy for topic. To force models to actually capture conversational dynamics rather than detecting already-existing toxicity, human annotations are used to ensure that all comments preceding a personal attack are civil.
For more effective model training, we elected to expand the `Conversations Gone Awry' dataset using the original annotation procedure. Since we found that the original data skewed towards shorter conversations, we focused this crowdsourcing run on longer conversations: ones with 4 or more comments preceding the attack. Through this additional crowdsourcing, we expand the dataset to 4,188 conversations, which we are publicly releasing as part of the Cornell Conversational Analysis Toolkit (ConvoKit).
We perform an 80-20-20 train/dev/test split, ensuring that paired conversations end up in the same split in order to preserve the topic control. Finally, we randomly sample another 1 million conversations from WikiConv to use for the unsupervised pre-training of the generative component.
Reddit CMV data. The CMV dataset is constructed from conversations collected via the Reddit API. In contrast to the Wikipedia-based dataset, we explicitly avoid the use of post-hoc annotation. Instead, we use as our label whether a conversation eventually had a comment removed by a moderator for violation of Rule 2: “Don't be rude or hostile to other users”.
Though the lack of post-hoc annotation limits the degree to which we can impose controls on the data (e.g., some conversations may contain toxic comments not flagged by the moderators) we do reproduce as many of the Wikipedia data's controls as we can. Namely, we replicate the topic control pairing by choosing pairs of positive and negative examples that belong to the same top-level post, following BIBREF12; and enforce that the removed comment was made by a user who was previously involved in the conversation. This process results in 6,842 conversations, to which we again apply a pair-preserving 80-20-20 split. Finally, we gather over 600,000 conversations that do not include any removed comment, for unsupervised pre-training.
Conversational Forecasting Model
We now describe our general model for forecasting future conversational events. Our model integrates two components: (a) a generative dialog model that learns to represent conversational dynamics in an unsupervised fashion; and (b) a supervised component that fine-tunes this representation to forecast future events. Figure FIGREF13 provides an overview of the proposed architecture, henceforth CRAFT (Conversational Recurrent Architecture for ForecasTing).
Terminology. For modeling purposes, we treat a conversation as a sequence of $N$ comments $C = \lbrace c_1,\dots ,c_N\rbrace $. Each comment, in turn, is a sequence of tokens, where the number of tokens may vary from comment to comment. For the $n$-th comment ($1 \le n \le N)$, we let $M_n$ denote the number of tokens. Then, a comment $c_n$ can be represented as a sequence of $M_n$ tokens: $c_n = \lbrace w_1,\dots ,w_{M_n}\rbrace $.
Generative component. For the generative component of our model, we use a hierarchical recurrent encoder-decoder (HRED) architecture BIBREF60, a modified version of the popular sequence-to-sequence (seq2seq) architecture BIBREF61 designed to account for dependencies between consecutive inputs. BIBREF23 showed that HRED can successfully model conversational context by encoding the temporal structure of previously seen comments, making it an ideal fit for our use case. Here, we provide a high-level summary of the HRED architecture, deferring deeper technical discussion to BIBREF60 and BIBREF23.
An HRED dialog model consists of three components: an utterance encoder, a context encoder, and a decoder. The utterance encoder is responsible for generating semantic vector representations of comments. It consists of a recurrent neural network (RNN) that reads a comment token-by-token, and on each token $w_m$ updates a hidden state $h^{\text{enc}}$ based on the current token and the previous hidden state: $h^{\text{enc}}_{m} = f^{\text{RNN}}(h^{\text{enc}}_{m-1}, w_m)$,
where $f^{\text{RNN}}$ is a nonlinear gating function (our implementation uses GRU BIBREF62). The final hidden state $h^{\text{enc}}_M$ can be viewed as a vector encoding of the entire comment.
Running the encoder on each comment $c_n$ results in a sequence of $N$ vector encodings. A second encoder, the context encoder, is then run over this sequence: $h^{\text{con}}_{n} = f^{\text{RNN}}(h^{\text{con}}_{n-1}, h^{\text{enc}}_{M_n})$, where $h^{\text{enc}}_{M_n}$ denotes the final utterance-encoder state for the $n$-th comment.
Each hidden state $h^{\text{con}}_n$ can then be viewed as an encoding of the full conversational context up to and including the $n$-th comment. To generate a response to comment $n$, the context encoding $h^{\text{con}}_n$ is used to initialize the hidden state $h^{\text{dec}}_{0}$ of a decoder RNN. The decoder produces a response token by token using the following recurrence: $h^{\text{dec}}_{m} = f^{\text{RNN}}(h^{\text{dec}}_{m-1}, w_{m-1})$, $\; w_{m} \sim f^{\text{out}}(h^{\text{dec}}_{m})$,
where $f^{\text{out}}$ is some function that outputs a probability distribution over words; we implement this using a simple feedforward layer. In our implementation, we further augment the decoder with attention BIBREF63, BIBREF64 over context encoder states to help capture long-term inter-comment dependencies. This generative component can be pre-trained using unlabeled conversational data.
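The following PyTorch-style sketch illustrates the three HRED components; the module names, dimensions, and the omission of the attention mechanism over context states are simplifications and assumptions, not the released implementation.

```python
# A GRU utterance encoder, a GRU context encoder over comment encodings, and a
# GRU decoder initialized from the context state (attention omitted for brevity).
import torch
import torch.nn as nn

class HRED(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hid_dim=500):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.utt_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)   # reads one comment
        self.ctx_enc = nn.GRU(hid_dim, hid_dim, batch_first=True)   # reads comment encodings
        self.dec = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)                   # f_out

    def encode_conversation(self, comments):
        # comments: list of LongTensors [1, M_n] of token ids
        utt_vecs = []
        for c in comments:
            _, h = self.utt_enc(self.emb(c))       # h: [1, 1, hid]
            utt_vecs.append(h[-1])                 # final hidden state encodes the comment
        utt_seq = torch.stack(utt_vecs, dim=1)     # [1, N, hid]
        ctx_states, _ = self.ctx_enc(utt_seq)      # h_con_1 .. h_con_N
        return ctx_states                          # [1, N, hid]

    def decode_step(self, prev_token, h_dec):
        emb = self.emb(prev_token).unsqueeze(1)    # [1, 1, emb]
        out, h_dec = self.dec(emb, h_dec)
        logits = self.out(out.squeeze(1))          # distribution over the next word
        return logits, h_dec
```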
Prediction component. Given a pre-trained HRED dialog model, we aim to extend the model to predict from the conversational context whether the to-be-forecasted event will occur. Our predictor consists of a multilayer perceptron (MLP) with 3 fully-connected layers, leaky ReLU activations between layers, and sigmoid activation for output. For each comment $c_n$, the predictor takes as input the context encoding $h^{\text{con}}_n$ and forwards it through the MLP layers, resulting in an output score that is interpreted as a probability $p_{\text{event}}(c_{n+1})$ that the to-be-forecasted event will happen (e.g., that the conversation will derail).
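A minimal sketch of this prediction head is shown below; the hidden-layer sizes and the 500-dimensional context encoding are assumptions for illustration.

```python
# 3-layer MLP with leaky ReLU activations and a sigmoid output, mapping the
# context encoding h_con_n to the probability that the forecasted event occurs.
import torch.nn as nn

predictor = nn.Sequential(
    nn.Linear(500, 256), nn.LeakyReLU(),
    nn.Linear(256, 64), nn.LeakyReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)
# p_event = predictor(h_con_n)   # h_con_n: [batch, 500] context encoding
```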
Training the predictive component starts by initializing the weights of the encoders to the values learned in pre-training. The main training loop then works as follows: for each positive sample—i.e., a conversation containing an instance of the to-be-forecasted event (e.g., derailment) at comment $c_e$—we feed the context $c_1,\dots ,c_{e-1}$ through the encoder and classifier, and compute cross-entropy loss between the classifier output and expected output of 1. Similarly, for each negative sample—i.e., a conversation where none of the comments exhibit the to-be-forecasted event and that ends with $c_N$—we feed the context $c_1,\dots ,c_{N-1}$ through the model and compute loss against an expected output of 0.
Note that the parameters of the generative component are not held fixed during this process; instead, backpropagation is allowed to go all the way through the encoder layers. This process, known as fine-tuning, reshapes the representation learned during pre-training to be more directly useful to prediction BIBREF55.
We implement the model and training code using PyTorch, and we are publicly releasing our implementation and the trained models together with the data as part of ConvoKit.
Forecasting Derailment
We evaluate the performance of CRAFT in the task of forecasting conversational derailment in both the Wikipedia and CMV scenarios. To this end, for each of these datasets we pre-train the generative component on the unlabeled portion of the data and fine-tune it on the labeled training split (data size detailed in Section SECREF3).
In order to evaluate our sequential system against conversational-level ground truth, we need to aggregate comment level predictions. If any comment in the conversation triggers a positive prediction—i.e., $p_{\text{event}}(c_{n+1})$ is greater than a threshold learned on the development split—then the respective conversation is predicted to derail. If this forecast is triggered in a conversation that actually derails, but before the derailment actually happens, then the conversation is counted as a true positive; otherwise it is a false positive. If no positive predictions are triggered for a conversation, but it actually derails then it counts as a false negative; if it does not derail then it is a true negative.
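The following sketch makes this aggregation and accounting concrete; the helper name evaluate_conversation and its inputs are assumptions for illustration.

```python
# A conversation is predicted to derail as soon as any comment-level probability
# exceeds the threshold; the forecast only counts as a true positive if it fires
# strictly before the derailment comment.
def evaluate_conversation(scores, derails_at, threshold):
    # scores: list of p_event values, one per comment
    # derails_at: index of the attack comment, or None if the conversation stays civil
    trigger = next((i for i, p in enumerate(scores) if p > threshold), None)
    if trigger is not None:
        return "TP" if derails_at is not None and trigger < derails_at else "FP"
    return "FN" if derails_at is not None else "TN"
```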
Fixed-length window baselines. We first seek to compare CRAFT to existing, fixed-length window approaches to forecasting. To this end, we implement two such baselines: Awry, which is the state-of-the-art method proposed in BIBREF9 based on pragmatic features in the first comment-reply pair, and BoW, a simple bag-of-words baseline that makes a prediction using TF-IDF weighted bag-of-words features extracted from the first comment-reply pair.
Online forecasting baselines. Next, we consider simpler approaches for making forecasts as the conversations happen (i.e., in an online fashion). First, we propose Cumulative BoW, a model that recomputes bag-of-words features on all comments seen thus far every time a new comment arrives. While this approach does exhibit the desired behavior of producing updated predictions for each new comment, it fails to account for relationships between comments.
This simple cumulative approach cannot be directly extended to models whose features are strictly based on a fixed number of comments, like Awry. An alternative is to use a sliding window: for a feature set based on a window of $W$ comments, upon each new comment we can extract features from a window containing that comment and the $W-1$ comments preceding it. We apply this to the Awry method and call this model Sliding Awry. For both these baselines, we aggregate comment-level predictions in the same way as in our main model.
CRAFT ablations. Finally, we consider two modified versions of the CRAFT model in order to evaluate the impact of two of its key components: (1) the pre-training step, and (2) its ability to capture inter-comment dependencies through its hierarchical memory.
To evaluate the impact of pre-training, we train the prediction component of CRAFT on only the labeled training data, without first pre-training the encoder layers with the unlabeled data. We find that given the relatively small size of labeled data, this baseline fails to successfully learn, and ends up performing at the level of random guessing. This result underscores the need for the pre-training step that can make use of unlabeled data.
To evaluate the impact of the hierarchical memory, we implement a simplified version of CRAFT where the memory size of the context encoder is zero (CRAFT $-$ CE), thus effectively acting as if the pre-training component is a vanilla seq2seq model. In other words, this model cannot capture inter-comment dependencies, and instead at each step makes a prediction based only on the utterance encoding of the latest comment.
Results. Table TABREF17 compares CRAFT to the baselines on the test splits (random baseline is 50%) and illustrates several key findings. First, we find that unsurprisingly, accounting for full conversational context is indeed helpful, with even the simple online baselines outperforming the fixed-window baselines. On both datasets, CRAFT outperforms all baselines (including the other online models) in terms of accuracy and F1. Furthermore, although it loses on precision (to CRAFT $-$ CE) and recall (to Cumulative BoW) individually on the Wikipedia data, CRAFT has the superior balance between the two, having both a visibly higher precision-recall curve and larger area under the curve (AUPR) than the baselines (Figure FIGREF20). This latter property is particularly useful in a practical setting, as it allows moderators to tune model performance to some desired precision without having to sacrifice as much in the way of recall (or vice versa) compared to the baselines and pre-existing solutions.
Analysis
We now examine the behavior of CRAFT in greater detail, to better understand its benefits and limitations. We specifically address the following questions: (1) How much early warning does the model provide? (2) Does the model actually learn an order-sensitive representation of conversational context?
Early warning, but how early? The recent interest in forecasting antisocial behavior has been driven by a desire to provide pre-emptive, actionable warning to moderators. But does our model trigger early enough for any such practical goals?
For each personal attack correctly forecasted by our model, we count the number of comments elapsed between the time the model is first triggered and the attack. Figure FIGREF22 shows the distribution of these counts: on average, the model warns of an attack 3 comments before it actually happens (4 comments for CMV). To further evaluate how much time this early warning would give to the moderator, we also consider the difference in timestamps between the comment where the model first triggers and the comment containing the actual attack. Over 50% of conversations get at least 3 hours of advance warning (2 hours for CMV). Moreover, 39% of conversations get at least 12 hours of early warning before they derail.
Does order matter? One motivation behind the design of our model was the intuition that comments in a conversation are not independent events; rather, the order in which they appear matters (e.g., a blunt comment followed by a polite one feels intuitively different from a polite comment followed by a blunt one). By design, CRAFT has the capacity to learn an order-sensitive representation of conversational context, but how can we know that this capacity is actually used? It is conceivable that the model is simply computing an order-insensitive “bag-of-features”. Neural network models are notorious for their lack of transparency, precluding an analysis of how exactly CRAFT models conversational context. Nevertheless, through two simple exploratory experiments, we seek to show that it does not completely ignore comment order.
The first experiment for testing whether the model accounts for comment order is a prefix-shuffling experiment, visualized in Figure FIGREF23. For each conversation that the model predicts will derail, let $t$ denote the index of the triggering comment, i.e., the index where the model first made a derailment forecast. We then construct synthetic conversations by taking the first $t-1$ comments (henceforth referred to as the prefix) and randomizing their order. Finally, we count how often the model no longer predicts derailment at index $t$ in the synthetic conversations. If the model were ignoring comment order, its prediction should remain unchanged (as it remains for the Cumulative BoW baseline), since the actual content of the first $t$ comments has not changed (and CRAFT inference is deterministic). We instead find that in roughly one fifth of cases (12% for CMV) the model changes its prediction on the synthetic conversations. This suggests that CRAFT learns an order-sensitive representation of context, not a mere “bag-of-features”.
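As an illustration, the prefix-shuffling check can be expressed in a few lines. The sketch below assumes a hypothetical `model.forecast(comments)` interface that returns per-comment derailment decisions (not the authors' published API), and computes a per-conversation flip rate over several shuffles, a slight generalization of the single-shuffle check described above.

```python
import random

def prefix_shuffle_flip_rate(model, comments, trigger_index, n_shuffles=10):
    """Shuffle the comments preceding the triggering one and measure how often the
    model no longer predicts derailment at that index (assumed model interface)."""
    flips = 0
    for _ in range(n_shuffles):
        prefix = comments[:trigger_index]        # slicing copies, original untouched
        random.shuffle(prefix)
        decisions = model.forecast(prefix + [comments[trigger_index]])
        if not decisions[trigger_index]:         # prediction changed on the synthetic prefix
            flips += 1
    return flips / n_shuffles
```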
To more concretely quantify how much this order-sensitive context modeling helps with prediction, we can actively prevent the model from learning and exploiting any order-related dynamics. We achieve this through another type of shuffling experiment, where we go back even further and shuffle the comment order in the conversations used for pre-training, fine-tuning and testing. This procedure preserves the model's ability to capture signals present within the individual comments processed so far, as the utterance encoder is unaffected, but inhibits it from capturing any meaningful order-sensitive dynamics. We find that this hurts the model's performance (65% accuracy for Wikipedia, 59.5% for CMV), lowering it to a level similar to that of the version where we completely disable the context encoder.
Taken together, these experiments provide evidence that CRAFT uses its capacity to model conversational context in an order-sensitive fashion, and that it makes effective use of the dynamics within. An important avenue for future work would be developing more transparent models that can shed light on exactly what kinds of order-related features are being extracted and how they are used in prediction.
Conclusions and Future Work
In this work, we introduced a model for forecasting conversational events that processes comments as they happen and takes the full conversational context into account to make an updated prediction at each step. This model fills a void in the existing literature on conversational forecasting, simultaneously addressing the dual challenges of capturing inter-comment dynamics and dealing with an unknown horizon. We find that our model achieves state-of-the-art performance on the task of forecasting derailment in two different datasets that we release publicly. We further show that the resulting system can provide substantial prior notice of derailment, opening up the potential for preemptive interventions by human moderators BIBREF65.
While we have focused specifically on the task of forecasting derailment, we view this work as a step towards a more general model for real-time forecasting of other types of emergent properties of conversations. Follow-up work could adapt the CRAFT architecture to address other forecasting tasks mentioned in Section SECREF2—including those for which the outcome is extraneous to the conversation. We expect different tasks to be informed by different types of inter-comment dynamics, and further architecture extensions could add additional supervised fine-tuning in order to direct it to focus on specific dynamics that might be relevant to the task (e.g., exchange of ideas between interlocutors or stonewalling).
With respect to forecasting derailment, there remain open questions regarding what human moderators actually desire from an early-warning system, which would affect the design of a practical system based on this work. For instance, how early does a warning need to be in order for moderators to find it useful? What is the optimal balance between precision, recall, and false positive rate at which such a system is truly improving moderator productivity rather than wasting their time through false positives? What are the ethical implications of such a system? Follow-up work could run a user study of a prototype system with actual moderators to address these questions.
A practical limitation of the current analysis is that it relies on balanced datasets, while derailment is a relatively rare event for which a more restrictive trigger threshold would be appropriate. While our analysis of the precision-recall curve suggests the system is robust across multiple thresholds ($AUPR=0.7$), additional work is needed to establish whether the recall tradeoff would be acceptable in practice.
Finally, one major limitation of the present work is that it assigns a single label to each conversation: does it derail or not? In reality, derailment need not spell the end of a conversation; it is possible that a conversation could get back on track, suffer a repeat occurrence of antisocial behavior, or any number of other trajectories. It would be exciting to consider finer-grained forecasting of conversational trajectories, accounting for the natural—and sometimes chaotic—ebb-and-flow of human interactions.
Acknowledgements. We thank Caleb Chiam, Liye Fu, Lillian Lee, Alexandru Niculescu-Mizil, Andrew Wang and Justine Zhang for insightful conversations (with unknown horizon), Aditya Jha for his great help with implementing and running the crowd-sourcing tasks, Thomas Davidson and Claire Liang for exploratory data annotation, as well as the anonymous reviewers for their helpful comments. This work is supported in part by the NSF CAREER award IIS-1750615 and by the NSF Grant SES-1741441.
Question: What are the two datasets the model is applied to?
Answer: The 'Conversations Gone Awry' dataset and the ChangeMyView subreddit.
Introduction
With the availability of rich data on users' locations, profiles and search history, personalization has become the leading trend in large-scale information retrieval. However, personalization is not yet the most suitable approach for domain-specific searches. This is due to several factors, such as the lexical and semantic challenges of domain-specific data, which often include advanced argumentation and complex contextual information, the greater sparseness of relevant information sources, and the more pronounced lack of similarity between users' searches.
A recent study on expert search strategies among healthcare information professionals BIBREF0 showed that, for a given search task, they spend an average of 60 minutes per collection or database, 3 minutes to examine the relevance of each document, and 4 hours of total search time. When written in steps, their search strategy spans over 15 lines and can reach up to 105 lines.
With the abundance of information sources in the medical domain, consumers are more and more faced with a similar challenge, one that needs dedicated solutions that can adapt to the heterogeneity and specifics of health-related information.
Dedicated Question Answering (QA) systems are one of the viable solutions to this problem as they are designed to understand natural language questions without relying on external information on the users.
In the context of QA, the goal of Recognizing Question Entailment (RQE) is to retrieve answers to a premise question (A) by retrieving inferred or entailed questions, called hypothesis questions (B), that already have associated answers. Therefore, we define the entailment relation between two questions as: a question A entails a question B if every answer to B is also a correct answer to A BIBREF1.
RQE is particularly relevant due to the increasing numbers of similar questions posted online BIBREF2 and its ability to solve differently the challenging issues of question understanding and answer extraction. In addition to being used to find relevant answers, these resources can also be used in training models able to recognize inference relations and similarity between questions.
Question similarity has recently attracted international challenges BIBREF3 , BIBREF4 and several research efforts proposing a wide range of approaches, including Logistic Regression, Recurrent Neural Networks (RNNs), Long Short Term Memory cells (LSTMs), and Convolutional Neural Networks (CNNs) BIBREF5 , BIBREF6 , BIBREF1 , BIBREF7 .
In this paper, we study question entailment in the medical domain and the effectiveness of the end-to-end RQE-based QA approach by evaluating the relevance of the retrieved answers. Although entailment was attempted in QA before BIBREF8 , BIBREF9 , BIBREF10 , as far as we know, we are the first to introduce and evaluate a full medical question answering approach based on question entailment for free-text questions. Our contributions are:
The next section is dedicated to related work on question answering, question similarity and entailment. In Section SECREF3 , we present two machine learning (ML) and deep learning (DL) methods for RQE and compare their performance using open-domain and clinical datasets. Section SECREF4 describes the new collection of medical question-answer pairs. In Section SECREF5 , we describe our RQE-based approach for QA. Section SECREF6 presents our evaluation of the retrieved answers and the results obtained on TREC 2017 LiveQA medical questions.
Background
In this section we define the RQE task and describe related work at the intersection of question answering, question similarity and textual inference.
Task Definition
The definition of Recognizing Question Entailment (RQE) can have a significant impact on QA results. In related work, the meaning associated with Natural Language Inference (NLI) varies among different tasks and events. For instance, Recognizing Textual Entailment (RTE) was addressed by the PASCAL challenge BIBREF12, where the entailment relation was assessed manually by human judges who selected relevant sentences "entailing" a set of hypotheses from a list of documents returned by different Information Retrieval (IR) methods. In another definition, the Stanford Natural Language Inference corpus (SNLI) BIBREF13 used three classification labels for the relations between two sentences: entailment, neutral and contradiction. For the entailment label, the annotators who built the corpus were presented with an image and asked to write a caption “that is a definitely true description of the photo”. For the neutral label, they were asked to provide a caption “that might be a true description of the photo”. They were asked for a caption that “is definitely a false description of the photo” for the contradiction label.
More recently, the multiNLI corpus BIBREF14 was shared in the scope of the RepEval 2017 shared task BIBREF15 . To build the corpus, annotators were presented with a premise text and asked to write three sentences. One novel sentence, which is “necessarily true or appropriate in the same situations as the premise,” for the entailment label, a sentence, which is “necessarily false or inappropriate whenever the premise is true,” for the contradiction label, and a last sentence, “where neither condition applies,” for the neutral label.
Whereas these NLI definitions might be suitable for the broad topic of text understanding, their relation to practical information retrieval or question answering systems is not straightforward.
In contrast, RQE has to be tailored to the question answering task. For instance, if the premise question is "looking for cold medications for a 30 yo woman", an RQE approach should be able to consider the more general (less restricted) question "looking for cold medications" as relevant, since its answers are relevant for the initial question, whereas "looking for medications for a 30 yo woman" is a useless contextualization. The entailment relation we are seeking in the QA context should include relevant and meaningful relaxations of contextual and semantic constraints (cf. Section SECREF13).
Related Work on Question Answering
Classical QA systems face two main challenges related to question analysis and answer extraction. Several QA approaches were proposed in the literature for the open domain BIBREF16 , BIBREF17 and the medical domain BIBREF18 , BIBREF19 , BIBREF20 . A variety of methods were developed for question analysis, focus (topic) recognition and question type identification BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 . Similarly, many different approaches tackled document or passage retrieval and answer selection and (re)ranking BIBREF25 , BIBREF26 , BIBREF27 .
An alternative approach consists in finding similar questions or FAQs that are already answered BIBREF28 , BIBREF29 . One of the earliest question answering systems based on finding similar questions and re-using the existing answers was FAQ FINDER BIBREF30 . Another system that complements the existing Q&A services of NetWellness is SimQ BIBREF2 , which allows retrieval of similar web-based consumer health questions. SimQ uses syntactic and semantic features to compute similarity between questions, and UMLS BIBREF31 as a standardized semantic knowledge source. The system achieves 72.2% precision, 78.0% recall and 75.0% F-score on NetWellness questions. However, the method was evaluated only on one question similarity dataset, and the retrieved answers were not evaluated.
The aim of the medical task at TREC 2017 LiveQA was to develop techniques for answering complex questions such as consumer health questions, as well as to identify relevant answer sources that can comply with the sensitivity of medical information retrieval.
The CMU-OAQA system BIBREF32 achieved the best performance of 0.637 average score on the medical task by using an attentional encoder-decoder model for paraphrase identification and answer ranking. The Quora question-similarity dataset was used for training. The PRNA system BIBREF33 achieved the second best performance in the medical task with 0.49 average score using Wikipedia as the first answer source and Yahoo and Google searches as secondary answer sources. Each medical question was decomposed into several subquestions. To extract the answer from the selected text passage, a bi-directional attention model trained on the SQUAD dataset was used.
Deep neural network models have been pushing the limits of performance achieved in QA related tasks using large training datasets. The results obtained by CMU-OAQA and PRNA showed that large open-domain datasets were beneficial for the medical domain. However, the best system (CMU-OAQA) relying on the same training data obtained a score of 1.139 on the LiveQA open-domain task.
While this gap in performance can be explained in part by the discrepancies between the medical test questions and the open-domain questions, it also highlights the need for larger medical datasets to support deep learning approaches in dealing with the linguistic complexity of consumer health questions and the challenge of finding correct and complete answers.
Another technique was used by ECNU-ICA team BIBREF34 based on learning question similarity via two long short-term memory (LSTM) networks applied to obtain the semantic representations of the questions. To construct a collection of similar question pairs, they searched community question answering sites such as Yahoo! and Answers.com. In contrast, the ECNU-ICA system achieved the best performance of 1.895 in the open-domain task but an average score of only 0.402 in the medical task. As the ECNU-ICA approach also relied on a neural network for question matching, this result shows that training attention-based decoder-encoder networks on the Quora dataset generalized better to the medical domain than training LSTMs on similar questions from Yahoo! and Answers.com.
The CMU-LiveMedQA team BIBREF20 designed a specific system for the medical task. Using only the provided training datasets and the assumption that each question contains only one focus, the CMU-LiveMedQA system obtained an average score of 0.353. They used a convolutional neural network (CNN) model to classify a question into a restricted set of 10 question types and crawled "relevant" online web pages to find the answers. However, the results were lower than those achieved by the systems relying on finding similar answered questions. These results support the relevance of similar question matching for the end-to-end QA task as a new way of approaching QA instead of the classical QA approaches based on Question Analysis and Answer Retrieval.
Related Work on Question Similarity and Entailment
Several efforts focused on recognizing similar questions. Jeon et al. BIBREF35 showed that a retrieval model based on translation probabilities learned from a question and answer archive can recognize semantically similar questions. Duan et al. BIBREF36 proposed a dedicated language modeling approach for question search, using question topic (user's interest) and question focus (certain aspect of the topic).
Lately, these efforts were supported by a task on Question-Question similarity introduced in the community QA challenge at SemEval (task 3B) BIBREF3 . Given a new question, the task focused on reranking all similar questions retrieved by a search engine, assuming that the answers to the similar questions will be correct answers for the new question. Different machine learning and deep learning approaches were tested in the scope of SemEval 2016 BIBREF3 and 2017 BIBREF4 task 3B. The best performing system in 2017 achieved a MAP of 47.22% using supervised Logistic Regression that combined different unsupervised similarity measures such as Cosine and Soft-Cosine BIBREF37 . The second best system achieved 46.93% MAP with a learning-to-rank method using Logistic Regression and a rich set of features including lexical and semantic features as well as embeddings generated by different neural networks (siamese, Bi-LSTM, GRU and CNNs) BIBREF38 . In the scope of this challenge, a dataset was collected from Qatar Living forum for training. We refer to this dataset as SemEval-cQA.
In another effort, an answer-based definition of RQE was proposed and tested BIBREF1 . The authors introduced a dataset of clinical questions and used a feature-based method that provided an Accuracy of 75% on consumer health questions. We will call this dataset Clinical-QE. Dos Santos et al. BIBREF5 proposed a new approach to retrieve semantically equivalent questions combining a bag-of-words representation with a distributed vector representation created by a CNN and user data collected from two Stack Exchange communities. Lei et al. BIBREF7 proposed a recurrent and convolutional model (gated convolution) to map questions to their semantic representations. The models were pre-trained within an encoder-decoder framework.
RQE Approaches and Experiments
The choice of two methods for our empirical study is motivated by the best performance achieved by Logistic Regression in question-question similarity at SemEval 2017 (best system BIBREF37 and second best system BIBREF38 ), and the high performance achieved by neural networks on larger datasets such as SNLI BIBREF13 , BIBREF39 , BIBREF40 , BIBREF41 . We first define the RQE task, then present the two approaches, and evaluate their performance on five different datasets.
Definition
In the context of QA, the goal of RQE is to retrieve answers to a new question by retrieving entailed questions with associated answers. We therefore define question entailment as:
a question A entails a question B if every answer to B is also a complete or partial answer to A.
We present below two examples of consumer health questions (A) and entailed questions (B):
Example 1 (each answer to the entailed question B1 is a complete answer to A1):
A1: What is the latest news on tennitis, or ringing in the ear, I am 75 years old and have had ringing in the ear since my mid 5os. Thank you.
B1: What is the latest research on Tinnitus?
Example 2 (each answer to the entailed question B2 is a partial answer to A2):
A2: My mother has been diagnosed with Alzheimer's, my father is not of the greatest health either and is the main caregiver for my mother. My question is where do we start with attempting to help our parents w/ the care giving and what sort of financial options are there out there for people on fixed incomes.
B2: What resources are available for Alzheimer's caregivers?
The inclusion of partial answers in the definition of question entailment also allows efficient relaxation of the contextual constraints of the original question A to retrieve relevant answers from entailed, but less restricted, questions.
Deep Learning Model
To recognize entailment between two questions A (premise) and B (hypothesis), we adapted the neural network proposed by Bowman et al. BIBREF13. Our DL model, presented in Figure FIGREF20, consists of three 600d ReLU layers, with a bottom layer taking the concatenated sentence representations as input and a top layer feeding a softmax classifier. The sentence embedding model sums the Recurrent neural network (RNN) embeddings of its words. The word embeddings are first initialized with pretrained GloVe vectors. This adaptation provided the best performance in previous experiments with RQE data.
GloVe is an unsupervised learning algorithm to generate vector representations for words BIBREF42 . Training is performed on aggregated word co-occurrence statistics from a large corpus, and the resulting representations show interesting linear substructures of the word vector space. We use the pretrained common crawl version with 840B tokens and 300d vectors, which are not updated during training.
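The following PyTorch sketch illustrates the architecture as described above: frozen 300d GloVe vectors, a sentence representation obtained by summing RNN states over words, and three 600d ReLU layers feeding a softmax classifier. The choice of a plain nn.RNN cell, the batching, and the training loop are not specified in the text and are assumptions.

```python
import torch
import torch.nn as nn

class RQEClassifier(nn.Module):
    """Sketch of the adapted Bowman et al. model: sum-of-RNN-states sentence encoder
    over frozen GloVe embeddings, three 600d ReLU layers, softmax output."""
    def __init__(self, glove_vectors, hidden=300):
        super().__init__()
        self.embed = nn.Embedding.from_pretrained(glove_vectors, freeze=True)  # 300d GloVe
        self.rnn = nn.RNN(glove_vectors.size(1), hidden, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden, 600), nn.ReLU(),
            nn.Linear(600, 600), nn.ReLU(),
            nn.Linear(600, 600), nn.ReLU(),
            nn.Linear(600, 2),          # logits for entailment / non-entailment
        )

    def encode(self, token_ids):
        states, _ = self.rnn(self.embed(token_ids))
        return states.sum(dim=1)        # sentence embedding = sum over word positions

    def forward(self, premise_ids, hypothesis_ids):
        pair = torch.cat([self.encode(premise_ids), self.encode(hypothesis_ids)], dim=-1)
        return self.mlp(pair)           # apply softmax / cross-entropy loss outside
```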
Logistic Regression Classifier
In this feature-based approach, we use Logistic Regression to classify question pairs into entailment or no-entailment. Logistic Regression achieved good results on this specific task and outperformed other statistical learning algorithms such as SVM and Naive Bayes. In a preprocessing step, we remove stop words and perform word stemming using the Porter algorithm BIBREF43 for all (A, B) pairs.
We use a list of nine features, selected after several experiments on RTE datasets BIBREF12. We compute five similarity measures between the pre-processed questions and use their values as features: Word Overlap, the Dice coefficient based on the number of common bigrams, Cosine, Levenshtein, and the Jaccard similarities. Our feature list also includes the maximum and average values obtained with these measures and the question length ratio (length(A)/length(B)). We compute a morphosyntactic feature indicating the number of common nouns and verbs between A and B. TreeTagger BIBREF44 was used for POS tagging.
For RQE, we add an additional feature specific to the question type. We use a dictionary lookup to map triggers to the question type (e.g. Treatment, Prognosis, Inheritance). Triggers are identified for each question type based on a manual annotation of a set of medical questions (cf. Section SECREF36). This feature has three possible values: 2 (perfect match between the type(s) of A and the type(s) of B), 1 (overlap between the types of A and B) and 0 (no common types).
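A minimal sketch of part of this feature set is given below (word overlap, Jaccard, length ratio, and the 2/1/0 type-match feature); stemming, the bigram Dice, Levenshtein, Cosine, and POS-based features are omitted, and the exact overlap definitions are assumptions rather than the authors' exact formulas.

```python
def lexical_features(a_tokens, b_tokens, a_types, b_types):
    """Compute a few of the pairwise features fed to the Logistic Regression model."""
    a_set, b_set = set(a_tokens), set(b_tokens)
    common = a_set & b_set
    word_overlap = len(common) / max(len(a_set), 1)
    jaccard = len(common) / max(len(a_set | b_set), 1)
    length_ratio = len(a_tokens) / max(len(b_tokens), 1)
    # Question-type feature: 2 = same type set, 1 = some overlap, 0 = no common types.
    if set(a_types) == set(b_types):
        type_match = 2
    elif set(a_types) & set(b_types):
        type_match = 1
    else:
        type_match = 0
    return [word_overlap, jaccard, length_ratio, type_match]

# Example pair (already stop-worded and stemmed upstream), ready for LogisticRegression.
x = lexical_features(["latest", "research", "tinnitus"],
                     ["latest", "news", "tinnitus", "ring", "ear"],
                     ["Research"], ["Research", "Information"])
```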
Datasets Used for the RQE Study
We evaluate the RQE methods (i.e. deep learning model and logistic regression classifier) using two datasets of sentence pairs (SNLI and multiNLI), and three datasets of question pairs (Quora, Clinical-QE, and SemEval-cQA).
The Stanford Natural Language Inference corpus (SNLI) BIBREF13 contains 569,037 sentence pairs written by humans based on image captioning. The training set of the MultiNLI corpus BIBREF14 consists of 393,000 pairs of sentences from five genres of written and spoken English (e.g. Travel, Government). Two other "matched" and "mismatched" sets are also available for development (20,000 pairs). Both SNLI and multiNLI consider three types of relationships between sentences: entailment, neutral and contradiction. We converted the contradiction and neutral labels to the same non-entailment class.
The QUORA dataset of similar questions was recently published with 404,279 question pairs. We randomly selected three distinct subsets (80%/10%/10%) for training (323,423 pairs), development (40,428 pairs) and test (40,428 pairs).
The clinical-QE dataset BIBREF1 contains 8,588 question pairs and was constructed using 4,655 clinical questions asked by family doctors BIBREF45 . We randomly selected three distinct subsets (80%/10%/10%) for training (6,870 pairs), development (859 pairs) and test (859 pairs).
The question similarity dataset of SemEval 2016 Task 3B (SemEval-cQA) BIBREF3 contains 3,869 question pairs and aims to re-rank a list of related questions according to their similarity to the original question. The same dataset was used for SemEval 2017 Task 3 BIBREF4 .
To construct our test dataset, we used a publicly shared set of Consumer Health Questions (CHQs) received by the U.S. National Library of Medicine (NLM), and annotated with named entities, question types, and focus BIBREF46 , BIBREF47 . The CHQ dataset consists of 1,721 consumer information requests manually annotated with subquestions, each identified by a question type and a focus.
First, we selected automatically harvested FAQs, from U.S. National Institutes of Health (NIH) websites, that share both the same focus and the same question type with the CHQs. As FAQs are most often very short, we first assume that the CHQ entails the FAQ. Two sets of pairs were constructed: (i) positive pairs of CHQs and FAQs sharing at least one common question type and the question focus, and (ii) negative pairs corresponding to a focus mismatch or type mismatch. For each category of negative examples, we randomly selected the same number of pairs for a balanced dataset. Then, we manually validated the constructed pairs and corrected the positive and negative labels when needed. The final RQE dataset contains 850 CHQ-FAQ pairs with 405 positive and 445 negative pairs. Table TABREF26 presents examples from the five training datasets (SNLI, MultiNLI, SemEval-cQA, Clinical-QE and Quora) and the new test dataset of medical CHQ-FAQ pairs.
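The pair-construction heuristic can be sketched as follows, assuming each question carries hypothetical `focus` and `types` annotations; note that the released labels were subsequently validated and corrected by hand, which this sketch does not reproduce.

```python
import random

def build_candidate_pairs(chqs, faqs, negatives_per_kind=200):
    """Positive pairs share the focus and at least one question type; negatives have
    either a focus mismatch or a type mismatch (sampled in equal numbers)."""
    positives, focus_mismatch, type_mismatch = [], [], []
    for chq in chqs:
        for faq in faqs:
            same_focus = chq["focus"] == faq["focus"]
            shared_type = bool(set(chq["types"]) & set(faq["types"]))
            if same_focus and shared_type:
                positives.append((chq, faq, 1))
            elif shared_type:
                focus_mismatch.append((chq, faq, 0))
            elif same_focus:
                type_mismatch.append((chq, faq, 0))
    k = min(negatives_per_kind, len(focus_mismatch), len(type_mismatch))
    return positives + random.sample(focus_mismatch, k) + random.sample(type_mismatch, k)
```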
Results of RQE Approaches
In the first experiment, we evaluated the DL and ML methods on SNLI, multi-NLI, Quora, and Clinical-QE. For the datasets that did not have development and test sets, we randomly selected two sets, each amounting to 10% of the data, for test and development, and used the remaining 80% for training. For MultiNLI, we used the dev1-matched set for validation and the dev2-mismatched set for testing.
Table TABREF28 presents the results of the first experiment. The DL model with GloVe word embeddings achieved better results on three datasets, with 82.80% Accuracy on SNLI, 78.52% Accuracy on MultiNLI, and 83.62% Accuracy on Quora. Logistic Regression achieved the best Accuracy of 98.60% on Clinical-RQE. We also performed a 10-fold cross-validation on the full Clinical-QE data of 8,588 question pairs, which gave 98.61% Accuracy.
In the second experiment, we used these datasets for training only and compared their performance on our test set of 850 consumer health questions. Table TABREF29 presents the results of this experiment. Logistic Regression trained on the clinical-RQE data outperformed DL models trained on all datasets, with 73.18% Accuracy.
To validate further the performance of the LR method, we evaluated it on question similarity detection. A typical approach to this task is to use an IR method to find similar question candidates, then a more sophisticated method to select and re-rank the similar questions. We followed a similar approach for this evaluation by combining the LR method with the IR baseline provided in the context of SemEval-cQA. The hybrid method combines the score provided by the Logistic Regression model and the reciprocal rank from the IR baseline using a weight-based combination:
$score(B) = \alpha \cdot score_{LR}(B) + (1 - \alpha) \cdot \dfrac{1}{rank_{IR}(B)}$
The weight $\alpha$ was set empirically through several tests on the cQA-2016 development set.
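In code, this combination amounts to a one-line weighted sum. In the sketch below the weight value is illustrative (the tuned value from the cQA-2016 development set is not reported here), the function name is an assumption, and the formula follows the reconstruction above.

```python
def hybrid_cqa_score(lr_score, ir_rank, alpha=0.8):
    """Weight-based combination of the RQE classifier score and the IR reciprocal rank."""
    return alpha * lr_score + (1.0 - alpha) * (1.0 / ir_rank)

# Re-rank candidate questions by the combined score, highest first.
candidates = [("q1", 0.91, 3), ("q2", 0.64, 1)]
ranked = sorted(candidates, key=lambda c: hybrid_cqa_score(c[1], c[2]), reverse=True)
```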
Discussion of RQE Results
When trained and tested on the same corpus, the DL model with GloVe embeddings gave the best results on three datasets (SNLI, MultiNLI and Quora). Logistic Regression gave the best Accuracy on the Clinical-RQE dataset with 98.60%. When tested on our test set (850 medical CHQs-FAQs pairs), Logistic Regression trained on Clinical-QE gave the best performance with 73.18% Accuracy.
The SNLI and multi-NLI models did not perform well when tested on medical RQE data. We performed additional evaluations using the RTE-1, RTE-2 and RTE-3 open-domain datasets provided by the PASCAL challenge and the results were similar. We have also tested the SemEval-cQA-2016 model and had a similar drop in performance on RQE data. This could be explained by the different types of data leading to wrong internal conceptualizations of medical terms and questions in the deep neural layers. This performance drop could also be caused by the complexity of the test consumer health questions that are often composed of several subquestions, contain contextual information, and may contain misspellings and ungrammatical sentences, which makes them more difficult to process BIBREF48 . Another aspect is the semantics of the task as discussed in Section SECREF6 . The definition of textual entailment in open-domain may not quite apply to question entailment due to the strict semantics. Also the general textual entailment definitions refer only to the premise and hypothesis, while the definition of RQE for question answering relies on the relationship between the sets of answers of the compared questions.
Building a Medical QA Collection from Trusted Resources
An RQE-based QA system requires a collection of question-answer pairs to map new user questions to the existing questions with an RQE approach, rank the retrieved questions, and present their answers to the user.
Method
To construct trusted medical question-answer pairs, we crawled websites from the National Institutes of Health (cf. Section SECREF56 ). Each web page describes a specific topic (e.g. name of a disease or a drug), and often includes synonyms of the main topic that we extracted during the crawl.
We constructed hand-crafted patterns for each website to automatically generate the question-answer pairs based on the document structure and the section titles. We also annotated each question with the associated focus (topic of the web page) as well as the question type identified with the designed patterns (cf. Section SECREF36 ).
To provide additional information about the questions that could be used for diverse IR and NLP tasks, we automatically annotated the questions with the focus, its UMLS Concept Unique Identifier (CUI) and Semantic Type. We combined two methods to recognize named entities from the titles of the crawled articles and their associated UMLS CUIs: (i) exact string matching to the UMLS Metathesaurus, and (ii) MetaMap Lite BIBREF49 . We then used the UMLS Semantic Network to retrieve the associated semantic types and groups.
Question Types
The question types were derived after the manual evaluation of 1,721 consumer health questions. Our taxonomy includes 16 types about Diseases, 20 types about Drugs and one type (Information) for the other named entities such as Procedures, Medical exams and Treatments. We describe below the considered question types and examples of associated question patterns.
Question Types about Diseases (16): Information, Research (or Clinical Trial), Causes, Treatment, Prevention, Diagnosis (Exams and Tests), Prognosis, Complications, Symptoms, Inheritance, Susceptibility, Genetic changes, Frequency, Considerations, Contact a medical professional, Support Groups.
Examples:
What research (or clinical trial) is being done for DISEASE?
What is the outlook for DISEASE?
How many people are affected by DISEASE?
When to contact a medical professional about DISEASE?
Who is at risk for DISEASE?
Where to find support for people with DISEASE?
Question Types About Drugs (20): Information, Interaction with medications, Interaction with food, Interaction with herbs and supplements, Important warning, Special instructions, Brand names, How does it work, How effective is it, Indication, Contraindication, Learn more, Side effects, Emergency or overdose, Severe reaction, Forget a dose, Dietary, Why get vaccinated, Storage and disposal, Usage, Dose.
Examples:
Are there interactions between DRUG and herbs and supplements?
What important warning or information should I know about DRUG?
Are there safety concerns or special precautions about DRUG?
What is the action of DRUG and how does it work?
Who should get DRUG and why is it prescribed?
What to do in case of a severe reaction to DRUG?
Question Type for other medical entities (e.g. Procedure, Exam, Treatment): Information.
What is Coronary Artery Bypass Surgery?
What are Liver Function Tests?
Medical Resources
We used 12 trusted websites to construct a collection of question-answer pairs. For each website, we extracted the free text of each article as well as the synonyms of the article focus (topic). These resources and their brief descriptions are provided below:
National Cancer Institute (NCI) : We extracted free text from 116 articles on various cancer types (729 QA pairs). We manually restructured the content of the articles to generate complete answers (e.g. a full answer about the treatment of all stages of a specific type of cancer). Figure FIGREF54 presents examples of QA pairs generated from a NCI article.
Genetic and Rare Diseases Information Center (GARD): This resource contains information about various aspects of genetic/rare diseases. We extracted all disease question/answer pairs from 4,278 topics (5,394 QA pairs).
Genetics Home Reference (GHR): This NLM resource contains consumer-oriented information about the effects of genetic variation on human health. We extracted 1,099 articles about diseases from this resource (5,430 QA pairs).
MedlinePlus Health Topics: This portion of MedlinePlus contains information on symptoms, causes, treatment and prevention for diseases, health conditions and wellness issues. We extracted the free texts in summary sections of 981 articles (981 QA pairs).
National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) : We extracted text from 174 health information pages on diseases studied by this institute (1,192 QA pairs).
National Institute of Neurological Disorders and Stroke (NINDS): We extracted free text from 277 information pages on neurological and stroke-related diseases from this resource (1,104 QA pairs).
NIHSeniorHealth : This website contains health and wellness information for older adults. We extracted 71 articles from this resource (769 QA pairs).
National Heart, Lung, and Blood Institute (NHLBI) : We extracted text from 135 articles on diseases, tests, procedures, and other relevant topics on disorders of heart, lung, blood, and sleep (559 QA pairs).
Centers for Disease Control and Prevention (CDC) : We extracted text from 152 articles on diseases and conditions (270 QA pairs).
MedlinePlus A.D.A.M. Medical Encyclopedia: This resource contains 4,366 articles about conditions, tests, and procedures. 17,348 QA pairs were extracted from this resource. Figure FIGREF55 presents examples of QA pairs generated from A.D.A.M encyclopedia.
MedlinePlus Drugs: We extracted free text from 1,316 articles about Drugs and generated 12,889 QA pairs.
MedlinePlus Herbs and Supplements: We extracted free text from 99 articles and generated 792 QA pairs.
The final collection contains 47,457 annotated question-answer pairs about Diseases, Drugs and other named entities (e.g. Tests) extracted from these 12 trusted resources.
The Proposed Entailment-based QA System
Our goal is to generate a ranked list of answers for a given premise question A by ranking the recognized hypothesis questions $B_i$. Based on the RQE experiments above (Section SECREF27), we selected Logistic Regression trained on the clinical-RQE dataset to recognize entailed questions and rank them with their classification scores.
RQE-based QA Approach
Classifying the full QA collection for each test question is not feasible for real-time applications. Therefore, we first filter the questions with an IR method to retrieve candidate questions, then classify them as entailed (or not) by the user/test question. Based on the positive results of the combination method tested on SemEval-cQA data (Section SECREF27 ), we adopted a combination method to merge the results obtained by the search engine and the RQE scores. The answers are then combined from both methods and ranked using an aggregate score. Figure FIGREF82 presents the overall architecture of the proposed QA system. We describe each module in more details next.
Finding Similar Question Candidates
For each premise question A, we use the Terrier search engine to retrieve the N most relevant question candidates $B_i$ and then apply the RQE classifier to predict the labels for the pairs (A, $B_i$).
We indexed the questions of our QA collection without the associated answers. In order to improve the indexing and the performance of question retrieval, we also indexed the synonyms of the question focus and the triggers of the question type with each question. This choice allowed us to avoid the shortcomings of query expansion, including incorrect or irrelevant synonyms and the increased execution time. The synonyms of the question focus (topic) were extracted automatically from the QA collection. The triggers of each question type were defined manually in the question types taxonomy. Below are two examples of indexed questions from our QA collection, with the automatically added focus synonyms and question type triggers:
What are the treatments for Torticollis?
Focus: Torticollis. Question type: Treatment.
Added focus synonyms: "Spasmodic torticollis, Wry neck, Loxia, Cervical dystonia". Added question type triggers: "relieve, manage, cure, remedy, therapy".
What is the outlook for Legionnaire disease?
Focus: Legionnaire disease. Question Type: Prognosis.
Added focus synonyms: "Legionella pneumonia, Pontiac fever, Legionellosis". Added question type triggers: "prognosis, life expectancy".
The IR task consists of retrieving hypothesis questions $B_i$ relevant to the submitted question A. As fusion of IR results has shown good performance in different TREC tracks, we merge the results of the TF-IDF weighting function and the In-expB2 DFR model BIBREF50.
Let $S_1 = \{B_1, B_2, \ldots, B_N\}$ be the set of N questions retrieved by the first IR model $M_1$, and $S_2 = \{B'_1, B'_2, \ldots, B'_N\}$ be the set of N questions retrieved by the second IR model $M_2$. We merge both sets by summing the scores of each question retrieved in both the $S_1$ and $S_2$ lists, and then re-rank the hypothesis questions.
Combining IR and RQE Methods
The IR models and the RQE Logistic Regression model bring different perspectives to the search for relevant candidate questions. In particular, question entailment allows understanding the relations between the important terms, whereas the traditional IR methods identify the important terms, but will not notice if the relations are opposite. Moreover, some of the question types that the RQE classifier learns will not be deemed important terms by traditional IR and the most relevant questions will not be ranked at the top of the list.
Therefore, in our approach, when a question is submitted to the system, candidate questions are fetched using the IR models, then the RQE classifier is applied to filter out the non-entailed questions and re-rank the remaining candidates.
Specifically, we denote by C the list of question candidates $B_i$ returned by the IR system. The premise question A is then used to construct N question pairs (A, $B_i$). The RQE classifier is then applied to filter out the question pairs that are not entailed and to re-rank the remaining pairs.
More precisely, let $E \subseteq C$ be the list of selected candidate questions that have a positive entailment relation with a given premise question A. We rank E by computing a hybrid score $score(B_i)$ for each candidate question $B_i$, taking into account the score of the IR system, $score_{IR}(B_i)$, and the score of the RQE system, $score_{RQE}(B_i)$.
For each system $s \in \{IR, RQE\}$, we normalize the associated score by dividing it by the maximum score among the N candidate questions retrieved by $s$ for A:
$\widehat{score}_s(B_i) = \dfrac{score_s(B_i)}{\max_{1 \le j \le N} score_s(B_j)}$
$score(B_i) = \widehat{score}_{IR}(B_i) + \widehat{score}_{RQE}(B_i)$
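A minimal sketch of this re-ranking step is shown below; it normalizes each system's score by its maximum over the candidates passed in and aggregates the two normalized scores with a plain sum, which follows the reconstruction above and is an assumption rather than the exact published formula.

```python
def rank_entailed_candidates(candidates):
    """`candidates`: dicts with "question", "ir_score" and "rqe_score", already filtered
    to the questions the RQE classifier labeled as entailed by the premise question."""
    max_ir = max(c["ir_score"] for c in candidates)
    max_rqe = max(c["rqe_score"] for c in candidates)
    for c in candidates:
        c["score"] = c["ir_score"] / max_ir + c["rqe_score"] / max_rqe
    return sorted(candidates, key=lambda c: c["score"], reverse=True)
```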
In our experiments, we fixed the value of N to 100. This threshold was selected as a safe value for this task for the following reasons:
Our collection of 47,457 question-answer pairs was collected from only 12 NIH institutes and is unlikely to contain more than 100 occurrences of the same focus-type pair.
Each question was indexed with additional annotations for the question focus, its synonyms and the question type synonyms.
Evaluating RQE for Medical Question Answering
The objective of this evaluation is to study the effectiveness of RQE for Medical Question Answering, by comparing the answers retrieved by the hybrid entailment-based approach, the IR method and the other QA systems participating to the medical task at TREC 2017 LiveQA challenge (LiveQA-Med).
Evaluation Method
We developed an interface to perform the manual evaluation of the retrieved answers. Figure 5 presents the evaluation interface showing, for each test question, the top-10 answers of the evaluated QA method and the reference answer(s) used by the LiveQA assessors to help judge the answers retrieved by the participating systems.
We used the test questions of the medical task at TREC-2017 LiveQA BIBREF11 . These questions are randomly selected from the consumer health questions that the NLM receives daily from all over the world. The test questions cover different medical entities and have a wide list of question types such as Comparison, Diagnosis, Ingredient, Side effects and Tapering.
For a relevant comparison, we used the same judgment scores as the LiveQA Track:
Correct and Complete Answer (4)
Correct but Incomplete (3)
Incorrect but Related (2)
Incorrect (1)
We evaluated the answers returned by the IR-based method and the hybrid QA method (IR+RQE) according to the same reference answers used in LiveQA-Med. The answers were anonymized (the method names were blinded) and presented to 3 assessors: a medical doctor (Assessor A), a medical librarian (B) and a researcher in medical informatics (C). None of the assessors participated in the development of the QA methods. Assessors B and C evaluated 1,000 answers retrieved by each of the methods (IR and IR+RQE). Assessor A evaluated 2,000 answers from both methods.
Table TABREF103 presents the inter-annotator agreement (IAA) through F1 score computed by considering one of the assessors as reference. In the first evaluation, we computed the True Positives (TP) and False Positives (FP) over all ratings and the Precision and F1 score. As there are no negative labels (only true or false positives for each category), Recall is 100%. We also computed a partial IAA by grouping the "Correct and Complete Answer" and "Correct but Incomplete" ratings (as Correct), and the "Incorrect but Related" and "Incorrect" ratings (as Incorrect). The average agreement on distinguishing the Correct and Incorrect answers is 94.33% F1 score. Therefore, we used the evaluations performed by assessor A for both methods. The official results of the TREC LiveQA track relied on one assessor per question as well.
Evaluation of the first retrieved answer
We computed the measures used by TREC LiveQA challenges BIBREF51 , BIBREF11 to evaluate the first retrieved answer for each test question:
avgScore(0-3): the average score over all questions, transferring 1-4 level grades to 0-3 scores. This is the main score used to rank LiveQA runs.
succ@i+: the number of questions with score i or above ($i \in \{2, 3, 4\}$) divided by the total number of questions.
prec@i+: the number of questions with score i or above ($i \in \{2, 3, 4\}$) divided by the number of questions answered by the system.
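A minimal sketch of these three measures, assuming `grades` maps each answered test question to the 1-4 grade of its first answer (unanswered questions are simply absent from the map):

```python
def liveqa_measures(grades, total_questions):
    answered = list(grades.values())
    measures = {"avgScore": sum(g - 1 for g in answered) / total_questions}  # 1-4 -> 0-3
    for i in (2, 3, 4):
        hits = sum(1 for g in answered if g >= i)
        measures[f"succ@{i}+"] = hits / total_questions
        measures[f"prec@{i}+"] = hits / len(answered)
    return measures
```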
Table TABREF108 presents the average scores, success and precision results. The hybrid IR+RQE QA system achieved better results than the IR-based system with 0.827 average score. It also achieved a higher score than the best results achieved in the medical challenge at LiveQA'17. Evaluating the RQE system alone is not relevant, as applying RQE on the full collection for each user question is not feasible for a real-time system because of the extended execution time.
Evaluation of the top ten answers
In this evaluation, we used Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR) which are commonly used in QA to evaluate the top-10 answers for each question. We consider answers rated as “Correct and Complete Answer” or “Correct but Incomplete” as correct answers, as the test questions contain multiple subquestions while each answer in our QA collection can cover only one subquestion.
MAP is the mean of the Average Precision (AvgP) scores over all questions.
(1) $MAP = \dfrac{1}{Q} \sum_{q=1}^{Q} AvgP_q$
Q is the number of questions. $AvgP_q$ is the AvgP of the $q^{th}$ question.
$AvgP_q = \dfrac{1}{K} \sum_{k=1}^{K} \dfrac{k}{rank_k}$
K is the number of correct answers. $rank_k$ is the rank of the $k^{th}$ correct answer.
MRR is the average of the reciprocal ranks for each question. The reciprocal rank of a question is the multiplicative inverse of the rank of the first correct answer.
(2) $MRR = \dfrac{1}{Q} \sum_{q=1}^{Q} \dfrac{1}{rank_q}$
Q is the number of questions. $rank_q$ is the rank of the first correct answer for the $q^{th}$ question.
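The two measures can be computed from per-question relevance lists as in the sketch below, where `relevance` holds, for each question, the 0/1 correctness of its answers in rank order; questions with no correct answer contribute zero to both measures, which is an assumption about the handling of such cases.

```python
def map_and_mrr(relevance, k=10):
    """Compute MAP@k and MRR@k over a list of per-question 0/1 relevance lists."""
    avg_precisions, reciprocal_ranks = [], []
    for rels in relevance:
        correct_ranks = [r + 1 for r, rel in enumerate(rels[:k]) if rel]
        if correct_ranks:
            avg_precisions.append(
                sum((i + 1) / rank for i, rank in enumerate(correct_ranks)) / len(correct_ranks))
            reciprocal_ranks.append(1.0 / correct_ranks[0])
        else:
            avg_precisions.append(0.0)
            reciprocal_ranks.append(0.0)
    n = len(relevance)
    return sum(avg_precisions) / n, sum(reciprocal_ranks) / n
```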
Table TABREF113 presents the MAP@10 and MRR@10 of our QA methods. The IR+RQE system outperforms the IR-based QA system with 0.311 MAP@10 and 0.333 MRR@10.
Discussion of entailment-based QA for the medical domain
In our evaluation, we followed the same LiveQA guidelines with the highest possible rigor. In particular, we consulted with NIST assessors, who provided us with the paraphrases of the test questions that they used to judge the answers. Our IAA on the answer ratings was also high compared to related tasks, with 88.5% F1 agreement on the exact four categories and 94.3% agreement when reducing the categories to two: "Correct" and "Incorrect" answers. Our results show that RQE improves the overall performance and exceeds the best result in the medical LiveQA'17 challenge by 29.8% (relative). This performance improvement is particularly interesting for two reasons:
(a) Our answer source has only 47K question-answer pairs, whereas the LiveQA participating systems relied on much larger collections, including the World Wide Web.
(b) Our system answered at most one subquestion, whereas many LiveQA test questions had several subquestions.
The latter observation, (b), makes the hybrid IR+RQE approach even more promising as it gives it a large potential for the improvement of answer completeness.
The former observation, (a), provides another interesting insight: restricting the answer source to only reliable collections can actually improve the QA performance without losing coverage (i.e., our QA approach provided at least one answer to each test question and obtained the best relevance score).
In another observation, the assessors reported that many of the returned answers had a correct question type but a wrong focus, which indicates that including a focus recognition module to filter such wrong answers can improve further the QA performance in terms of precision. Another aspect that was reported is the repetition of the same (or similar) answer from different websites, which could be addressed by improving answer selection with inter-answer comparisons and removal of near-duplicates. Also, half of the LiveQA test questions are about Drugs, when only two of our resources are specialized in Drugs, among 12 sub-collections overall. Accordingly, the assessors noticed that the performance of the QA systems was better on questions about diseases than on questions about drugs, which suggests a need for extending our medical QA collection with more information about drugs and associated question types.
We also looked closely at the private websites used by the LiveQA-Med annotators to provide some of the reference answers for the test questions. For instance, the ConsumerLab website was useful to answer a question about the ingredients of a Drug (COENZYME Q10). Similarly, the eHealthMe website was used to answer a test question asking about interactions between two drugs (Phentermine and Dicyclomine) when no information was found in DailyMed. eHealthMe provides healthcare big data analysis and private research and studies including self-reported adverse drug effects by patients.
But the question remains on the extent to which such big data and other private websites could be used to automatically answer medical questions if information is otherwise unavailable. Unlike medical professionals, patients do not necessarily have the knowledge and tools to validate such information. An alternative approach could be to put limitations on medical QA systems in terms of the questions that can be answered (e.g. "What is my diagnosis for such symptoms") and build classifiers to detect such questions and warn the users about the dangers of looking for their answers online.
More generally, medical QA systems should follow some strict guidelines regarding the goal and background knowledge and resources of each system in order to protect the consumers from misleading or harmful information. Such guidelines could be based (i) on the source of the information such as health and medical information websites sponsored by the U.S. government, not-for-profit health or medical organizations, and medical university centers, or (ii) on conventions such as the code of conduct of the HON Foundation (HONcode) that addresses the reliability and usefulness of medical information on the Internet.
Our experiments show that limiting the number of answer sources with such guidelines is not only feasible, but it could also enhance the performance of the QA system from an information retrieval perspective.
Conclusion
In this paper, we carried out an empirical study of machine learning and deep learning methods for Recognizing Question Entailment in the medical domain using several datasets. We developed a RQE-based QA system to answer new medical questions using existing question-answer pairs. We built and shared a collection of 47K medical question-answer pairs. Our QA approach outperformed the best results on TREC-2017 LiveQA medical test questions. The proposed approach can be applied and adapted to open-domain as well as specific-domain QA. Deep learning models achieved interesting results on open-domain and clinical datasets, but obtained a lower performance on consumer health questions. We will continue investigating other network architectures including transfer learning, as well as creation of a large collection of consumer health questions for training to improve the performance of DL models. Future work also includes exploring integration of a Question Focus Recognition module to enhance candidate question retrieval, and expanding our question-answer collection.
Acknowledgements
We thank Halil Kilicoglu (NLM/NIH) for his help with the crawling and the manual evaluation and Sonya E. Shooshan (NLM/NIH) for her help with the judgment of the retrieved answers. We also thank Ellen Voorhees (NIST) for her valuable support with the TREC LiveQA evaluation.
We consider the case of question number 36 in the TREC-2017 LiveQA medical test dataset:
36. congenital diaphragmatic hernia. what are the causes of congenital diaphragmatic hernia? Can cousin marriage cause this? What kind of lung disease the baby might experience life long?
This question was answered by 5 participating runs (vs. 8 runs for other questions), and all submitted answers were wrong (scores of 1 or 2). However, our IR-based QA system retrieved one excellent answer (score 4) and our hybrid IR+RQE system provided 3 excellent answers.
A) TREC 2017 LiveQA-Med Participants' Results:
B) Our IR-based QA System:
C) Our IR+RQE QA System:
Question: What machine learning and deep learning methods are used for RQE?
Answer: Logistic Regression and neural networks.
Introduction
Neural machine translation (NMT) BIBREF0 , BIBREF1 , BIBREF2 has enabled end-to-end training of a translation system without needing to deal with word alignments, translation rules, and complicated decoding algorithms, which are the characteristics of phrase-based statistical machine translation (PBSMT) BIBREF3 . Although NMT can be significantly better than PBSMT in resource-rich scenarios, PBSMT performs better in low-resource scenarios BIBREF4 . Only by exploiting cross-lingual transfer learning techniques BIBREF5 , BIBREF6 , BIBREF7 , can the NMT performance approach PBSMT performance in low-resource scenarios.
However, such methods usually require an NMT model trained on a resource-rich language pair like French→English (parent), which is to be fine-tuned for a low-resource language pair like Uzbek→English (child). On the other hand, multilingual approaches BIBREF8 propose to train a single model to translate multiple language pairs. However, these approaches are effective only when the parent target or source language is relatively resource-rich like English (En). Furthermore, the parent and child models should be trained on similar domains; otherwise, one has to take into account an additional problem of domain adaptation BIBREF9.
In this paper, we work on a linguistically distant and thus challenging language pair, Japanese↔Russian (Ja↔Ru), which has only 12k lines of news domain parallel corpus and hence is extremely resource-poor. Furthermore, the amount of indirect in-domain parallel corpora, i.e., Ja↔En and Ru↔En, is also small. As we demonstrate in Section SECREF4 , this severely limits the performance of prominent low-resource techniques, such as multilingual modeling, back-translation, and pivot-based PBSMT. To remedy this, we propose a novel multistage fine-tuning method for NMT that combines multilingual modeling BIBREF8 and domain adaptation BIBREF9 .
We have addressed two important research questions (RQs) in the context of extremely low-resource machine translation (MT) and our explorations have derived rational contributions (CTs) as follows:
To the best of our knowledge, we are the first to perform such an extensive evaluation of extremely low-resource MT problem and propose a novel multilingual multistage fine-tuning approach involving multilingual modeling and domain adaptation to address it.
Our Japanese–Russian Setting
In this paper, we deal with Ja↔Ru news translation. This language pair is very challenging because the languages involved have completely different writing systems, phonology, morphology, grammar, and syntax. Among various domains, we experimented with translations in the news domain, considering the importance of sharing news between speakers of different languages. Moreover, the news domain is one of the most challenging tasks, due to the large presence of out-of-vocabulary (OOV) tokens and long sentences. To establish and evaluate existing methods, we also involved English as the third language. As direct parallel corpora are scarce, involving a language such as English for pivoting is quite common BIBREF10 .
There has been no clean held-out parallel data for Ja↔Ru and Ja↔En news translation. Therefore, we manually compiled development and test sets using News Commentary data as a source. Since the given Ja↔Ru and Ja↔En data share many lines on the Japanese side, we first compiled tri-text data. Then, from each line, corresponding parts across languages were manually identified, and unaligned parts were split off into a new line. Note that we have never merged two or more lines. As a result, we obtained 1,654 lines of data comprising trilingual, bilingual, and monolingual segments (mainly sentences) as summarized in Table TABREF8 . Finally, for the sake of comparability, we randomly chose 600 trilingual sentences to create a test set, and concatenated the rest of them and the bilingual sentences to form development sets.
Our manually aligned development and test sets are publicly available.
Related Work
koehn-knowles:2017:NMT showed that NMT is unable to handle low-resource language pairs as opposed to PBSMT. Transfer learning approaches BIBREF5 , BIBREF6 , BIBREF7 work well when a large helping parallel corpus is available. This requires one of the source or target languages to be English, which, in our case, is not possible. Approaches involving bi-directional NMT modeling have been shown to drastically improve low-resource translation BIBREF11 . However, like most others, this work focuses on translation from and into English.
Remaining options include (a) unsupervised MT BIBREF12 , BIBREF13 , BIBREF14 , (b) parallel sentence mining from non-parallel or comparable corpora BIBREF15 , BIBREF16 , (c) generating pseudo-parallel data BIBREF17 , and (d) MT based on pivot languages BIBREF10 . The linguistic distance between Japanese and Russian makes it extremely difficult to learn bilingual knowledge, such as bilingual lexicons and bilingual word embeddings. Unsupervised MT is thus not promising yet, due to its heavy reliance on accurate bilingual word embeddings. Neither is parallel sentence mining, due to the difficulty of obtaining accurate bilingual lexicons. Pseudo-parallel data can be used to augment existing parallel corpora for training, and previous work has reported that such data generated by so-called back-translation can substantially improve the quality of NMT. However, this approach requires base MT systems that can generate somewhat accurate translations. It is thus infeasible in our scenario, because we can obtain only a weak system which is the consequence of an extremely low-resource situation. MT based on pivot languages requires large in-domain parallel corpora involving the pivot languages. This technique is thus infeasible, because the in-domain parallel corpora for the Ja↔En and Ru↔En pairs are also extremely limited, whereas there are large parallel corpora in other domains. Section SECREF4 empirically confirms the limit of these existing approaches.
Fortunately, there are two useful transfer learning solutions using NMT: (e) multilingual modeling to incorporate multiple language pairs into a single model BIBREF8 and (f) domain adaptation to incorporate out-of-domain data BIBREF9 . In this paper, we explore a novel method involving step-wise fine-tuning to combine these two methods. By improving the translation quality in this way, we can also increase the likelihood of pseudo-parallel data being useful to further improve translation quality.
Limit of Using only In-domain Data
This section answers our first research question, [RQ1], about the translation quality that we can achieve using existing methods and in-domain parallel and monolingual data. We then use the strongest model to conduct experiments on generating and utilizing back-translated pseudo-parallel data for augmenting NMT. Our intention is to empirically identify the most effective practices as well as recognize the limitations of relying only on in-domain parallel corpora.
Data
To train MT systems among the three languages, i.e., Japanese, Russian, and English, we used all the parallel data provided by Global Voices, more specifically those available at OPUS. Table TABREF9 summarizes the size of train/development/test splits used in our experiments. The number of parallel sentences for Ja↔Ru is 12k, for Ja↔En is 47k, and for Ru↔En is 82k. Note that the three corpora are not mutually exclusive: 9k out of 12k sentences in the Ja↔Ru corpus were also included in the other two parallel corpora, associated with identical English translations. This puts a limit on the positive impact that the helping corpora can have on the translation quality.
Even when one focuses on low-resource language pairs, we often have access to larger quantities of in-domain monolingual data of each language. Such monolingual data are useful to improve quality of MT, for example, as the source of pseudo-parallel data for augmenting training data for NMT BIBREF17 and as the training data for large and smoothed language models for PBSMT BIBREF4 . Table TABREF13 summarizes the statistics on our monolingual corpora for several domains including the news domain. Note that we removed from the Global Voices monolingual corpora those sentences that are already present in the parallel corpus.
The monolingual data sources are: Wikipedia dumps (https://dumps.wikimedia.org/backup-index.html, 20180501), the WMT 2018 News Translation Task (http://www.statmt.org/wmt18/translation-task.html), the Yomiuri glossary (https://www.yomiuri.co.jp/database/glossary/), the Multitarget TED Talks (http://www.cs.jhu.edu/~kevinduh/a/multitarget-tedtalks/), and Tatoeba (http://opus.nlpl.eu/Tatoeba-v2.php).
We tokenized English and Russian sentences using tokenizer.perl of Moses BIBREF3 . To tokenize Japanese sentences, we used MeCab with the IPA dictionary. After tokenization, we eliminated duplicated sentence pairs and sentences with more than 100 tokens for all the languages.
MT Methods Examined
We began with evaluating standard MT paradigms, i.e., PBSMT BIBREF3 and NMT BIBREF1 . As for PBSMT, we also examined two advanced methods: pivot-based translation relying on a helping language BIBREF10 and induction of phrase tables from monolingual data BIBREF14 .
As for NMT, we compared two types of encoder-decoder architectures: attentional RNN-based model (RNMT) BIBREF2 and the Transformer model BIBREF18 . In addition to standard uni-directional modeling, to cope with the low-resource problem, we examined two multi-directional models: bi-directional model BIBREF11 and multi-to-multi (M2M) model BIBREF8 .
After identifying the best model, we also examined the usefulness of a data augmentation method based on back-translation BIBREF17 .
First, we built a PBSMT system for each of the six translation directions. We obtained phrase tables from parallel corpus using SyMGIZA++ with the grow-diag-final heuristics for word alignment, and Moses for phrase pair extraction. Then, we trained a bi-directional MSD (monotone, swap, and discontinuous) lexicalized reordering model. We also trained three 5-gram language models, using KenLM on the following monolingual data: (1) the target side of the parallel data, (2) the concatenation of (1) and the monolingual data from Global Voices, and (3) the concatenation of (1) and all monolingual data in the news domain in Table TABREF13 .
Subsequently, using English as the pivot language, we examined the following three types of pivot-based PBSMT systems BIBREF10 , BIBREF19 for each of Ja→Ru and Ru→Ja.
Cascade: 2-step decoding using the source-to-English and English-to-target systems.
Synthesize: Obtain a new phrase table from synthetic parallel data generated by translating the English side of the target–English training parallel data into the source language with the English-to-source system.
Triangulate: Compile a new phrase table by combining those for the source-to-English and English-to-target systems.
Among these three, triangulation is the most computationally expensive method. Although we had filtered the component phrase tables using the statistical significance pruning method BIBREF20 , triangulation can generate an enormous number of phrase pairs. To reduce the computational cost during decoding and the negative effects of potentially noisy phrase pairs, we retained for each source phrase $\bar{f}$ only the $n$-best translations $\bar{e}$ according to the forward translation probability $\phi(\bar{e}|\bar{f})$ calculated from the conditional probabilities in the component models as defined in utiyama:07. For each of the retained phrase pairs, we also calculated the backward translation probability, $\phi(\bar{f}|\bar{e})$, and lexical translation probabilities, $p_{lex}(\bar{e}|\bar{f})$ and $p_{lex}(\bar{f}|\bar{e})$, in the same manner as $\phi(\bar{e}|\bar{f})$.
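As an illustration of the triangulation step, the sketch below (not the authors' code; function and variable names are ours) marginalizes the component phrase-table probabilities over pivot phrases and keeps only the $n$-best translations per source phrase, as described above.

```python
# Minimal sketch of pivot-based triangulation: phi(t|s) = sum_p phi(t|p) * phi(p|s),
# keeping only the n-best target phrases per source phrase.
from collections import defaultdict

def triangulate(src2piv, piv2tgt, n_best=20):
    """src2piv[s][p] and piv2tgt[p][t] hold the component model probabilities."""
    table = {}
    for s, piv_probs in src2piv.items():
        scores = defaultdict(float)
        for p, prob_ps in piv_probs.items():
            for t, prob_tp in piv2tgt.get(p, {}).items():
                scores[t] += prob_tp * prob_ps
        table[s] = dict(sorted(scores.items(), key=lambda x: -x[1])[:n_best])
    return table

# Toy example: Japanese -> English (pivot) -> Russian
src2piv = {"inu": {"dog": 0.9, "hound": 0.1}}
piv2tgt = {"dog": {"sobaka": 0.8}, "hound": {"gonchaya": 0.7}}
print(triangulate(src2piv, piv2tgt, n_best=1))  # {'inu': {'sobaka': 0.72}}
```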
We also investigated the utility of recent advances in unsupervised MT. Even though we began with a publicly available implementation of unsupervised PBSMT BIBREF13 , it crashed for unknown reasons. We therefore followed another method described in marie:usmt-unmt. Instead of short $n$-grams BIBREF12 , BIBREF13 , we collected a set of phrases in Japanese and Russian from the respective monolingual data using the word2phrase algorithm BIBREF21 , as in marie:usmt-unmt. To reduce the complexity, we used 10M randomly selected monolingual sentences, and the 300k most frequent phrases made of words among the 300k most frequent words. For each source phrase $\bar{f}$, we selected the 300-best target phrases $\bar{e}$ according to the translation probability as in D18-1549: $\phi(\bar{e}|\bar{f}) = \frac{\exp(\cos(E(\bar{e}), E(\bar{f}))/\tau)}{\sum_{\bar{e}'} \exp(\cos(E(\bar{e}'), E(\bar{f}))/\tau)}$, where $E(\cdot)$ stands for the bilingual embedding of a given phrase, obtained through averaging bilingual embeddings of its constituent words learned from the two monolingual data using fastText and vecmap. For each of the retained phrase pairs, $\phi(\bar{f}|\bar{e})$ was computed analogously. We also computed lexical translation probabilities relying on those learned from the given small parallel corpus.
Up to four phrase tables were jointly exploited by the multiple decoding path ability of Moses. Weights for the features were tuned using KB-MIRA BIBREF22 on the development set; we took the best weights after 15 iterations. Two hyper-parameters, namely, the number $n$ of pivot-based phrase pairs per source phrase and the distortion limit $d$, were determined by a grid search over $n$ and $d$. In contrast, we used predetermined hyper-parameters for phrase table induction from monolingual data, following the convention: 200 for the dimension of word and phrase embeddings, with the remaining values as stated above.
We used the open-source implementation of the RNMT and the Transformer models in tensor2tensor. A uni-directional model for each of the six translation directions was trained on the corresponding parallel corpus. Bi-directional and M2M models were realized by adding an artificial token that specifies the target language to the beginning of each source sentence and shuffling the entire training data BIBREF8 .
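A minimal sketch of this data preparation step is shown below; the target-language token format `<2xx>` and the toy sentences are illustrative assumptions, not the exact convention used in the experiments.

```python
# Prepend an artificial token naming the target language to each source
# sentence, merge the corpora, and shuffle the entire training data.
import random

def tag_pairs(pairs, tgt_lang):
    # pairs: list of (source_sentence, target_sentence) tuples
    return [(f"<2{tgt_lang}> {src}", tgt) for src, tgt in pairs]

ja_en = [("猫が好きです", "I like cats .")]
ru_en = [("Я люблю кошек", "I like cats .")]
en_ja = [("I like cats .", "猫が好きです")]

merged = tag_pairs(ja_en, "en") + tag_pairs(ru_en, "en") + tag_pairs(en_ja, "ja")
random.shuffle(merged)
for src, tgt in merged:
    print(src, "|||", tgt)
```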
Table TABREF22 contains some specific hyper-parameters for our baseline NMT models. The hyper-parameters not mentioned in this table used the default values in tensor2tensor. For M2M systems, we over-sampled Ja↔Ru and Ja↔En training data so that their sizes match the largest Ru↔En data. To reduce the number of unknown words, we used tensor2tensor's internal sub-word segmentation mechanism. Since we work in a low-resource setting, we used shared sub-word vocabularies of size 16k for the uni- and bi-directional models and 32k for the M2M models. The number of training iterations was determined by early-stopping: we evaluated our models on the development set every 1,000 updates, and stopped training if the BLEU score for the development set was not improved for 10,000 updates (10 check-points). Note that the development set was created by concatenating those for the individual translation directions without any over-sampling.
Having trained the models, we averaged the last 10 check-points and decoded the test sets with a beam size of 4 and a length penalty which was tuned by a linear search on the BLEU score for the development set.
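The checkpoint averaging mentioned above can be sketched as follows; this is a framework-agnostic illustration over plain parameter dictionaries, not the toolkit utility that was presumably used.

```python
# Illustrative sketch of checkpoint averaging: element-wise mean of the
# last k saved parameter sets, used in place of the final checkpoint.
def average_checkpoints(checkpoints):
    """checkpoints: list of dicts mapping parameter name -> list of floats."""
    k = len(checkpoints)
    return {name: [sum(c[name][i] for c in checkpoints) / k
                   for i in range(len(checkpoints[0][name]))]
            for name in checkpoints[0]}

last_two = [{"w": [0.2, 0.4]}, {"w": [0.4, 0.6]}]
print(average_checkpoints(last_two))  # {'w': [0.3, 0.5]}
```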
Similarly to PBSMT, we also evaluated “Cascade” and “Synthesize” methods with uni-directional NMT models.
Results
We evaluated MT models using case-sensitive and tokenized BLEU BIBREF23 on test sets, using Moses's multi-bleu.perl. Statistical significance of the difference of BLEU scores was tested by Moses's bootstrap-hypothesis-difference-significance.pl.
Tables TABREF27 and TABREF31 show BLEU scores of all the models, except the NMT systems augmented with back-translations. Whereas some models achieved reasonable BLEU scores for Ja↔En and Ru↔En translation, all the results for Ja↔Ru, which is our main concern, were abysmal.
Among the NMT models, Transformer models (b*) were proven to be better than RNMT models (a*). RNMT models could not even outperform the uni-directional PBSMT models (c1). M2M models (a3) and (b3) outperformed their corresponding uni- and bi-directional models in most cases. It is worth noting that in this extremely low-resource scenario, BLEU scores of the M2M RNMT model for the largest language pair, i.e., Ru↔En, were lower than those of the uni- and bi-directional RNMT models as in TACL1081. In contrast, with the M2M Transformer model, Ru↔En also benefited from multilingualism.
Standard PBSMT models (c1) achieved higher BLEU scores than uni-directional NMT models (a1) and (b1), as reported by koehn-knowles:2017:NMT, whereas they underperformed the M2M Transformer NMT model (b3). As shown in Table TABREF31 , pivot-based PBSMT systems always achieved higher BLEU scores than (c1). The best model with three phrase tables, labeled “Synthesize / Triangulate / Gold,” brought visible BLEU gains with a substantial reduction of OOV tokens (3047→1180 for Ja→Ru, 4463→1812 for Ru→Ja). However, further extension with phrase tables induced from monolingual data did not push the limit, despite their high coverage; only 336 and 677 OOV tokens were left for the two translation directions, respectively. This is due to the poor quality of the bilingual word embeddings used to extract the phrase table, as envisaged in Section SECREF3 .
None of the pivot-based approaches with uni-directional NMT models could even remotely rival the M2M Transformer NMT model (b3).
Table TABREF46 shows the results of our multistage fine-tuning, where the IDs of each row refer to those described in Section SECREF41 . First of all, the final models of our multistage fine-tuning, i.e., V and VII, achieved significantly higher BLEU scores than (b3) in Table TABREF27 , a weak baseline without using any monolingual data, and #10 in Table TABREF33 , a strong baseline established with monolingual data.
The performance of the initial model (I) depends on the language pair. For the Ja↔Ru pair, it cannot achieve a minimum level of quality, since the model has never seen parallel data for this pair. The performance on the Ja↔En pair was much lower than the two baseline models, reflecting the crucial mismatch between training and testing domains. In contrast, the Ru↔En pair benefited the most and achieved surprisingly high BLEU scores. The reason might be the proximity of the out-of-domain training data and the in-domain test data.
The first fine-tuning stage significantly pushed up the translation quality for the Ja↔En and Ru↔En pairs, in both cases with fine-tuning (II) and mixed fine-tuning (III). At this stage, both models performed only poorly for the Ja↔Ru pair as they had not yet seen Ja↔Ru parallel data. Neither model had a consistent advantage over the other.
When these models were further fine-tuned only on the in-domain Ja↔Ru parallel data (IV and VI), we obtained translations of better quality than the two baselines for the Ja↔Ru pair. However, as a result of complete ignorance of the Ja↔En and Ru↔En pairs, the models only produced translations of poor quality for these language pairs. In contrast, mixed fine-tuning for the second fine-tuning stage (V and VII) resulted in consistently better models than conventional fine-tuning (IV and VI), irrespective of the choice at the first stage, thanks to the gradual shift of parameters realized by in-domain Ja↔En and Ru↔En parallel data. Unfortunately, the translation quality for the Ja↔En and Ru↔En pairs sometimes degraded from II and III. Nevertheless, the BLEU scores still retain a large margin against the two baselines.
The last three rows in Table TABREF46 present BLEU scores obtained by the methods with fewer fine-tuning steps. The most naive model I', trained from scratch on the balanced mixture of all five types of corpora, and the model II', obtained through a single-step conventional fine-tuning of I on all the in-domain data, achieved BLEU scores consistently worse than VII. In contrast, when we merged our two fine-tuning steps into a single mixed fine-tuning on I, we obtained a model III' whose BLEU scores for the Ja↔Ru pair are slightly better than those of VII. Nevertheless, they are still comparable to those of VII, and the BLEU scores for the other two language pairs are much lower than those of VII. As such, we conclude that our multistage fine-tuning leads to a more robust in-domain multilingual model.
Augmentation with Back-translation
Given that the M2M Transformer NMT model (b3) achieved best results for most of the translation directions and competitive results for the rest, we further explored it through back-translation.
We examined the utility of pseudo-parallel data for all the six translation directions, unlike the work of lakew2017improving and lakew2018comparison, which concentrate only on the zero-shot language pair, and the work of W18-2710, which compares only uni- or bi-directional models. We investigated whether each translation direction in M2M models will benefit from pseudo-parallel data and if so, what kind of improvement takes place.
First, we selected sentences to be back-translated from in-domain monolingual data (Table TABREF13 ), relying on the score proposed by moore:intelligent via the following procedure.
For each language, train two 4-gram language models, using KenLM: an in-domain one on all the Global Voices data, i.e., both parallel and monolingual data, and a general-domain one on the concatenation of Global Voices, IWSLT, and Tatoeba data.
For each language, discard sentences containing OOVs according to the in-domain language model.
For each translation direction, select the $n$-best monolingual sentences in the news domain, according to the difference between cross-entropy scores given by the in-domain and general-domain language models.
Whereas W18-2710 exploited monolingual data much larger than the parallel data, we maintained a 1:1 ratio between them BIBREF8 , setting $n$ to the number of lines of parallel data of the given language pair.
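A minimal sketch of this selection procedure is given below; it assumes two already-trained KenLM models and the kenlm Python bindings, and the per-token normalization is our own simplification.

```python
# Rank monolingual sentences by the cross-entropy difference between an
# in-domain and a general-domain language model, and keep the n best.
import kenlm

def select_sentences(sentences, in_lm_path, gen_lm_path, n):
    in_lm, gen_lm = kenlm.Model(in_lm_path), kenlm.Model(gen_lm_path)
    scored = []
    for s in sentences:
        toks = s.split()
        if not toks:
            continue
        # kenlm scores are log10 probabilities; negate and normalize per token
        h_in = -in_lm.score(s, bos=True, eos=True) / len(toks)
        h_gen = -gen_lm.score(s, bos=True, eos=True) / len(toks)
        scored.append((h_in - h_gen, s))   # lower = closer to the in-domain LM
    scored.sort(key=lambda x: x[0])
    return [s for _, s in scored[:n]]
```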
Selected monolingual sentences were then translated using the M2M Transformer NMT model (b3) to compose pseudo-parallel data. Then, the pseudo-parallel data were enlarged by over-sampling as summarized in Table TABREF32 . Finally, new NMT models were trained on the concatenation of the original parallel and pseudo-parallel data from scratch in the same manner as the previous NMT models with the same hyper-parameters.
Table TABREF33 shows the BLEU scores achieved by several reasonable combinations of six-way pseudo-parallel data. We observed that the use of all six-way pseudo-parallel data (#10) significantly improved the base model for all the translation directions, except En→Ru. A translation direction often benefited when the pseudo-parallel data for that specific direction was used.
Summary
We have evaluated an extensive variation of MT models that rely only on in-domain parallel and monolingual data. However, the resulting BLEU scores for the Ja→Ru and Ru→Ja tasks do not exceed 10 BLEU points, implying the inherent limitation of the in-domain data as well as the difficulty of these translation directions.
Exploiting Large Out-of-Domain Data Involving a Helping Language
The limitation of relying only on in-domain data demonstrated in Section SECREF4 motivates us to explore other types of parallel data. As raised in our second research question, [RQ2], we considered effective ways to exploit out-of-domain data.
According to language pair and domain, parallel data can be classified into four categories in Table TABREF40 . Among all the categories, out-of-domain data for the language pair of interest have been exploited in the domain adaptation scenarios (C→A) BIBREF9 . However, for Ja↔Ru, no out-of-domain data is available. To exploit out-of-domain parallel data for the Ja↔En and Ru↔En pairs instead, we propose a multistage fine-tuning method, which combines two types of transfer learning, i.e., domain adaptation for Ja↔En and Ru↔En (D→B) and multilingual transfer (B→A), relying on the M2M model examined in Section SECREF4 . We also examined the utility of fine-tuning for iteratively generating and using pseudo-parallel data.
Multistage Fine-tuning
Simply using NMT systems trained on out-of-domain data for in-domain translation is known to perform badly. In order to effectively use large-scale out-of-domain data for our extremely low-resource task, we propose to perform domain adaptation through either (a) conventional fine-tuning, where an NMT system trained on out-of-domain data is fine-tuned only on in-domain data, or (b) mixed fine-tuning BIBREF9 , where a pre-trained out-of-domain NMT system is fine-tuned using a mixture of in-domain and out-of-domain data. The same options are available for transferring from Ja↔En and Ru↔En to Ja↔Ru.
We inevitably involve two types of transfer learning, i.e., domain adaptation for Ja↔En and Ru↔En and multilingual transfer for the Ja↔Ru pair. Among several conceivable options for managing these two problems, we examined the following multistage fine-tuning.
Pre-train a multilingual model only on the Ja↔En and Ru↔En out-of-domain parallel data (I), where the vocabulary of the model is determined on the basis of the in-domain parallel data in the same manner as the M2M NMT models examined in Section SECREF4 .
Fine-tune the pre-trained model (I) on the in-domain Ja↔En and Ru↔En parallel data (fine-tuning, II) or on the mixture of in-domain and out-of-domain Ja↔En and Ru↔En parallel data (mixed fine-tuning, III).
Further fine-tune the models (each of II and III) for Ja↔Ru on in-domain parallel data for this language pair only (fine-tuning, IV and VI) or on all the in-domain parallel data (mixed fine-tuning, V and VII).
We chose this way for the following two reasons. First, we need to strike a balance between the sizes of several different parallel corpora. The other reason is division of labor: we assume that solving each sub-problem one by one should enable a gradual shift of parameters.
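The staging can be summarized with the sketch below; the corpus file names are placeholders, and only the mixed-fine-tuning path (I, III, VII, plus II and V for comparison) is shown.

```python
# Multistage fine-tuning schedule: I -> (II or III) -> (V or VII).
# Mixed fine-tuning stages keep the previous stage's data in the mixture.
STAGES = {
    "I":   {"init": None,  "data": ["ood.ja-en", "ood.ru-en"]},
    "II":  {"init": "I",   "data": ["gv.ja-en", "gv.ru-en"]},
    "III": {"init": "I",   "data": ["ood.ja-en", "ood.ru-en", "gv.ja-en", "gv.ru-en"]},
    "V":   {"init": "II",  "data": ["gv.ja-ru", "gv.ja-en", "gv.ru-en"]},
    "VII": {"init": "III", "data": ["gv.ja-ru", "gv.ja-en", "gv.ru-en"]},
}

for name, stage in STAGES.items():
    print(f"train {name}: initialize from {stage['init']}, corpora = {stage['data']}")
```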
Data Selection
As an additional large-scale out-of-domain parallel data for Ja↔En, we used the cleanest 1.5M sentences from the Asian Scientific Paper Excerpt Corpus (ASPEC) BIBREF24 . As for Ru↔En, we used the UN and Yandex corpora released for the WMT 2018 News Translation Task. We retained Ru↔En sentence pairs that contain at least one OOV token in both sides, according to the in-domain language model trained in Section SECREF34 . Table TABREF45 summarizes the statistics on the remaining out-of-domain parallel data.
Further Augmentation with Back-translation
Having obtained a better model, we examined again the utility of back-translation. More precisely, we investigated (a) whether the pseudo-parallel data generated by an improved NMT model leads to a further improvement, and (b) whether one more stage of fine-tuning on the mixture of original parallel and pseudo-parallel data will result in a model better than training a new model from scratch as examined in Section SECREF34 .
Given an NMT model, we first generated six-way pseudo-parallel data by translating monolingual data. For the sake of comparability, we used the identical monolingual sentences sampled in Section SECREF34 . Then, we further fine-tuned the given model on the mixture of the generated pseudo-parallel data and the original parallel data, following the same over-sampling procedure in Section SECREF34 . We repeated these steps five times.
Table TABREF51 shows the results. “new #10” in the second row indicates an M2M Transformer model trained from scratch on the mixture of six-way pseudo-parallel data generated by VII and the original parallel data. It achieved higher BLEU scores than #10 in Table TABREF33 thanks to the pseudo-parallel data of better quality, but underperformed the base NMT model VII. In contrast, our fine-tuned model VIII successfully surpassed VII, and one more iteration (IX) further improved BLEU scores for all translation directions, except Ru→En. Although further iterations did not necessarily gain BLEU scores, we came to a much higher plateau compared to the results in Section SECREF4 .
Conclusion
In this paper, we challenged the difficult task of Ja↔Ru news domain translation in an extremely low-resource setting. We empirically confirmed the limited success of well-established solutions when restricted to in-domain data. Then, to incorporate out-of-domain data, we proposed a multilingual multistage fine-tuning approach and observed that it substantially improves Ja↔Ru translation by over 3.7 BLEU points compared to a strong baseline, as summarized in Table TABREF53 . This paper contains an empirical comparison of several existing approaches and hence we hope that our paper can act as a guideline to researchers attempting to tackle extremely low-resource translation.
In the future, we plan to confirm further fine-tuning for each of specific translation directions. We will also explore the way to exploit out-of-domain pseudo-parallel data, better domain-adaptation approaches, and additional challenging language pairs.
Acknowledgments
This work was carried out when Aizhan Imankulova was taking up an internship at NICT, Japan. We would like to thank the reviewers for their insightful comments. A part of this work was conducted under the program “Promotion of Global Communications Plan: Research, Development, and Social Demonstration of Multilingual Speech Translation Technology” of the Ministry of Internal Affairs and Communications (MIC), Japan.
|
what was the baseline?
|
pivot-based translation relying on a helping language BIBREF10, nduction of phrase tables from monolingual data BIBREF14 , attentional RNN-based model (RNMT) BIBREF2, Transformer model BIBREF18, bi-directional model BIBREF11, multi-to-multi (M2M) model BIBREF8, back-translation BIBREF17
| 4,542
|
qasper
|
8k
|
Introduction
BioASQ is a biomedical document classification, document retrieval, and question answering competition, currently in its seventh year. We provide an overview of our submissions to the semantic question answering task (7b, Phase B) of BioASQ 7 (except for the 'ideal answer' test, in which we did not participate this year). In this task systems are provided with biomedical questions and are required to submit ideal and exact answers to those questions. We used a BioBERT-based system BIBREF0 (see also Bidirectional Encoder Representations from Transformers (BERT) BIBREF1) and fine tuned it for the biomedical question answering task. Our system scored near the top for factoid questions in all batches of the challenge. More specifically, in the third test batch set, our system achieved the highest MRR score for the Factoid Question Answering task. Also, for the List-type question answering task our system achieved the highest recall score in the fourth test batch set. Along with our detailed approach, we present the results for our submissions, highlight identified downsides of our current approach, and discuss ways to improve them in our future experiments. In the last test batch results we placed 4th for List-type questions and 3rd for Factoid-type questions.
The QA task is organized in two phases. Phase A deals with retrieval of the relevant documents, snippets, concepts, and RDF triples, and Phase B deals with exact and ideal answer generation (an ideal answer is a paragraph-sized summary of snippets). Exact answer generation is required for factoid, list, and yes/no type questions.
BioASQ organizers provide the training and testing data. The training data consists of questions, gold standard documents, snippets, concepts, and ideal answers (which we did not use in this paper, but we used last year BIBREF2). The test data is split between Phases A and B. The Phase A dataset consists of the questions, unique ids, and question types. The Phase B dataset consists of the questions, gold standard documents, snippets, unique ids, and question types. Exact answers for factoid type questions are evaluated using strict accuracy (the top answer), lenient accuracy (the top 5 answers), and MRR (Mean Reciprocal Rank), which takes into account the ranks of returned answers. Answers for list type questions are evaluated using precision, recall, and F-measure.
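To make these measures concrete, the following sketch (ours, simplified to a single gold answer per question, whereas BioASQ allows synonym lists) computes strict accuracy, lenient accuracy, and MRR over the top 5 returned answers:

```python
# Strict accuracy: top answer correct; lenient accuracy: gold among top 5;
# MRR: mean of 1/rank of the gold answer within the top 5 (0 if absent).
def evaluate_factoid(predictions, gold):
    """predictions: {qid: ranked answer list}; gold: {qid: correct answer}."""
    strict = lenient = rr = 0.0
    for qid, answers in predictions.items():
        top5 = [a.lower() for a in answers[:5]]
        g = gold[qid].lower()
        if top5 and top5[0] == g:
            strict += 1
        if g in top5:
            lenient += 1
            rr += 1.0 / (top5.index(g) + 1)
    n = len(predictions)
    return strict / n, lenient / n, rr / n

preds = {"q1": ["liver", "kidney"], "q2": ["adipocyte", "neuron"]}
gold = {"q1": "liver", "q2": "neuron"}
print(evaluate_factoid(preds, gold))  # (0.5, 1.0, 0.75)
```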
Related Work ::: BioAsq
Sharma et al. BIBREF3 describe a system with a two-stage process for factoid and list type question answering. Their system extracts relevant entities and then runs a supervised classifier to rank the entities. Wiese et al. BIBREF4 propose a neural network based model for the Factoid and List-type question answering task. The model is based on FastQA and predicts the answer span in the passage for a given question. The model is trained on the SQuAD data set and fine tuned on the BioASQ data. Dimitriadis et al. BIBREF5 proposed a two-stage process for the Factoid question answering task. Their system uses general purpose tools such as MetaMap and BeCAS to identify candidate sentences. These candidate sentences are represented in the form of features, and are then ranked by a binary classifier. The classifier is trained on candidate sentences extracted from relevant questions, snippets and correct answers from the BioASQ challenge. For the factoid question answering task, the highest MRR achieved in the 6th edition of the BioASQ competition was 0.4325. Our system is a neural network model based on contextual word embeddings BIBREF1 and achieved an MRR score of 0.6103 in one of the test batches for the Factoid Question Answering task.
Related Work ::: A minimum background on BERT
BERT, which stands for "Bidirectional Encoder Representations from Transformers" BIBREF1, is a contextual word embedding model. Given a sentence as an input, contextual embeddings for its words are returned. The BERT model was designed so it can be fine tuned for 11 different tasks BIBREF1, including question answering tasks. For a question answering task, the question and paragraph (context) are given as an input. A BERT standard is that question text and paragraph text are separated by a separator [SEP]. BERT question-answering fine tuning involves adding a softmax layer. The softmax layer takes contextual word embeddings from BERT as input and learns to identify the answer span present in the paragraph (context). This process is represented in Figure FIGREF4.
BERT was originally trained to perform tasks such as language model creation using masked words and next-sentence-prediction. In other words BERT weights are learned such that context is used in building the representation of the word, not just as a loss function to help learn a context-independent representation. For detailed understanding of BERT Architecture, please refer to the original BERT paper BIBREF1.
Related Work ::: A minimum background on BERT ::: Comparison of Word Embeddings and Contextual Word Embeddings
A ‘word embedding’ is a learned representation of a word in the form of a vector, where words that have similar meaning have a similar vector representation. Consider a word embedding model 'word2vec' BIBREF6 trained on a corpus. Word embeddings generated from the model are context independent, that is, the same embedding is returned regardless of where the word appears in a sentence and regardless of e.g. the sentiment of the sentence. However, contextual word embedding models like BERT also take the context of the word into consideration.
Related Work ::: Comparison of BERT and Bio-BERT
‘BERT’ and BioBERT are very similar in terms of architecture. The difference is that ‘BERT’ is pretrained on Wikipedia articles, whereas the BioBERT version used in our experiments is pretrained on Wikipedia, PMC and PubMed articles. Therefore the BioBERT model is expected to perform well with biomedical text, in terms of generating contextual word embeddings.
The BioBERT model used in our experiments is based on the BERT-Base architecture; BERT-Base has 12 transformer layers whereas BERT-Large has 24 transformer layers. Moreover, the contextual word embedding vector size is 768 for BERT-Base and larger for BERT-Large. According to BIBREF1, BERT-Large fine-tuned on the SQuAD 1.1 question answering data BIBREF7 can achieve an F1 score of 90.9 for the question answering task, whereas BERT-Base fine-tuned on the same SQuAD question answering data BIBREF7 could achieve an F1 score of 88.5. One downside of the current version of BioBERT is that its word-piece vocabulary is the same as that of the original BERT model; as a result, the word-piece vocabulary does not include biomedical jargon. Lee et al. BIBREF0 created BioBERT using the same pre-trained BERT released by Google, and hence the same word-piece vocabulary (vocab.txt). Modifying the word-piece vocabulary (vocab.txt) at this stage would lose the original compatibility with ‘BERT’, hence it is left unmodified.
In our future work we would like to build a pre-trained ‘BERT’ model from scratch. We would pretrain the model with a biomedical corpus (PubMed, ‘PMC’) and Wikipedia. Doing so would give us scope to create a word-piece vocabulary that includes biomedical jargon, and the model may perform better with biomedical jargon included in the word-piece vocabulary. We will consider this scenario in the future, or wait for the next version of BioBERT.
Experiments: Factoid Question Answering Task
For the Factoid Question Answering task, we fine tuned BioBERT BIBREF0 with question answering data and added new features. Fig. FIGREF4 shows the architecture of BioBERT fine tuned for question answering tasks: the input to BioBERT is word-tokenized embeddings for the question and the paragraph (Context). As per the ‘BERT’ BIBREF1 standards, tokens ‘[CLS]’ and ‘[SEP]’ are appended to the tokenized input as illustrated in the figure. The resulting model has a softmax layer formed for predicting answer span indices in the given paragraph (Context). On test data, the fine tuned model generates $n$-best predictions for each question. For a question, $n$-best means that $n$ answers are returned as possible answers in decreasing order of confidence. The variable $n$ is configurable. In our paper, any further mentions of ‘answer returned by the model’ correspond to the top answer returned by the model.
Experiments: Factoid Question Answering Task ::: Setup
BioASQ provides the training data. This data is based on previous BioASQ competitions. The training data we have considered is the aggregate of all training data sets up to the 5th edition of the BioASQ competition. We cleaned the data, that is, question-answering instances without answers were removed, leaving a total of 530 question-answer pairs. The data was split into train and test sets in the ratio of 94 to 6; that is, 495 for training and 35 for testing.
The original data format was converted to the BERT/BioBERT format, where BioBERT expects the ‘start_index’ of the actual answer. The ‘start_index’ corresponds to the index of the answer text present in the paragraph/Context. For finding the ‘start_index’ we used the built-in python function find(). The function returns the lowest index of the actual answer present in the context (paragraph). If the answer is not found, ‘-1’ is returned as the index. The efficient way of finding the ‘start_index’ is: if the paragraph (Context) has multiple instances of the answer text, then the ‘start_index’ of the answer should be that instance of the answer text whose surrounding context actually matches what is asked in the question.
Example (Question, Answer and Paragraph from BIBREF8):
Question: Which drug should be used as an antidote in benzodiazepine overdose?
Answer: 'Flumazenil'
Paragraph(context):
"Flumazenil use in benzodiazepine overdose in the UK: a retrospective survey of NPIS data. OBJECTIVE: Benzodiazepine (BZD) overdose (OD) continues to cause significant morbidity and mortality in the UK. Flumazenil is an effective antidote but there is a risk of seizures, particularly in those who have co-ingested tricyclic antidepressants. A study was undertaken to examine the frequency of use, safety and efficacy of flumazenil in the management of BZD OD in the UK. METHODS: A 2-year retrospective cohort study was performed of all enquiries to the UK National Poisons Information Service involving BZD OD. RESULTS: Flumazenil was administered to 80 patients in 4504 BZD-related enquiries, 68 of whom did not have ventilatory failure or had recognised contraindications to flumazenil. Factors associated with flumazenil use were increased age, severe poisoning and ventilatory failure. Co-ingestion of tricyclic antidepressants and chronic obstructive pulmonary disease did not influence flumazenil administration. Seizure frequency in patients not treated with flumazenil was 0.3%".
The actual answer is 'Flumazenil', but there are multiple instances of the word 'Flumazenil'. The efficient way to identify the start index for 'Flumazenil' (the answer) is to find the particular instance of the word 'Flumazenil' that matches the context of the question. In the above example, the instance of 'Flumazenil' in the sentence 'Flumazenil is an effective antidote ...' is the one that matches the question's context. Unfortunately, we could not identify readily available tools that can achieve this goal. In our future work, we look forward to handling these scenarios effectively.
Note: The creators of 'SQuAD' BIBREF7 have handled the task of identifying answer's start_index effectively. But 'SQuAD' data set is much more general and does not include biomedical question answering data.
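A sketch of this conversion step is given below; naive_start_index reflects what we actually did with find(), while context_aware_start_index is only a hypothetical illustration of the improvement discussed above (the window size and overlap heuristic are our assumptions).

```python
# Convert (question, answer, context) triples to a SQuAD-style start index.
def naive_start_index(context, answer):
    return context.find(answer)              # -1 if the answer is absent

def context_aware_start_index(context, answer, question, window=60):
    """Pick the occurrence whose surrounding text overlaps most with the question."""
    q_words = set(question.lower().split())
    best, best_overlap = -1, -1
    start = context.find(answer)
    while start != -1:
        snippet = context[max(0, start - window): start + len(answer) + window]
        overlap = len(q_words & set(snippet.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = start, overlap
        start = context.find(answer, start + 1)
    return best
```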
Experiments: Factoid Question Answering Task ::: Training and error analysis
During our training with the BioASQ data, the learning rate was set to 3e-5, as mentioned in the BioBERT paper BIBREF0. We started training the model with the 495 available training instances and 35 test instances, setting the number of epochs to 50. After training with these hyper-parameters, training accuracy (exact match) was 99.3% (overfitting) and testing accuracy was only 4%. In the next iteration we reduced the number of epochs to 25; training accuracy dropped to 98.5% and test accuracy moved to 5%. We further reduced the number of epochs to 15, and the resulting training accuracy was 70% and test accuracy 15%. In the next iteration we set the number of epochs to 12 and achieved a training accuracy of 57.7% and test accuracy of 23.3%. We repeated the experiment with 11 epochs and found the training accuracy to be 57.7% and the test accuracy about the same, 22%. In the next iteration we set the number of epochs to 9 and found a training accuracy of 48% and test accuracy of 15%. Hence the optimal number of epochs was taken to be 12.
During our error analysis we found that on test data, the model tends to return text from the beginning of the context (paragraph) as the answer. On analysing the training data, we found that there are 120 (out of 495) question answering instances with start_index 0, meaning that about 25% of the training instances have the first word(s) of the context (paragraph) as the answer. We removed 70% of those instances in order to make the training data more balanced. In the new training data set we were left with 411 question answering instances. This time we got the highest test accuracy of 26% at 11 epochs. We submitted our results for BioASQ test batch-2, got a strict accuracy of 32%, and our system stood in 2nd place. Initially, the hyper-parameter 'batch size' was set to 400. Later it was tuned to 32. Although accuracy (exact answer match) remained at 26%, the model generated concise and better answers at batch size 32, that is, wrong answers were close to the expected answer in a good number of cases.
Example.(from BIBREF8)
Question: Which mutated gene causes Chediak Higashi Syndrome?
Exact Answer: ‘lysosomal trafficking regulator gene’.
The answer returned by a model trained at ‘400’ batch size is ‘Autosomal-recessive complicated spastic paraplegia with a novel lysosomal trafficking regulator’, and from the one trained at ‘32’ batch size is ‘lysosomal trafficking regulator’.
In further experiments, we fine tuned the BioBERT model with both the ‘SQuAD’ dataset (version 2.0) and the BioASQ training data. For training on ‘SQuAD’, the hyper-parameters learning rate and number of epochs were set to ‘3e-3’ and ‘3’ respectively, as mentioned in the paper BIBREF1. The test accuracy of the model boosted to 44%. In one more experiment we trained the model only on the ‘SQuAD’ dataset; this time the test accuracy of the model moved to 47%. The reason the model did not perform up to the mark when trained with ‘SQuAD’ alongside the BioASQ data could be that in the formatted BioASQ data the start_index for the answer is not accurate, which affected the overall accuracy.
Our Systems and Their Performance on Factoid Questions
We have experimented with several systems and their variations, e.g. created by training with specific additional features (see next subsection). Here is their list and short descriptions. Unfortunately we did not pay attention to naming, and the systems evolved between test batches, so the overall picture can only be understood by looking at the details.
When we started the experiments our objective was to see whether BioBERT and entailment-based techniques can provide value in the context of biomedical question answering. The answer to both questions was a yes, qualified by many examples clearly showing the limitations of both methods. Therefore we tried to address some of these limitations using feature engineering, with mixed results: some clear errors got corrected and new errors got introduced, without overall improvement, but convincing us that in future experiments it might be worth trying feature engineering again, especially if more training data were available.
Overall we experimented with several approaches with the following aspects of the systems changing between batches, that is being absent or present:
training on BioAsq data vs. training on SQuAD
using the BioAsq snippets for context vs. using the documents from the provided URLs for context
adding or not the LAT, i.e. lexical answer type, feature (see BIBREF9, BIBREF10 and an explanation in the subsection just below).
For Yes/No questions (only) we experimented with the entailment methods.
We will discuss the performance of these models below and in Section 6. But before we do that, let us discuss a feature engineering experiment which eventually produced mixed results, but where we feel it is potentially useful in future experiments.
Our Systems and Their Performance on Factoid Questions ::: LAT Feature considered and its impact (slightly negative)
During error analysis we found that for some cases, answer being returned by the model is far away from what it is being asked in the Question.
Example: (from BIBREF8)
Question: Hy's law measures failure of which organ?
Actual Answer: ‘Liver’.
The answer returned by one of our models was ‘alanine aminotransferase’, which is an enzyme. The model returns an enzyme, when the question asked for the organ name. To address this type of error, we decided to try the concepts of ‘Lexical Answer Type’ (LAT) and Focus Word, which were used in IBM Watson, see BIBREF11 for an overview; BIBREF10 for technical details, and BIBREF9 for details on question analysis. In an example given in the last source we read:
POETS & POETRY: He was a bank clerk in the Yukon before he published "Songs of a Sourdough" in 1907.
The focus is the part of the question that is a reference to the answer. In the example above, the focus is "he".
LATs are terms in the question that indicate what type of entity is being asked for.
The headword of the focus is generally a LAT, but questions often contain additional LATs, and in the Jeopardy! domain, categories are an additional source of LATs.
(...) In the example, LATs are "he", "clerk", and "poet".
For example, in the question "Which plant does oleuropein originate from?" (BIBREF8), the LAT is ‘plant’. For the BioASQ task we did not need to explicitly distinguish between the focus and the LAT concepts. In this example, the expectation is that the answer returned by the model is a plant. Thus it is conceivable that the cosine distance between the contextual embedding of the word 'plant' in the question and the contextual embedding of the answer present in the paragraph (context) is comparatively low. As a result, the model learns to adjust its weights during the training phase and returns answers with a low cosine distance to the LAT.
We used the Stanford CoreNLP BIBREF12 library to write rules for extracting the lexical answer type present in the question; both the 'parts of speech' (POS) and dependency parsing functionality were used. We incorporated the Lexical Answer Type into one of our systems, UNCC_QA1, in Batch 4. This system underperformed our system FACTOIDS by about 3% in the MRR measure, but corrected errors such as in the example above.
Our Systems and Their Performance on Factoid Questions ::: LAT Feature considered and its impact (slightly negative) ::: Assumptions and rules for deriving lexical answer type.
There are different question types: ‘Which’, ‘What’, ‘When’, ‘How’ etc. Each type of question is being handled differently and there are commonalities among the rules written for different question types. Question words are identified through parts of speech tags: 'WDT', 'WRB' ,'WP'. We assumed that LAT is a ‘Noun’ and follows the question word. Often it was also a subject (nsubj). This process is illustrated in Fig.FIGREF15.
LAT computation was governed by a few simple rules, e.g. when a question has multiple words that are 'Subjects’ (and ‘Noun’), a word that is in proximity to the question word is considered as ‘LAT’. These rules are different for each "Wh" word.
Namely, when the word immediately following the question word is a Noun, the window size is set to 3. The window size 3 means we iterate through the next 3 words to check if any of them is both a Noun and a Subject; if so, that word is considered the ‘LAT’; otherwise, the word immediately next to the question word is considered the ‘LAT’.
For questions with words ‘Which’ , ‘What’, ‘When’; a Noun immediately following the question word is very often the LAT, e.g. 'enzyme' in Which enzyme is targeted by Evolocumab?. When the word immediately following the question word is not a Noun, e.g. in What is the function of the protein Magt1? the window size is set to ‘5’, and we iterate through the next ‘5’ words (if present) and search for the word that is both Noun and Subject. If present, the word is considered as the ‘LAT’; else, the Noun in close proximity to the question word and following it is returned as the ‘LAT’.
For questions with question words: ‘When’, ‘Who’, ‘Why’, the ’LAT’ is a question word itself. For the word ‘How', e.g. in How many selenoproteins are encoded in the human genome?, we look at the adjective and if we find one, we take it to be the LAT, otherwise the word 'How' is considered as the ‘LAT’.
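A simplified sketch of these rules is shown below; it operates on tokens already annotated with POS and dependency labels (as produced, e.g., by Stanford CoreNLP) and omits the adjective rule for 'How' questions. The exact rules we used differ in details.

```python
# Toy LAT extraction over (word, POS, dependency) triples.
def extract_lat(tokens):
    q_word_lats = {"when", "who", "why"}
    for i, (word, pos, dep) in enumerate(tokens):
        if pos not in {"WDT", "WRB", "WP"}:
            continue                                  # not a question word
        if word.lower() in q_word_lats:
            return word                               # the question word itself is the LAT
        nxt = tokens[i + 1] if i + 1 < len(tokens) else None
        window = 3 if nxt and nxt[1].startswith("NN") else 5
        candidates = tokens[i + 1: i + 1 + window]
        for w, p, d in candidates:                    # prefer a Noun that is also a subject
            if p.startswith("NN") and d == "nsubj":
                return w
        for w, p, d in candidates:                    # otherwise the nearest following Noun
            if p.startswith("NN"):
                return w
    return None

print(extract_lat([("Which", "WDT", "det"), ("plant", "NN", "nsubj"),
                   ("does", "VBZ", "aux"), ("oleuropein", "NN", "dobj"),
                   ("originate", "VB", "root"), ("from", "IN", "case")]))  # plant
```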
Perhaps because of using only very simple rules, the accuracy of ‘LAT’ derivation is 75%; that is, in the remaining 25% of the cases the LAT word is identified incorrectly. Worth noting is that the overall performance of the system that used LATs was slightly inferior to the system without LATs, but the types of errors changed. The training used BioBERT with the LAT feature as part of the input string. The errors it introduces usually involve finding the wrong element of the correct type, e.g. the wrong enzyme when two similar enzymes are described in the text, or 'neuron' when asked about a type of cell with a certain function, when the answer calls for a different cell category, adipocytes, and both are mentioned in the text. We feel that with more data and additional tuning, or perhaps using an ensemble model, we might be able to keep the correct answers and improve the results on confusing examples like the one mentioned above. Therefore if we improve our ‘LAT’ derivation logic, or have larger datasets, then perhaps the neural network techniques will yield better results.
Our Systems and Their Performance on Factoid Questions ::: Impact of Training using BioAsq data (slightly negative)
Training on BioASQ data in our entry in Batch 1 and Batch 2 under the name QA1 showed it might lead to overfitting. This happened both with (Batch 2) and without (Batch 1) hyperparameter tuning: an abysmal 18% MRR in Batch 1, and a slightly better one, 40%, in Batch 2 (although in Batch 2 it was overall the second best result in MRR, it was 16% lower than the highest score).
In Batch 3 (only), our UNCC_QA3 system was fine tuned on BioASQ and SQuAD 2.0 BIBREF7, and for data preprocessing the Context paragraph was generated from relevant snippets provided in the test data. This system underperformed, by about 2% in MRR, our other entry UNCC_QA1, which was also an overall category winner for this batch. The latter was also trained on SQuAD, but not on BioASQ. We suspect that the reason could be the simplistic nature of the find() function described in Section 3.1. So, this could be an area where a better algorithm for finding the best occurrence of an entity could improve performance.
Our Systems and Their Performance on Factoid Questions ::: Impact of Using Context from URLs (negative)
In some experiments, for context in testing, we used documents for which URL pointers are provided in BioAsq. However, our system UNCC_QA3 underperformed our other system tested only on the provided snippets.
In Batch 5 the underperformance was about 6% of MRR, compared to our best system UNCC_QA1, and by 9% to the top performer.
Performance on Yes/No and List questions
Our work focused on Factoid questions. But we also have done experiments on List-type and Yes/No questions.
Performance on Yes/No and List questions ::: Entailment improves Yes/No accuracy
We started by always answering YES (in batches 2 and 3) to get the baseline performance. For batch 4 we used entailment. Our algorithm was very simple: given a question, we iterate through the candidate sentences and try to find whether any candidate sentence contradicts the question (with confidence over 50%); if so, 'No' is returned as the answer, otherwise 'Yes' is returned. In batch 4 this strategy produced better than the BioASQ baseline performance, and compared to our other systems, the use of entailment increased the performance by about 13% (macro F1 score). We used the 'AllenNLP' BIBREF13 entailment library to find the entailment of the candidate sentences with the question.
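The decision rule can be sketched as follows; the entailment model interface (a predict method returning label probabilities) is an assumed placeholder, not the actual AllenNLP API.

```python
# Answer 'No' only if some candidate sentence contradicts the question
# with confidence above the threshold; otherwise default to 'Yes'.
def answer_yes_no(question, candidate_sentences, entail_model, threshold=0.5):
    for sentence in candidate_sentences:
        probs = entail_model.predict(premise=sentence, hypothesis=question)
        # probs is assumed to look like {"entailment": p1, "contradiction": p2, "neutral": p3}
        if probs.get("contradiction", 0.0) > threshold:
            return "No"
    return "Yes"
```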
Performance on Yes/No and List questions ::: For List-type the URLs have negative impact
Overall, we followed a strategy similar to the one used for the Factoid Question Answering task. We started our experiment with batch 2, where we submitted the 20 best answers (with context from snippets). Starting with batch 3, we performed post-processing: once the models generate answer predictions ($n$-best predictions), we post-process the predicted answers. In test batch 4, our system (called FACTOIDS) achieved the highest recall score of 0.7033 but a low precision of 0.1119, leaving open the question of how we could have better balanced the two measures.
In the post-processing phase, we take the top 20 (batch 3) and top 5 (batches 4 and 5) predicted answers and tokenize them using common separators: 'comma', 'and', 'also', 'as well as'. Tokens with a character count of more than 100 are eliminated and the rest of the tokens are added to the list of possible answers. The BioASQ evaluation mechanism does not consider snippets with more than 100 characters as a valid answer. Considering lengthy snippets in the list of answers would reduce the mean precision score. As a final step, duplicate snippets in the answer pool are removed. For example, consider these top 3 answers predicted by the system (before post-processing):
{
"text": "dendritic cells",
"probability": 0.7554540733426441,
"start_logit": 8.466046333312988,
"end_logit": 9.536355018615723
},
{
"text": "neutrophils, macrophages and
distinct subtypes of dendritic cells",
"probability": 0.13806867348304214,
"start_logit": 6.766478538513184,
"end_logit": 9.536355018615723
},
{
"text": "macrophages and distinct subtypes of dendritic",
"probability": 0.013973475271178242,
"start_logit": 6.766478538513184,
"end_logit": 7.24576473236084
},
After execution of post-processing heuristics, the list of answers returned is as follows:
["dendritic cells"],
["neutrophils"],
["macrophages"],
["distinct subtypes of dendritic cells"]
Summary of our results
The tables below summarize all our results. They show that the performance of our systems was mixed. The simple architectures and algorithms we used worked very well only in Batch 3. However, we feel we can build a better system based on this experience. In particular, we observed both the value of contextual embeddings and of feature engineering (LAT); however, we failed to combine them properly.
Summary of our results ::: Factoid questions ::: Systems used in Batch 5 experiments
System description for ‘UNCC_QA1’: The system was fine tuned on SQuAD 2.0. For data preprocessing, the Context/paragraph was generated from relevant snippets provided in the test data.
System description for ‘QA1’: The ‘LAT’ feature was added and the system was fine tuned on SQuAD 2.0. For data preprocessing, the Context/paragraph was generated from relevant snippets provided in the test data.
System description for ‘UNCC_QA3’: The fine tuning process is the same as for the system ‘UNCC_QA1’ in test batch 5. The difference is that during data preprocessing, the Context/paragraph was generated from the relevant documents whose URLs are included in the test data.
Summary of our results ::: List Questions
For List-type questions, although post processing helped in the later batches, we never managed to obtain competitive precision, although our recall was good.
Summary of our results ::: Yes/No questions
The only thing worth remembering from our performance is that using entailment can have a measurable impact (at least with respect to a weak baseline). The results, which are weak overall, are shown in Table 3.
Discussion, Future Experiments, and Conclusions ::: Summary:
In contrast to 2018, when we submitted to BioASQ a system based on extractive summarization BIBREF2 (and scored very high in the ideal answer category), this year we mainly targeted the factoid question answering task and focused on experimenting with BioBERT. After these experiments we see the promise of BioBERT in QA tasks, but we also see its limitations; the latter we tried to address, with mixed results, using feature engineering. Overall, these experiments allowed us to secure a best and a second-best score in different test batches. Along with Factoid-type questions, we also tried ‘Yes/No’ and ‘List’-type questions, and did reasonably well with our very simple approach.
For Yes/No questions, the moral worth remembering is that reasoning has the potential to influence results, as evidenced by the fact that adding the AllenNLP entailment system BIBREF13 increased performance.
All our data and software are available on GitHub, at the previously referenced URL (end of Section 2).
Discussion, Future Experiments, and Conclusions ::: Future experiments
In the current model, we have a shallow neural network with a softmax layer for predicting the answer span. Shallow networks, however, are not good at generalization. In our future experiments we would like to create a dense question-answering neural network with a softmax layer for predicting the answer span. The main idea is to obtain contextual word embeddings for the words present in the question and paragraph (context) and feed the contextual word embeddings retrieved from the last layer of BioBERT to the dense question-answering network. This dense question-answering network would need to be tuned to find the right hyperparameters. An example of such an architecture is shown in Fig. FIGREF30.
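A minimal PyTorch sketch of this proposed dense answer-span head is given below; the layer sizes are illustrative, and the BioBERT last-layer embeddings are assumed to be computed elsewhere and passed in as a tensor.

import torch
import torch.nn as nn

class DenseSpanPredictor(nn.Module):
    """Dense QA head over precomputed BioBERT contextual embeddings.

    Input:  (batch, seq_len, hidden) last-layer embeddings of the question and context.
    Output: softmax distributions over token positions for answer start and end.
    """
    def __init__(self, hidden_size=768, inner_size=512):
        super().__init__()
        self.dense = nn.Sequential(
            nn.Linear(hidden_size, inner_size),
            nn.ReLU(),
            nn.Linear(inner_size, 2),          # one logit each for start and end
        )

    def forward(self, contextual_embeddings):
        logits = self.dense(contextual_embeddings)           # (batch, seq_len, 2)
        start_logits, end_logits = logits.unbind(dim=-1)     # two (batch, seq_len) tensors
        start_probs = torch.softmax(start_logits, dim=-1)
        end_probs = torch.softmax(end_logits, dim=-1)
        return start_probs, end_probs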
In one more experiment, we would like to add a better version of the ‘LAT’ contextual word embedding as a feature, along with the actual contextual word embeddings for the question text and context, and feed them as input to the dense question-answering neural network. With this experiment, we would like to find out whether the ‘LAT’ feature improves overall answer prediction accuracy. Adding the ‘LAT’ feature this way, instead of feeding its word-piece embedding directly into BioBERT (as we did in the experiments above), would not degrade the quality of the contextual word embeddings generated by BioBERT. Higher-quality contextual word embeddings would lead to more effective transfer learning and would likely improve the model's answer prediction accuracy.
We also see potential for incorporating domain-specific inference into the task, e.g., using the MedNLI dataset BIBREF14. For all types of experiments it might be worth exploring clinical BERT embeddings BIBREF15, explicitly incorporating domain knowledge (e.g., BIBREF16), and possibly deeper discourse representations (e.g., BIBREF17).
APPENDIX
In this appendix we provide additional details about the implementations.
APPENDIX ::: Systems and their descriptions:
We used several variants of our systems when experimenting with the BioASQ problems. In retrospect, it would be much easier to understand the changes if we had adopted some mnemonic conventions in naming the systems. We apologize for the names that do not reflect the modifications and that necessitate this list.
APPENDIX ::: Systems and their descriptions: ::: Factoid Type Question Answering:
We preprocessed the test data to convert it to the BioBERT format. We generated the context/paragraph either by aggregating the relevant snippets provided or by aggregating the documents whose URLs are provided in the BioASQ test data.
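The snippet-aggregation step can be sketched as follows; the BioASQ JSON field names (questions, body, id, snippets) reflect our understanding of the standard BioASQ test-file layout, and the SQuAD-style output structure is our own approximation of the 'BioBERT format'.

import json

def bioasq_to_squad_format(bioasq_test_file):
    """Aggregate each question's snippets into one context paragraph (SQuAD-style)."""
    with open(bioasq_test_file) as f:
        test_data = json.load(f)

    paragraphs = []
    for question in test_data["questions"]:
        # Concatenate all relevant snippets into a single context/paragraph.
        context = " ".join(s["text"] for s in question.get("snippets", []))
        paragraphs.append({
            "context": context,
            "qas": [{"id": question["id"], "question": question["body"]}],
        })
    return {"version": "BioASQ", "data": [{"title": "BioASQ", "paragraphs": paragraphs}]}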
APPENDIX ::: Systems and their descriptions: ::: System description for QA1:
We generated the context/paragraph by aggregating the relevant snippets available in the test data and mapped it against the question text and question id. We ignored the content of the documents (document URLs were provided in the original test data). The model was fine-tuned on BioASQ data.
Data preprocessing is done in the same way as for test batch 1. The model was fine-tuned on BioASQ data.
The ‘LAT’/focus-word feature was added and the model fine-tuned on SQuAD 2.0 [reference]. For data preprocessing, the context/paragraph was generated from the relevant snippets provided in the test data.
APPENDIX ::: Systems and their descriptions: ::: System description for UNCC_QA_1:
The system was fine-tuned on SQuAD 2.0 [reference]. For data preprocessing, the context/paragraph was generated from the relevant snippets provided in the test data.
The ‘LAT’/focus-word feature was added and the model fine-tuned on SQuAD 2.0 [reference]. For data preprocessing, the context/paragraph was generated from the relevant snippets provided in the test data.
The system was fine-tuned on SQuAD 2.0. For data preprocessing, the context/paragraph was generated from the relevant snippets provided in the test data.
APPENDIX ::: Systems and their descriptions: ::: System description for UNCC_QA3:
The system was fine-tuned on SQuAD 2.0 [reference] and the BioASQ dataset. For data preprocessing, the context/paragraph was generated from the relevant snippets provided in the test data.
The fine-tuning process is the same as for the system ‘UNCC_QA_1’ in test batch 5. The difference is that, during data preprocessing, the context/paragraph is generated from the relevant documents whose URLs are included in the test data.
APPENDIX ::: Systems and their descriptions: ::: System description for UNCC_QA2:
The fine-tuning process is the same as for ‘UNCC_QA_1’. The difference is that the context/paragraph is generated from the relevant documents whose URLs are included in the test data. System ‘UNCC_QA_1’ got the highest ‘MRR’ score in the third test batch set.
APPENDIX ::: Systems and their descriptions: ::: System description for FACTOIDS:
The system was fine-tuned on SQuAD 2.0. For data preprocessing, the context/paragraph was generated from the relevant snippets provided in the test data.
APPENDIX ::: Systems and their descriptions: ::: List Type Questions:
We attempted List-type questions starting from test batch 2, using an approach similar to the one followed for the Factoid question answering task. For all the test batch sets, in the data preprocessing phase the context/paragraph is generated either by aggregating the relevant snippets or by aggregating the documents (URLs) provided in the BioASQ test data.
For test batch 2, the model (system ‘QA1’) was fine-tuned on BioASQ data and we submitted the top 20 answers predicted by the model as the list of answers; system ‘QA1’ achieved a low F-measure score of 0.0786 in this batch. In the later test batches for List-type questions, we fine-tuned the model on the SQuAD dataset [reference], implemented post-processing techniques (see Section 5.2), and achieved a better F-measure score of 0.2862 in the final test batch.
In test batch 3 (systems ‘QA1’/‘UNCC_QA_1’/‘UNCC_QA3’/‘UNCC_QA2’), the top 20 answers returned by the model are sent for post-processing, whereas in test batches 4 and 5 only the top 5 answers are sent for post-processing. For system ‘UNCC_QA2’ (in batch 3), the context for List-type question answering is generated from the documents whose URLs are provided in the BioASQ test data; for the rest of the systems (in test batch 3), the snippets present in the BioASQ test data are used to generate the context.
In test batch 4 (systems ‘FACTOIDS’/‘UNCC_QA_1’/‘UNCC_QA3’), the top 5 answers returned by the model are sent for post-processing. For system ‘FACTOIDS’, the snippets in the test data were used to generate the context; for systems ‘UNCC_QA_1’ and ‘UNCC_QA3’, the context is generated from the documents whose URLs are provided in the BioASQ test data.
In test batch 5 (systems ‘QA1’/‘UNCC_QA_1’/‘UNCC_QA3’/‘UNCC_QA2’), our approach is the same as in test batch 4: the top 5 answers returned by the model are sent for post-processing. For all the systems (in test batch 5), the context is generated from the snippets provided in the BioASQ test data.
APPENDIX ::: Systems and their descriptions: ::: Yes/No Type Questions:
For the first three test batches, we submitted the answer ‘Yes’ to all the questions. Later, we employed sentence entailment techniques (see Section 6.0) for the fourth and fifth test batch sets. Our systems with the sentence entailment approach (for ‘Yes’/‘No’ question answering) were ‘UNCC_QA_1’ (test batch 4) and ‘UNCC_QA3’ (test batch 5).
APPENDIX ::: Additional details for Yes/No Type Questions
We used textual entailment in batches 4 and 5 for the ‘Yes’/‘No’ question type. The algorithm was very simple: given a question, we iterate through the candidate sentences and look for any candidate sentence contradicting the question. If we find one, ‘No’ is returned as the answer; otherwise ‘Yes’ is returned. (The confidence threshold for contradiction was set at 50%.) We used the AllenNLP BIBREF13 entailment library to find entailment between the candidate sentences and the question.
A flow chart for Yes/No question answer processing is shown in Fig. FIGREF51.
APPENDIX ::: Assumptions, rules and logic flow for deriving Lexical Answer Types from questions
There are different question types, and we distinguish them based on the question words: ‘Which’, ‘What’, ‘When’, ‘How’, etc. Each type of question is handled differently, although there are commonalities among the rules written for different question types. How are question words identified? Question words have the parts of speech (POS) 'WDT', 'WRB', or 'WP'.
Assumptions:
1) Lexical answer type (‘LAT’) or focus word is of type Noun and follows the question word.
2) The LAT word is a Subject. (This is clearly not always true, but we used a very simple method.) Note: the ‘StanfordNLP’ dependency-parsing tag for identifying a subject is 'nsubj' or 'nsubjpass'.
3) When a question has multiple words that are of type Subject (and Noun), the word closest to the question word is considered the ‘LAT’.
4) For questions with the question words ‘When’, ‘Who’, or ‘Why’, the ‘LAT’ is the question word itself, that is, ‘When’, ‘Who’, or ‘Why’ respectively.
Rules and logic flow to traverse a question: The three cases below describe the logic flow of finding LATs. The figures show the grammatical structures used for this purpose.
APPENDIX ::: Assumptions, rules and logic flow for deriving Lexical Answer Types from questions ::: Case-1:
Questions with the question word ‘How’.
For questions with the question word 'How', the adjective that follows the question word is considered the ‘LAT’ (it need not follow immediately). If an adjective is absent, the word 'How' itself is considered the ‘LAT’. When there are multiple adjectives, the one closest to (and following) the question word is returned as the ‘LAT’. Note: the part-of-speech tag used to identify adjectives is 'JJ'. For other possible question words, such as ‘whose’, the ‘LAT’/focus word is the question word itself.
Example Question: How many selenoproteins are encoded in the human genome?
APPENDIX ::: Assumptions, rules and logic flow for deriving Lexical Answer Types from questions ::: Case-2:
Questions with the question words ‘Which’, ‘What’, and all other possible question words, where a 'Noun' immediately follows the question word.
Example Question: Which enzyme is targeted by Evolocumab?
Here, the focus word/LAT is ‘enzyme’, which is both a Noun and a Subject and immediately follows the question word.
When the word immediately following the question word is a noun, the window size is set to 3. A window size of 3 means that we iterate through the next 3 words (if present) and check whether any of them is both a 'Noun' and a 'Subject'; if so, that word is considered the ‘LAT’/focus word. Otherwise, the word immediately following the question word is considered the ‘LAT’.
APPENDIX ::: Assumptions, rules and logic flow for deriving Lexical Answer Types from questions ::: Case-3:
Questions with the question words ‘Which’, ‘What’, and all other possible question words, where the word immediately following the question word is not a 'Noun'.
Example Question: What is the function of the protein Magt1?
Here, the focus word/LAT is ‘function’, which is both a Noun and a Subject but does not immediately follow the question word.
When the word immediately following the question word is not a Noun, the window size is set to 5. A window size of 5 means that we iterate through the next 5 words (if present) and search for a word that is both a Noun and a Subject. If one is present, that word is considered the ‘LAT’. Otherwise, the 'Noun' closest to (and following) the question word is returned as the ‘LAT’.
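The three cases can be condensed into a short rule-based function. The sketch below uses spaCy for POS and dependency tags instead of the StanfordNLP pipeline we actually used; the window sizes and tag names follow the description above, and the fallbacks are simplified.

import spacy

nlp = spacy.load("en_core_web_sm")
QUESTION_TAGS = {"WDT", "WRB", "WP"}

def derive_lat(question):
    doc = nlp(question)
    q_idx = next((i for i, t in enumerate(doc) if t.tag_ in QUESTION_TAGS), None)
    if q_idx is None:
        return None
    q_word = doc[q_idx]

    if q_word.text.lower() in {"when", "who", "why", "whose"}:
        return q_word.text                       # Assumption 4 and the Case-1 note
    if q_word.text.lower() == "how":             # Case 1: first adjective after 'How'
        for tok in doc[q_idx + 1:]:
            if tok.tag_ == "JJ":
                return tok.text
        return q_word.text

    following = doc[q_idx + 1:]
    window = 3 if len(following) > 0 and following[0].pos_ == "NOUN" else 5
    for tok in following[:window]:               # Cases 2 and 3: Noun that is a Subject
        if tok.pos_ == "NOUN" and tok.dep_ in {"nsubj", "nsubjpass"}:
            return tok.text
    for tok in following[:window]:               # fallback: nearest following Noun
        if tok.pos_ == "NOUN":
            return tok.text
    return following[0].text if len(following) > 0 else q_word.text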
As we mentioned earlier, the accuracy of ‘LAT’ derivation is 75 percent. Clearly, the simple logic described above can be improved, as shown in BIBREF9, BIBREF10. Whether this in turn produces improvements in this particular task is an open question.
APPENDIX ::: Proposing Future Experiments
In the current model, we have a shallow neural network with a softmax layer for predicting the answer span. Shallow networks, however, are not good at generalization. In our future experiments we would like to create a dense question-answering neural network with a softmax layer for predicting the answer span. The main idea is to obtain contextual word embeddings for the words present in the question and paragraph (context) and feed the contextual word embeddings retrieved from the last layer of BioBERT to the dense question-answering network. This dense question-answering network would need to be tuned to find the right hyperparameters. An example of such an architecture is shown in Fig. FIGREF30.
In another experiment, we would like to feed only the contextual word embeddings for the focus word/‘LAT’ and the paragraph/context as input to the question-answering neural network; that is, we would ignore all embeddings for the question text except that of the focus word. Our assumption in considering only the focus word and neglecting the remaining words of the question is that, during the training phase, this would make it easier for the model to identify the focus of the question and map answers against it. To validate this assumption, we would like to take sample question answering data, compute the cosine distance between the contextual embedding of the focus word and that of the actual answer, and verify whether this distance is comparatively low in most cases.
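A small sketch of this validation step, assuming the two embeddings have already been extracted as vectors:

import numpy as np

def cosine_distance(focus_embedding, answer_embedding):
    """Cosine distance between the focus-word embedding and the answer embedding."""
    u = np.asarray(focus_embedding, dtype=float)
    v = np.asarray(answer_embedding, dtype=float)
    similarity = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return 1.0 - similarity

# The assumption is supported for a question/answer pair when this distance is low.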
In one more experiment, we would like to add a better version of the ‘LAT’ contextual word embedding as a feature, along with the actual contextual word embeddings for the question text and context, and feed them as input to the dense question-answering neural network. With this experiment, we would like to find out whether the ‘LAT’ feature improves overall answer prediction accuracy. Adding the ‘LAT’ feature this way, instead of feeding the focus word's word-piece embedding directly into BioBERT (as we did in the experiments above), would not degrade the quality of the contextual word embeddings generated by BioBERT. Higher-quality contextual word embeddings would lead to more effective transfer learning and would likely improve the model's answer prediction accuracy.
Introduction
Measures of semantic similarity and relatedness quantify the degree to which two concepts are similar (e.g., INLINEFORM0 – INLINEFORM1 ) or related (e.g., INLINEFORM2 – INLINEFORM3 ). Semantic similarity can be viewed as a special case of semantic relatedness – to be similar is one of many ways that a pair of concepts may be related. The automated discovery of groups of semantically similar or related terms is critical to improving the retrieval BIBREF0 and clustering BIBREF1 of biomedical and clinical documents, and the development of biomedical terminologies and ontologies BIBREF2 .
There is a long history in using distributional methods to discover semantic similarity and relatedness (e.g., BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 ). These methods are all based on the distributional hypothesis, which holds that two terms that are distributionally similar (i.e., used in the same context) will also be semantically similar BIBREF7 , BIBREF8 . Recently word embedding techniques such as word2vec BIBREF9 have become very popular. Despite the prominent role that neural networks play in many of these approaches, at their core they remain distributional techniques that typically start with a word by word co–occurrence matrix, much like many of the more traditional approaches.
However, despite these successes, distributional methods do not perform well when data is very sparse (which is common). One possible solution is to use second–order co–occurrence vectors BIBREF10 , BIBREF11 . In this approach the similarity between two words is not strictly based on their co–occurrence frequencies, but rather on the frequencies of the other words which occur with both of them (i.e., second order co–occurrences). This approach has been shown to be successful in quantifying semantic relatedness BIBREF12 , BIBREF13 . However, while more robust in the face of sparsity, second–order methods can result in significant amounts of noise, where contextual information that is overly general is included and does not contribute to quantifying the semantic relatedness between the two concepts.
Our goal then is to discover methods that automatically reduce the amount of noise in a second–order co–occurrence vector. We achieve this by incorporating pairwise semantic similarity scores derived from a taxonomy into our second–order vectors, and then using these scores to select only the most semantically similar co–occurrences (thereby reducing noise).
We evaluate our method on two datasets that have been annotated in multiple ways. One has been annotated for both similarity and relatedness, and the other has been annotated for relatedness by two different types of experts (medical doctors and medical coders). Our results show that integrating second order co–occurrences with measures of semantic similarity increases correlation with our human reference standards. We also compare our result to a number of other studies which have applied various word embedding methods to the same reference standards we have used. We find that our method often performs at a comparable or higher level than these approaches. These results suggest that our methods of integrating semantic similarity and relatedness values have the potential to improve performance of purely distributional methods.
Similarity and Relatedness Measures
This section describes the similarity and relatedness measures we integrate in our second–order co–occurrence vectors. We use two taxonomies in this study, SNOMED–CT and MeSH. SNOMED–CT (Systematized Nomenclature of Medicine Clinical Terms) is a comprehensive clinical terminology created for the electronic representation of clinical health information. MeSH (Medical Subject Headings) is a taxonomy of biomedical terms developed for indexing biomedical journal articles.
We obtain SNOMED–CT and MeSH via the Unified Medical Language System (UMLS) Metathesaurus (version 2016AA). The Metathesaurus contains approximately 2 million biomedical and clinical concepts from over 150 different terminologies that have been semi–automatically integrated into a single source. Concepts in the Metathesaurus are connected largely by two types of hierarchical relations: INLINEFORM0 / INLINEFORM1 (PAR/CHD) and INLINEFORM2 / INLINEFORM3 (RB/RN).
Similarity Measures
Measures of semantic similarity can be classified into three broad categories: path–based, feature–based and information content (IC). Path–based similarity measures use the structure of a taxonomy to measure similarity – concepts positioned close to each other are more similar than those further apart. Feature–based methods rely on set theoretic measures of overlap between features (union and intersection). The information content measures quantify the amount of information that a concept provides – more specific concepts have a higher amount of information content.
RadaMBB89 introduce the Conceptual Distance measure. This measure is simply the length of the shortest path between two concepts ( INLINEFORM0 and INLINEFORM1 ) in the MeSH hierarchy. Paths are based on broader than (RB) and narrower than (RN) relations. CaviedesC04 extends this measure to use parent (PAR) and child (CHD) relations. Our INLINEFORM2 measure is simply the reciprocal of this shortest path value (Equation EQREF3 ), so that larger values (approaching 1) indicate a high degree of similarity. DISPLAYFORM0
While the simplicity of INLINEFORM0 is appealing, it can be misleading when concepts are at different levels of specificity. Two very general concepts may have the same path length as two very specific concepts. WuP94 introduce a correction to INLINEFORM1 that incorporates the depth of the concepts, and the depth of their Least Common Subsumer (LCS). This is the most specific ancestor two concepts share. In this measure, similarity is twice the depth of the two concepts' LCS divided by the product of the depths of the individual concepts (Equation EQREF4 ). Note that if there are multiple LCSs for a pair of concepts, the deepest of them is used in this measure. DISPLAYFORM0
ZhongZLY02 take a very similar approach and again scale the depth of the LCS by the sum of the depths of the two concepts (Equation EQREF5 ), where INLINEFORM0 . The value of INLINEFORM1 was set to 2 based on their recommendations. DISPLAYFORM0
PekarS02 offer another variation on INLINEFORM0 , where the shortest path of the two concepts to the LCS is used, in addition to the shortest path between the LCS and the root of the taxonomy (Equation EQREF6 ). DISPLAYFORM0
Feature–based methods represent each concept as a set of features and then measure the overlap or sharing of features to measure similarity. In particular, each concept is represented as the set of their ancestors, and similarity is a ratio of the intersection and union of these features.
MaedcheS01 quantify the similarity between two concepts as the ratio of the intersection over their union as shown in Equation EQREF8 . DISPLAYFORM0
BatetSV11 extend this by excluding any shared features (in the numerator) as shown in Equation EQREF9 . DISPLAYFORM0
Information content is formally defined as the negative log of the probability of a concept. The effect of this is to assign rare (low probability) concepts a high measure of information content, since the underlying assumption is that more specific concepts are less frequently used than more common ones.
Resnik95 modified this notion of information content in order to use it as a similarity measure. He defines the similarity of two concepts to be the information content of their LCS (Equation EQREF11 ). DISPLAYFORM0
JiangC97, Lin98, and PirroE10 extend INLINEFORM0 by incorporating the information content of the individual concepts in various different ways. Lin98 defines the similarity between two concepts as the ratio of the information content of the LCS to the sum of the individual concepts' information content (Equation EQREF12 ). Note that INLINEFORM1 has the same form as INLINEFORM2 and INLINEFORM3 , and is in effect using information content as a measure of specificity (rather than depth). If there is more than one possible LCS, the LCS with the greatest IC is chosen. DISPLAYFORM0
JiangC97 define the distance between two concepts to be the sum of the information content of the two concepts minus twice the information content of the concepts' LCS. We modify this from a distance to a similarity measure by taking the reciprocal of the distance (Equation EQREF13 ). Note that the denominator of INLINEFORM0 is very similar to the numerator of INLINEFORM1 . DISPLAYFORM0
PirroE10 define the similarity between two concepts as the information content of the two concept's LCS divided by the sum of their individual information content values minus the information content of their LCS (Equation EQREF14 ). Note that INLINEFORM0 can be viewed as a set–theoretic version of INLINEFORM1 . DISPLAYFORM0
Information Content
The information content of a concept may be derived from a corpus (corpus–based) or directly from a taxonomy (intrinsic–based). In this work we focus on corpus–based techniques.
For corpus–based information content, we estimate the probability of a concept INLINEFORM0 by taking the sum of the probability of the concept INLINEFORM1 and the probability its descendants INLINEFORM2 (Equation EQREF16 ). DISPLAYFORM0
The initial probabilities of a concept ( INLINEFORM0 ) and its descendants ( INLINEFORM1 ) are obtained by taking the number of times each concept and descendant occurs in the corpus and dividing it by the total number of concepts ( INLINEFORM2 ).
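In LaTeX form, the corpus-based formulation just described can be written as (a reconstruction of the elided equations; the notation is ours):

P(c) \;=\; \frac{\mathit{freq}(c) \;+\; \sum_{d \,\in\, \mathrm{descendants}(c)} \mathit{freq}(d)}{N},
\qquad
IC(c) \;=\; -\log P(c)

where \mathit{freq}(\cdot) is the corpus frequency of a concept and N is the total concept count.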
Ideally the corpus from which we are estimating the probabilities of concepts will be sense–tagged. However, sense–tagging is a challenging problem in its own right, and it is not always possible to carry out reliably on larger amounts of text. In fact in this paper we did not use any sense–tagging of the corpus we derived information content from.
Instead, we estimated the probability of a concept by using the UMLSonMedline dataset. This was created by the National Library of Medicine and consists of concepts from the 2009AB UMLS and the counts of the number of times they occurred in a snapshot of Medline taken on 12 January, 2009. These counts were obtained by using the Essie Search Engine BIBREF14 which queried Medline with normalized strings from the 2009AB MRCONSO table in the UMLS. The frequency of a CUI was obtained by aggregating the frequency counts of the terms associated with the CUI to provide a rough estimate of its frequency. The information content measures then use this information to calculate the probability of a concept.
Another alternative is the use of Intrinsic Information Content. It assesses the informativeness of a concept based on its placement within a taxonomy by considering the number of incoming (ancestor) relative to outgoing (descendant) links BIBREF15 (Equation EQREF17 ). DISPLAYFORM0
where INLINEFORM0 is the number of descendants of concept INLINEFORM1 that are leaf nodes, INLINEFORM2 is the number of concept INLINEFORM3 's ancestors, and INLINEFORM4 is the total number of leaf nodes in the taxonomy.
Relatedness Measures
Lesk86 observed that concepts that are related should share more words in their respective definitions than concepts that are less connected. He was able to perform word sense disambiguation by identifying the senses of words in a sentence with the largest number of overlaps between their definitions. An overlap is the longest sequence of one or more consecutive words that occur in both definitions. BanerjeeP03 extended this idea to WordNet, but observed that WordNet glosses are often very short, and did not contain enough information to distinguish between multiple concepts. Therefore, they created a super–gloss for each concept by adding the glosses of related concepts to the gloss of the concept itself (and then finding overlaps).
PatwardhanP06 adapted this measure to second–order co–occurrence vectors. In this approach, a vector is created for each word in a concept's definition that shows which words co–occur with it in a corpus. These word vectors are averaged to create a single co-occurrence vector for the concept. The similarity between the concepts is calculated by taking the cosine between the concepts second–order vectors. LiuMPMP12 modified and extended this measure to be used to quantify the relatedness between biomedical and clinical terms in the UMLS. The work in this paper can be seen as a further extension of PatwardhanP06 and LiuMPMP12.
Method
In this section, we describe our second–order similarity vector measure. This incorporates both contextual information using the term pair's definition and their pairwise semantic similarity scores derived from a taxonomy. There are two stages to our approach. First, a co–occurrence matrix must be constructed. Second, this matrix is used to construct a second–order co–occurrence vector for each concept in a pair of concepts to be measured for relatedness.
Co–occurrence Matrix Construction
We build an INLINEFORM0 similarity matrix using an external corpus, where the rows and columns represent words within the corpus and each element contains the similarity score between the row word and the column word, computed using the similarity measures discussed above. If a word maps to more than one possible sense, we use the sense that returns the highest similarity score.
For this paper our external corpus was the NLM 2015 Medline baseline. Medline is a bibliographic database containing over 23 million citations to journal articles in the biomedical domain and is maintained by the National Library of Medicine. The 2015 Medline Baseline encompasses approximately 5,600 journals starting from 1948 and contains 23,343,329 citations, of which 2,579,239 contain abstracts. In this work, we use Medline titles and abstracts from 1975 to the present day. Prior to 1975, only 2% of the citations contained an abstract. We then calculate the similarity for each bigram in this dataset and include those that have a similarity score greater than a specified threshold in these experiments.
Measure Term Pairs for Relatedness
We obtain definitions for each of the two terms we wish to measure. Due to the sparsity and inconsistencies of the definitions in the UMLS, we not only use the definition of the term (CUI) but also include the definition of its related concepts. This follows the method proposed by PatwardhanP06 for general English and WordNet, and which was adapted for the UMLS and the medical domain by LiuMPMP12. In particular we add the definitions of any concepts connected via a parent (PAR), child (CHD), RB (broader than), RN (narrower than) or TERM (terms associated with CUI) relation. All of the definitions for a term are combined into a single super–gloss. At the end of this process we should have two super–glosses, one for each term to be measured for relatedness.
Next, we process each super–gloss as follows:
We extract a first–order co–occurrence vector for each term in the super–gloss from the co–occurrence matrix created previously.
We take the average of the first order co–occurrence vectors associated with the terms in a super–gloss and use that to represent the meaning of the term. This is a second–order co–occurrence vector.
After a second–order co–occurrence vector has been constructed for each term, then we calculate the cosine between these two vectors to measure the relatedness of the terms.
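A numpy sketch of these steps, assuming the similarity matrix and a word-to-row index have already been built from the corpus:

import numpy as np

def second_order_vector(super_gloss_terms, sim_matrix, word_index):
    """Average the first-order rows of all super-gloss terms found in the matrix."""
    rows = [sim_matrix[word_index[t]] for t in super_gloss_terms if t in word_index]
    return np.mean(rows, axis=0) if rows else np.zeros(sim_matrix.shape[1])

def relatedness(super_gloss_a, super_gloss_b, sim_matrix, word_index):
    """Cosine between the two terms' second-order vectors."""
    v_a = second_order_vector(super_gloss_a, sim_matrix, word_index)
    v_b = second_order_vector(super_gloss_b, sim_matrix, word_index)
    denom = np.linalg.norm(v_a) * np.linalg.norm(v_b)
    return float(np.dot(v_a, v_b) / denom) if denom else 0.0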
Data
We use two reference standards to evaluate the semantic similarity and relatedness measures. UMNSRS was annotated for both similarity and relatedness by medical residents. MiniMayoSRS was annotated for relatedness by medical doctors (MD) and medical coders (coder). In this section, we describe these data sets and a few of their differences.
MiniMayoSRS: The MayoSRS, developed by PakhomovPMMRC10, consists of 101 clinical term pairs whose relatedness was determined by nine medical coders and three physicians from the Mayo Clinic. The relatedness of each term pair was assessed based on a four point scale: (4.0) practically synonymous, (3.0) related, (2.0) marginally related and (1.0) unrelated. MiniMayoSRS is a subset of the MayoSRS and consists of 30 term pairs on which a higher inter–annotator agreement was achieved. The average correlation between physicians is 0.68. The average correlation between medical coders is 0.78. We evaluate our method on the mean of the physician scores, and the mean of the coders scores in this subset in the same manner as reported by PedersenPPC07.
UMNSRS: The University of Minnesota Semantic Relatedness Set (UMNSRS) was developed by PakhomovMALPM10, and consists of 725 clinical term pairs whose semantic similarity and relatedness was determined independently by four medical residents from the University of Minnesota Medical School. The similarity and relatedness of each term pair was annotated based on a continuous scale by having the resident touch a bar on a touch sensitive computer screen to indicate the degree of similarity or relatedness. The Intraclass Correlation Coefficient (ICC) for the reference standard tagged for similarity was 0.47, and 0.50 for relatedness. Therefore, as suggested by Pakhomov and colleagues, we use a subset of the ratings consisting of 401 pairs for the similarity set and 430 pairs for the relatedness set which each have an ICC of 0.73.
Experimental Framework
We conducted our experiments using the freely available open source software package UMLS::Similarity BIBREF16 version 1.47. This package takes as input two terms (or UMLS concepts) and returns their similarity or relatedness using the measures discussed in Section SECREF2 .
Correlations between the similarity measures and human judgments were estimated using Spearman's Rank Correlation ( INLINEFORM0 ). Spearman's correlation measures the statistical dependence between two variables to assess how well the relationship between the rankings of the variables can be described using a monotonic function. We used Fisher's r-to-z transformation BIBREF17 to calculate the significance of differences between the correlation results.
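For reference, the correlation step can be reproduced with standard tooling (the similarity and relatedness scores themselves come from UMLS::Similarity; the function below is only an illustration):

import numpy as np
from scipy.stats import spearmanr

def evaluate_measure(measure_scores, human_scores):
    """Spearman's rho against the human reference, plus its Fisher r-to-z transform."""
    rho, p_value = spearmanr(measure_scores, human_scores)
    z = np.arctanh(rho)   # z-values of two measures can then be compared for significance
    return rho, p_value, z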
Results and Discussion
Table TABREF26 shows the Spearman's Rank Correlation between the human scores from the four reference standards and the scores from the various measures of similarity introduced in Section SECREF2 . Each class of measure is followed by the scores obtained when integrating our second order vector approach with these measures of semantic similarity.
Results Comparison
The results for UMNSRS tagged for similarity ( INLINEFORM0 ) and MiniMayoSRS tagged by coders show that all of the second-order similarity vector measures ( INLINEFORM1 ) except for INLINEFORM2 - INLINEFORM3 obtain a higher correlation than the original measures. We found that INLINEFORM4 - INLINEFORM5 and INLINEFORM6 - INLINEFORM7 obtain the highest correlations of all these results with human judgments.
For the UMNSRS dataset tagged for relatedness and MiniMayoSRS tagged by physicians (MD), the original INLINEFORM0 measure obtains a higher correlation than our measure ( INLINEFORM1 ) although the difference is not statistically significant ( INLINEFORM2 ).
In order to analyze and better understand these results, we filtered the bigram pairs used to create the initial similarity matrix based on the strength of their similarity using the INLINEFORM0 and the INLINEFORM1 measures. Note that the INLINEFORM2 measure holds to a 0 to 1 scale, while INLINEFORM3 ranges from 0 to an unspecified upper bound that is dependent on the size of the corpus from which information content is estimated. As such we use a different range of threshold values for each measure. We discuss the results of this filtering below.
Thresholding Experiments
Table TABREF29 shows the results of applying the threshold parameter on each of the reference standards using the INLINEFORM0 measure. For example, a threshold of 0 indicates that all of the bigrams were included in the similarity matrix; and a threshold of 1 indicates that only the bigram pairs with a similarity score greater than one were included.
These results show that using a threshold cutoff of 2 obtains the highest correlation for the UMNSRS dataset, and that a threshold cutoff of 4 obtains the highest correlation for the MiniMayoSRS dataset. All of the results show an increase in correlation with human judgments when incorporating a threshold cutoff over all of the original measures. The increase in the correlation for the UMNSRS tagged for similarity is statistically significant ( INLINEFORM0 ), however this is not the case for the UMNSRS tagged for relatedness nor for the MiniMayoSRS data.
Similarly, Table TABREF30 shows the results of applying the threshold parameter (T) on each of the reference standards using the INLINEFORM0 measure. Unlike INLINEFORM1 , whose scores are greater than or equal to 0 without an upper limit, the INLINEFORM2 measure returns scores between 0 and 1 (inclusive). Therefore, here a threshold of 0 indicates that all of the bigrams were included in the similarity matrix; and a threshold of INLINEFORM3 indicates that only the bigram pairs with a similarity score greater than INLINEFORM4 were included. The results show an increase in accuracy for all of the datasets except for the MiniMayoSRS tagged for physicians. The increase in the results for the UMNSRS tagged for similarity and the MayoSRS is statistically significant ( INLINEFORM5 ). This is not the case for the UMNSRS tagged for relatedness nor the MiniMayoSRS.
Overall, these results indicate that including only those bigrams that have a sufficiently high similarity score increases the correlation results with human judgments, but what quantifies as sufficiently high varies depending on the dataset and measure.
Comparison with Previous Work
Recently, word embeddings BIBREF9 have become a popular method for measuring semantic relatedness in the biomedical domain. This is a neural network based approach that learns a representation of a word by word co–occurrence matrix. The basic idea is that the neural network learns a series of weights (the hidden layer within the neural network) that either maximizes the probability of a word given its context, referred to as the continuous bag of words (CBOW) approach, or that maximizes the probability of the context given a word, referred to as the Skip–gram approach. These approaches have been used in numerous recent papers.
muneeb2015evalutating trained both the Skip–gram and CBOW models over the PubMed Central Open Access (PMC) corpus of approximately 1.25 million articles. They evaluated the models on a subset of the UMNSRS data, removing word pairs that did not occur in their training corpus more than ten times. chiu2016how evaluated both the Skip–gram and CBOW models over the PMC corpus and PubMed. They also evaluated the models on a subset of the UMNSRS, ignoring those words that did not appear in their training corpus. Pakhomov2016corpus trained the CBOW model over three different types of corpora: clinical (clinical notes from the Fairview Health System), biomedical (PMC corpus), and general English (Wikipedia). They evaluated their method using a subset of the UMNSRS, restricting it to single-word term pairs and removing those not found within their training corpus. sajad2015domain trained the Skip–gram model over CUIs identified by MetaMap on the OHSUMED corpus, a collection of 348,566 biomedical research articles. They evaluated the method on the complete UMNSRS, MiniMayoSRS and MayoSRS datasets; since no subset information about the datasets was stated, we believe a direct comparison may be possible.
In addition, a previous work very closely related to ours is a retrofitting vector method proposed by YuCBJW16 that incorporates ontological information into a vector representation by including semantically related words. In their measure, they first map a biomedical term to MeSH terms, and second build a word vector based on the documents assigned to the respective MeSH term. They then retrofit the vector by including semantically related words found in the Unified Medical Language System. They evaluate their method on the MiniMayoSRS dataset.
Table TABREF31 shows a comparison to the top correlation scores reported by each of these works on the respective datasets (or subsets) they evaluated their methods on. N refers to the number of term pairs in the dataset on which the authors report they evaluated their method. The table also includes our top-scoring results: the integrated vector-res and vector-faith. The results show that integrating semantic similarity measures into second–order co–occurrence vectors obtains a correlation with human judgments that is higher than or on par with the previously reported results, with the exception of the UMNSRS rel dataset. The results reported by Pakhomov2016corpus and chiu2016how obtain a higher correlation, although the results cannot be directly compared because both works used different subsets of the term pairs from the UMNSRS dataset.
Conclusion and Future Work
We have presented a method for quantifying the similarity and relatedness between two terms that integrates pair–wise similarity scores into second–order vectors. The goal of this approach is two–fold. First, we restrict the context used by the vector measure to words that exist in the biomedical domain, and second, we apply larger weights to those word pairs that are more similar to each other. Our hypothesis was that this combination would reduce the amount of noise in the vectors and therefore increase their correlation with human judgments. We evaluated our method on datasets that have been manually annotated for relatedness and similarity and found evidence to support this hypothesis. In particular we discovered that guiding the creation of a second–order context vector by selecting term pairs from biomedical text based on their semantic similarity led to improved levels of correlation with human judgment.
We also explored using a threshold cutoff to include only those term pairs that obtained a sufficiently large level of similarity. We found that eliminating less similar pairs improved the overall results (to a point). In the future, we plan to explore metrics to automatically determine the threshold cutoff appropriate for a given dataset and measure. We also plan to explore additional features that can be integrated with a second–order vector measure that will reduce the noise but still provide sufficient information to quantify relatedness. We are particularly interested in approaches that learn word, phrase, and sentence embeddings from structured corpora such as literature BIBREF23 and dictionary entries BIBREF24 . Such embeddings could be integrated into a second–order vector or be used on their own.
Finally, we compared our proposed method to other distributional approaches, focusing on those that used word embeddings. Our results showed that integrating semantic similarity measures into second–order co–occurrence vectors obtains the same or higher correlation with human judgments as do various different word embedding approaches. However, a direct comparison was not possible due to variations in the subsets of the UMNSRS evaluation dataset used. In the future, we would not only like to conduct a direct comparison but also explore integrating semantic similarity into various kinds of word embeddings by training on pair–wise values of semantic similarity as well as co–occurrence statistics.
Introduction
Machine Reading Comprehension (MRC), as the name suggests, requires a machine to read a passage and answer its relevant questions. Since the answer to each question is supposed to stem from the corresponding passage, a common MRC solution is to develop a neural-network-based MRC model that predicts an answer span (i.e. the answer start position and the answer end position) from the passage of each given passage-question pair. To facilitate the explorations and innovations in this area, many MRC datasets have been established, such as SQuAD BIBREF0 , MS MARCO BIBREF1 , and TriviaQA BIBREF2 . Consequently, many pioneering MRC models have been proposed, such as BiDAF BIBREF3 , R-NET BIBREF4 , and QANet BIBREF5 . According to the leader board of SQuAD, the state-of-the-art MRC models have achieved the same performance as human beings. However, does this imply that they have possessed the same reading comprehension ability as human beings?
OF COURSE NOT. There is a huge gap between MRC models and human beings, which is mainly reflected in the hunger for data and the robustness to noise. On the one hand, developing MRC models requires a large amount of training examples (i.e. the passage-question pairs labeled with answer spans), while human beings can achieve good performance on evaluation examples (i.e. the passage-question pairs to address) without training examples. On the other hand, BIBREF6 revealed that intentionally injected noise (e.g. misleading sentences) in evaluation examples causes the performance of MRC models to drop significantly, while human beings are far less likely to suffer from this. The reason for these phenomena, we believe, is that MRC models can only utilize the knowledge contained in each given passage-question pair, but in addition to this, human beings can also utilize general knowledge. A typical category of general knowledge is inter-word semantic connections. As shown in Table TABREF1 , such general knowledge is essential to the reading comprehension ability of human beings.
A promising strategy to bridge the gap mentioned above is to integrate the neural networks of MRC models with the general knowledge of human beings. To this end, it is necessary to solve two problems: extracting general knowledge from passage-question pairs and utilizing the extracted general knowledge in the prediction of answer spans. The first problem can be solved with knowledge bases, which store general knowledge in structured forms. A broad variety of knowledge bases are available, such as WordNet BIBREF7 storing semantic knowledge, ConceptNet BIBREF8 storing commonsense knowledge, and Freebase BIBREF9 storing factoid knowledge. In this paper, we limit the scope of general knowledge to inter-word semantic connections, and thus use WordNet as our knowledge base. The existing way to solve the second problem is to encode general knowledge in vector space so that the encoding results can be used to enhance the lexical or contextual representations of words BIBREF10 , BIBREF11 . However, this is an implicit way to utilize general knowledge, since in this way we can neither understand nor control the functioning of general knowledge. In this paper, we discard the existing implicit way and instead explore an explicit (i.e. understandable and controllable) way to utilize general knowledge.
The contribution of this paper is two-fold. On the one hand, we propose a data enrichment method, which uses WordNet to extract inter-word semantic connections as general knowledge from each given passage-question pair. On the other hand, we propose an end-to-end MRC model named as Knowledge Aided Reader (KAR), which explicitly uses the above extracted general knowledge to assist its attention mechanisms. Based on the data enrichment method, KAR is comparable in performance with the state-of-the-art MRC models, and significantly more robust to noise than them. When only a subset ( INLINEFORM0 – INLINEFORM1 ) of the training examples are available, KAR outperforms the state-of-the-art MRC models by a large margin, and is still reasonably robust to noise.
Data Enrichment Method
In this section, we elaborate a WordNet-based data enrichment method, which is aimed at extracting inter-word semantic connections from each passage-question pair in our MRC dataset. The extraction is performed in a controllable manner, and the extracted results are provided as general knowledge to our MRC model.
Semantic Relation Chain
WordNet is a lexical database of English, where words are organized into synsets according to their senses. A synset is a set of words expressing the same sense so that a word having multiple senses belongs to multiple synsets, with each synset corresponding to a sense. Synsets are further related to each other through semantic relations. According to the WordNet interface provided by NLTK BIBREF12 , there are totally sixteen types of semantic relations (e.g. hypernyms, hyponyms, holonyms, meronyms, attributes, etc.). Based on synset and semantic relation, we define a new concept: semantic relation chain. A semantic relation chain is a concatenated sequence of semantic relations, which links a synset to another synset. For example, the synset “keratin.n.01” is related to the synset “feather.n.01” through the semantic relation “substance holonym”, the synset “feather.n.01” is related to the synset “bird.n.01” through the semantic relation “part holonym”, and the synset “bird.n.01” is related to the synset “parrot.n.01” through the semantic relation “hyponym”, thus “substance holonym INLINEFORM0 part holonym INLINEFORM1 hyponym” is a semantic relation chain, which links the synset “keratin.n.01” to the synset “parrot.n.01”. We name each semantic relation in a semantic relation chain as a hop, therefore the above semantic relation chain is a 3-hop chain. By the way, each single semantic relation is equivalent to a 1-hop chain.
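The keratin-to-parrot chain above can be traced directly with NLTK's WordNet interface (a small illustration; it requires the WordNet data to be downloaded):

from nltk.corpus import wordnet as wn   # run nltk.download('wordnet') first if needed

keratin = wn.synset("keratin.n.01")
feather = wn.synset("feather.n.01")
bird = wn.synset("bird.n.01")
parrot = wn.synset("parrot.n.01")

print(feather in keratin.substance_holonyms())   # hop 1: substance holonym
print(bird in feather.part_holonyms())           # hop 2: part holonym
print(parrot in bird.hyponyms())                 # hop 3: hyponym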
Inter-word Semantic Connection
The key problem in the data enrichment method is determining whether a word is semantically connected to another word. If so, we say that there exists an inter-word semantic connection between them. To solve this problem, we define another new concept: the extended synsets of a word. Given a word INLINEFORM0 , whose synsets are represented as a set INLINEFORM1 , we use another set INLINEFORM2 to represent its extended synsets, which includes all the synsets that are in INLINEFORM3 or that can be linked to from INLINEFORM4 through semantic relation chains. Theoretically, if there is no limitation on semantic relation chains, INLINEFORM5 will include all the synsets in WordNet, which is meaningless in most situations. Therefore, we use a hyper-parameter INLINEFORM6 to represent the permitted maximum hop count of semantic relation chains. That is to say, only the chains having no more than INLINEFORM7 hops can be used to construct INLINEFORM8 so that INLINEFORM9 becomes a function of INLINEFORM10 : INLINEFORM11 (if INLINEFORM12 , we will have INLINEFORM13 ). Based on the above statements, we formulate a heuristic rule for determining inter-word semantic connections: a word INLINEFORM14 is semantically connected to another word INLINEFORM15 if and only if INLINEFORM16 .
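A sketch of the extended-synset construction is shown below; the relation list is a representative subset of the relation types exposed by NLTK, and since the exact membership condition above is masked in this text, the connection test shown (the extended synsets of one word intersecting the synsets of the other) is only one plausible reading of it.

from nltk.corpus import wordnet as wn

RELATIONS = [
    "hypernyms", "hyponyms", "member_holonyms", "substance_holonyms",
    "part_holonyms", "member_meronyms", "substance_meronyms", "part_meronyms",
    "attributes", "entailments", "causes", "also_sees", "verb_groups", "similar_tos",
]

def neighbours(synset):
    for relation in RELATIONS:
        yield from getattr(synset, relation)()

def extended_synsets(word, max_hops):
    """All synsets reachable from word's synsets via chains of at most max_hops relations."""
    frontier = set(wn.synsets(word))
    extended = set(frontier)
    for _ in range(max_hops):
        frontier = {n for s in frontier for n in neighbours(s)} - extended
        extended |= frontier
    return extended

def semantically_connected(word_a, word_b, max_hops):
    # One plausible instantiation of the heuristic rule described above.
    return bool(extended_synsets(word_a, max_hops) & set(wn.synsets(word_b)))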
General Knowledge Extraction
Given a passage-question pair, the inter-word semantic connections that connect any word to any passage word are regarded as the general knowledge we need to extract. Considering the requirements of our MRC model, we only extract the positional information of such inter-word semantic connections. Specifically, for each word INLINEFORM0 , we extract a set INLINEFORM1 , which includes the positions of the passage words that INLINEFORM2 is semantically connected to (if INLINEFORM3 itself is a passage word, we will exclude its own position from INLINEFORM4 ). We can control the amount of the extracted results by setting the hyper-parameter INLINEFORM5 : if we set INLINEFORM6 to 0, inter-word semantic connections will only exist between synonyms; if we increase INLINEFORM7 , inter-word semantic connections will exist between more words. That is to say, by increasing INLINEFORM8 within a certain range, we can usually extract more inter-word semantic connections from a passage-question pair, and thus can provide the MRC model with more general knowledge. However, due to the complexity and diversity of natural languages, only a part of the extracted results can serve as useful general knowledge, while the rest of them are useless for the prediction of answer spans, and the proportion of the useless part always rises when INLINEFORM9 is set larger. Therefore we set INLINEFORM10 through cross validation (i.e. according to the performance of the MRC model on the development examples).
Knowledge Aided Reader
In this section, we elaborate our MRC model: Knowledge Aided Reader (KAR). The key components of most existing MRC models are their attention mechanisms BIBREF13 , which are aimed at fusing the associated representations of each given passage-question pair. These attention mechanisms generally fall into two categories: the first one, which we name as mutual attention, is aimed at fusing the question representations into the passage representations so as to obtain the question-aware passage representations; the second one, which we name as self attention, is aimed at fusing the question-aware passage representations into themselves so as to obtain the final passage representations. Although KAR is equipped with both categories, its most remarkable feature is that it explicitly uses the general knowledge extracted by the data enrichment method to assist its attention mechanisms. Therefore we separately name the attention mechanisms of KAR as knowledge aided mutual attention and knowledge aided self attention.
Task Definition
Given a passage INLINEFORM0 and a relevant question INLINEFORM1 , the task is to predict an answer span INLINEFORM2 , where INLINEFORM3 , so that the resulting subsequence INLINEFORM4 from INLINEFORM5 is an answer to INLINEFORM6 .
Overall Architecture
As shown in Figure FIGREF7 , KAR is an end-to-end MRC model consisting of five layers:
Lexicon Embedding Layer. This layer maps the words to the lexicon embeddings. The lexicon embedding of each word is composed of its word embedding and character embedding. For each word, we use the pre-trained GloVe BIBREF14 word vector as its word embedding, and obtain its character embedding with a Convolutional Neural Network (CNN) BIBREF15 . For both the passage and the question, we pass the concatenation of the word embeddings and the character embeddings through a shared dense layer with ReLU activation, whose output dimensionality is INLINEFORM0 . Therefore we obtain the passage lexicon embeddings INLINEFORM1 and the question lexicon embeddings INLINEFORM2 .
Context Embedding Layer. This layer maps the lexicon embeddings to the context embeddings. For both the passage and the question, we process the lexicon embeddings (i.e. INLINEFORM0 for the passage and INLINEFORM1 for the question) with a shared bidirectional LSTM (BiLSTM) BIBREF16 , whose hidden state dimensionality is INLINEFORM2 . By concatenating the forward LSTM outputs and the backward LSTM outputs, we obtain the passage context embeddings INLINEFORM3 and the question context embeddings INLINEFORM4 .
Coarse Memory Layer. This layer maps the context embeddings to the coarse memories. First we use knowledge aided mutual attention (introduced later) to fuse INLINEFORM0 into INLINEFORM1 , the outputs of which are represented as INLINEFORM2 . Then we process INLINEFORM3 with a BiLSTM, whose hidden state dimensionality is INLINEFORM4 . By concatenating the forward LSTM outputs and the backward LSTM outputs, we obtain the coarse memories INLINEFORM5 , which are the question-aware passage representations.
Refined Memory Layer. This layer maps the coarse memories to the refined memories. First we use knowledge aided self attention (introduced later) to fuse INLINEFORM0 into themselves, the outputs of which are represented as INLINEFORM1 . Then we process INLINEFORM2 with a BiLSTM, whose hidden state dimensionality is INLINEFORM3 . By concatenating the forward LSTM outputs and the backward LSTM outputs, we obtain the refined memories INLINEFORM4 , which are the final passage representations.
Answer Span Prediction Layer. This layer predicts the answer start position and the answer end position based on the above layers. First we obtain the answer start position distribution INLINEFORM0 : INLINEFORM1 INLINEFORM2
where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters; INLINEFORM3 represents the refined memory of each passage word INLINEFORM4 (i.e. the INLINEFORM5 -th column in INLINEFORM6 ); INLINEFORM7 represents the question summary obtained by performing an attention pooling over INLINEFORM8 . Then we obtain the answer end position distribution INLINEFORM9 : INLINEFORM10 INLINEFORM11
where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters; INLINEFORM3 represents vector concatenation. Finally we construct an answer span prediction matrix INLINEFORM4 , where INLINEFORM5 represents the upper triangular matrix of a matrix INLINEFORM6 . Therefore, for the training, we minimize INLINEFORM7 on each training example whose labeled answer span is INLINEFORM8 ; for the inference, we separately take the row index and column index of the maximum element in INLINEFORM9 as INLINEFORM10 and INLINEFORM11 .
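A PyTorch sketch of the span-matrix construction used for training and inference, assuming the start and end distributions are 1-D tensors over passage positions:

import torch

def span_loss(start_probs, end_probs, answer_start, answer_end):
    """Negative log-probability of the labeled span under the upper-triangular span matrix."""
    span_matrix = torch.triu(torch.outer(start_probs, end_probs))
    return -torch.log(span_matrix[answer_start, answer_end] + 1e-12)

def predict_span(start_probs, end_probs):
    """Row and column indexes of the maximum element give the predicted (start, end)."""
    span_matrix = torch.triu(torch.outer(start_probs, end_probs))
    flat_index = int(torch.argmax(span_matrix))
    n = span_matrix.size(1)
    return flat_index // n, flat_index % n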
Knowledge Aided Mutual Attention
As a part of the coarse memory layer, knowledge aided mutual attention is aimed at fusing the question context embeddings INLINEFORM0 into the passage context embeddings INLINEFORM1 , where the key problem is to calculate the similarity between each passage context embedding INLINEFORM2 (i.e. the INLINEFORM3 -th column in INLINEFORM4 ) and each question context embedding INLINEFORM5 (i.e. the INLINEFORM6 -th column in INLINEFORM7 ). To solve this problem, BIBREF3 proposed a similarity function: INLINEFORM8
where INLINEFORM0 is a trainable parameter; INLINEFORM1 represents element-wise multiplication. This similarity function has also been adopted by several other works BIBREF17 , BIBREF5 . However, since context embeddings contain high-level information, we believe that introducing the pre-extracted general knowledge into the calculation of such similarities will make the results more reasonable. Therefore we modify the above similarity function to the following form: INLINEFORM2
where INLINEFORM0 represents the enhanced context embedding of a word INLINEFORM1 . We use the pre-extracted general knowledge to construct the enhanced context embeddings. Specifically, for each word INLINEFORM2 , whose context embedding is INLINEFORM3 , to construct its enhanced context embedding INLINEFORM4 , first recall that we have extracted a set INLINEFORM5 , which includes the positions of the passage words that INLINEFORM6 is semantically connected to, thus by gathering the columns in INLINEFORM7 whose indexes are given by INLINEFORM8 , we obtain the matching context embeddings INLINEFORM9 . Then by constructing a INLINEFORM10 -attended summary of INLINEFORM11 , we obtain the matching vector INLINEFORM12 (if INLINEFORM13 , which makes INLINEFORM14 , we will set INLINEFORM15 ): INLINEFORM16 INLINEFORM17
where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters; INLINEFORM3 represents the INLINEFORM4 -th column in INLINEFORM5 . Finally we pass the concatenation of INLINEFORM6 and INLINEFORM7 through a dense layer with ReLU activation, whose output dimensionality is INLINEFORM8 . Therefore we obtain the enhanced context embedding INLINEFORM9 .
Based on the modified similarity function and the enhanced context embeddings, to perform knowledge aided mutual attention, first we construct a knowledge aided similarity matrix INLINEFORM0 , where each element INLINEFORM1 . Then following BIBREF5 , we construct the passage-attended question summaries INLINEFORM2 and the question-attended passage summaries INLINEFORM3 : INLINEFORM4 INLINEFORM5
where INLINEFORM0 represents softmax along the row dimension and INLINEFORM1 along the column dimension. Finally following BIBREF17 , we pass the concatenation of INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 through a dense layer with ReLU activation, whose output dimensionality is INLINEFORM6 . Therefore we obtain the outputs INLINEFORM7 .
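One plausible reading of these two steps is sketched below in NumPy: the knowledge aided similarity matrix is computed from the enhanced context embeddings with a trilinear scoring function, and the two attention directions produce the passage-attended question summaries and the question-attended passage summaries. The construction of the enhanced embeddings themselves and the final dense fusion layer are omitted, and all variable names are assumptions rather than the paper's notation.

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    ex = np.exp(x)
    return ex / ex.sum(axis=axis, keepdims=True)

def knowledge_aided_similarity(e_p, e_q, w):
    """e_p: (n, d) enhanced passage embeddings; e_q: (m, d) enhanced question
    embeddings; w: (3d,) trainable weight of the trilinear similarity."""
    n, m = e_p.shape[0], e_q.shape[0]
    A = np.empty((n, m))
    for i in range(n):
        for j in range(m):
            # modified similarity: w^T [e_p_i ; e_q_j ; e_p_i * e_q_j]
            A[i, j] = w @ np.concatenate([e_p[i], e_q[j], e_p[i] * e_q[j]])
    return A

def mutual_attention(c_p, c_q, A):
    # one question summary per passage word (row-wise softmax)
    R_q = softmax(A, axis=1) @ c_q            # (n, d)
    # one passage summary per question word (column-wise softmax)
    R_p = softmax(A, axis=0).T @ c_p          # (m, d)
    return R_q, R_p
```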
Knowledge Aided Self Attention
As a part of the refined memory layer, knowledge aided self attention is aimed at fusing the coarse memories INLINEFORM0 into themselves. If we simply follow the self attentions of other works BIBREF4 , BIBREF18 , BIBREF19 , BIBREF17 , then for each passage word INLINEFORM1 , we should fuse its coarse memory INLINEFORM2 (i.e. the INLINEFORM3 -th column in INLINEFORM4 ) with the coarse memories of all the other passage words. However, we believe that this is both unnecessary and distracting, since each passage word has nothing to do with many of the other passage words. Thus we use the pre-extracted general knowledge to guarantee that the fusion of coarse memories for each passage word will only involve a precise subset of the other passage words. Specifically, for each passage word INLINEFORM5 , whose coarse memory is INLINEFORM6 , to perform the fusion of coarse memories, first recall that we have extracted a set INLINEFORM7 , which includes the positions of the other passage words that INLINEFORM8 is semantically connected to, thus by gathering the columns in INLINEFORM9 whose indexes are given by INLINEFORM10 , we obtain the matching coarse memories INLINEFORM11 . Then by constructing a INLINEFORM12 -attended summary of INLINEFORM13 , we obtain the matching vector INLINEFORM14 (if INLINEFORM15 , which makes INLINEFORM16 , we will set INLINEFORM17 ): INLINEFORM18 INLINEFORM19
where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters. Finally we pass the concatenation of INLINEFORM3 and INLINEFORM4 through a dense layer with ReLU activation, whose output dimensionality is INLINEFORM5 . Therefore we obtain the fusion result INLINEFORM6 , and further the outputs INLINEFORM7 .
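A minimal NumPy sketch of this restricted self attention is given below: each passage word attends only over the coarse memories of the words it is semantically connected to, with an empty connection set mapped to a zero matching vector. The attention scoring is simplified to a single weight vector, and the dense fusion layer and the subsequent BiLSTM are omitted; all names are assumptions.

```python
import numpy as np

def ka_self_attention(G, knowledge_sets, v):
    """G: (n, d) coarse memories; knowledge_sets[i]: passage positions that
    word i is semantically connected to; v: (d,) attention weight (assumed)."""
    n, d = G.shape
    outputs = []
    for i in range(n):
        idx = knowledge_sets[i]
        if len(idx) == 0:
            matching = np.zeros(d)             # no connections -> zero matching vector
        else:
            Z = G[idx]                          # (k, d) matching coarse memories
            scores = Z @ v                      # simplified attention scores
            alpha = np.exp(scores - scores.max())
            alpha /= alpha.sum()
            matching = alpha @ Z                # attended summary for word i
        # concatenation of the coarse memory and its matching vector; a dense
        # layer with ReLU and a BiLSTM (omitted) would yield the refined memories
        outputs.append(np.concatenate([G[i], matching]))
    return np.stack(outputs)                    # (n, 2d) before projection
```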
Related Works
Attention Mechanisms. Besides those mentioned above, other interesting attention mechanisms include performing multi-round alignment to avoid the problems of attention redundancy and attention deficiency BIBREF20 , and using mutual attention as a skip-connector to densely connect pairwise layers BIBREF21 .
Data Augmentation. It has been shown that properly augmenting training examples can improve the performance of MRC models. For example, BIBREF22 trained a generative model to generate questions based on unlabeled text, which substantially boosted their performance; BIBREF5 trained a back-and-forth translation model to paraphrase training examples, which brought them a significant performance gain.
Multi-step Reasoning. Inspired by the fact that human beings are capable of understanding complex documents by reading them over and over again, multi-step reasoning was proposed to better deal with difficult MRC tasks. For example, BIBREF23 used reinforcement learning to dynamically determine the number of reasoning steps; BIBREF19 fixed the number of reasoning steps, but used stochastic dropout in the output layer to avoid step bias.
Linguistic Embeddings. It is both easy and effective to incorporate linguistic embeddings into the input layer of MRC models. For example, BIBREF24 and BIBREF19 used POS embeddings and NER embeddings to construct their input embeddings; BIBREF25 used structural embeddings based on parsing trees to construct their input embeddings.
Transfer Learning. Several recent breakthroughs in MRC benefit from feature-based transfer learning BIBREF26 , BIBREF27 and fine-tuning-based transfer learning BIBREF28 , BIBREF29 , which are based on certain word-level or sentence-level models pre-trained on large external corpora in certain supervised or unsupervised manners.
Experimental Settings
MRC Dataset. The MRC dataset used in this paper is SQuAD 1.1, which contains over INLINEFORM0 passage-question pairs and has been randomly partitioned into three parts: a training set ( INLINEFORM1 ), a development set ( INLINEFORM2 ), and a test set ( INLINEFORM3 ). Besides, we also use two of its adversarial sets, namely AddSent and AddOneSent BIBREF6 , to evaluate the robustness to noise of MRC models. The passages in the adversarial sets contain misleading sentences, which are aimed at distracting MRC models. Specifically, each passage in AddSent contains several sentences that are similar to the question but not contradictory to the answer, while each passage in AddOneSent contains a human-approved random sentence that may be unrelated to the passage.
Implementation Details. We tokenize the MRC dataset with spaCy 2.0.13 BIBREF30 , manipulate WordNet 3.0 with NLTK 3.3, and implement KAR with TensorFlow 1.11.0 BIBREF31 . For the data enrichment method, we set the hyper-parameter INLINEFORM0 to 3. For the dense layers and the BiLSTMs, we set the dimensionality unit INLINEFORM1 to 600. For model optimization, we apply the Adam BIBREF32 optimizer with a learning rate of INLINEFORM2 and a mini-batch size of 32. For model evaluation, we use Exact Match (EM) and F1 score as evaluation metrics. To avoid overfitting, we apply dropout BIBREF33 to the dense layers and the BiLSTMs with a dropout rate of INLINEFORM3 . To boost the performance, we apply exponential moving average with a decay rate of INLINEFORM4 .
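For readers unfamiliar with the exponential moving average trick mentioned above, a minimal sketch follows; the decay value and parameter names are placeholders, not the paper's settings, and the parameters are assumed to be NumPy arrays.

```python
# Keep a shadow copy of the trainable parameters that is updated after every
# training step; the shadow copy is typically used at evaluation time.
def update_ema(ema_params, params, decay=0.999):
    for name, value in params.items():
        if name not in ema_params:
            ema_params[name] = value.copy()
        else:
            ema_params[name] = decay * ema_params[name] + (1.0 - decay) * value
    return ema_params
```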
Model Comparison in both Performance and the Robustness to Noise
We compare KAR with other MRC models in both performance and robustness to noise. Specifically, we evaluate the performance of KAR not only on the development set and the test set, but also on the adversarial sets. As comparative objects, we only consider the single MRC models that rank in the top 20 on the SQuAD 1.1 leaderboard and have reported their performance on the adversarial sets. There are five such comparative objects in total, which can be considered representatives of the state-of-the-art MRC models. As shown in Table TABREF12 , on the development set and the test set, the performance of KAR is on par with that of the state-of-the-art MRC models; on the adversarial sets, KAR outperforms the state-of-the-art MRC models by a large margin. That is to say, KAR is comparable in performance with the state-of-the-art MRC models, and significantly more robust to noise.
To verify the effectiveness of general knowledge, we first study the relationship between the amount of general knowledge and the performance of KAR. As shown in Table TABREF13 , by increasing INLINEFORM0 from 0 to 5 in the data enrichment method, the amount of general knowledge rises monotonically, but the performance of KAR first rises until INLINEFORM1 reaches 3 and then drops down. Then we conduct an ablation study by replacing the knowledge aided attention mechanisms with the mutual attention proposed by BIBREF3 and the self attention proposed by BIBREF4 separately, and find that the F1 score of KAR drops by INLINEFORM2 on the development set, INLINEFORM3 on AddSent, and INLINEFORM4 on AddOneSent. Finally we find that after only one epoch of training, KAR already achieves an EM of INLINEFORM5 and an F1 score of INLINEFORM6 on the development set, which is even better than the final performance of several strong baselines, such as DCN (EM / F1: INLINEFORM7 / INLINEFORM8 ) BIBREF36 and BiDAF (EM / F1: INLINEFORM9 / INLINEFORM10 ) BIBREF3 . The above empirical findings imply that general knowledge indeed plays an effective role in KAR.
To demonstrate the advantage of our explicit way to utilize general knowledge over the existing implicit way, we compare the performance of KAR with that reported by BIBREF10 , which used an encoding-based method to utilize the general knowledge dynamically retrieved from Wikipedia and ConceptNet. Since their best model only achieved an EM of INLINEFORM0 and an F1 score of INLINEFORM1 on the development set, which is much lower than the performance of KAR, we have good reason to believe that our explicit way works better than the existing implicit way.
Model Comparison in the Hunger for Data
We compare KAR with other MRC models in the hunger for data. Specifically, instead of using all the training examples, we produce several training subsets (i.e. subsets of the training examples) so as to study the relationship between the proportion of the available training examples and the performance. We produce each training subset by sampling a specific number of questions from all the questions relevant to each passage. By separately sampling 1, 2, 3, and 4 questions on each passage, we obtain four training subsets, which separately contain INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 of the training examples. As shown in Figure FIGREF15 , with KAR, SAN (re-implemented), and QANet (re-implemented without data augmentation) trained on these training subsets, we evaluate their performance on the development set, and find that KAR performs much better than SAN and QANet. As shown in Figure FIGREF16 and Figure FIGREF17 , with the above KAR, SAN, and QANet trained on the same training subsets, we also evaluate their performance on the adversarial sets, and still find that KAR performs much better than SAN and QANet. That is to say, when only a subset of the training examples are available, KAR outperforms the state-of-the-art MRC models by a large margin, and is still reasonably robust to noise.
Analysis
According to the experimental results, KAR is not only comparable in performance with the state-of-the-art MRC models, but also superior to them in terms of both the hunger for data and the robustness to noise. The reasons for these achievements, we believe, are as follows:
Conclusion
In this paper, we innovatively integrate the neural networks of MRC models with the general knowledge of human beings. Specifically, inter-word semantic connections are first extracted from each given passage-question pair by a WordNet-based data enrichment method, and then provided as general knowledge to an end-to-end MRC model named as Knowledge Aided Reader (KAR), which explicitly uses the general knowledge to assist its attention mechanisms. Experimental results show that KAR is not only comparable in performance with the state-of-the-art MRC models, but also superior to them in terms of both the hunger for data and the robustness to noise. In the future, we plan to use some larger knowledge bases, such as ConceptNet and Freebase, to improve the quality and scope of the general knowledge.
Acknowledgments
This work is partially supported by a research donation from iFLYTEK Co., Ltd., Hefei, China, and a discovery grant from Natural Sciences and Engineering Research Council (NSERC) of Canada.
Question: Do the authors hypothesize that humans' robustness to noise is due to their general knowledge?

Answer: Yes
Introduction
Assembling training corpora of annotated natural language examples in specialized domains such as biomedicine poses considerable challenges. Experts with the requisite domain knowledge to perform high-quality annotation tend to be expensive, while lay annotators may not have the necessary knowledge to provide high-quality annotations. A practical approach for collecting a sufficiently large corpus would be to use crowdsourcing platforms like Amazon Mechanical Turk (MTurk). However, crowd workers in general are likely to provide noisy annotations BIBREF0 , BIBREF1 , BIBREF2 , an issue exacerbated by the technical nature of specialized content. Some of this noise may reflect worker quality and can be modeled BIBREF0 , BIBREF1 , BIBREF3 , BIBREF4 , but for some instances lay people may simply lack the domain knowledge to provide useful annotation.
In this paper we report experiments on the EBM-NLP corpus comprising crowdsourced annotations of medical literature BIBREF5 . We operationalize the concept of annotation difficulty and show how it can be exploited during training to improve information extraction models. We then obtain expert annotations for the abstracts predicted to be most difficult, as well as for a similar number of randomly selected abstracts. The annotation of highly specialized data and the use of lay and expert annotators allow us to examine the following key questions related to lay and expert annotations in specialized domains:
Can we predict item difficulty? We define a training instance as difficult if a lay annotator or an automated model disagree on its labeling. We show that difficulty can be predicted, and that it is distinct from inter-annotator agreement. Further, such predictions can be used during training to improve information extraction models.
Are there systematic differences between expert and lay annotations? We observe decidedly lower agreement between lay workers as compared to domain experts. Lay annotations have high precision but low recall with respect to expert annotations in the new data that we collected. More generally, we expect lay annotations to be of lower quality, which may translate to lower precision, recall, or both, compared to expert annotations. Can one rely solely on lay annotations? Reasonable models can be trained using lay annotations alone, but similar performance can be achieved using markedly less expert data. This suggests that the optimal ratio of expert to crowd annotations for specialized tasks will depend on the cost and availability of domain experts. Expert annotations are preferable whenever their collection is practical. But in real-world settings, a combination of expert and lay annotations is better than using lay data alone.
Does it matter what data is annotated by experts? We demonstrate that a system trained on combined data achieves better predictive performance when experts annotate difficult examples rather than instances selected uniformly at random.
Our contributions in this work are summarized as follows. We define a task difficulty prediction task and show how this is related to, but distinct from, inter-worker agreement. We introduce a new model for difficulty prediction combining learned representations induced via a pre-trained `universal' sentence encoder BIBREF6 , and a sentence encoder learned from scratch for this task. We show that predicting annotation difficulty can be used to improve the task routing and model performance for a biomedical information extraction task. Our results open up a new direction for ensuring corpus quality. We believe that item difficulty prediction will likely be useful in other, non-specialized tasks as well, and that the most effective data collection in specialized domains requires research addressing the fundamental questions we examine here.
Related Work
Crowdsourcing annotation is now a well-studied problem BIBREF7 , BIBREF0 , BIBREF1 , BIBREF2 . Due to the noise inherent in such annotations, there have also been considerable efforts to develop aggregation models that minimize noise BIBREF0 , BIBREF1 , BIBREF3 , BIBREF4 .
There are also several surveys of crowdsourcing in biomedicine specifically BIBREF8 , BIBREF9 , BIBREF10 . Some work in this space has contrasted model performance achieved using expert vs. crowd annotated training data BIBREF11 , BIBREF12 , BIBREF13 . Dumitrache et al. (2018) concluded that performance is similar under these supervision types, finding no clear advantage from using expert annotators. This differs from our findings, perhaps owing to differences in design. The experts we used already hold advanced medical degrees, for instance, while those in prior work were medical students. Furthermore, the task considered here would appear to be of greater difficulty: even a system trained on $\sim $ 5k instances performs reasonably, but far from perfect. By contrast, in some of the prior work where experts and crowd annotations were deemed equivalent, a classifier trained on 300 examples can achieve very high accuracy BIBREF12 .
More relevant to this paper, prior work has investigated methods for `task routing' in active learning scenarios in which supervision is provided by heterogeneous labelers with varying levels of expertise BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF14 . The related question of whether effort is better spent collecting additional annotations for already labeled (but potentially noisily so) examples or novel instances has also been addressed BIBREF18 . What distinguishes the work here is our focus on providing an operational definition of instance difficulty, showing that this can be predicted, and then using this to inform task routing.
Application Domain
Our specific application concerns annotating abstracts of articles that describe the conduct and results of randomized controlled trials (RCTs). Experimentation in this domain has become easy with the recent release of the EBM-NLP BIBREF5 corpus, which includes a reasonably large training dataset annotated via crowdsourcing, and a modest test set labeled by individuals with advanced medical training. More specifically, the training set comprises 4,741 medical article abstracts with crowdsourced annotations indicating snippets (sequences) that describe the Participants (p), Interventions (i), and Outcome (o) elements of the respective RCT, and the test set is composed of 191 abstracts with p, i, o sequence annotations from three medical experts.
Table 1 shows examples of difficult and easy sentences according to our definition of difficulty. The underlined text demarcates the (consensus) reference label provided by domain experts. In the difficult examples, crowd workers marked text distinct from these reference annotations, whereas in the easy cases they reproduced them with reasonable fidelity. The difficult sentences usually exhibit complicated structure and feature jargon.
An abstract may contain some `easy' and some `difficult' sentences. We thus perform our analysis at the sentence level. We split abstracts into sentences using spaCy. We excluded sentences that comprise fewer than two tokens, as these are likely an artifact of errors in sentence splitting. In total, this resulted in 57,505 and 2,428 sentences in the train and test set abstracts, respectively.
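The sentence-splitting step can be reproduced with a few lines of spaCy; the pipeline name below is an assumption (any English pipeline with a sentence segmenter would do).

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def split_sentences(abstract_text):
    doc = nlp(abstract_text)
    # drop sentences with fewer than two tokens (likely splitting artifacts)
    return [sent.text for sent in doc.sents if len(sent) >= 2]
```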
Quantifying Task Difficulty
The test set includes annotations from both crowd workers and domain experts. We treat the latter as ground truth and then define the difficulty of sentences in terms of the observed agreement between expert and lay annotators. Formally, for annotation task $t$ and instance $i$ :
$$\text{Difficulty}_{ti} = \frac{\sum _{j=1}^n{f(\text{label}_{ij}, y_i)}}{n}$$ (Eq. 3)
where $f$ is a scoring function that measures the quality of the label from worker $j$ for sentence $i$ , as compared to a ground truth annotation, $y_i$ . The difficulty score of sentence $i$ is taken as an average over the scores for all $n$ lay workers. We use Spearman's correlation coefficient as a scoring function. Specifically, for each sentence we create two vectors comprising counts of how many times each token was annotated by crowd and expert workers, respectively, and calculate the correlation between these. Sentences with no labels are treated as maximally easy; those with only either crowd worker or expert label(s) are assumed maximally difficult.
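A hedged sketch of the per-sentence scoring is given below: two per-token annotation-count vectors (crowd vs. expert) are compared with Spearman's rho. How this quality score is converted into a [0, 1] difficulty value (for example, by taking its complement and normalizing) is an assumption about details not spelled out here.

```python
import numpy as np
from scipy.stats import spearmanr

def sentence_agreement(crowd_counts, expert_counts):
    """Per-token annotation counts from crowd and expert workers for one sentence."""
    rho, _ = spearmanr(crowd_counts, expert_counts)
    return 0.0 if np.isnan(rho) else float(rho)

# toy example: a 6-token sentence
crowd = np.array([0, 3, 3, 1, 0, 0])     # how many crowd workers marked each token
expert = np.array([0, 2, 2, 2, 0, 0])    # how many experts marked each token
print(sentence_agreement(crowd, expert))
```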
The training set contains only crowdsourced annotations. To label the training data, we use a setting similar to 10-fold cross-validation. We iteratively retrain the LSTM-CRF-Pattern sequence tagger of Patel et al. (2018) on 9 folds of the training data and use that trained model to predict labels for the 10th. In this way we obtain predictions on the full training set. We then use predicted spans as proxy `ground truth' annotations to calculate the difficulty score of sentences as described above; we normalize these to the [ $0, 1$ ] interval. We validate this approximation by comparing the proxy scores against reference scores over the test set: the Pearson's correlation coefficients are 0.57 for Population, 0.71 for Intervention and 0.68 for Outcome.
There exist many sentences that contain neither manual nor predicted annotations. We treat these as maximally easy sentences (with difficulty scores of 0). Such sentences comprise 51%, 42% and 36% for Population, Interventions and Outcomes data respectively, indicating that it is easier to identify sentences that have no Population spans, but harder to identify sentences that have no Interventions or Outcomes spans. This is intuitive as descriptions of the latter two tend to be more technical and dense with medical jargon.
We show the distribution of the automatically labeled scores for sentences that do contain spans in Figure 1 . The mean of the Population (p) sentence scores is significantly lower than that for other types of sentences (i and o), again indicating that they are easier on average to annotate. This aligns with a previous finding that annotating Interventions and Outcomes is more difficult than annotating Participants BIBREF5 .
Many sentences contain spans tagged by the LSTM-CRF-Pattern model, but missed by all crowd workers, resulting in a maximally difficult score (1). Inspection of such sentences revealed that some are truly difficult examples, but others are tagging model errors. In either case, such sentences have confused workers and/or the model, and so we retain them all as `difficult' sentences.
Content describing the p, i and o, respectively, is quite different. As such, one sentence usually contains (at most) only one of these three content types. We thus treat difficulty prediction for the respective label types as separate tasks.
Difficulty is not Worker Agreement
Our definition of difficulty is derived from agreement between expert and crowd annotations for the test data, and agreement between a predictive model and crowd annotations in the training data. It is reasonable to ask if these measures are related to inter-annotator agreement, a metric often used in language technology research to identify ambiguous or difficult items. Here we explicitly verify that our definition of difficulty only weakly correlates with inter-annotator agreement.
We calculate inter-worker agreement between crowd and expert annotators using Spearman's correlation coefficient. As shown in Table 2 , average agreement between domain experts are considerably higher than agreements between crowd workers for all three label types. This is a clear indication that the crowd annotations are noisier.
Furthermore, we compare the correlation between inter-annotator agreement and difficulty scores in the training data. Given that the majority of sentences do not contain a PICO span, we only include in these calculations those that contain a reference label. Pearson's r are 0.34, 0.30 and 0.31 for p, i and o, respectively, confirming that inter-worker agreement and our proposed difficulty score are quite distinct.
Predicting Annotation Difficulty
We treat difficulty prediction as a regression problem, and propose and evaluate neural model variants for the task. We first train RNN BIBREF19 and CNN BIBREF20 models.
We also use the universal sentence encoder (USE) BIBREF6 to induce sentence representations, and train a model using these as features. Following BIBREF6 , we then experiment with an ensemble model that combines the `universal' and task-specific representations to predict annotation difficulty. We expect these universal embeddings to capture general, high-level semantics, and the task specific representations to capture more granular information. Figure 2 depicts the model architecture. Sentences are fed into both the universal sentence encoder and, separately, a task specific neural encoder, yielding two representations. We concatenate these and pass the combined vector to the regression layer.
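A possible shape of the ensemble regressor is sketched below. It is an illustrative PyTorch sketch in which the universal sentence encoder output is treated as a precomputed feature vector; the choice of a bidirectional GRU for the task-specific encoder, the layer sizes, and the sigmoid output are assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn

class DifficultyRegressor(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=128, use_dim=512):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # task-specific encoder trained from scratch
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        # regression head over the concatenated representations
        self.regress = nn.Sequential(
            nn.Linear(2 * hidden + use_dim, 128), nn.ReLU(),
            nn.Dropout(0.2), nn.Linear(128, 1), nn.Sigmoid())

    def forward(self, token_ids, use_vector):
        _, h = self.encoder(self.emb(token_ids))        # h: (2, B, hidden)
        task_repr = torch.cat([h[0], h[1]], dim=-1)     # (B, 2 * hidden)
        combined = torch.cat([task_repr, use_vector], dim=-1)
        return self.regress(combined).squeeze(-1)       # difficulty score in [0, 1]
```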
Experimental Setup and Results
We trained models for each label type separately. Word embeddings were initialized to 300d GloVe vectors BIBREF21 trained on common crawl data; these are fine-tuned during training. We used the Adam optimizer BIBREF22 with learning rate and decay set to 0.001 and 0.99, respectively. We used batch sizes of 16.
We used the large version of the universal sentence encoder with a transformer BIBREF23 . We did not update the pretrained sentence encoder parameters during training. All hyperparamaters for all models (including hidden layers, hidden sizes, and dropout) were tuned using Vizier BIBREF24 via 10-fold cross validation on the training set maximizing for F1.
As a baseline, we also trained a linear Support-Vector Regression BIBREF25 model on $n$ -gram features ( $n$ ranges from 1 to 3).
Table 3 reports Pearson correlation coefficients between the predictions of each of the neural models and the ground truth difficulty scores. Rows 1-4 correspond to individual models, and row 5 reports the ensemble performance. Columns correspond to label type. All neural models outperform the baseline SVR model, with Pearson's correlation coefficients ranging from 0.550 to 0.622; the SVR baseline yields the lowest correlations.
The RNN model realizes the strongest performance among the stand-alone (non-ensemble) models, outperforming variants that exploit CNN and USE representations. Combining the RNN and USE further improves results. We hypothesize that this is due to complementary sentence information encoded in universal representations.
For all models, correlations for Intervention and Outcomes are higher than for Population, which is expected given the difficulty distributions in Figure 1 . In these, the sentences are more uniformly distributed, with a fair number of difficult and easier sentences. By contrast, in Population there are a greater number of easy sentences and considerably fewer difficult sentences, which makes the difficulty ranking task particularly challenging.
Better IE with Difficulty Prediction
We next present experiments in which we attempt to use the predicted difficulty during training to improve models for information extraction of descriptions of Population, Interventions and Outcomes from medical article abstracts. We investigate two uses: (1) simply removing the most difficult sentences from the training set, and, (2) re-weighting the most difficult sentences.
We again use LSTM-CRF-Pattern as the base model and experiment on the EBM-NLP corpus BIBREF5 . This is trained on either (1) the training set with difficult sentences removed, or (2) the full training set but with instances re-weighted in proportion to their predicted difficulty score. Following BIBREF5 , we use the Adam optimizer with learning rate of 0.001, decay 0.9, batch size 20 and dropout 0.5. We use pretrained 200d GloVe vectors BIBREF21 to initialize word embeddings, and use 100d hidden char representations. Each word is thus represented with 300 dimensions in total. The hidden size is 100 for the LSTM in the character representation component, and 200 for the LSTM in the information extraction component. We train for 15 epochs, saving parameters that achieve the best F1 score on a nested development set.
Removing Difficult Examples
We first evaluate changes in performance induced by training the sequence labeling model using less data by removing difficult sentences prior to training. The hypothesis here is that these difficult instances are likely to introduce more noise than signal. We used a cross-fold approach to predict sentence difficulties, training on 9/10ths of the data and scoring the remaining 1/10th at a time. We then sorted sentences by predicted difficulty scores, and experimented with removing increasing numbers of these (in order of difficulty) prior to training the LSTM-CRF-Pattern model.
Figure 3 shows the results achieved by the LSTM-CRF-Pattern model after discarding increasing amounts of the training data: the $x$ and $y$ axes correspond to the percentage of data removed and F1 scores, respectively. We contrast removing sentences predicted to be difficult with removing them (a) randomly (i.i.d.), and (b) in inverse order of predicted inter-annotator agreement. The agreement prediction model is trained in exactly the same way as the difficulty prediction model, simply replacing the difficulty score with annotation agreement. F1 scores actually improve (marginally) when we remove the most difficult sentences, up until we drop 4% of the data for Population and Interventions, and 6% for Outcomes. Removing training points i.i.d. at random degrades performance, as expected. Removing sentences in order of disagreement has a similar effect to removing them by difficulty score when small amounts of data are removed, but the F1 scores drop much faster when more data is removed. These findings indicate that sentences predicted to be difficult are indeed noisy, to the extent that they do not seem to provide the model useful signal.
Re-weighting by Difficulty
We showed above that removing a small number of the most difficult sentences does not harm, and in fact modestly improves, medical IE model performance. However, using the available data we are unable to test if this will be useful in practice, as we would need additional data to determine how many difficult sentences should be dropped.
We instead explore an alternative, practical means of exploiting difficulty predictions: we re-weight sentences during training inversely to their predicted difficulty. Formally, we weight sentence $i$ with difficulty scores above $\tau $ according to: $1-a\cdot (d_i-\tau )/(1-\tau )$ , where $d_i$ is the difficulty score for sentence $i$ , and $a$ is a parameter codifying the minimum weight value. We set $\tau $ to 0.8 so as to only re-weight sentences with difficulty in the top 20th percentile, and we set $a$ to 0.5. The re-weighting is equivalent to down-sampling the difficult sentences. LSTM-CRF-Pattern is our base model.
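The re-weighting rule above transcribes directly into code; the values of tau and a match those stated in the text, and the function name is illustrative.

```python
# Sentences whose predicted difficulty d exceeds tau are down-weighted
# linearly; all other sentences keep weight 1. At d = 1 the weight is 1 - a.
def sentence_weight(d, tau=0.8, a=0.5):
    if d <= tau:
        return 1.0
    return 1.0 - a * (d - tau) / (1.0 - tau)

assert sentence_weight(0.5) == 1.0
assert sentence_weight(1.0) == 0.5    # the most difficult sentences get the minimum weight
```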
Table 4 reports the precision, recall and F1 achieved both with and without sentence re-weighting. Re-weighting improves all metrics modestly but consistently. All F1 differences are statistically significant under a sign test ( $p<0.01$ ). The model with best precision is different for Patient, Intervention and Outcome labels. However re-weighting by difficulty does consistently yield the best recall for all three extraction types, with the most notable improvement for i and o, where recall improved by 10 percentage points. This performance increase translated to improvements in F1 across all types, as compared to the base model and to re-weighting by agreement.
Involving Expert Annotators
The preceding experiments demonstrate that re-weighting difficult sentences annotated by the crowd generally improves the extraction models. Presumably the performance is influenced by the annotation quality.
We now examine the possibility that the higher quality and more consistent annotations of domain experts on the difficult instances will benefit the extraction model. This simulates an annotation strategy in which we route difficult instances to domain experts and easier ones to crowd annotators. We also contrast the value of difficult data to that of an i.i.d. random sample of the same size, both annotated by experts.
Expert annotations of Random and Difficult Instances
We have experts re-annotate a subset of the most difficult instances and an equal number of randomly selected instances. As collecting annotations from experts is slow and expensive, we only re-annotate instances for the Interventions extraction task. We re-annotate the abstracts which cover the sentences with predicted difficulty scores in the top 5 percent. We rank the abstracts from the training set by the count of difficult sentences, and re-annotate the abstracts that contain the most difficult sentences. Constrained by time and budget, we select only 2000 abstracts for re-annotation; 1000 of these are top-ranked, and 1000 are randomly sampled. This re-annotation cost $3,000. We have released the new annotation data at: https://github.com/bepnye/EBM-NLP.
Following BIBREF5 , we recruited five medical experts via Upwork with advanced medical training and strong technical reading/writing skills. The expert annotators were asked to read the entire abstract and highlight, using the BRAT toolkit BIBREF26 , all spans describing medical Interventions. Each abstract is annotated by only one expert. We examined 30 re-annotated abstracts to ensure annotation quality before hiring each annotator.
Table 5 presents the results of LSTM-CRF-Pattern model trained on the reannotated difficult subset and the random subset. The first two rows show the results for models trained with expert annotations. The model trained on random data has a slightly better F1 than that trained on the same amount of difficult data. The model trained on random data has higher precision but lower recall.
Rows 3 and 4 list the results for models trained on the same data but with crowd annotation. Models trained with expert-annotated data are clearly superior to those trained with crowd labels with respect to F1, indicating that the experts produced higher quality annotations. For crowdsourced annotations, training the model with data sampled at i.i.d. random achieves 2% higher F1 than when difficult instances are used. When expert annotations are used, this difference is less than 1%. This trend in performance may be explained by differences in annotation quality: the randomly sampled set was more consistently annotated by both experts and crowd because the difficult set is harder. However, in both cases expert annotations are better, with a bigger difference between the expert and crowd models on the difficult set.
The last row is the model trained on all 5k abstracts with crowd annotations. Its F1 score is lower than either expert model trained on only 20% of data, suggesting that expert annotations should be collected whenever possible. Again the crowd model on complete data has higher precision than expert models but its recall is much lower.
Routing To Experts or Crowd
So far, each system was trained on one type of data, labeled either by the crowd or by experts. We now examine the performance of a system trained on data that was routed to either experts or crowd annotators depending on its predicted difficulty. Given the results presented so far, mixing annotators may be beneficial given their respective trade-offs between precision and recall. We use the expert annotations for an abstract when they exist; otherwise we use the crowd annotations. The results are presented in Table 6 .
Rows 1 and 2 repeat the performance of the models trained on difficult subset and random subset with expert annotations only respectively. The third row is the model trained by combining difficult and random subsets with expert annotations. There are around 250 abstracts in the overlap of these two sets, so there are total 1.75k abstracts used for training the D+R model. Rows 4 to 6 are the models trained on all 5k abstracts with mixed annotations, where Other means the rest of the abstracts with crowd annotation only.
The results show that adding more crowd-annotated training data still improves F1 by at least 1 point in all three extraction tasks. The improvement when the difficult subset with expert annotations is mixed with the remaining crowd annotations is 3.5 F1 points, much larger than when a random set of expert annotations is added. The model trained with the re-annotated difficult subset (D+Other) also outperforms the model with the re-annotated random subset (R+Other) by 2 points in F1. The model trained with both re-annotated subsets (D+R+Other), however, achieves only marginally higher F1 than the model trained with the re-annotated difficult subset (D+Other). In sum, the results clearly indicate that mixing expert and crowd annotations leads to better models than using solely crowd data, and better than using expert data alone. More importantly, there is a greater gain in performance when instances are routed according to difficulty, as compared to randomly selecting the data for expert annotators. These findings align with our motivating hypothesis that annotation quality for difficult instances is important for final model performance. They also indicate that mixing annotations from experts and the crowd can be an effective way to achieve acceptable model performance given a limited budget.
How Many Expert Annotations?
We established that crowd annotations are still useful for supplementing expert annotations for medical IE. Obtaining expert annotations for the one thousand most difficult instances greatly improved model performance. However, the number of difficult instances to annotate was chosen without any principled guidance. Here we check whether less expert data would have yielded similar gains. Future work will need to address how best to choose this parameter for a routing system.
We simulate a routing scenario in which we send consecutive batches of the most difficult examples to the experts for annotation. We track changes in performance as we increase the number of most-difficult-articles sent to domain experts. As shown in Figure 4 , adding expert annotations for difficult articles consistently increases F1 scores. The performance gain is mostly from increased recall; the precision changes only a bit with higher quality annotation. This observation implies that crowd workers often fail to mark target tokens, but do not tend to produce large numbers of false positives. We suspect such failures to identify relevant spans/tokens are due to insufficient domain knowledge possessed by crowd workers.
The F1 score achieved after re-annotating the 600 most-difficult articles reaches 68.1%, which is close to the performance when re-annotating 1000 random articles. This demonstrates the effectiveness of recognizing difficult instances. The trend when we use up all expert data is still upward, so adding even more expert data is likely to further improve performance. Unfortunately we exhausted our budget and were not able to obtain additional expert annotations. It is likely that as the size of the expert annotations increases, the value of crowd annotations will diminish. This investigation is left for future work.
Conclusions
We have introduced the task of predicting annotation difficulty for biomedical information extraction (IE). We trained neural models using different learned representations to score texts in terms of their difficulty. Results from all models were strong with Pearson’s correlation coefficients higher than 0.45 in almost all evaluations, indicating the feasibility of this task. An ensemble model combining universal and task specific feature sentence vectors yielded the best results.
Experiments on biomedical IE tasks show that removing up to $\sim $ 10% of the sentences predicted to be most difficult did not decrease model performance, and that re-weighting sentences inversely to their difficulty score during training improves predictive performance. Simulations in which difficult examples are routed to experts and other instances to crowd annotators yields the best results, outperforming the strategy of randomly selecting data for expert annotation, and substantially improving upon the approach of relying exclusively on crowd annotations. In future work, routing strategies based on instance difficulty could be further investigated for budget-quality trade-off.
Acknowledgements
This work has been partially supported by NSF1748771 grant. Wallace was support in part by NIH/NLM R01LM012086.
Question: How much higher quality is the resulting annotated data?

Answer: The improvement when the difficult subset with expert annotations is mixed with the remaining crowd annotations is 3.5 F1 points, much larger than when a random set of expert annotations is added.
Introduction
Event detection on microblogging platforms such as Twitter aims to detect events preemptively. A main task in event detection is detecting events of predetermined types BIBREF0, such as concerts or controversial events based on microposts matching specific event descriptions. This task has extensive applications ranging from cyber security BIBREF1, BIBREF2 to political elections BIBREF3 or public health BIBREF4, BIBREF5. Due to the high ambiguity and inconsistency of the terms used in microposts, event detection is generally performed though statistical machine learning models, which require a labeled dataset for model training. Data labeling is, however, a long, laborious, and usually costly process. For the case of micropost classification, though positive labels can be collected (e.g., using specific hashtags, or event-related date-time information), there is no straightforward way to generate negative labels useful for model training. To tackle this lack of negative labels and the significant manual efforts in data labeling, BIBREF1 (BIBREF1, BIBREF3) introduced a weak supervision based learning approach, which uses only positively labeled data, accompanied by unlabeled examples by filtering microposts that contain a certain keyword indicative of the event type under consideration (e.g., `hack' for cyber security). Another key technique in this context is expectation regularization BIBREF6, BIBREF7, BIBREF1. Here, the estimated proportion of relevant microposts in an unlabeled dataset containing a keyword is given as a keyword-specific expectation. This expectation is used in the regularization term of the model's objective function to constrain the posterior distribution of the model predictions. By doing so, the model is trained with an expectation on its prediction for microposts that contain the keyword. Such a method, however, suffers from two key problems:
Due to the unpredictability of event occurrences and the constantly changing dynamics of users' posting frequency BIBREF8, estimating the expectation associated with a keyword is a challenging task, even for domain experts;
The performance of the event detection model is constrained by the informativeness of the keyword used for model training. As of now, we lack a principled method for discovering new keywords and improve the model performance.
To address the above issues, we advocate a human-AI loop approach for discovering informative keywords and estimating their expectations reliably. Our approach iteratively leverages 1) crowd workers for estimating keyword-specific expectations, and 2) the disagreement between the model and the crowd for discovering new informative keywords. More specifically, at each iteration after we obtain a keyword-specific expectation from the crowd, we train the model using expectation regularization and select those keyword-related microposts for which the model's prediction disagrees the most with the crowd's expectation; such microposts are then presented to the crowd to identify new keywords that best explain the disagreement. By doing so, our approach identifies new keywords which convey more relevant information with respect to existing ones, thus effectively boosting model performance. By exploiting the disagreement between the model and the crowd, our approach can make efficient use of the crowd, which is of critical importance in a human-in-the-loop context BIBREF9, BIBREF10. An additional advantage of our approach is that by obtaining new keywords that improve model performance over time, we are able to gain insight into how the model learns for specific event detection tasks. Such an advantage is particularly useful for event detection using complex models, e.g., deep neural networks, which are intrinsically hard to understand BIBREF11, BIBREF12. An additional challenge in involving crowd workers is that their contributions are not fully reliable BIBREF13. In the crowdsourcing literature, this problem is usually tackled with probabilistic latent variable models BIBREF14, BIBREF15, BIBREF16, which are used to perform truth inference by aggregating a redundant set of crowd contributions. Our human-AI loop approach improves the inference of keyword expectation by aggregating contributions not only from the crowd but also from the model. This, however, comes with its own challenge as the model's predictions are further dependent on the results of expectation inference, which is used for model training. To address this problem, we introduce a unified probabilistic model that seamlessly integrates expectation inference and model training, thereby allowing the former to benefit from the latter while resolving the inter-dependency between the two.
To the best of our knowledge, we are the first to propose a human-AI loop approach that iteratively improves machine learning models for event detection. In summary, our work makes the following key contributions:
A novel human-AI loop approach for micropost event detection that jointly discovers informative keywords and estimates their expectation;
A unified probabilistic model that infers keyword expectation and simultaneously performs model training;
An extensive empirical evaluation of our approach on multiple real-world datasets demonstrating that our approach significantly improves the state of the art by an average of 24.3% AUC.
The rest of this paper is organized as follows. First, we present our human-AI loop approach in Section SECREF2. Subsequently, we introduce our proposed probabilistic model in Section SECREF3. The experimental setup and results are presented in Section SECREF4. Finally, we briefly cover related work in Section SECREF5 before concluding our work in Section SECREF6.
The Human-AI Loop Approach
Given a set of labeled and unlabeled microposts, our goal is to extract informative keywords and estimate their expectations in order to train a machine learning model. To achieve this goal, our proposed human-AI loop approach comprises two crowdsourcing tasks, i.e., micropost classification followed by keyword discovery, and a unified probabilistic model for expectation inference and model training. Figure FIGREF6 presents an overview of our approach. Next, we describe our approach from a process-centric perspective.
Following previous studies BIBREF1, BIBREF17, BIBREF2, we collect a set of unlabeled microposts $\mathcal {U}$ from a microblogging platform and post-filter, using an initial (set of) keyword(s), those microposts that are potentially relevant to an event category. Then, we collect a set of event-related microposts (i.e., positively labeled microposts) $\mathcal {L}$, post-filtering with a list of seed events. $\mathcal {U}$ and $\mathcal {L}$ are used together to train a discriminative model (e.g., a deep neural network) for classifying the relevance of microposts to an event. We denote the target model as $p_\theta (y|x)$, where $\theta $ is the model parameter to be learned and $y$ is the label of an arbitrary micropost, represented by a bag-of-words vector $x$. Our approach iterates several times $t=\lbrace 1, 2, \ldots \rbrace $ until the performance of the target model converges. Each iteration starts from the initial keyword(s) or the new keyword(s) discovered in the previous iteration. Given such a keyword, denoted by $w^{(t)}$, the iteration starts by sampling microposts containing the keyword from $\mathcal {U}$, followed by dynamically creating micropost classification tasks and publishing them on a crowdsourcing platform.
Micropost Classification. The micropost classification task requires crowd workers to label the selected microposts into two classes: event-related and non event-related. In particular, workers are given instructions and examples to differentiate event-instance related microposts and general event-category related microposts. Consider, for example, the following microposts in the context of Cyber attack events, both containing the keyword `hack':
Credit firm Equifax says 143m Americans' social security numbers exposed in hack
This micropost describes an instance of a cyber attack event that the target model should identify. This is, therefore, an event-instance related micropost and should be considered as a positive example. Contrast this with the following example:
Companies need to step their cyber security up
This micropost, though related to cyber security in general, does not mention an instance of a cyber attack event, and is of no interest to us for event detection. This is an example of a general event-category related micropost and should be considered as a negative example.
In this task, each selected micropost is labeled by multiple crowd workers. The annotations are passed to our probabilistic model for expectation inference and model training.
Expectation Inference & Model Training. Our probabilistic model takes crowd-contributed labels and the model trained in the previous iteration as input. As output, it generates a keyword-specific expectation, denoted as $e^{(t)}$, and an improved version of the micropost classification model, denoted as $p_{\theta ^{(t)}}(y|x)$. The details of our probabilistic model are given in Section SECREF3.
Keyword Discovery. The keyword discovery task aims at discovering a new keyword (or a set of keywords) that is most informative for model training with respect to existing keywords. To this end, we first apply the current model $p_{\theta ^{(t)}}(y|x)$ on the unlabeled microposts $\mathcal {U}$. For those that contain the keyword $w^{(t)}$, we calculate the disagreement between the model predictions and the keyword-specific expectation $e^{(t)}$:
and select the ones with the highest disagreement for keyword discovery. These selected microposts are supposed to contain information that can explain the disagreement between the model prediction and keyword-specific expectation, and can thus provide information that is most different from the existing set of keywords for model training.
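The disagreement equation itself is not reproduced in this text; a natural reading, and an assumption in the sketch below, is the absolute gap between the model's predicted relevance probability and the keyword-specific expectation.

```python
import numpy as np

def select_for_keyword_discovery(probs, expectation, top_k=50):
    """probs: model's p(event-related | x) for microposts containing the keyword;
    expectation: the keyword-specific expectation e^(t)."""
    disagreement = np.abs(np.asarray(probs) - expectation)
    return np.argsort(-disagreement)[:top_k]   # indices of the most surprising posts
```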
For instance, our study shows that the expectation for the keyword `hack' is 0.20, which means only 20% of the initial set of microposts retrieved with the keyword are event-related. A micropost selected with the highest disagreement (Eq. DISPLAY_FORM7), whose likelihood of being event-related as predicted by the model is $99.9\%$, is shown as an example below:
RT @xxx: Hong Kong securities brokers hit by cyber attacks, may face more: regulator #cyber #security #hacking https://t.co/rC1s9CB
This micropost contains keywords that can better indicate the relevance to a cyber security event than the initial keyword `hack', e.g., `securities', `hit', and `attack'.
Note that when the keyword-specific expectation $e^{(t)}$ in Equation DISPLAY_FORM7 is high, the selected microposts will be the ones that contain keywords indicating the irrelevance of the microposts to an event category. Such keywords are also useful for model training as they help improve the model's ability to identify irrelevant microposts.
To identify new keywords in the selected microposts, we again leverage crowdsourcing, as humans are typically better than machines at providing specific explanations BIBREF18, BIBREF19. In the crowdsourcing task, workers are first asked to find those microposts where the model predictions are deemed correct. Then, from those microposts, workers are asked to find the keyword that best indicates the class of the microposts as predicted by the model. The keyword most frequently identified by the workers is then used as the initial keyword for the following iteration. In case multiple keywords are selected, e.g., the top-$N$ frequent ones, workers will be asked to perform $N$ micropost classification tasks for each keyword in the next iteration, and the model training will be performed on multiple keyword-specific expectations.
Unified Probabilistic Model
This section introduces our probabilistic model that infers keyword expectation and trains the target model simultaneously. We start by formalizing the problem and introducing our model, before describing the model learning method.
Problem Formalization. We consider the problem at iteration $t$ where the corresponding keyword is $w^{(t)}$. In the current iteration, let $\mathcal {U}^{(t)} \subset \mathcal {U}$ denote the set of all microposts containing the keyword and $\mathcal {M}^{(t)}= \lbrace x_{m}\rbrace _{m=1}^M\subset \mathcal {U}^{(t)}$ be the randomly selected subset of $M$ microposts labeled by $N$ crowd workers $\mathcal {C} = \lbrace c_n\rbrace _{n=1}^N$. The annotations form a matrix $\mathbf {A}\in \mathbb {R}^{M\times N}$ where $\mathbf {A}_{mn}$ is the label for the micropost $x_m$ contributed by crowd worker $c_n$. Our goal is to infer the keyword-specific expectation $e^{(t)}$ and train the target model by learning the model parameter $\theta ^{(t)}$. An additional parameter of our probabilistic model is the reliability of crowd workers, which is essential when involving crowdsourcing. Following Dawid and Skene BIBREF14, BIBREF16, we represent the annotation reliability of worker $c_n$ by a latent confusion matrix $\pi ^{(n)}$, where the $rs$-th element $\pi _{rs}^{(n)}$ denotes the probability of $c_n$ labeling a micropost as class $r$ given the true class $s$.
Unified Probabilistic Model ::: Expectation as Model Posterior
First, we introduce an expectation regularization technique for the weakly supervised learning of the target model $p_{\theta ^{(t)}}(y|x)$. In this setting, the objective function of the target model is composed of two parts, corresponding to the labeled microposts $\mathcal {L}$ and the unlabeled ones $\mathcal {U}$.
The former part aims at maximizing the likelihood of the labeled microposts:
where we assume that $\theta $ is generated from a prior distribution (e.g., Laplacian or Gaussian) parameterized by $\sigma $.
To leverage unlabeled data for model training, we make use of the expectations of existing keywords, i.e., {($w^{(1)}$, $e^{(1)}$), ..., ($w^{(t-1)}$, $e^{(t-1)}$), ($w^{(t)}$, $e^{(t)}$)} (Note that $e^{(t)}$ is inferred), as a regularization term to constrain model training. To do so, we first give the model's expectation for each keyword $w^{(k)}$ ($1\le k\le t$) as follows:
which denotes the empirical expectation of the model’s posterior predictions on the unlabeled microposts $\mathcal {U}^{(k)}$ containing keyword $w^{(k)}$. Expectation regularization can then be formulated as the regularization of the distance between the Bernoulli distribution parameterized by the model's expectation and the expectation of the existing keyword:
where $D_{KL}[\cdot \Vert \cdot ]$ denotes the KL-divergence between the Bernoulli distributions $Ber(e^{(k)})$ and $Ber(\mathbb {E}_{x\sim \mathcal {U}^{(k)}}(y))$, and $\lambda $ controls the strength of expectation regularization.
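A minimal NumPy sketch of this regularization term follows, assuming the model's per-micropost probabilities for each keyword are already available; variable names and the default regularization strength are placeholders.

```python
import numpy as np

def bernoulli_kl(p, q, eps=1e-8):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    p, q = np.clip(p, eps, 1 - eps), np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def expectation_regularizer(keyword_expectations, keyword_model_probs, lambda_=1.0):
    """keyword_expectations[k]: e^(k); keyword_model_probs[k]: model's p(y=1|x)
    for every unlabeled micropost containing keyword k."""
    total = 0.0
    for e_k, probs in zip(keyword_expectations, keyword_model_probs):
        model_expectation = float(np.mean(probs))    # empirical model expectation
        total += bernoulli_kl(e_k, model_expectation)
    return lambda_ * total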
Unified Probabilistic Model ::: Expectation as Class Prior
To learn the keyword-specific expectation $e^{(t)}$ and the crowd worker reliability $\pi ^{(n)}$ ($1\le n\le N$), we model the likelihood of the crowd-contributed labels $\mathbf {A}$ as a function of these parameters. In this context, we view the expectation as the class prior, thus performing expectation inference as the learning of the class prior. By doing so, we connect expectation inference with model training.
Specifically, we model the likelihood of an arbitrary crowd-contributed label $\mathbf {A}_{mn}$ as a mixture of multinomials where the prior is the keyword-specific expectation $e^{(t)}$:
where $e_s^{(t)}$ is the probability of the ground truth label being $s$ given the keyword-specific expectation as the class prior; $K$ is the set of possible ground truth labels (binary in our context); and $r=\mathbf {A}_{mn}$ is the crowd-contributed label. Then, for an individual micropost $x_m$, the likelihood of crowd-contributed labels $\mathbf {A}_{m:}$ is given by:
Therefore, the objective function for maximizing the likelihood of the entire annotation matrix $\mathbf {A}$ can be described as:
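As a concrete reading of these equations, the sketch below evaluates the log-likelihood of the full annotation matrix with NumPy; the array layout and names are assumptions introduced for the example.

```python
import numpy as np

def annotation_log_likelihood(A, expectation, confusion):
    """Log-likelihood of the crowd label matrix A (sketch).

    A: (M, N) integer matrix, A[m, n] is worker c_n's label for micropost x_m.
    expectation: (|K|,) keyword-specific expectation e^(t), used as the class prior.
    confusion: (N, |K|, |K|) worker confusion matrices, confusion[n, r, s] = pi^(n)_{rs}.
    """
    M, N = A.shape
    log_lik = 0.0
    for m in range(M):
        for n in range(N):
            r = A[m, n]
            # p(A_mn) = sum_s e_s^(t) * pi^(n)_{rs}: mixture over the unknown true class s
            log_lik += np.log(np.dot(expectation, confusion[n, r, :]))
    return log_lik
```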
Unified Probabilistic Model ::: Unified Probabilistic Model
Integrating model training with expectation inference, the overall objective function of our proposed model is given by:
Figure FIGREF18 depicts a graphical representation of our model, which combines the target model for training (on the left) with the generative model for crowd-contributed labels (on the right) through a keyword-specific expectation.
Model Learning. Due to the unknown ground truth labels of crowd-annotated microposts ($y_m$ in Figure FIGREF18), we resort to expectation maximization for model learning. The learning algorithm iteratively takes two steps: the E-step and the M-step. The E-step infers the ground truth labels given the current model parameters. The M-step updates the model parameters, including the crowd reliability parameters $\pi ^{(n)}$ ($1\le n\le N$), the keyword-specific expectation $e^{(t)}$, and the parameter of the target model $\theta ^{(t)}$. The E-step and the crowd parameter update in the M-step are similar to the Dawid-Skene model BIBREF14. The keyword expectation is inferred by taking into account both the crowd-contributed labels and the model prediction:
The parameter of the target model is updated by gradient descent. For example, when the target model to be trained is a deep neural network, we use back-propagation with gradient descent to update the weight matrices.
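A minimal sketch of one such EM iteration is given below, assuming NumPy arrays and a small number of classes. The exact way the crowd posteriors and the model predictions are combined in the expectation update is only summarized approximately here, so the update rules should be read as illustrative rather than as the authors' implementation.

```python
import numpy as np

def em_step(A, model_probs, pi, e, n_classes=2):
    """One EM iteration of the Dawid-Skene-style component (illustrative sketch).

    A: (M, N) crowd label matrix for the sampled microposts.
    model_probs: (M, n_classes) current target-model predictions p_theta(y|x_m).
    pi: (N, n_classes, n_classes) confusion matrices, pi[n, r, s] = P(worker n says r | true s).
    e: (n_classes,) current keyword-specific expectation (class prior).
    """
    M, N = A.shape
    # E-step: posterior over the unknown true label of each annotated micropost.
    q = np.tile(e, (M, 1)) * model_probs                 # combine class prior and model prediction
    for m in range(M):
        for n in range(N):
            q[m] *= pi[n, A[m, n], :]                    # multiply in each worker's likelihood
    q /= q.sum(axis=1, keepdims=True)

    # M-step: re-estimate worker reliabilities and the keyword expectation.
    new_pi = np.zeros_like(pi)
    for n in range(N):
        for r in range(n_classes):
            new_pi[n, r] = q[A[:, n] == r].sum(axis=0) + 1e-8
        new_pi[n] /= new_pi[n].sum(axis=0, keepdims=True)
    new_e = q.mean(axis=0)                               # expectation re-estimated from posteriors
    # (theta would be updated here by a gradient step on the regularized objective)
    return q, new_pi, new_e
```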
Experiments and Results
This section presents our experimental setup and results for evaluating our approach. We aim at answering the following questions:
Q1: How effectively does our proposed human-AI loop approach enhance the state-of-the-art machine learning models for event detection?
Q2: How well does our keyword discovery method work compared to existing keyword expansion methods?
Q3: How effective is our approach at obtaining new keywords through crowdsourcing, compared with an approach that spends the same cost on labelling microposts for model training?
Q4: How much benefit does our unified probabilistic model bring compared to methods that do not take crowd reliability into account?
Experiments and Results ::: Experimental Setup
Datasets. We perform our experiments with two predetermined event categories: cyber security (CyberAttack) and death of politicians (PoliticianDeath). These event categories are chosen as they are representative of important event types that are of interest to many governments and companies. The need to create our own dataset was motivated by the lack of public datasets for event detection on microposts. The few available datasets do not suit our requirements. For example, the publicly available Events-2012 Twitter dataset BIBREF20 contains generic event descriptions such as Politics, Sports, Culture, etc. Our work targets more specific event categories BIBREF21. Following previous studies BIBREF1, we collect event-related microposts from Twitter using 11 and 8 seed events (see Section SECREF2) for CyberAttack and PoliticianDeath, respectively. Unlabeled microposts are collected by using the keyword `hack' for CyberAttack, while for PoliticianDeath, we use a set of keywords related to `politician' and `death' (such as `bureaucrat', `dead', etc.). For each dataset, we randomly select 500 tweets from the unlabeled subset and manually label them for evaluation. Table TABREF25 shows key statistics from our two datasets.
Comparison Methods. To demonstrate the generality of our approach on different event detection models, we consider Logistic Regression (LR) BIBREF1 and Multilayer Perceptron (MLP) BIBREF2 as the target models. As the goal of our experiments is to demonstrate the effectiveness of our approach as a new model training technique, we adopt these widely used models. Also, we note that in our case other neural network models with more complex network architectures for event detection, such as the bi-directional LSTM BIBREF17, turn out to be less effective than a simple feedforward network. For both LR and MLP, we evaluate our proposed human-AI loop approach for keyword discovery and expectation estimation by comparing against the weakly supervised learning method proposed by BIBREF1 and BIBREF17, where only one initial keyword is used with an expectation estimated by an individual expert.
Parameter Settings. We empirically set optimal parameters based on a held-out validation set that contains 20% of the test data. These include the hyperparameters of the target model, those of our proposed probabilistic model, and the parameters used for training the target model. We explore MLP with 1, 2 and 3 hidden layers and apply a grid search over {32, 64, 128, 256, 512} for the dimension of the embeddings and that of the hidden layers. For the coefficient of expectation regularization, we follow BIBREF6 and set it to $\lambda =10 \times $ #labeled examples. For model training, we use the Adam BIBREF22 optimization algorithm for both models.
Evaluation. Following BIBREF1 and BIBREF3, we use accuracy and area under the precision-recall curve (AUC) metrics to measure the performance of our proposed approach. We note that due to the imbalance in our datasets (20% positive microposts in CyberAttack and 27% in PoliticianDeath), accuracy is dominated by negative examples; AUC, in comparison, better characterizes the discriminative power of the model.
Crowdsourcing. We chose Level 3 workers on the Figure-Eight crowdsourcing platform for our experiments. The inter-annotator agreement in micropost classification is taken into account through the EM algorithm. For keyword discovery, we filter keywords based on the frequency of the keyword being selected by the crowd. In terms of cost-effectiveness, our approach is motivated by the fact that crowdsourced data annotation can be expensive, and is thus designed with minimal crowd involvement. For each iteration, we selected 50 tweets for keyword discovery and 50 tweets for micropost classification per keyword. For a dataset with 80k tweets (e.g., CyberAttack), our approach only requires manually inspecting 800 tweets (for 8 keywords), i.e., only 1% of the entire dataset.
Experiments and Results ::: Results of our Human-AI Loop (Q1)
Table TABREF26 reports the evaluation of our approach on both the CyberAttack and PoliticianDeath event categories. Our approach is configured such that each iteration starts with 1 new keyword discovered in the previous iteration.
Our approach improves LR by 5.17% (Accuracy) and 18.38% (AUC), and MLP by 10.71% (Accuracy) and 30.27% (AUC) on average. Such significant improvements clearly demonstrate that our approach is effective at improving model performance. We observe that the target models generally converge between the 7th and 9th iteration on both datasets when performance is measured by AUC. The performance can slightly degrade when the models are further trained for more iterations on both datasets. This is likely due to the fact that over time, the newly discovered keywords carry less novel information for model training. For instance, for the CyberAttack dataset the new keyword in the 9th iteration `election' frequently co-occurs with the keyword `russia' in the 5th iteration (in microposts that connect Russian hackers with US elections), thus bringing limited new information for improving the model performance. As a side remark, we note that the models converge faster when performance is measured by accuracy. This comparison confirms the difference between the two metrics and shows that more keywords are needed to discriminate event-related microposts from non-event-related ones.
Experiments and Results ::: Comparative Results on Keyword Discovery (Q2)
Figure FIGREF31 shows the evaluation of our approach when discovering new informative keywords for model training (see Section SECREF2: Keyword Discovery). We compare our human-AI collaborative way of discovering new keywords against a query expansion (QE) approach BIBREF23, BIBREF24 that leverages word embeddings to find similar words in the latent semantic space. Specifically, we use pre-trained word embeddings based on a large Google News dataset for query expansion. For instance, the top keywords resulting from QE for `politician' are `deputy', `ministry', `secretary', and `minister'. For each of these keywords, we use the crowd to label a set of tweets and obtain a corresponding expectation.
We observe that our approach consistently outperforms QE by an average of $4.62\%$ and $52.58\%$ AUC on CyberAttack and PoliticianDeath, respectively. The large gap between the performance improvements for the two datasets is mainly due to the fact that microposts that are relevant for PoliticianDeath are semantically more complex than those for CyberAttack, as they encode a noun-verb relationship (e.g., “the king of ... died ...”) rather than a simple verb (e.g., “... hacked.”) for the CyberAttack microposts. QE only finds synonyms of existing keywords related to either `politician' or `death', but cannot find a meaningful keyword that fully characterizes the death of a politician. For instance, QE finds the keywords `kill' and `murder', which are semantically close to `death' but are not specifically relevant to the death of a politician. Unlike QE, our approach identifies keywords that go beyond mere synonyms and that are more directly related to the end task, i.e., discriminating event-related microposts from non-related ones. Examples are `demise' and `condolence'. As a remark, we note that in Figure FIGREF31(b), the increase in QE performance on PoliticianDeath is due to the keywords `deputy' and `minister', which happen to be highly indicative of the death of a politician in our dataset; these keywords are also identified by our approach.
Experiments and Results ::: Cost-Effectiveness Results (Q3)
To demonstrate the cost-effectiveness of using crowdsourcing for obtaining new keywords and, consequently, their expectations, we compare the performance of our approach with an approach using crowdsourcing to only label microposts for model training at the same cost. Specifically, we conducted an additional crowdsourcing experiment where the same cost used for keyword discovery in our approach is used to label additional microposts for model training. These newly labeled microposts are used with the microposts labeled in the micropost classification task of our approach (see Section SECREF2: Micropost Classification) and the expectation of the initial keyword to train the model for comparison. The model trained in this way increases AUC by 0.87% for CyberAttack, and by 1.06% for PoliticianDeath; in comparison, our proposed approach increases AUC by 33.42% for PoliticianDeath and by 15.23% for CyberAttack over the baseline presented by BIBREF1. These results show that using crowdsourcing for keyword discovery is significantly more cost-effective than simply using crowdsourcing to get additional labels when training the model.
Experiments and Results ::: Expectation Inference Results (Q4)
To investigate the effectiveness of our expectation inference method, we compare it against a majority voting approach, a strong baseline in truth inference BIBREF16. Figure FIGREF36 shows the result of this evaluation. We observe that our approach results in better models for both CyberAttack and PoliticianDeath. Our manual investigation reveals that workers' annotations are of high reliability, which explains the relatively good performance of majority voting. Despite the limited margin for improvement, our method of expectation inference improves the performance of majority voting by $0.4\%$ and $1.19\%$ AUC on CyberAttack and PoliticianDeath, respectively.
Related Work
Event Detection. The techniques for event extraction from microblogging platforms can be classified according to their domain specificity and their detection method BIBREF0. Early works mainly focus on open domain event detection BIBREF25, BIBREF26, BIBREF27. Our work falls into the category of domain-specific event detection BIBREF21, which has drawn increasing attention due to its relevance for various applications such as cyber security BIBREF1, BIBREF2 and public health BIBREF4, BIBREF5. In terms of technique, our proposed detection method is related to the recently proposed weakly supervised learning methods BIBREF1, BIBREF17, BIBREF3. This comes in contrast with fully-supervised learning methods, which are often limited by the size of the training data (e.g., a few hundred examples) BIBREF28, BIBREF29.
Human-in-the-Loop Approaches. Our work extends weakly supervised learning methods by involving humans in the loop BIBREF13. Existing human-in-the-loop approaches mainly leverage crowds to label individual data instances BIBREF9, BIBREF10 or to debug the training data BIBREF30, BIBREF31 or components BIBREF32, BIBREF33, BIBREF34 of a machine learning system. Unlike these works, we leverage crowd workers to label sampled microposts in order to obtain keyword-specific expectations, which can then be generalized to help classify microposts containing the same keyword, thus amplifying the utility of the crowd. Our work is further connected to the topic of interpretability and transparency of machine learning models BIBREF11, BIBREF35, BIBREF12, for which humans are increasingly involved, for instance for post-hoc evaluations of the model's interpretability. In contrast, our approach directly solicits informative keywords from the crowd for model training, thereby providing human-understandable explanations for the improved model.
Conclusion
In this paper, we presented a new human-AI loop approach for keyword discovery and expectation estimation to better train event detection models. Our approach takes advantage of the disagreement between the crowd and the model to discover informative keywords and leverages the joint power of the crowd and the model in expectation inference. We evaluated our approach on real-world datasets and showed that it significantly outperforms the state of the art and that it is particularly useful for detecting events where relevant microposts are semantically complex, e.g., the death of a politician. As future work, we plan to parallelize the crowdsourcing tasks and optimize our pipeline in order to use our event detection approach in real-time.
Acknowledgements
This project has received funding from the Swiss National Science Foundation (grant #407540_167320 Tighten-it-All) and from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 683253/GraphInt).
Introduction
Users of photo-sharing websites such as Flickr often provide short textual descriptions in the form of tags to help others find the images. With the availability of GPS systems in current electronic devices such as smartphones, latitude and longitude coordinates are nowadays commonly made available as well. The tags associated with such georeferenced photos often describe the location where these photos were taken, and Flickr can thus be regarded as a source of environmental information. The use of Flickr for modelling urban environments has already received considerable attention. For instance, various approaches have been proposed for modelling urban regions BIBREF0 , and for identifying points-of-interest BIBREF1 and itineraries BIBREF2 , BIBREF3 . However, the usefulness of Flickr for characterizing the natural environment, which is the focus of this paper, is less well-understood.
Many recent studies have highlighted that Flickr tags capture valuable ecological information, which can be used as a complementary source to more traditional sources. To date, however, ecologists have mostly used social media to conduct manual evaluations of image content with little automated exploitation of the associated tags BIBREF4 , BIBREF5 , BIBREF6 . One recent exception is BIBREF7 , where bag-of-words representations derived from Flickr tags were found to give promising results for predicting a range of different environmental phenomena.
Our main hypothesis in this paper is that by using vector space embeddings instead of bag-of-words representations, the ecological information which is implicitly captured by Flickr tags can be utilized in a more effective way. Vector space embeddings are representations in which the objects from a given domain are encoded using relatively low-dimensional vectors. They have proven useful in natural language processing, especially for encoding word meaning BIBREF8 , BIBREF9 , and in machine learning more generally. In this paper, we are interested in the use of such representations for modelling geographic locations. Our main motivation for using vector space embeddings is that they allow us to integrate the textual information we get from Flickr with available structured information in a very natural way. To this end, we rely on an adaptation of the GloVe word embedding model BIBREF9 , but rather than learning word vectors, we learn vectors representing locations. Similar to how the representation of a word in GloVe is determined by the context words surrounding it, the representation of a location in our model is determined by the tags of the photos that have been taken near that location. To incorporate numerical features from structured environmental datasets (e.g. average temperature), we associate with each such feature a linear mapping that can be used to predict that feature from a given location vector. This is inspired by the fact that salient properties of a given domain can often be modelled as directions in vector space embeddings BIBREF10 , BIBREF11 , BIBREF12 . Finally, evidence from categorical datasets (e.g. land cover types) is taken into account by requiring that locations belonging to the same category are represented using similar vectors, similar to how semantic types are sometimes modelled in the context of knowledge graph embedding BIBREF13 .
While our point-of-departure is a standard word embedding model, we found that the off-the-shelf GloVe model performed surprisingly poorly, meaning that a number of modifications are needed to achieve good results. Our main findings are as follows. First, given that the number of tags associated with a given location can be quite small, it is important to apply some kind of spatial smoothing, i.e. the importance of a given tag for a given location should not only depend on the occurrences of the tag at that location, but also on its occurrences at nearby locations. To this end, we use a formulation which is based on a spatially smoothed version of pointwise mutual information. Second, given the wide diversity in the kind of information that is covered by Flickr tags, we find that term selection is in some cases critical to obtain vector spaces that capture the relevant aspects of geographic locations. For instance, many tags on Flickr refer to photography-related terms, which we would normally not want to affect the vector representation of a given location. Finally, even with these modifications, vector space embeddings learned from Flickr tags alone are sometimes outperformed by bag-of-words representations. However, our vector space embeddings lead to substantially better predictions in cases where structured (scientific) information is also taken into account. In this sense, the main value of using vector space embeddings in this context is not so much about abstracting away from specific tag usages, but rather about the fact that such representations allow us to integrate numerical and categorical features in a much more natural way than is possible with bag-of-words representations.
The remainder of this paper is organized as follows. In the next section, we provide a discussion of existing work. Section SECREF3 then presents our model for embedding geographic locations from Flickr tags and structured data. Next, in Section SECREF4 we provide a detailed discussion about the experimental results. Finally, Section SECREF5 summarizes our conclusions.
Vector space embeddings
The use of low-dimensional vector space embeddings for representing objects has already proven effective in a large number of applications, including natural language processing (NLP), image processing, and pattern recognition. In the context of NLP, the most prominent example is that of word embeddings, which represent word meaning using vectors of typically around 300 dimensions. A large number of different methods for learning such word embeddings have already been proposed, including Skip-gram and the Continuous Bag-of-Words (CBOW) model BIBREF8 , GloVe BIBREF9 , and fastText BIBREF14 . They have been applied effectively in many downstream NLP tasks such as sentiment analysis BIBREF15 , part of speech tagging BIBREF16 , BIBREF17 , and text classification BIBREF18 , BIBREF19 . The model we consider in this paper builds on GloVe, which was designed to capture linear regularities of word-word co-occurrence. In GloVe, there are two word vectors INLINEFORM0 and INLINEFORM1 for each word in the vocabulary, which are learned by minimizing the following objective: DISPLAYFORM0
where INLINEFORM0 is the number of times that word INLINEFORM1 appears in the context of word INLINEFORM2 , INLINEFORM3 is the vocabulary size, INLINEFORM4 is the target word bias, INLINEFORM5 is the context word bias. The weighting function INLINEFORM6 is used to limit the impact of rare terms. It is defined as 1 if INLINEFORM7 and as INLINEFORM8 otherwise, where INLINEFORM9 is usually fixed to 100 and INLINEFORM10 to 0.75. Intuitively, the target word vectors INLINEFORM11 correspond to the actual word representations which we would like to find, while the context word vectors INLINEFORM12 model how occurrences of INLINEFORM13 in the context of a given word INLINEFORM14 affect the representation of this latter word. In this paper we will use a similar model, which will however be aimed at learning location vectors instead of the target word vectors.
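For concreteness, the GloVe objective just described can be written in code as follows; this is a plain restatement of the published loss, with variable names chosen for readability rather than taken from any particular implementation.

```python
import numpy as np

def glove_objective(W, W_tilde, b, b_tilde, X, x_max=100.0, alpha=0.75):
    """Standard GloVe objective (sketch).

    W, W_tilde: (V, d) target and context word vectors; b, b_tilde: (V,) biases.
    X: (V, V) co-occurrence counts; only pairs with X_ij > 0 contribute to the loss.
    """
    f = np.where(X >= x_max, 1.0, (X / x_max) ** alpha)      # weighting function f(X_ij)
    log_X = np.log(np.where(X > 0, X, 1.0))                  # log X_ij, dummy value where X_ij = 0
    residual = W @ W_tilde.T + b[:, None] + b_tilde[None, :] - log_X
    return np.sum(np.where(X > 0, f * residual ** 2, 0.0))
```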
Beyond word embeddings, various methods have been proposed for learning vector space representations from structured data such as knowledge graphs BIBREF20 , BIBREF21 , BIBREF22 , social networks BIBREF23 , BIBREF24 and taxonomies BIBREF25 , BIBREF26 . The idea of combining a word embedding model with structured information has also been explored by several authors, for example to improve the word embeddings based on information coming from knowledge graphs BIBREF27 , BIBREF28 . Along similar lines, various lexicons have been used to obtain word embeddings that are better suited at modelling sentiment BIBREF15 and antonymy BIBREF29 , among others. The method proposed by BIBREF30 imposes the condition that words that belong to the same semantic category are closer together than words from different categories, which is somewhat similar in spirit to how we will model categorical datasets in our model.
Embeddings for geographic information
The problem of representing geographic locations using embeddings has also attracted some attention. An early example is BIBREF31 , which used principal component analysis and stacked autoencoders to learn low-dimensional vector representations of city neighbourhoods based on census data. They use these representations to predict attributes such as crime, which is not included in the given census data, and find that in most of the considered evaluation tasks, the low-dimensional vector representations lead to more faithful predictions than the original high-dimensional census data.
Some existing works combine word embedding models with geographic coordinates. For example, in BIBREF32 an approach is proposed to learn word embeddings based on the assumption that words which tend to be used in the same geographic locations are likely to be similar. Note that their aim is dual to our aim in this paper: while they use geographic location to learn word vectors, we use textual descriptions to learn vectors representing geographic locations.
Several methods also use word embedding models to learn representations of Points-of-Interest (POIs) that can be used for predicting user visits BIBREF33 , BIBREF34 , BIBREF35 . These works use the machinery of existing word embedding models to learn POI representations, intuitively by letting sequences of POI visits by a user play the role of sequences of words in a sentence. In other words, despite the use of word embedding models, many of these approaches do not actually consider any textual information. For example, in BIBREF34 the Skip-gram model is utilized to create a global pattern of users' POIs. Each location was treated as a word and the other locations visited before or after were treated as context words. They then use a pair-wise ranking loss BIBREF36 which takes into account the user's location visit frequency to personalize the location recommendations. The methods of BIBREF34 were extended in BIBREF35 to use a temporal embedding and to take more account of geographic context, in particular the distances between preferred and non-preferred neighboring POIs, to create a “geographically hierarchical pairwise preference ranking model”. Similarly, in BIBREF37 the CBOW model was trained with POI data. They ordered POIs spatially within the traffic-based zones of urban areas. The ordering was used to generate characteristic vectors of POI types. Zone vectors, represented by averaging the vectors of the POIs contained in them, were then used as features to predict land use types. In the CrossMap method BIBREF38 they learned embeddings for spatio-temporal hotspots obtained from social media data of locations, times and text. In one form of embedding, intended to enable reconstruction of records, neighbourhood relations in space and time were encoded by averaging hotspots in a target location's spatial and temporal neighborhoods. They also proposed a graph-based embedding method with nodes of location, time and text. The concatenation of the location, time and text vectors was then used as features to predict people's activities in urban environments. Finally, in BIBREF39 , a method is proposed that uses the Skip-gram model to represent POI types, based on the intuition that the vector representing a given POI type should be predictive of the POI types that are found near places of that type.
Our work is different from these studies, as our focus is on representing locations based on a given text description of that location (in the form of Flickr tags), along with numerical and categorical features from scientific datasets.
Analyzing Flickr tags
Many studies have focused on analyzing Flickr tags to extract useful information in domains such as linguistics BIBREF40 , geography BIBREF0 , BIBREF41 , and ecology BIBREF42 , BIBREF7 , BIBREF43 . Most closely related to our work, BIBREF7 found that the tags of georeferenced Flickr photos can effectively supplement traditional scientific environmental data in tasks such as predicting climate features, land cover, species occurrence, and human assessments of scenicness. To encode locations, they simply combine a bag-of-words representation of geographically nearby tags with a feature vector that encodes associated structured scientific data. They found that the predictive value of Flickr tags is roughly on a par with that of the scientific datasets, and that combining both types of information leads to significantly better results than using either of them alone. As we show in this paper, however, their straightforward way of combining both information sources, by concatenating the two types of feature vectors, is far from optimal.
Despite the proven importance of Flickr tags, the problem of embedding Flickr tags has so far received very limited attention. To the best of our knowledge, BIBREF44 is the only work that generated embeddings for Flickr tags. However, their focus was on learning embeddings that capture word meaning (being evaluated on word similarity tasks), whereas we use such embeddings as part of our method for representing locations.
Model Description
In this section, we introduce our embedding model, which combines Flickr tags and structured scientific information to represent a set of locations INLINEFORM0 . The proposed model has the following form: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are parameters to control the importance of each component in the model. Component INLINEFORM2 will be used to constrain the representation of the locations based on their textual description (i.e. Flickr tags), INLINEFORM3 will be used to constrain the representation of the locations based on their numerical features, and INLINEFORM4 will impose the constraint that locations belonging to the same category should be close together in the space. We will discuss each of these components in more detail in the following sections.
Tag Based Location Embedding
Many of the tags associated with Flickr photos describe characteristics of the places where these photos were taken BIBREF45 , BIBREF46 , BIBREF47 . For example, tags may correspond to place names (e.g. Brussels, England, Scandinavia), landmarks (e.g. Eiffel Tower, Empire State Building) or land cover types (e.g. mountain, forest, beach). To allow us to build location models using such tags, we collected the tags and meta-data of 70 million Flickr photos with coordinates in Europe (which is the region our experiments will focus on), all of which were uploaded to Flickr before the end of September 2015. In this section we first explain how tags can be weighted to obtain bag-of-words representations of locations from Flickr. Subsequently we describe a tag selection method, which will allow us to specialize the embedding depending on which aspects of the considered locations are of interest, after which we discuss the actual embedding model.
Tag weighting. Let INLINEFORM0 be a set of geographic locations, each characterized by latitude and longitude coordinates. To generate a bag-of-words representation of a given location, we have to weight the relevance of each tag to that location. To this end, we have followed the weighting scheme from BIBREF7 , which combines a Gaussian kernel (to model spatial proximity) with Positive Pointwise Mutual Information (PPMI) BIBREF48 , BIBREF49 .
Let us write INLINEFORM0 for the set of users who have assigned tag INLINEFORM1 to a photo with coordinates near INLINEFORM2 . To assess how relevant INLINEFORM3 is to the location INLINEFORM4 , the number of times INLINEFORM5 occurs in photos near INLINEFORM6 is clearly an important criterion. However, rather than simply counting the number of occurrences within some fixed radius, we use a Gaussian kernel to weight the tag occurrences according to their distance from that location: INLINEFORM7
where the threshold INLINEFORM0 is assumed to be fixed, INLINEFORM1 is the location of a Flickr photo, INLINEFORM2 is the Haversine distance, and we will assume that the bandwidth parameter INLINEFORM3 is set to INLINEFORM4 . A tag occurrence is counted only once for all photos by the same user at the same location, which is important to reduce the impact of bulk uploading. The value INLINEFORM5 reflects how frequent tag INLINEFORM6 is near location INLINEFORM7 , but it does not yet take into account the total number of tag occurrences near INLINEFORM8 , nor how popular the tag INLINEFORM9 is overall. To measure how strongly tag INLINEFORM10 is associated with location INLINEFORM11 , we use PPMI, which is a commonly used measure of association in natural language processing. However, rather than estimating PPMI scores from term frequencies, we will use the INLINEFORM12 values instead: INLINEFORM13
where: INLINEFORM0
with INLINEFORM0 the set of all tags, and INLINEFORM1 the set of locations.
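The sketch below illustrates this weighting pipeline: a Gaussian kernel over the Haversine distance weights individual tag occurrences, and PPMI is then computed over the resulting location-tag matrix. The bandwidth and radius values are placeholders, since the exact settings are given only symbolically above.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2.0 * 6371.0 * np.arcsin(np.sqrt(a))

def kernel_weight(photo_latlon, loc_latlon, bandwidth_km=1.0, radius_km=1.0):
    """Gaussian-kernel weight of one tag occurrence with respect to a target location."""
    d = haversine_km(*photo_latlon, *loc_latlon)
    return float(np.exp(-d ** 2 / (2.0 * bandwidth_km ** 2))) if d <= radius_km else 0.0

def ppmi(weighted_counts):
    """Positive PMI from a (locations x tags) matrix of kernel-weighted occurrence counts."""
    p = weighted_counts / weighted_counts.sum()
    p_loc = p.sum(axis=1, keepdims=True)
    p_tag = p.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p / (p_loc * p_tag))
    return np.where(p > 0, np.maximum(pmi, 0.0), 0.0)
```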
Tag selection. Inspired by BIBREF50 , we use a term selection method in order to focus on the tags that are most important for the tasks that we want to consider and reduce the impact of tags that might relate only to a given individual or a group of users. In particular, we obtained good results with a method based on Kullback-Leibler (KL) divergence, which is based on BIBREF51 . Let INLINEFORM0 be a set of (mutually exclusive) properties of locations in which we are interested (e.g. land cover categories). For the ease of presentation, we will identify INLINEFORM1 with the set of locations that have the corresponding property. Then, we select tags from INLINEFORM2 that maximize the following score: INLINEFORM3
where INLINEFORM0 is the probability that a photo with tag INLINEFORM1 has a location near INLINEFORM2 and INLINEFORM3 is the probability that an arbitrary tag occurrence is assigned to a photo near a location in INLINEFORM4 . Since INLINEFORM5 often has to be estimated from a small number of tag occurrences, it is estimated using Bayesian smoothing: INLINEFORM6
where INLINEFORM0 is a parameter controlling the amount of smoothing, which will be tuned in the experiments. On the other hand, for INLINEFORM1 we can simply use a maximum likelihood estimation: INLINEFORM2
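A compact sketch of this selection score is given below. The Dirichlet-style form of the Bayesian smoothing is an assumption chosen for illustration, since the exact smoothed estimator appears only symbolically above.

```python
import numpy as np

def kl_tag_score(tag_counts_per_class, total_counts_per_class, mu=100.0):
    """KL-divergence-style score for ranking a tag t (sketch).

    tag_counts_per_class: kernel-weighted occurrences of tag t near locations of each class C.
    total_counts_per_class: total kernel-weighted tag occurrences near locations of each class C.
    mu: smoothing parameter (tuned on held-out data in the experiments).
    """
    tag_counts = np.asarray(tag_counts_per_class, dtype=float)
    totals = np.asarray(total_counts_per_class, dtype=float)
    q = totals / totals.sum()                                # P(arbitrary occurrence falls in C)
    p = (tag_counts + mu * q) / (tag_counts.sum() + mu)      # smoothed P(C | tag t): assumed form
    return float(np.sum(p * np.log(p / q)))
```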
Location embedding. We now want to find a vector INLINEFORM0 for each location INLINEFORM1 such that similar locations are represented using similar vectors. To achieve this, we use a close variant of the GloVe model, where tag occurrences are treated as context words of geographic locations. In particular, with each location INLINEFORM2 we associate a vector INLINEFORM3 and with each tag INLINEFORM4 we associate a vector INLINEFORM5 and a bias term INLINEFORM6 , and consider the following objective (which in our full model ( EQREF7 ) will be combined with components that are derived from the structured information): INLINEFORM7
Note how tags play the role of the context words in the GloVe model, while instead of learning target word vectors we now learn location vectors. In contrast to GloVe, our objective does not directly refer to co-occurrence statistics, but instead uses the INLINEFORM0 scores. One important consequence of this is that we can also consider pairs INLINEFORM1 for which INLINEFORM2 does not occur in INLINEFORM3 at all; such pairs are usually called negative examples. While they cannot be used in the standard GloVe model, some authors have already reported that introducing negative examples in variants of GloVe can lead to an improvement BIBREF52 . In practice, evaluating the full objective above would not be computationally feasible, as we may need to consider millions of locations and millions of tags. Therefore, rather than considering all tags in INLINEFORM4 for the inner summation, we only consider those tags that appear at least once near location INLINEFORM5 together with a sample of negative examples.
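One plausible reading of this objective, with the PPMI scores taking the place of log co-occurrence counts and negative pairs included in the summation, is sketched below; the exact weighting of terms in the original model may differ.

```python
import numpy as np

def location_embedding_loss(V_loc, W_tag, b_tag, ppmi_scores, pairs):
    """Squared-error objective for the tag-based component (sketch).

    V_loc: (L, d) location vectors; W_tag: (T, d) tag context vectors; b_tag: (T,) tag biases.
    ppmi_scores: (L, T) spatially smoothed PPMI scores.
    pairs: iterable of (l, t) index pairs: tags observed near each location plus sampled negatives.
    """
    loss = 0.0
    for l, t in pairs:
        prediction = V_loc[l] @ W_tag[t] + b_tag[t]
        loss += (prediction - ppmi_scores[l, t]) ** 2
    return loss
```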
Structured Environmental Data
There is a wide variety of structured data that can be used to describe locations. In this work, we have restricted ourselves to the same datasets as BIBREF7 . These include nine (real-valued) numerical features, which are latitude, longitude, elevation, population, and five climate related features (avg. temperature, avg. precipitation, avg. solar radiation, avg. wind speed, and avg. water vapor pressure). In addition, 180 categorical features were used, which are CORINE land cover classes at level 1 (5 classes), level 2 (15 classes) and level 3 (44 classes) and 116 soil types (SoilGrids). Note that each location should belong to exactly 4 categories: one CORINE class at each of the three levels and a soil type.
Numerical features. Numerical features can be treated similarly to the tag occurrences, i.e. we will assume that the value of a given numerical feature can be predicted from the location vectors using a linear mapping. In particular, for each numerical feature INLINEFORM0 we consider a vector INLINEFORM1 and a bias term INLINEFORM2 , and the following objective: INLINEFORM3
where we write INLINEFORM0 for the set of all numerical features and INLINEFORM1 is the value of feature INLINEFORM2 for location INLINEFORM3 , after z-score normalization.
Categorical features. To take into account the categorical features, we impose the constraint that locations belonging to the same category should be close together in the space. To formalize this, we represent each category type INLINEFORM0 as a vector INLINEFORM1 , and consider the following objective: INLINEFORM2
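The two structured-data components can be realized as in the hedged sketch below; the squared-error forms are assumptions consistent with the description above (a linear readout for numerical features and proximity to a shared category vector for categorical ones), not a verbatim transcription of the original objectives.

```python
import numpy as np

def structured_losses(V, num_feats, U, b_num, cat_pairs, C):
    """Sketch of the numerical-feature and categorical-feature components.

    V: (L, d) location vectors. num_feats: (L, F) z-scored numerical feature values.
    U: (F, d) per-feature direction vectors; b_num: (F,) biases (linear readouts).
    cat_pairs: iterable of (location index, category index) assignments.
    C: (K, d) category vectors.
    """
    # Numerical features: each feature value is predicted linearly from the location vector.
    e_nf = np.sum((V @ U.T + b_num[None, :] - num_feats) ** 2)
    # Categorical features: pull each location vector toward the vector of each of its categories.
    e_cf = sum(np.sum((V[l] - C[k]) ** 2) for l, k in cat_pairs)
    return e_nf, e_cf
```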
Evaluation Tasks
We will use the method from BIBREF7 as our main baseline. This will allow us to directly evaluate the effectiveness of embeddings for the considered problem, since we have used the same structured datasets and same tag weighting scheme. For this reason, we will also follow their evaluation methodology. In particular, we will consider three evaluation tasks:
Predicting the distribution of 100 species across Europe, using the European network of nature protected sites Natura 2000 dataset as ground truth. For each of these species, a binary classification problem is considered. The set of locations INLINEFORM0 is defined as the 26,425 distinct sites occurring in the dataset.
Predicting soil type, again each time treating the task as a binary classification problem, using the same set of locations INLINEFORM0 as in the species distribution experiments. For these experiments, none of the soil type features are used for generating the embeddings.
Predicting CORINE land cover classes at levels 1, 2 and level 3, each time treating the task as a binary classification problem, using the same set of locations INLINEFORM0 as in the species distribution experiments. For these experiments, none of the CORINE features are used for generating the embeddings.
In addition, we will also consider the following regression tasks:
Predicting 5 climate related features: the average precipitation, temperature, solar radiation, water vapor pressure, and wind speed. We again use the same set of locations INLINEFORM0 as for species distribution in this experiment. None of the climate features is used for constructing the embeddings for this experiment.
Predicting people's subjective opinions of landscape beauty in Britain, using the crowdsourced dataset from the ScenicOrNot website as ground truth. The set INLINEFORM0 is chosen as the set of locations of 191 605 rated locations from the ScenicOrNot dataset for which at least one georeferenced Flickr photo exists within a 1 km radius.
Experimental Setup
In all experiments, we use Support Vector Machines (SVMs) for classification problems and Support Vector Regression (SVR) for regression problems to make predictions from our representations of geographic locations. In both cases, we used the SVM INLINEFORM0 implementation BIBREF53 . For each experiment, the set of locations INLINEFORM1 was split into two-thirds for training, one-sixth for testing, and one-sixth for tuning the parameters. All embedding models are learned with Adagrad using 30 iterations. The number of dimensions is chosen for each experiment from INLINEFORM2 based on the tuning data. For the parameters of our model in Equation EQREF7 , we considered values of INLINEFORM3 from {0.1, 0.01, 0.001, 0.0001} and values of INLINEFORM4 from {1, 10, 100, 1000, 10 000, 100 000}. To compute KL divergence, we need to determine a set of classes INLINEFORM5 for each experiment. For classification problems, we can simply consider the given categories, but for the regression problems we need to define such classes by discretizing the numerical values. For the scenicness experiments, we considered scores 3 and 7 as cut-off points, leading to three classes (i.e. less than 3, between 3 and 7, and above 7). Similarly, for each climate related features, we consider two cut-off values for discretization: 5 and 15 for average temperature, 50 and 100 for average precipitation, 10 000 and 17 000 for average solar radiation, 0.7 and 1 for average water vapor pressure, and 3 and 5 for wind speed. The smoothing parameter INLINEFORM6 was selected among INLINEFORM7 based on the tuning data. In all experiments where term selection is used, we select the top 100 000 tags. We fixed the radius INLINEFORM8 at 1km when counting the number of tag occurrences. Finally, we set the number of negative examples as 10 times the number of positive examples for each location, but with a cap at 1000 negative examples in each region for computational reasons. We tune all parameters with respect to the F1 score for the classification tasks, and Spearman INLINEFORM9 for the regression tasks.
Variants and Baseline Methods
We will refer to our model as EGEL (Embedding GEographic Locations), and will consider the following variants. EGEL-Tags only uses the information from the Flickr tags (i.e. component INLINEFORM0 ), without using any negative examples and without feature selection. EGEL-Tags+NS is similar to EGEL-Tags but with the addition of negative examples. EGEL-KL(Tags+NS) additionally considers term selection. EGEL-All is our full method, i.e. it additionally uses the structured information. We also consider the following baselines. BOW-Tags represents locations using a bag-of-words representation, using the same tag weighting as the embedding model. BOW-KL(Tags) uses the same representation but after term selection, using the same KL-based method as the embedding model. BOW-All combines the bag-of-words representation with the structured information, encoded as proposed in BIBREF7 . GloVe uses the objective from the original GloVe model for learning location vectors, i.e. this variant differs from EGEL-Tags in that instead of INLINEFORM1 we use the number of co-occurrences of tag INLINEFORM2 near location INLINEFORM3 , measured as INLINEFORM4 .
Results and Discussion
We present our results for the binary classification tasks in Tables TABREF23 – TABREF24 in terms of average precision, average recall and macro average F1 score. The results of the regression tasks are reported in Tables TABREF25 and TABREF29 in terms of the mean absolute error between the predicted and actual scores, as well as the Spearman INLINEFORM0 correlation between the rankings induced by both sets of scores. It can be clearly seen from the results that our proposed method (EGEL-All) can effectively integrate Flickr tags with the available structured information. It outperforms the baselines for all the considered tasks. Furthermore, note that the PPMI-based weighting in EGEL-Tags consistently outperforms GloVe and that both the addition of negative examples and term selection lead to further improvements. The use of term selection leads to particularly substantial improvements for the regression problems.
While our experimental results confirm the usefulness of embeddings for predicting environmental features, this is only consistently the case for the variants that use both the tags and the structured datasets. In particular, comparing BOW-Tags with EGEL-Tags, we sometimes see that the former achieves the best results. While this might seem surprising, it is in accordance with the findings in BIBREF54 , BIBREF38 , among others, where it was also found that bag-of-words representations can sometimes lead to surprisingly effective baselines. Interestingly, we note that in all cases where EGEL-KL(Tags+NS) performs worse than BOW-Tags, we also find that BOW-KL(Tags) performs worse than BOW-Tags. This suggests that for these tasks there is a very large variation in the kind of tags that can inform the prediction model, possibly including e.g. user-specific tags. Some of the information captured by such highly specific but rare tags is likely to be lost in the embedding.
To further analyze the difference in performance between BoW representations and embeddings, Figure TABREF29 compares the performance of the GloVe model with the bag-of-words model for predicting place scenicness, as a function of the number of tag occurrences at the considered locations. What is clearly noticeable in Figure TABREF29 is that GloVe performs better than the bag-of-words model for large corpora and worse for smaller corpora. This issue has been alleviated in our embedding method by the addition of negative examples.
Conclusions
In this paper, we have proposed a model to learn geographic location embeddings using Flickr tags, numerical environmental features, and categorical information. The experimental results show that our model can integrate Flickr tags with structured information in a more effective way than existing methods, leading to substantial improvements over baseline methods on various prediction tasks about the natural environment.
Acknowledgments
Shelan Jeawak has been sponsored by HCED Iraq. Steven Schockaert has been supported by ERC Starting Grant 637277.
Introduction
Data annotation is a major bottleneck for the application of supervised learning approaches to many problems. As a result, unsupervised methods that learn directly from unlabeled data are increasingly important. For tasks related to unsupervised syntactic analysis, discrete generative models have dominated in recent years – for example, for both part-of-speech (POS) induction BIBREF0 , BIBREF1 and unsupervised dependency parsing BIBREF2 , BIBREF3 , BIBREF4 . While similar models have had success on a range of unsupervised tasks, they have mostly ignored the apparent utility of continuous word representations evident from supervised NLP applications BIBREF5 , BIBREF6 . In this work, we focus on leveraging and explicitly representing continuous word embeddings within unsupervised models of syntactic structure.
Pre-trained word embeddings from massive unlabeled corpora offer a compact way of injecting a prior notion of word similarity into models that would otherwise treat words as discrete, isolated categories. However, the specific properties of language captured by any particular embedding scheme can be difficult to control, and, further, may not be ideally suited to the task at hand. For example, pre-trained skip-gram embeddings BIBREF7 with small context window size are found to capture the syntactic properties of language well BIBREF8 , BIBREF9 . However, if our goal is to separate syntactic categories, this embedding space is not ideal – POS categories correspond to overlapping interspersed regions in the embedding space, evident in Figure SECREF4 .
In our approach, we propose to learn a new latent embedding space as a projection of pre-trained embeddings (depicted in Figure SECREF5 ), while jointly learning latent syntactic structure – for example, POS categories or syntactic dependencies. To this end, we introduce a new generative model (shown in Figure FIGREF6 ) that first generates a latent syntactic representation (e.g. a dependency parse) from a discrete structured prior (which we also call the “syntax model”), then, conditioned on this representation, generates a sequence of latent embedding random variables corresponding to each word, and finally produces the observed (pre-trained) word embeddings by projecting these latent vectors through a parameterized non-linear function. The latent embeddings can be jointly learned with the structured syntax model in a completely unsupervised fashion.
By choosing an invertible neural network as our non-linear projector, and then parameterizing our model in terms of the projection's inverse, we are able to derive tractable exact inference and marginal likelihood computation procedures so long as inference is tractable in the underlying syntax model. In sec:learn-with-inv we show that this derivation corresponds to an alternate view of our approach whereby we jointly learn a mapping of observed word embeddings to a new embedding space that is more suitable for the syntax model, but include an additional Jacobian regularization term to prevent information loss.
Recent work has sought to take advantage of word embeddings in unsupervised generative models with alternate approaches BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . BIBREF9 build an HMM with Gaussian emissions on observed word embeddings, but they do not attempt to learn new embeddings. BIBREF10 , BIBREF11 , and BIBREF12 extend HMM or dependency model with valence (DMV) BIBREF2 with multinomials that use word (or tag) embeddings in their parameterization. However, they do not represent the embeddings as latent variables.
In experiments, we instantiate our approach using both a Markov-structured syntax model and a tree-structured syntax model – specifically, the DMV. We evaluate on two tasks: part-of-speech (POS) induction and unsupervised dependency parsing without gold POS tags. Experimental results on the Penn Treebank BIBREF13 demonstrate that our approach improves the basic HMM and DMV by a large margin, leading to the state-of-the-art results on POS induction, and state-of-the-art results on unsupervised dependency parsing in the difficult training scenario where neither gold POS annotation nor punctuation-based constraints are available.
Model
As an illustrative example, we first present a baseline model for Markov syntactic structure (POS induction) that treats a sequence of pre-trained word embeddings as observations. Then, we propose our novel approach, again using Markov structure, that introduces latent word embedding variables and a neural projector. Lastly, we extend our approach to more general syntactic structures.
Example: Gaussian HMM
We start by describing the Gaussian hidden Markov model introduced by BIBREF9 , which is a locally normalized model with multinomial transitions and Gaussian emissions. Given a sentence of length INLINEFORM0 , we denote the latent POS tags as INLINEFORM1 , observed (pre-trained) word embeddings as INLINEFORM2 , transition parameters as INLINEFORM3 , and Gaussian emission parameters as INLINEFORM4 . The joint distribution of data and latent variables factors as:
DISPLAYFORM0
where INLINEFORM0 is the multinomial transition probability and INLINEFORM1 is the multivariate Gaussian emission probability.
While the observed word embeddings do inform this model with a notion of word similarity – lacking in the basic multinomial HMM – the Gaussian emissions may not be sufficiently flexible to separate some syntactic categories in the complex pre-trained embedding space – for example the skip-gram embedding space as visualized in Figure SECREF4 where different POS categories overlap. Next we introduce a new approach that adds flexibility to the emission distribution by incorporating new latent embedding variables.
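To make the factorization explicit, the sketch below evaluates this joint probability for a given tag sequence using SciPy; the names and array layouts are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_hmm_joint_logprob(states, embeddings, log_start, log_trans, means, covs):
    """log p(z_1..T, x_1..T) under a Gaussian HMM (sketch).

    states: length-T list of latent tag ids z_t; embeddings: (T, d) observed word embeddings.
    log_start: (K,) initial-state log-probabilities; log_trans: (K, K) transition log-probabilities.
    means, covs: per-state Gaussian emission parameters (K x d means, K x d x d covariances).
    """
    lp = log_start[states[0]]
    for t in range(1, len(states)):
        lp += log_trans[states[t - 1], states[t]]                            # multinomial transitions
    for t, z in enumerate(states):
        lp += multivariate_normal.logpdf(embeddings[t], means[z], covs[z])   # Gaussian emissions
    return lp
```

During training, of course, the latent tags are marginalized out with the forward-backward algorithm rather than fixed as in this sketch.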
Markov Structure with Neural Projector
To flexibly model observed embeddings and yield a new representation space that is more suitable for the syntax model, we propose to cascade a neural network as a projection function, deterministically transforming the simple space defined by the Gaussian HMM to the observed embedding space. We denote the latent embedding of the INLINEFORM0 word in a sentence as INLINEFORM1 , and the neural projection function as INLINEFORM2 , parameterized by INLINEFORM3 . In the case of sequential Markov structure, our new model corresponds to the following generative process:
For each time step INLINEFORM0 ,
Draw the latent state INLINEFORM0
Draw the latent embedding INLINEFORM0
Deterministically produce embedding INLINEFORM0
The graphical model is depicted in Figure FIGREF6 . The deterministic projection can also be viewed as sampling each observation from a point mass at INLINEFORM0 . The joint distribution of our model is: DISPLAYFORM0
where INLINEFORM0 is a conditional Gaussian distribution, and INLINEFORM1 is the Dirac delta function centered at INLINEFORM2 : DISPLAYFORM0
General Structure with Neural Projector
Our approach can be applied to a broad family of structured syntax models. We denote latent embedding variables as INLINEFORM0 , discrete latent variables in the syntax model as INLINEFORM1 ( INLINEFORM2 ), where INLINEFORM3 are conditioned to generate INLINEFORM4 . The joint probability of our model factors as:
DISPLAYFORM0
where INLINEFORM0 represents the probability of the syntax model, and can encode any syntactic structure – though, its factorization structure will determine whether inference is tractable in our full model. As shown in Figure FIGREF6 , we focus on two syntax models for syntactic analysis in this paper. The first is Markov-structured, which we use for POS induction, and the second is DMV-structured, which we use to learn dependency parses without supervision.
The marginal data likelihood of our model is: DISPLAYFORM0
While the discrete variables INLINEFORM0 can be marginalized out with a dynamic program in many cases, it is generally intractable to marginalize out the latent continuous variables, INLINEFORM1 , for an arbitrary projection INLINEFORM2 in Eq. ( EQREF17 ), which means inference and learning may be difficult. In sec:opt, we address this issue by constraining INLINEFORM3 to be invertible, and show that this constraint enables tractable exact inference and marginal likelihood computation.
Learning & Inference
In this section, we introduce an invertibility condition for our neural projector to tackle the optimization challenge. Specifically, we constrain our neural projector with two requirements: (1) INLINEFORM0 and (2) INLINEFORM1 exists. Invertible transformations have been explored before in independent components analysis BIBREF14 , gaussianization BIBREF15 , and deep density models BIBREF16 , BIBREF17 , BIBREF18 , for unstructured data. Here, we generalize this style of approach to structured learning, and augment it with discrete latent variables ( INLINEFORM2 ). Under the invertibility condition, we derive a learning algorithm and give another view of our approach revealed by the objective function. Then, we present the architecture of a neural projector we use in experiments: a volume-preserving invertible neural network proposed by BIBREF16 for independent components estimation.
Learning with Invertibility
For ease of exposition, we explain the learning algorithm in terms of Markov structure without loss of generality. As shown in Eq. ( EQREF17 ), the optimization challenge in our approach comes from the intractability of the marginalized emission factor INLINEFORM0 . If we can marginalize out INLINEFORM1 and compute INLINEFORM2 , then the posterior and marginal likelihood of our Markov-structured model can be computed with the forward-backward algorithm. We can apply Eq. ( EQREF14 ) and obtain : INLINEFORM3
By using the change of variable rule to the integration, which allows the integration variable INLINEFORM0 to be replaced by INLINEFORM1 , the marginal emission factor can be computed in closed-form when the invertibility condition is satisfied: DISPLAYFORM0
where INLINEFORM0 is a conditional Gaussian distribution, INLINEFORM1 is the Jacobian matrix of function INLINEFORM2 at INLINEFORM3 , and INLINEFORM4 represents the absolute value of its determinant. This Jacobian term is nonzero and differentiable if and only if INLINEFORM5 exists.
Eq. ( EQREF19 ) shows that we can directly calculate the marginal emission distribution INLINEFORM0 . Denote the marginal data likelihood of Gaussian HMM as INLINEFORM1 , then the log marginal data likelihood of our model can be directly written as: DISPLAYFORM0
where INLINEFORM0 represents the new sequence of embeddings after applying INLINEFORM1 to each INLINEFORM2 . Eq. ( EQREF20 ) shows that the training objective of our model is simply the Gaussian HMM log likelihood with an additional Jacobian regularization term. From this view, our approach can be seen as equivalent to reversely projecting the data through INLINEFORM3 to another manifold INLINEFORM4 that is directly modeled by the Gaussian HMM, with a regularization term. Intuitively, we optimize the reverse projection INLINEFORM5 to modify the INLINEFORM6 space, making it more appropriate for the syntax model. The Jacobian regularization term accounts for the volume expansion or contraction behavior of the projection. Maximizing it can be thought of as preventing information loss. In the extreme case, the Jacobian determinant is equal to zero, which means the projection is non-invertible and thus information is being lost through the projection. Such “information preserving” regularization is crucial during optimization; otherwise, the trivial solution of always projecting data to the same single point to maximize likelihood is viable.
More generally, for an arbitrary syntax model the data likelihood of our approach is: DISPLAYFORM0
If the syntax model itself allows for tractable inference and marginal likelihood computation, the same dynamic program can be used to marginalize out INLINEFORM0 . Therefore, our joint model inherits the tractability of the underlying syntax model.
Invertible Volume-Preserving Neural Net
For the projection we can use an arbitrary invertible function, and given the representational power of neural networks they seem a natural choice. However, calculating the inverse and Jacobian of an arbitrary neural network can be difficult, as it requires that all component functions be invertible and also requires storage of large Jacobian matrices, which is memory intensive. To address this issue, several recent papers propose specially designed invertible networks that are easily trainable yet still powerful BIBREF16 , BIBREF17 , BIBREF19 . Inspired by these works, we use the invertible transformation proposed by BIBREF16 , which consists of a series of “coupling layers”. This architecture is specially designed to guarantee a unit Jacobian determinant (and thus the invertibility property).
From Eq. ( EQREF22 ) we know that only INLINEFORM0 is required for accomplishing learning and inference; we never need to explicitly construct INLINEFORM1 . Thus, we directly define the architecture of INLINEFORM2 . As shown in Figure FIGREF24 , the nonlinear transformation from the observed embedding INLINEFORM3 to INLINEFORM4 represents the first coupling layer. The input in this layer is partitioned into left and right halves of dimensions, INLINEFORM5 and INLINEFORM6 , respectively. A single coupling layer is defined as: DISPLAYFORM0
where INLINEFORM0 is the coupling function and can be any nonlinear form. This transformation satisfies INLINEFORM1 , and BIBREF16 show that its Jacobian matrix is triangular with all ones on the main diagonal. Thus the Jacobian determinant is always equal to one (i.e. volume-preserving) and the invertibility condition is naturally satisfied.
To be sufficiently expressive, we compose multiple coupling layers as suggested in BIBREF16 . Specifically, we exchange the role of left and right half vectors at each layer as shown in Figure FIGREF24 . For instance, from INLINEFORM0 to INLINEFORM1 the left subset INLINEFORM2 is unchanged, while from INLINEFORM3 to INLINEFORM4 the right subset INLINEFORM5 remains the same. Also note that composing multiple coupling layers does not change the volume-preserving and invertibility properties. Such a sequence of invertible transformations from the data space INLINEFORM6 to INLINEFORM7 is also called normalizing flow BIBREF20 .
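As a minimal illustration of these additive coupling layers — a sketch written from the description above, not the authors' code, with toy dimensions and a toy coupling function m — the following NumPy snippet composes four volume-preserving layers, alternating the roles of the two halves, and verifies exact invertibility:

import numpy as np

def coupling_layer(x, m, swap=False):
    # One additive coupling layer: one half passes through unchanged, the other
    # half is shifted by a nonlinear function of the first. The Jacobian is
    # triangular with ones on the diagonal, so its determinant is 1 (volume-preserving).
    d = x.shape[-1] // 2
    left, right = x[..., :d], x[..., d:]
    if swap:                      # alternate which half stays unchanged
        left, right = right, left
    right = right + m(left)
    if swap:
        left, right = right, left
    return np.concatenate([left, right], axis=-1)

def coupling_layer_inverse(y, m, swap=False):
    # Exact inverse of the layer above: subtract instead of add.
    d = y.shape[-1] // 2
    left, right = y[..., :d], y[..., d:]
    if swap:
        left, right = right, left
    right = right - m(left)
    if swap:
        left, right = right, left
    return np.concatenate([left, right], axis=-1)

rng = np.random.default_rng(0)
W1, W2 = 0.1 * rng.normal(size=(50, 50)), 0.1 * rng.normal(size=(50, 50))
m = lambda h: np.maximum(h @ W1, 0.0) @ W2   # a toy rectified coupling function

x = rng.normal(size=(3, 100))                # three 100-dimensional embeddings
y = x
for k in range(4):                           # four stacked coupling layers
    y = coupling_layer(y, m, swap=bool(k % 2))
x_rec = y
for k in reversed(range(4)):
    x_rec = coupling_layer_inverse(x_rec, m, swap=bool(k % 2))
assert np.allclose(x, x_rec)                 # invertible by construction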
Experiments
In this section, we first describe our datasets and experimental setup. We then instantiate our approach with Markov and DMV-structured syntax models, and report results on POS tagging and dependency grammar induction respectively. Lastly, we analyze the learned latent embeddings.
Data
For both POS tagging and dependency parsing, we run experiments on the Wall Street Journal (WSJ) portion of the Penn Treebank. To create the observed data embeddings, we train skip-gram word embeddings BIBREF7 , which have been found to capture syntactic properties well when trained with a small context window BIBREF8 , BIBREF9 . Following BIBREF9 , the dimensionality INLINEFORM0 is set to 100, and the training context window size is set to 1 to encode more syntactic information. The skip-gram embeddings are trained on the one billion word language modeling benchmark dataset BIBREF21 in addition to the WSJ corpus.
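For concreteness, the skip-gram setting described above (100-dimensional vectors, context window of 1) could be reproduced with a toolkit such as gensim; the snippet below is a hedged sketch with a toy corpus, not the authors' training script (the paper does not name its word2vec implementation), and assumes the gensim 4.x API:

from gensim.models import Word2Vec

# Toy stand-in corpus; the paper trains on the one billion word benchmark plus WSJ.
sentences = [["traders", "sold", "shares", "quickly"],
             ["prices", "rose", "sharply", "today"]]
model = Word2Vec(sentences, vector_size=100, window=1, sg=1, min_count=1)
x = model.wv["shares"]   # a 100-dimensional observed embedding fed to the syntax model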
General Experimental Setup
For the neural projector, we employ rectified networks as the coupling function INLINEFORM0 , following BIBREF16 . We use a rectified network with an input layer, one hidden layer, and linear output units; the number of hidden units is set equal to the number of input units. The number of coupling layers is varied over 4, 8, and 16 for both tasks. We optimize the marginal data likelihood directly using Adam BIBREF22 . For both tasks in the fully unsupervised setting, we do not tune the hyper-parameters using supervised data.
Unsupervised POS tagging
For unsupervised POS tagging, we use a Markov-structured syntax model in our approach, which is a popular structure for unsupervised tagging tasks BIBREF9 , BIBREF10 .
Following existing literature, we train and test on the entire WSJ corpus (49208 sentences, 1M tokens). We use 45 tag clusters, the number of POS tags that appear in the WSJ corpus. We train the discrete HMM and the Gaussian HMM BIBREF9 as baselines. For the Gaussian HMM, mean vectors of Gaussian emissions are initialized with the empirical mean of all word vectors plus additive noise. We assume a diagonal covariance matrix for INLINEFORM0 and initialize it with the empirical variance of the word vectors. Following BIBREF9 , the covariance matrix is fixed during training. The multinomial probabilities are initialized as INLINEFORM1 , where INLINEFORM2 . For our approach, we initialize the syntax model and Gaussian parameters with the pre-trained Gaussian HMM. The weights of layers in the rectified network are initialized from a uniform distribution with mean zero and a standard deviation of INLINEFORM3 , where INLINEFORM4 is the input dimension. We evaluate the performance of POS tagging with both Many-to-One (M-1) accuracy BIBREF23 and V-Measure (VM) BIBREF24 . We found that tagging performance correlates well with training data likelihood, so we use training data likelihood as an unsupervised criterion to select the trained model over 10 random restarts after training for 50 epochs. We repeat this process 5 times and report the mean and standard deviation of performance.
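The Many-to-One metric can be made concrete with a short sketch (our own illustration, not the evaluation script used in the paper): each induced cluster is mapped to the gold tag it most frequently co-occurs with, and tagging accuracy is computed under that mapping.

from collections import Counter

def many_to_one_accuracy(pred_clusters, gold_tags):
    # Map each induced cluster to its most frequent gold tag, then score.
    mapping = {}
    for c in set(pred_clusters):
        gold_for_c = [g for p, g in zip(pred_clusters, gold_tags) if p == c]
        mapping[c] = Counter(gold_for_c).most_common(1)[0][0]
    correct = sum(mapping[p] == g for p, g in zip(pred_clusters, gold_tags))
    return correct / len(gold_tags)

print(many_to_one_accuracy([0, 0, 1, 2, 2], ["NN", "NN", "VB", "DT", "NN"]))  # 0.8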
We compare our approach with the basic HMM, the Gaussian HMM, and several state-of-the-art systems, including sophisticated HMM variants and clustering techniques with hand-engineered features. The results are presented in Table TABREF32 . Through the introduced latent embeddings and additional neural projection, our approach improves over the Gaussian HMM by 5.4 points in M-1 and 5.6 points in VM. The Neural HMM (NHMM) BIBREF10 is a baseline that also learns word representations jointly. Neither their basic model nor the extended Conv version outperforms the Gaussian HMM. Their best model incorporates another LSTM to model long-distance dependencies and breaks the Markov assumption, yet our approach still achieves a substantial improvement over it without considering more context information. Moreover, our method outperforms the best published result that benefits from hand-engineered features BIBREF27 by 2.0 points on VM.
We found that most tagging errors happen in noun subcategories. Therefore, we perform a one-to-one mapping between gold POS tags and induced clusters and plot the normalized confusion matrix of noun subcategories in Figure FIGREF35 . The Gaussian HMM fails to identify “NN” and “NNS” correctly in most cases, and it often recognizes “NNPS” as “NNP”. In contrast, our approach corrects these errors well.
Unsupervised Dependency Parsing without gold POS tags
For the task of unsupervised dependency parse induction, we employ the Dependency Model with Valence (DMV) BIBREF2 as the syntax model in our approach. DMV is a generative model that defines a probability distribution over dependency parse trees and syntactic categories, generating tokens and dependencies in a head-outward fashion. While, traditionally, DMV is trained using gold POS tags as observed syntactic categories, in our approach, we treat each tag as a latent variable, as described in sec:general-neural.
Most existing approaches to this task are not fully unsupervised since they rely on gold POS tags following the original experimental setup for DMV. This is partially because automatically parsing from words is difficult even when using unsupervised syntactic categories BIBREF29 . However, inducing dependencies from words alone represents a more realistic experimental condition since gold POS tags are often unavailable in practice. Previous work that has trained from words alone often requires additional linguistic constraints (like sentence internal boundaries) BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , acoustic cues BIBREF33 , additional training data BIBREF4 , or annotated data from related languages BIBREF34 . Our approach is naturally designed to train on word embeddings directly, thus we attempt to induce dependencies without using gold POS tags or other extra linguistic information.
Like previous work, we use sections 02-21 of the WSJ corpus as training data and evaluate on section 23. We remove punctuation and train the models on sentences of length INLINEFORM0 ; “head-percolation” rules BIBREF39 are applied to obtain gold dependencies for evaluation. We train the basic DMV, the extended DMV (E-DMV) BIBREF35 and the Gaussian DMV (which treats POS tags as unknown latent variables and generates observed word embeddings directly conditioned on them following a Gaussian distribution) as baselines. The basic DMV and E-DMV are trained with Viterbi EM BIBREF40 on unsupervised POS tags induced from our Markov-structured model described in sec:pos. Multinomial parameters of the syntax model in both the Gaussian DMV and our model are initialized with the pre-trained DMV baseline. Other parameters are initialized in the same way as in the POS tagging experiment. The directed dependency accuracy (DDA) is used for evaluation, and we report accuracy on sentences of length INLINEFORM1 and on all lengths. We train the parser until the training data likelihood converges, and report the mean and standard deviation over 20 random restarts.
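As a reference point for the evaluation metric (our own minimal sketch, using the usual convention of 0 for the root and 1-based head indices; this is not the paper's evaluation code), directed dependency accuracy is simply the fraction of tokens whose predicted head matches the gold head:

def directed_dependency_accuracy(pred_heads, gold_heads):
    correct = sum(p == g for p, g in zip(pred_heads, gold_heads))
    return correct / len(gold_heads)

print(directed_dependency_accuracy([2, 0, 2], [2, 0, 1]))  # 2 of 3 heads correct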
Our model directly observes word embeddings and does not require gold POS tags during training. Thus, results from related work trained on gold tags are not directly comparable. However, to measure how these systems might perform without gold tags, we run three recent state-of-the-art systems in our experimental setting: UR-A E-DMV BIBREF36 , Neural E-DMV BIBREF11 , and CRF Autoencoder (CRFAE) BIBREF37 . We use unsupervised POS tags (induced from our Markov-structured model) in place of gold tags. We also train basic DMV on gold tags and include several state-of-the-art results on gold tags as reference points.
As shown in Table TABREF39 , our approach is able to improve over the Gaussian DMV by 4.8 points on length INLINEFORM0 and 4.8 points on all lengths, which suggests the additional latent embedding layer and neural projector are helpful. The proposed approach yields, to the best of our knowledge, state-of-the-art performance without gold POS annotation and without sentence-internal boundary information. DMV, UR-A E-DMV, Neural E-DMV, and CRFAE suffer a large decrease in performance when trained on unsupervised tags – an effect also seen in previous work BIBREF29 , BIBREF34 . Since our approach induces latent POS tags jointly with dependency trees, it may be able to learn POS clusters that are more amenable to grammar induction than the unsupervised tags. We observe that CRFAE underperforms its gold-tag counterpart substantially. This may largely be a result of the model's reliance on prior linguistic rules that become unavailable when gold POS tag types are unknown. Many extensions to DMV can be considered orthogonal to our approach – they essentially focus on improving the syntax model. It is possible that incorporating these more sophisticated syntax models into our approach may lead to further improvements.
Sensitivity Analysis
In the above experiments we initialize the structured syntax components with the pre-trained Gaussian or discrete baseline, which proves to be a useful technique for training our deep models. We further study the results with fully random initialization. In the POS tagging experiment, we report the results in Table TABREF48 . While the performance with 4 layers is comparable to the pre-trained Gaussian initialization, deeper projections (8 or 16 layers) result in a dramatic drop in performance. This suggests that the structured syntax model with very deep projections is difficult to train from scratch, and a simpler projection might be a good compromise in the random initialization setting.
Different from the Markov prior in POS tagging experiments, our parsing model seems to be quite sensitive to the initialization. For example, directed accuracy of our approach on sentences of length INLINEFORM0 is below 40.0 with random initialization. This is consistent with previous work that has noted the importance of careful initialization for DMV-based models such as the commonly used harmonic initializer BIBREF2 . However, it is not straightforward to apply the harmonic initializer for DMV directly in our model without using some kind of pre-training since we do not observe gold POS.
We investigate the effect of the choice of pre-trained embeddings on performance when using our approach. To this end, we additionally include results using fastText embeddings BIBREF41 – which, in contrast with skip-gram embeddings, include character-level information. We set the context window size to 1 and the dimension size to 100 as in the skip-gram training, while keeping other parameters set to their defaults. These results are summarized in Table TABREF50 and Table TABREF51 . While fastText embeddings lead to reduced performance with our model, our approach still yields an improvement over the Gaussian baseline with the new observed embedding space.
Qualitative Analysis of Embeddings
We perform qualitative analysis to understand how the latent embeddings help induce syntactic structures. First we filter out low-frequency words and punctuation in WSJ, and visualize the remaining words (10k) with t-SNE BIBREF42 under different embeddings. We assign each word its most likely gold POS tag in WSJ and color the words accordingly.
For our Markov-structured model, we have displayed the embedding space in Figure SECREF5 , where the gold POS clusters are well-formed. Further, we present five example target words and their five nearest neighbors in terms of cosine similarity. As shown in Table TABREF53 , the skip-gram embedding captures both semantic and syntactic aspects to some degree, yet our embeddings are able to focus especially on the syntactic aspects of words, in an unsupervised fashion without using any extra morphological information.
In Figure FIGREF54 we depict the learned latent embeddings with the DMV-structured syntax model. Unlike the Markov structure, the DMV structure maps a large subset of singular and plural nouns to the same overlapping region. However, two clusters of singular and plural nouns are actually separated. Inspecting the two clusters and the overlapping region in Figure FIGREF54 , we find that the nouns in the separated clusters are words that can appear as subjects and, therefore, for which verb agreement is important to model. In contrast, the nouns in the overlapping region are typically objects. This demonstrates that the latent embeddings are focusing on aspects of language that are specifically important for modeling dependency, without ever having seen examples of dependency parses. Some previous work has deliberately created embeddings to capture different notions of similarity BIBREF43 , BIBREF44 , but they use extra morphology or dependency annotations to guide the embedding learning; our approach provides a potential alternative for creating new embeddings that are guided by a structured syntax model, using only unlabeled text corpora.
Related Work
Our approach is related to flow-based generative models, which were first described in NICE BIBREF16 and have recently received more attention BIBREF17 , BIBREF19 , BIBREF18 . This line of work mostly adopts simple (e.g. Gaussian) fixed priors and does not attempt to learn interpretable latent structures. Another related class of generative models is variational auto-encoders (VAEs) BIBREF45 , which optimize a lower bound on the marginal data likelihood and can be extended to learn latent structures BIBREF46 , BIBREF47 . Compared with flow-based models, VAEs remove the invertibility constraint but sacrifice the merits of exact inference and exact log likelihood computation, which potentially results in optimization challenges BIBREF48 . Our approach can also be viewed in connection with generative adversarial networks (GANs) BIBREF49 , a likelihood-free framework for learning implicit generative models. However, it is non-trivial for a gradient-based method like GANs to propagate gradients through discrete structures.
Conclusion
In this work, we define a novel generative approach to leverage continuous word representations for unsupervised learning of syntactic structure. Experiments on both POS induction and unsupervised dependency parsing tasks demonstrate the effectiveness of our proposed approach. Future work might explore more sophisticated invertible projections, or recurrent projections that jointly transform the entire input sequence.
Question: What is the invertibility condition?
Answer: The neural projector must be invertible.
Introduction
There is a recent spark of interest in the task of Question Answering (QA) over unstructured textual data, also referred to as Machine Reading Comprehension (MRC). This is mostly due to wide-spread success of advances in various facets of deep learning related research, such as novel architectures BIBREF0, BIBREF1 that allow for efficient optimisation of neural networks consisting of multiple layers, hardware designed for deep learning purposes and software frameworks BIBREF2, BIBREF3 that allow efficient development and testing of novel approaches. These factors enable researchers to produce models that are pre-trained on large scale corpora and provide contextualised word representations BIBREF4 that are shown to be a vital component towards solutions for a variety of natural language understanding tasks, including MRC BIBREF5. Another important factor that led to the recent success in MRC-related tasks is the widespread availability of various large datasets, e.g., SQuAD BIBREF6, that provide sufficient examples for optimising statistical models. The combination of these factors yields notable results, even surpassing human performance BIBREF7.
MRC is a generic task format that can be used to probe for various natural language understanding capabilities BIBREF8. Therefore it is crucially important to establish a rigorous evaluation methodology in order to be able to draw reliable conclusions from conducted experiments. While increasing effort is put into the evaluation of novel architectures, such as keeping the evaluation data from public access to prevent unintentional overfitting to test data, performing ablation and error studies and introducing novel metrics BIBREF9, surprisingly little is done to establish the quality of the data itself. Additionally, recent research arrived at worrisome findings: the data of those gold standards, which is usually gathered involving a crowd-sourcing step, suffers from flaws in design BIBREF10 or contains overly specific keywords BIBREF11. Furthermore, these gold standards contain “annotation artefacts”, cues that lead models into focusing on superficial aspects of text, such as lexical overlap and word order, instead of actual language understanding BIBREF12, BIBREF13. These weaknesses cast some doubt on whether the data can reliably evaluate the reading comprehension performance of the models they evaluate, i.e. if the models are indeed being assessed for their capability to read.
Figure FIGREF3 shows an example from HotpotQA BIBREF14, a dataset that exhibits the last kind of weakness mentioned above, i.e., the presence of unique keywords in both the question and the passage (in close proximity to the expected answer).
An evaluation methodology is vital to the fine-grained understanding of challenges associated with a single gold standard, in order to understand in greater detail which capabilities of MRC models it evaluates. More importantly, it allows us to draw comparisons between multiple gold standards and between the results of the respective state-of-the-art models that are evaluated on them.
In this work, we take a step back and propose a framework to systematically analyse MRC evaluation data, typically a set of questions and expected answers to be derived from accompanying passages. Concretely, we introduce a methodology to categorise the linguistic complexity of the textual data and the reasoning and potential external knowledge required to obtain the expected answer. Additionally we propose to take a closer look at the factual correctness of the expected answers, a quality dimension that appears under-explored in literature.
We demonstrate the usefulness of the proposed framework by applying it to precisely describe and compare six contemporary MRC datasets. Our findings reveal concerns about their factual correctness, the presence of lexical cues that simplify the task of reading comprehension and the lack of semantic altering grammatical modifiers. We release the raw data comprised of 300 paragraphs, questions and answers richly annotated under the proposed framework as a resource for researchers developing natural language understanding models and datasets to utilise further.
To the best of our knowledge this is the first attempt to introduce a common evaluation methodology for MRC gold standards and the first across-the-board qualitative evaluation of MRC datasets with respect to the proposed categories.
Framework for MRC Gold Standard Analysis ::: Problem definition
We define the task of machine reading comprehension, the target application of the proposed methodology as follows: Given a paragraph $P$ that consists of tokens (words) $p_1, \ldots , p_{n_P}$ and a question $Q$ that consists of tokens $q_1 \ldots q_{n_Q}$, the goal is to retrieve an answer $A$ with tokens $a_1 \ldots a_{n_A}$. $A$ is commonly constrained to be one of the following cases BIBREF15, illustrated in Figure FIGREF9:
Multiple choice, where the goal is to predict $A$ from a given set of choices $\mathcal {A}$.
Cloze-style, where $S$ is a sentence, and $A$ and $Q$ are obtained by removing a sequence of words such that $Q = S - A$. The task is to fill in the resulting gap in $Q$ with the expected answer $A$ to form $S$.
Span, where $A$ is a continuous subsequence of tokens from the paragraph ($A \subseteq P$). Flavours include multiple spans as the correct answer or $A \subseteq Q$.
Free form, where $A$ is an unconstrained natural language string.
A gold standard $G$ is composed of $m$ entries $(Q_i, A_i, P_i)_{i\in \lbrace 1,\ldots , m\rbrace }$.
The performance of an approach is established by comparing its answer predictions $A^*_{i}$ on the given input $(Q_i, P_i)$ (and $\mathcal {A}_i$ for the multiple choice setting) against the expected answer $A_i$ for all $i\in \lbrace 1,\ldots , m\rbrace $ under a performance metric. Typical performance metrics are exact match (EM) or accuracy, i.e. the percentage of exactly predicted answers, and the F1 score – the harmonic mean between the precision and the recall of the predicted tokens compared to expected answer tokens. The overall F1 score can either be computed by averaging the F1 scores for every instance or by first averaging the precision and recall and then computing the F1 score from those averages (macro F1). Free-text answers, meanwhile, are evaluated by means of text generation and summarisation metrics such as BLEU BIBREF16 or ROUGE-L BIBREF17.
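For illustration, the two span-style metrics can be sketched as follows (a simplified version of common evaluation scripts, which additionally strip punctuation and articles; this is not taken from any particular benchmark's official scorer):

from collections import Counter

def exact_match(prediction, gold):
    return float(prediction.strip().lower() == gold.strip().lower())

def token_f1(prediction, gold):
    pred_toks, gold_toks = prediction.lower().split(), gold.lower().split()
    common = Counter(pred_toks) & Counter(gold_toks)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision, recall = num_same / len(pred_toks), num_same / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the red house", "red house"))  # 0.0
print(token_f1("the red house", "red house"))     # 0.8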
Framework for MRC Gold Standard Analysis ::: Dimensions of Interest
In this section we describe a methodology to categorise gold standards according to linguistic complexity, required reasoning and background knowledge, and their factual correctness. Specifically, we use those dimensions as high-level categories of a qualitative annotation schema for annotating question, expected answer and the corresponding context. We further enrich the qualitative annotations by a metric based on lexical cues in order to approximate a lower bound for the complexity of the reading comprehension task. By sampling entries from each gold standard and annotating them, we obtain measurable results and thus are able to make observations about the challenges present in that gold standard data.
Framework for MRC Gold Standard Analysis ::: Dimensions of Interest ::: Problem setting
We are interested in different types of the expected answer. We differentiate between Span, where an answer is a continuous span taken from the passage, Paraphrasing, where the answer is a paraphrase of a text span, Unanswerable, where there is no answer present in the context, and Generated, if it does not fall into any of the other categories. It is not sufficient for an answer to restate the question or combine multiple Span or Paraphrasing answers to be annotated as Generated. It is worth mentioning that we focus our investigations on answerable questions. For a complementary qualitative analysis that categorises unanswerable questions, the reader is referred to Yatskar2019.
Furthermore, we mark a sentence as Supporting Fact if it contains evidence required to produce the expected answer, as they are used further in the complexity analysis.
Framework for MRC Gold Standard Analysis ::: Dimensions of Interest ::: Factual Correctness
An important factor for the quality of a benchmark is its factual correctness, because on the one hand, the presence of factually wrong or debatable examples introduces an upper bound for the achievable performance of models on those gold standards. On the other hand, it is hard to draw conclusions about the correctness of answers produced by a model that is evaluated on partially incorrect data.
One way by which developers of modern crowd-sourced gold standards ensure quality is by having the same entry annotated by multiple workers BIBREF18 and keeping only those with high agreement. We investigate whether this method is enough to establish a sound ground truth answer that is unambiguously correct. Concretely we annotate an answer as Debatable when the passage features multiple plausible answers, when multiple expected answers contradict each other, or an answer is not specific enough with respect to the question and a more specific answer is present. We annotate an answer as Wrong when it is factually wrong and a correct answer is present in the context.
Framework for MRC Gold Standard Analysis ::: Dimensions of Interest ::: Required Reasoning
It is important to understand what types of reasoning the benchmark evaluates, in order to be able to accredit various reasoning capabilities to the models it evaluates. Our proposed reasoning categories are inspired by those found in scientific question answering literature BIBREF19, BIBREF20, as research in this area focuses on understanding the required reasoning capabilities. We include reasoning about the Temporal succession of events, Spatial reasoning about directions and environment, and Causal reasoning about the cause-effect relationship between events. We further annotate (multiple-choice) answers that can only be answered By Exclusion of every other alternative.
We further extend the reasoning categories by operational logic, similar to those required in semantic parsing tasks BIBREF21, as solving those tasks typically requires “multi-hop” reasoning BIBREF14, BIBREF22. When an answer can only be obtained by combining information from different sentences joined by mentioning a common entity, concept, date, fact or event (from here on called entity), we annotate it as Bridge. We further annotate the cases, when the answer is a concrete entity that satisfies a Constraint specified in the question, when it is required to draw a Comparison of multiple entities' properties or when the expected answer is an Intersection of their properties (e.g. “What do Person A and Person B have in common?”)
We are interested in the linguistic reasoning capabilities probed by a gold standard, therefore we include the appropriate category used by Wang2019. Specifically, we annotate occurrences that require understanding of Negation, Quantifiers (such as “every”, “some”, or “all”), Conditional (“if ...then”) statements and the logical implications of Con-/Disjunction (i.e. “and” and “or”) in order to derive the expected answer.
Finally, we investigate whether arithmetic reasoning requirements emerge in MRC gold standards, as this can probe for reasoning that is not evaluated by simple answer retrieval BIBREF23. To this end, we annotate the presence of Addition and Subtraction, answers that require Ordering of numerical values, Counting and Other occurrences of simple mathematical operations.
An example can exhibit multiple forms of reasoning. Notably, we do not annotate any of the categories mentioned above if the expected answer is directly stated in the passage. For example, if the question asks “How many total points were scored in the game?” and the passage contains a sentence similar to “The total score of the game was 51 points”, it does not require any reasoning, in which case we annotate it as Retrieval.
Framework for MRC Gold Standard Analysis ::: Dimensions of Interest ::: Knowledge
It is worth knowing whether the information presented in the context is sufficient to answer the question, as there is a growing number of benchmarks deliberately designed to probe a model's reliance on some sort of background knowledge BIBREF24. We seek to categorise the type of knowledge required. Similar to Wang2019, on the one hand we annotate the reliance on factual knowledge, that is (Geo)political/Legal, Cultural/Historic, Technical/Scientific and Other Domain Specific knowledge about the world that can be expressed as a set of facts. On the other hand, we denote Intuitive knowledge requirements, which are challenging to express as a set of facts, such as the knowledge that a parenthetic numerical expression next to a person's name in a biography usually denotes their life span.
Framework for MRC Gold Standard Analysis ::: Dimensions of Interest ::: Linguistic Complexity
Another dimension of interest is the evaluation of various linguistic capabilities of MRC models BIBREF25, BIBREF26, BIBREF27. We aim to establish which linguistic phenomena are probed by gold standards and to which degree. To that end, we draw inspiration from the annotation schema used by Wang2019, and adapt it around lexical semantics and syntax.
More specifically, we annotate features that introduce variance between the supporting facts and the question. With regard to lexical semantics, we focus on the use of redundant words that do not alter the meaning of a sentence for the task of retrieving the expected answer (Redundancy), requirements on the understanding of words' semantic fields (Lexical Entailment) and the use of Synonyms and Paraphrases with respect to the question wording. Furthermore we annotate cases where supporting facts contain Abbreviations of concepts introduced in the question (and vice versa) and when a Dative case substitutes the use of a preposition (e.g. “I bought her a gift” vs “I bought a gift for her”). Regarding syntax, we annotate changes from passive to active Voice, the substitution of a Genitive case with a preposition (e.g. “of”) and changes from nominal to verbal style and vice versa (Nominalisation).
We recognise features that add ambiguity to the supporting facts, for example when information is only expressed implicitly by using an Ellipsis. As opposed to redundant words, we annotate Restrictivity and Factivity modifiers, words and phrases whose presence does change the meaning of a sentence with regard to the expected answer, and occurrences of intra- or inter-sentence Coreference in supporting facts (that is relevant to the question). Lastly, we mark ambiguous syntactic features, when their resolution is required in order to obtain the answer. Concretely, we mark argument collection with con- and disjunctions (Listing) and ambiguous Prepositions, Coordination Scope and Relative clauses/Adverbial phrases/Appositions.
Framework for MRC Gold Standard Analysis ::: Dimensions of Interest ::: Complexity
Finally, we want to approximate the presence of lexical cues that might simplify the reading required in order to arrive at the answer. Quantifying this allows for more reliable statements about and comparison of the complexity of gold standards, particularly regarding the evaluation of comprehension that goes beyond simple lexical matching. We propose the use of coarse metrics based on lexical overlap between question and context sentences. Intuitively, we aim to quantify how much supporting facts “stand out” from their surrounding passage context. This can be used as proxy for the capability to retrieve the answer BIBREF10. Specifically, we measure (i) the number of words jointly occurring in a question and a sentence, (ii) the length of the longest n-gram shared by question and sentence and (iii) whether a word or n-gram from the question uniquely appears in a sentence.
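A minimal sketch of these overlap cues is given below (our own illustration; the exact tokenisation and the passage-level uniqueness computation used in the study may differ). The uniqueness cue needs the remaining sentences of the passage, so the sketch only returns the first two features:

def overlap_features(question, sentence):
    # (i) number of shared words and (ii) length of the longest shared n-gram.
    q, s = question.lower().split(), sentence.lower().split()
    shared_words = len(set(q) & set(s))
    longest = 0
    for n in range(1, min(len(q), len(s)) + 1):
        q_ngrams = {tuple(q[i:i + n]) for i in range(len(q) - n + 1)}
        s_ngrams = {tuple(s[i:i + n]) for i in range(len(s) - n + 1)}
        if q_ngrams & s_ngrams:
            longest = n
    return shared_words, longest

print(overlap_features("who scored the final goal", "the final goal was scored by Ann"))  # (4, 3)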
The resulting taxonomy of the framework is shown in Figure FIGREF10. The full catalogue of features, their description, detailed annotation guideline as well as illustrating examples can be found in Appendix .
Application of the Framework ::: Candidate Datasets
We select contemporary MRC benchmarks to represent all four commonly used problem definitions BIBREF15. In selecting relevant datasets, we do not consider those that are considered “solved”, i.e. where the state of the art performance surpasses human performance, as is the case with SQuAD BIBREF28, BIBREF7. Concretely, we selected gold standards that fit our problem definition, were published in the years 2016 to 2019, and have at least $(2019 - publication\ year) \times 20$ citations, and we bucket them according to the answer selection styles as described in Section SECREF4. We randomly draw one from each bucket and add two randomly drawn datasets from the candidate pool. This leaves us with the datasets described in Table TABREF19. For a more detailed description, we refer to Appendix .
Application of the Framework ::: Annotation Task
We randomly select 50 distinct question, answer and passage triples from the publicly available development sets of the described datasets. Training, development and the (hidden) test set are drawn from the same distribution defined by the data collection method of the respective dataset. For those collections that contain multiple questions over a single passage, we ensure that we are sampling unique paragraphs in order to increase the variety of investigated texts.
The samples were annotated by the first author of this paper, using the proposed schema. In order to validate our findings, we further take 20% of the annotated samples and present them to a second annotator (second author). Since at its core, the annotation is a multi-label task, we report the inter-annotator agreement by computing the (micro-averaged) F1 score, where we treat the first annotator's labels as gold. Table TABREF21 reports the agreement scores, the overall (micro) average F1 score of the annotations is 0.82, which means that on average, more than two thirds of the overall annotated labels were agreed on by both annotators. We deem this satisfactory, given the complexity of the annotation schema.
Application of the Framework ::: Qualitative Analysis
We present a concise view of the annotation results in Figure FIGREF23. The full annotation results can be found in Appendix . We centre our discussion around the following main points:
Application of the Framework ::: Qualitative Analysis ::: Linguistic Features
As observed in Figure FIGREF23 the gold standards feature a high degree of Redundancy, peaking at 76% of the annotated HotpotQA samples and synonyms and paraphrases (labelled Synonym), with ReCoRd samples containing 58% of them, likely to be attributed to the elaborating type of discourse of the dataset sources (encyclopedia and newswire). This is, however, not surprising, as it is fairly well understood in the literature that current state-of-the-art models perform well on distinguishing relevant words and phrases from redundant ones BIBREF32. Additionally, the representational capability of synonym relationships of word embeddings has been investigated and is well known BIBREF33. Finally, we observe the presence of syntactic features, such as ambiguous relative clauses, appositions and adverbial phrases, (RelAdvApp 40% in HotpotQA and ReCoRd) and those introducing variance, concretely switching between verbal and nominal styles (e.g. Nominalisation 10% in HotpotQA) and from passive to active voice (Voice, 8% in HotpotQA).
Syntactic features contributing to variety and ambiguity that we did not observe in our samples are the exploitation of verb symmetry, the use of dative and genitive cases or ambiguous prepositions and coordination scope (respectively Symmetry, Dative, Genitive, Prepositions, Scope). Therefore we cannot establish whether models are capable of dealing with those features by evaluating them on those gold standards.
Application of the Framework ::: Qualitative Analysis ::: Factual Correctness
We identify three common sources that surface in different problems regarding an answer's factual correctness, as reported in Figure FIGREF23 and illustrate their instantiations in Table TABREF31:
Design Constraints: Choosing the task design and the data collection method introduces some constraints that lead to factually debatable examples. For example, a span might have been arbitrarily selected from multiple spans that potentially answer a question, but only a single continuous answer span per question is allowed by design, as observed in the NewsQA and MsMarco samples (32% and 34% examples annotated as Debatable with 16% and 53% thereof exhibiting arbitrary selection, respectively). Sometimes, when additional passages are added after the annotation step, they can by chance contain passages that answer the question more precisely than the original span, as seen in HotpotQA (16% Debatable samples, 25% of them due to arbitrary selection). In the case of MultiRC it appears to be inconsistent, whether multiple correct answer choices are expected to be correct in isolation or in conjunction (28% Debatable with 29% of them exhibiting this problem). This might provide an explanation to its relatively weak human baseline performance of 84% F1 score BIBREF31.
Weak Quality assurance: When the (typically crowd-sourced) annotations are not appropriately validated, incorrect examples will find their way into the gold standards. This typically results in factually wrong expected answers (i.e. when a more correct answer is present in the context) or a question is expected to be Unanswerable, but is actually answerable from the provided context. The latter is observed in MsMarco (83% of examples annotated as Wrong) and NewsQA, where 60% of the examples annotated as Wrong are Unanswerable with an answer present.
Arbitrary Precision: There appears to be no clear guideline on how precise the answer is expected to be, when the passage expresses the answer in varying granularities. We annotated instances as Debatable when the expected answer was not the most precise given the context (44% and 29% of Debatable instances in NewsQA and MultiRC, respectively).
Application of the Framework ::: Qualitative Analysis ::: Semantics-altering grammatical modifiers
We took interest in whether any of the benchmarks contain what we call distracting lexical features (or distractors): grammatical modifiers that alter the semantics of a sentence for the final task of answering the given question while preserving a similar lexical form. An example of such features are cues for (double) Negation (e.g., “no”, “not”), which when introduced in a sentence, reverse its meaning. Other examples include modifiers denoting Restrictivity, Factivity and Reasoning (such as Monotonicity and Conditional cues). Examples of question-answer pairs containing a distractor are shown in Table FIGREF37.
We posit that the presence of such distractors would allow for evaluating reading comprehension beyond potential simple word matching. However, we observe no presence of such features in the benchmarks (beyond Negation in DROP, ReCoRd and HotpotQA, with 4%, 4% and 2% respectively). This results in gold standards that clearly express the evidence required to obtain the answer, lacking more challenging, i.e., distracting, sentences that can assess whether a model can truly understand meaning.
Application of the Framework ::: Qualitative Analysis ::: Other
In the Figure FIGREF23 we observe that Operational and Arithmetic reasoning moderately (6% to 8% combined) appears “in the wild”, i.e. when not enforced by the data design as is the case with HotpotQA (80% Operations combined) or DROP (68% Arithmetic combined). Causal reasoning is (exclusively) present in MultiRC (32%), whereas Temporal and Spatial reasoning requirements seem to not naturally emerge in gold standards. In ReCoRd, a fraction of 38% questions can only be answered By Exclusion of every other candidate, due to the design choice of allowing questions where the required information to answer them is not fully expressed in the accompanying paragraph.
Therefore, it is also a little surprising to observe that ReCoRd requires external resources with regard to knowledge, as seen in Figure FIGREF23. MultiRC requires technical or more precisely basic scientific knowledge (6% Technical/Scientific), as a portion of paragraphs is extracted from elementary school science textbooks BIBREF31. Other benchmarks moderately probe for factual knowledge (0% to 4% across all categories), while Intuitive knowledge is required to derive answers in each gold standard.
It is also worth pointing out, as done in Figure FIGREF23, that although MultiRC and MsMarco are not modelled as a span selection problem, their samples still contain 50% and 66% of answers that are directly taken from the context. DROP contains the biggest fraction of generated answers (60%), due to the requirement of arithmetic operations.
To conclude our analysis, we observe similar distributions of linguistic features and reasoning patterns, except where there are constraints enforced by dataset design, annotation guidelines or source text choice. Furthermore, careful consideration of design choices (such as single-span answers) is required, to avoid impairing the factual correctness of datasets, as pure crowd-worker agreement seems not sufficient in multiple cases.
Application of the Framework ::: Quantitative Results ::: Lexical overlap
We used the scores assigned by our proposed set of metrics (discussed in Section SECREF11 Dimensions of Interest: Complexity) to predict the supporting facts in the gold standard samples (that we included in our manual annotation). Concretely, we used the following five features capturing lexical overlap: (i) the number of words occurring in sentence and question, (ii) the length of the longest n-gram shared by sentence and question, whether a (iii) uni- and (iv) bigram from the question is unique to a sentence, and (v) the sentence index, as input to a logistic regression classifier. We optimised on each sample leaving one example for evaluation. We compute the average Precision, Recall and F1 score by means of leave-one-out validation with every sample entry. The averaged results after 5 runs are reported in Table TABREF41.
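A hedged sketch of this baseline is given below: five overlap features per sentence, a scikit-learn logistic regression, and leave-one-out scoring. The feature values and labels are invented for illustration, and the authors' exact pipeline (e.g. per-sample optimisation) may differ:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneOut

# Columns: shared-word count, longest shared n-gram, unique unigram, unique bigram, sentence index.
X = np.array([[4, 3, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 0, 0, 0, 2],
              [3, 2, 1, 0, 3],
              [1, 1, 0, 0, 4],
              [0, 0, 0, 0, 5]])
y = np.array([1, 0, 0, 1, 0, 0])   # 1 marks a supporting fact

preds = np.zeros_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = LogisticRegression().fit(X[train_idx], y[train_idx])
    preds[test_idx] = clf.predict(X[test_idx])
print(f1_score(y, preds))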
We observe that even by using only our five features based lexical overlap, the simple logistic regression baseline is able to separate out the supporting facts from the context to a varying degree. This is in line with the lack of semantics-altering grammatical modifiers discussed in the qualitative analysis section above. The classifier performs best on DROP (66% F1) and MultiRC (40% F1), which means that lexical cues can considerably facilitate the search for the answer in those gold standards. On MultiRC, yadav2019quick come to a similar conclusion, by using a more sophisticated approach based on overlap between question, sentence and answer choices.
Surprisingly, the classifier is able to pick up a signal from supporting facts even on data that has been pruned against lexical overlap heuristics by populating the context with additional documents that have high overlap scores with the question. This results in significantly higher scores than when guessing randomly (HotpotQA 26% F1, and MsMarco 11% F1). We observe similar results when the length of the question leaves few candidates to compute overlap with: $6.3$ and $7.3$ tokens on average for MsMarco and NewsQA (26% F1), compared to $16.9$ tokens on average for the remaining four dataset samples.
Finally, it is worth mentioning that although the queries in ReCoRd are explicitly independent from the passage, the linear classifier is still capable of achieving 34% F1 score in predicting the supporting facts.
However, neural networks perform significantly better than our admittedly crude baseline (e.g. 66% F1 for supporting facts classification on HotpotQA BIBREF14), albeit utilising more training examples and a richer sentence representation. This fact implies that those neural models are capable of solving more challenging problems than the simple “text matching” performed by the logistic regression baseline. However, they still circumvent actual reading comprehension, as the respective gold standards are of limited suitability to evaluate this BIBREF34, BIBREF35. This suggests an exciting future research direction: categorising the scale between text matching and reading comprehension more precisely and positioning state-of-the-art models on it accordingly.
Related Work
Although not as prominent as the research on novel architecture, there has been steady progress in critically investigating the data and evaluation aspects of NLP and machine learning in general and MRC in particular.
Related Work ::: Adversarial Evaluation
The authors of the AddSent algorithm BIBREF11 show that MRC models trained and evaluated on the SQuAD dataset pay too little attention to details that might change the semantics of a sentence, and propose a crowd-sourcing based method to generate adversary examples to exploit that weakness. This method was further adapted to be fully automated BIBREF36 and applied to different gold standards BIBREF35. Our proposed approach differs in that we aim to provide qualitative justifications for those quantitatively measured issues.
Related Work ::: Sanity Baselines
Another line of research establishes sane baselines to provide more meaningful context to the raw performance scores of evaluated models. When integral parts of the task formulation are removed, such as the question, the textual passage or parts thereof BIBREF37, or when model complexity is restricted by design in order to suppress some required form of reasoning BIBREF38, models are still able to perform comparably to the state-of-the-art. This raises concerns about the perceived benchmark complexity and is related to our work in a broader sense, as one of our goals is to estimate the complexity of benchmarks.
Related Work ::: Benchmark evaluation in NLP
Beyond MRC, efforts similar to ours that pursue the goal of analysing the evaluation of established datasets exist in Natural Language Inference BIBREF13, BIBREF12. Their analyses reveal the existence of biases in training and evaluation data that can be approximated with simple majority-based heuristics. Because of these biases, trained models fail to extract the semantics that are required for the correct inference. Furthermore, a fair share of work was done to reveal gender bias in coreference resolution datasets and models BIBREF39, BIBREF40, BIBREF41.
Related Work ::: Annotation Taxonomies
Finally, related to our framework are works that introduce annotation categories for gold standards evaluation. Concretely, we build our annotation framework around linguistic features that were introduced in the GLUE suite BIBREF42 and the reasoning categories introduced in the WorldTree dataset BIBREF19. A qualitative analysis complementary to ours, with focus on the unanswerability patterns in datasets that feature unanswerable questions was done by Yatskar2019.
Conclusion
In this paper, we introduce a novel framework to characterise machine reading comprehension gold standards. This framework has potential applications when comparing different gold standards, considering the design choices for a new gold standard and performing qualitative error analyses for a proposed approach.
Furthermore we applied the framework to analyse popular state-of-the-art gold standards for machine reading comprehension: We reveal issues with their factual correctness, show the presence of lexical cues and we observe that semantics-altering grammatical modifiers are missing in all of the investigated gold standards. Studying how to introduce those modifiers into gold standards and observing whether state-of-the-art MRC models are capable of performing reading comprehension on text containing them, is a future research goal.
A future line of research is to extend the framework to be able to identify the different types of exploitable cues such as question or entity typing and concrete overlap patterns. This will allow the framework to serve as an interpretable estimate of reading comprehension complexity of gold standards. Finally, investigating gold standards under this framework where MRC models outperform the human baseline (e.g. SQuAD) will contribute to a deeper understanding of the seemingly superb performance of deep learning approaches on them.
Question: What does the proposed qualitative annotation schema look like?
Answer: The resulting taxonomy of the framework is shown in Figure FIGREF10.
Introduction
Speech-to-Text translation (ST) is essential for a wide range of scenarios: for example in emergency calls, where agents have to respond to urgent requests in a foreign language BIBREF0; or in online courses, where audiences and speakers use different languages BIBREF1. To tackle this problem, existing approaches can be categorized into the cascaded method BIBREF2, BIBREF3, where a machine translation (MT) model translates the outputs of an automatic speech recognition (ASR) system into the target language, and the end-to-end method BIBREF4, BIBREF5, where a single model learns the mapping from acoustic frames to the target word sequence in one step towards the final objective of interest. Although the cascaded model remains the dominant approach due to its better performance, the end-to-end method is becoming more and more popular because it has lower latency by avoiding inference with two models and, in theory, rectifies error propagation.
Since it is hard to obtain a large-scale ST dataset, multi-task learning BIBREF5, BIBREF6 and pre-training techniques BIBREF7 have been applied to end-to-end ST model to leverage large-scale datasets of ASR and MT. A common practice is to pre-train two encoder-decoder models for ASR and MT respectively, and then initialize the ST model with the encoder of the ASR model and the decoder of the MT model. Subsequently, the ST model is optimized with the multi-task learning by weighing the losses of ASR, MT, and ST. This approach, however, causes a huge gap between pre-training and fine-tuning, which are summarized into three folds:
Subnet Waste: The ST system just reuses the ASR encoder and the MT decoder, while discards other pre-trained subnets, such as the MT encoder. Consequently, valuable semantic information captured by the MT encoder cannot be inherited by the final ST system.
Role Mismatch: The speech encoder plays different roles in pre-training and fine-tuning. The encoder is a pure acoustic model in pre-training, while it has to extract semantic and linguistic features additionally in fine-tuning, which significantly increases the learning difficulty.
Non-pre-trained Attention Module: Previous work BIBREF6 trains attention modules for ASR, MT and ST respectively, hence, the attention module of ST does not benefit from the pre-training.
To address these issues, we propose a Tandem Connectionist Encoding Network (TCEN), which is able to reuse all subnets in pre-training, keep the roles of subnets consistent, and pre-train the attention module. Concretely, the TCEN consists of three components: a speech encoder, a text encoder, and a target text decoder. Different from previous work that pre-trains an encoder-decoder based ASR model, we only pre-train an ASR encoder by optimizing the Connectionist Temporal Classification (CTC) BIBREF8 objective function. In this way, no additional ASR decoder is required, while the speech encoder retains the ability to map acoustic features into the source-language space. Besides, the text encoder and decoder can be pre-trained on a large MT dataset. After that, we employ the commonly used multi-task learning method to jointly learn the ASR, MT and ST tasks.
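A hedged PyTorch sketch of such decoder-free CTC pre-training is shown below. The encoder, feature dimensions and vocabulary size are placeholders rather than the architecture used in this paper; torch.nn.CTCLoss reserves index 0 for the blank symbol by default:

import torch
import torch.nn as nn

vocab_size, feat_dim, hidden = 1000, 80, 256
encoder = nn.LSTM(feat_dim, hidden, num_layers=2, bidirectional=True, batch_first=True)
ctc_proj = nn.Linear(2 * hidden, vocab_size + 1)        # +1 output for the CTC blank
ctc_loss = nn.CTCLoss(blank=0)

frames = torch.randn(4, 120, feat_dim)                  # a batch of 4 utterances
transcripts = torch.randint(1, vocab_size + 1, (4, 20)) # source-language token ids
h, _ = encoder(frames)
log_probs = ctc_proj(h).log_softmax(-1).transpose(0, 1) # shape (T, B, vocab+1)
loss = ctc_loss(log_probs, transcripts,
                torch.full((4,), 120, dtype=torch.long),
                torch.full((4,), 20, dtype=torch.long))
loss.backward()                                         # pre-trains the encoder alone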
Compared to prior works, the encoder of TCEN is a concatenation of an ASR encoder and an MT encoder, and our model does not have an ASR decoder, so the subnet waste issue is solved. Furthermore, the two encoders work in tandem, disentangling acoustic feature extraction and linguistic feature extraction, ensuring the role consistency between pre-training and fine-tuning. Moreover, we reuse the pre-trained MT attention module in ST, so we can leverage the alignment information learned in pre-training.
Since the text encoder consumes word embeddings of plausible texts in MT task but uses speech encoder outputs in ST task, another question is how one guarantees the speech encoder outputs are consistent with the word embeddings. We further modify our model to achieve semantic consistency and length consistency. Specifically, (1) the projection matrix at the CTC classification layer for ASR is shared with the word embedding matrix, ensuring that they are mapped to the same latent space, and (2) the length of the speech encoder output is proportional to the length of the input frame, so it is much longer than a natural sentence. To bridge the length gap, source sentences in MT are lengthened by adding word repetitions and blank tokens to mimic the CTC output sequences.
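The length-consistency trick can be sketched as follows (our own illustration of the idea; the sampling scheme, blank symbol and target length used by the authors are not specified here):

import random

def lengthen_like_ctc(tokens, target_len, blank="<blank>", seed=0):
    # Stretch an MT source sentence toward the much longer speech-encoder output
    # length by randomly repeating words and inserting blank tokens.
    rng = random.Random(seed)
    out = list(tokens)
    while len(out) < target_len:
        i = rng.randrange(len(out))
        out.insert(i, out[i] if rng.random() < 0.5 else blank)
    return out

print(lengthen_like_ctc(["we", "need", "help"], 8))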
We conduct comprehensive experiments on the IWSLT18 speech translation benchmark BIBREF1, demonstrating the effectiveness of each component. Our model is significantly better than previous methods by 3.6 and 2.2 BLEU scores for the subword-level decoding and character-level decoding strategies, respectively.
Our contributions are three-folds: 1) we shed light on why previous ST models cannot sufficiently utilize the knowledge learned from the pre-training process; 2) we propose a new ST model, which alleviates shortcomings in existing methods; and 3) we empirically evaluate the proposed model on a large-scale public dataset.
Background ::: Problem Formulation
End-to-end speech translation aims to translate a piece of audio into a target-language translation in one step. The raw speech signals are usually converted to sequences of acoustic features, e.g. Mel filterbank features. Here, we define the speech feature sequence as $\mathbf {x} = (x_1, \cdots , x_{T_x})$. The transcription and translation sequences are denoted as $\mathbf {y^{s}} = (y_1^{s}, \cdots , y_{T_s}^{s})$ and $\mathbf {y^{t}} = (y_1^{t}, \cdots , y_{T_t}^{t})$, respectively. Each symbol in $\mathbf {y^{s}}$ or $\mathbf {y^{t}}$ is an integer index of the symbol in a vocabulary $V_{src}$ or $V_{trg}$ respectively (e.g. $y^s_i=k, k\in [0, |V_{src}|-1]$). In this work, we suppose that an ASR dataset, an MT dataset, and a ST dataset are available, denoted as $\mathcal {A} = \lbrace (\mathbf {x_i}, \mathbf {y^{s}_i})\rbrace _{i=0}^I$, $\mathcal {M} =\lbrace (\mathbf {y^{s}_j}, \mathbf {y^{t}_j})\rbrace _{j=0}^J$ and $ \mathcal {S} =\lbrace (\mathbf {x_l}, \mathbf {y^{t}_l})\rbrace _{l=0}^L$ respectively. Given a new piece of audio $\mathbf {x}$, our goal is to learn an end-to-end model to generate a translation sentence $\mathbf {y^{t}}$ without generating an intermediate result $\mathbf {y^{s}}$.
Background ::: Multi-Task Learning and Pre-training for ST
To leverage large scale ASR and MT data, multi-task learning and pre-training techniques are widely employed to improve the ST system. As shown in Figure FIGREF4, there are three popular multi-task strategies for ST, including 1) one-to-many setting, in which a speech encoder is shared between ASR and ST tasks; 2) many-to-one setting in which a decoder is shared between MT and ST tasks; and 3) many-to-many setting where both the encoder and decoder are shared.
A many-to-many multi-task model contains two encoders as well as two decoders. It can be jointly trained on ASR, MT, and ST tasks. As the attention module is task-specific, three attentions are defined.
Usually, the size of $\mathcal {A}$ and $\mathcal {M}$ is much larger than that of $\mathcal {S}$. Therefore, the common training practice is to pre-train the model on ASR and MT tasks and then fine-tune it in a multi-task learning manner. However, as aforementioned, this method suffers from subnet waste, role mismatch and non-pre-trained attention issues, which severely limit the end-to-end ST performance.
Our method
In this section, we first introduce the architecture of TCEN, which consists of two encoders connected in tandem, and one decoder with an attention module. Then we give the pre-training and fine-tuning strategy for TCEN. Finally, we propose our solutions for semantic and length inconsistency problems, which are caused by multi-task learning.
Our method ::: TCEN Architecture
Figure FIGREF5 sketches the overall architecture of TCEN, including a speech encoder $enc_s$, a text encoder $enc_t$ and a decoder $dec$ with an attention module $att$. During training, $enc_s$ acts like an acoustic model, reading the input $\mathbf {x}$ and mapping it to word or subword representations $\mathbf {h^s}$; then $enc_t$ encodes high-level linguistic knowledge into hidden representations $\mathbf {h^t}$. Finally, $dec$ defines a probability distribution over target words. The advantage of our architecture is that the two encoders disentangle acoustic feature extraction and linguistic feature extraction, making sure that valuable knowledge learned from ASR and MT tasks can be effectively leveraged for ST training. Besides, every module in pre-training can be utilized in fine-tuning, alleviating the subnet waste problem.
Following BIBREF9 inaguma2018speech, we use a CNN-BiLSTM architecture to build our model. Specifically, the input features $\mathbf {x}$ are organized as a sequence of feature vectors of length $T_x$. Then, $\mathbf {x}$ is passed into a stack of two convolutional layers followed by max-pooling:
where $\mathbf {v}^{(l-1)}$ denotes the feature maps of the previous layer and $\mathbf {W}^{(l)}$ is the filter. The max-pooling layers downsample the sequence length by a total factor of four. The down-sampled feature sequence is further fed into a stack of five bidirectional $d$-dimensional LSTM layers:
where $[;]$ denotes the vector concatenation. The final output representation from the speech encoder is denoted as $\mathbf {h^s}=(h^s_1, \cdots , h^s_{\frac{T_x}{4}})$, where $h_i^s \in \mathbb {R}^d$.
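A minimal PyTorch sketch of this encoder shape is given below; the convolution channel counts, pooling pattern, and per-direction LSTM size are illustrative assumptions rather than the exact ESPnet configuration:

```python
import torch
import torch.nn as nn

class SpeechEncoder(nn.Module):
    """CNN front-end (downsamples time by 4) followed by a BiLSTM stack."""
    def __init__(self, feat_dim=83, d=1024, num_lstm_layers=5):
        super().__init__()
        # Two conv blocks, each followed by 2x2 max-pooling (time downsampled by 4 overall).
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),
        )
        conv_out_dim = 64 * (feat_dim // 4)
        # Bidirectional LSTMs; hidden size d//2 per direction so outputs are d-dimensional.
        self.lstm = nn.LSTM(conv_out_dim, d // 2, num_layers=num_lstm_layers,
                            bidirectional=True, batch_first=True)

    def forward(self, x):                      # x: (batch, T_x, feat_dim)
        v = self.conv(x.unsqueeze(1))          # (batch, 64, T_x/4, feat_dim/4)
        v = v.transpose(1, 2).flatten(2)       # (batch, T_x/4, 64 * feat_dim/4)
        h_s, _ = self.lstm(v)                  # (batch, T_x/4, d)
        return h_s
```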
The text encoder $enc_t$ consists of two bidirectional LSTM layers. In the ST task, $enc_t$ accepts the speech encoder output $\mathbf {h}^s$ as input, while in MT, $enc_t$ consumes the word embedding representation $\mathbf {e^s}$ derived from $\mathbf {y^s}$, where each element $e^s_i$ is computed by choosing the $y_i^s$-th vector from the source embedding matrix $W_{E^s}$. The goal of $enc_t$ is to extract high-level linguistic features, such as syntactic or semantic features, from the lower-level subword representations $\mathbf {h}^s$ or $\mathbf {e}^s$. Since $\mathbf {h}^s$ and $\mathbf {e}^s$ belong to different latent spaces and have different lengths, semantic and length inconsistency problems remain. We will provide our solutions in Section SECREF21. The output sequence of $enc_t$ is denoted as $\mathbf {h}^t$.
The decoder is defined as two unidirectional LSTM layers with an additive attention $att$. It predicts target sequence $\mathbf {y^{t}}$ by estimating conditional probability $P(\mathbf {y^{t}}|\mathbf {x})$:
Here, $z_k$ is the hidden state of the decoder RNN at step $k$ and $c_k$ is a time-dependent context vector computed by the attention $att$.
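The additive attention can be sketched as follows; this is a generic Bahdanau-style formulation, and the layer names are ours rather than those of the original implementation:

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Additive attention: score(z_k, h_j) = v^T tanh(W_z z_k + W_h h_j)."""
    def __init__(self, d=1024):
        super().__init__()
        self.w_z = nn.Linear(d, d, bias=False)
        self.w_h = nn.Linear(d, d, bias=False)
        self.v = nn.Linear(d, 1, bias=False)

    def forward(self, z_k, h_t):
        # z_k: (batch, d) decoder state; h_t: (batch, T, d) text-encoder outputs.
        scores = self.v(torch.tanh(self.w_z(z_k).unsqueeze(1) + self.w_h(h_t)))  # (batch, T, 1)
        weights = torch.softmax(scores, dim=1)
        c_k = (weights * h_t).sum(dim=1)       # (batch, d) context vector c_k
        return c_k, weights.squeeze(-1)
```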
Our method ::: Training Procedure
Following previous work, we split the training procedure into pre-training and fine-tuning stages. In the pre-training stage, the speech encoder $enc_s$ is trained towards the CTC objective using dataset $\mathcal {A}$, while the text encoder $enc_t$ and the decoder $dec$ are trained on the MT dataset $\mathcal {M}$. In the fine-tuning stage, we jointly train the model on the ASR, MT, and ST tasks.
Our method ::: Training Procedure ::: Pre-training
To sufficiently utilize the large datasets $\mathcal {A}$ and $\mathcal {M}$, the model is pre-trained on the CTC-based ASR task and the MT task in the pre-training stage.
For the ASR task, in order to remove the need for a decoder and enable $enc_s$ to generate subword representations, we leverage the connectionist temporal classification (CTC) BIBREF8 loss to train the speech encoder.
Given an input $\mathbf {x}$, $enc_s$ emits a sequence of hidden vectors $\mathbf {h^s}$, then a softmax classification layer predicts a CTC path $\mathbf {\pi }$, where $\pi _t \in V_{src} \cup $ {`-'} is the label observed at RNN step $t$, and `-' is the blank token representing no observed label:
where $W_{ctc} \in \mathbb {R}^{d \times (|V_{src}|+1)}$ is the weight matrix in the classification layer and $T$ is the total number of encoder RNN steps.
A legal CTC path $\mathbf {\pi }$ is a variation of the source transcription $\mathbf {y}^s$ obtained by allowing occurrences of blank tokens and repetitions, as shown in Table TABREF14. For each transcription $\mathbf {y}$, there exist many legal CTC paths of length $T$. The CTC objective trains the model to maximize the probability of observing the golden sequence $\mathbf {y}^s$, which is calculated by summing the probabilities of all possible legal paths:
where $\Phi _T(y)$ is the set of all legal CTC paths for sequence $\mathbf {y}$ with length $T$. The loss can be computed efficiently using the forward-backward algorithm. More details about CTC are provided in the supplementary material.
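In practice the CTC objective can be computed with an off-the-shelf implementation; below is a hedged PyTorch sketch, where the vocabulary size and the choice of blank index are illustrative assumptions:

```python
import torch
import torch.nn as nn

vocab_size, d, blank_id = 5000, 1024, 5000       # blank is the extra (|V_src|+1)-th label
ctc_proj = nn.Linear(d, vocab_size + 1, bias=False)   # plays the role of W_ctc
ctc_loss = nn.CTCLoss(blank=blank_id, zero_infinity=True)

def ctc_objective(h_s, ys, input_lengths, target_lengths):
    """h_s: (batch, T, d) speech-encoder outputs; ys: (batch, S) transcript token ids."""
    log_probs = ctc_proj(h_s).log_softmax(dim=-1)      # (batch, T, V+1)
    # nn.CTCLoss expects (T, batch, V+1); it sums over all legal paths internally
    # via the forward-backward algorithm.
    return ctc_loss(log_probs.transpose(0, 1), ys, input_lengths, target_lengths)
```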
For the MT task, we use the cross-entropy loss as the training objective. During training, $\mathbf {y^s}$ is converted to embedding vectors $\mathbf {e^s}$ through the embedding layer $W_{E^s}$, then $enc_t$ consumes $\mathbf {e^s}$ and passes the output $\mathbf {h^t}$ to the decoder. The objective function is defined as:
Our method ::: Training Procedure ::: Fine-tune
In the fine-tuning stage, we jointly update the model on the ASR, MT, and ST tasks. The training for ASR and MT follows the same process as in the pre-training stage.
For the ST task, $enc_s$ reads the input $\mathbf {x}$ and generates $\mathbf {h^s}$, then $enc_t$ encodes high-level linguistic knowledge into $\mathbf {h^t}$. Finally, $dec$ predicts the target sentence. The ST loss function is defined as:
Following the update strategy proposed by BIBREF11 luong2015multi, we allocate a different training ratio $\alpha _i$ to each task. When switching between tasks, we randomly select a new task $i$ with probability $\frac{\alpha _i}{\sum _{j}\alpha _{j}}$.
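A minimal sketch of this task-switching rule is shown below; the ratios are the fine-tuning values reported later in this paper:

```python
import random

# Illustrative fine-tuning ratios from this paper's setup.
alphas = {"st": 0.6, "asr": 0.2, "mt": 0.2}

def sample_task(alphas):
    """Pick the next task i with probability alpha_i / sum_j alpha_j."""
    tasks, weights = zip(*alphas.items())
    return random.choices(tasks, weights=weights, k=1)[0]
```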
Our method ::: Subnet-Consistency
Our model keeps role consistency between pre-training and fine-tuning by connecting two encoders for the ST task. However, this leads to new problems: 1) the text encoder consumes $\mathbf {e^s}$ during MT training but accepts $\mathbf {h^s}$ during ST training, and $\mathbf {e^s}$ and $\mathbf {h^s}$ may not follow the same distribution, resulting in semantic inconsistency; 2) the length of $\mathbf {h^s}$ is not of the same order of magnitude as the length of $\mathbf {e^s}$, resulting in length inconsistency.
In response to these two challenges, we propose two countermeasures: 1) we share weights between the CTC classification layer and the source-end word embedding layer during ASR and MT training, encouraging $\mathbf {e^s}$ and $\mathbf {h^s}$ to lie in the same space; 2) we feed the text encoder source sentences in the format of CTC paths, which are generated from a seq2seq model, making it more robust toward long inputs.
Our method ::: Subnet-Consistency ::: Semantic Consistency
As shown in Figure FIGREF5, during multi-task training, two different hidden features will be fed into the text encoder $enc_t$: the embedding representation $\mathbf {e}^s$ in MT task, and the $enc_s$ output $\mathbf {h^s}$ in ST task. Without any regularization, they may belong to different latent spaces. Due to the space gap, the $enc_t$ has to compromise between two tasks, limiting its performance on individual tasks.
To bridge the space gap, our idea is to pull $\mathbf {h^s}$ into the latent space where $\mathbf {e}^s$ belongs. Specifically, we share the weight $W_{ctc}$ of the CTC classification layer with the source embedding weights $W_{E^s}$, i.e. $W_{ctc} = W_{E^s}$. In this way, when predicting the CTC path $\mathbf {\pi }$, the probability of observing a particular label $w_i \in V_{src}\cup ${`-'} at time step $t$, $p(\pi _t=w_i|\mathbf {x})$, is computed by normalizing the product of the hidden vector $h_t^s$ and the $i$-th vector in $W_{E^s}$:
The loss function closes the distance between $h^s_t$ and the golden embedding vector, encouraging $\mathbf {h}^s$ to have the same distribution as $\mathbf {e}^s$.
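One simple way to realize this weight tying is to reuse the source embedding matrix as the CTC projection; the sketch below uses illustrative sizes, and treating the extra embedding row as the blank token is our assumption:

```python
import torch
import torch.nn as nn

vocab_size, d = 5000, 1024                      # illustrative sizes
# One shared matrix: rows 0..V-1 are source word embeddings, the extra row is the blank.
src_embedding = nn.Embedding(vocab_size + 1, d)

def ctc_logits(h_s):
    """Tied projection: W_ctc = W_{E^s}, so logits are dot products with embedding rows."""
    # h_s: (batch, T, d); src_embedding.weight: (V+1, d)
    return h_s @ src_embedding.weight.t()       # (batch, T, V+1), softmax-normalized afterwards
```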
Our method ::: Subnet-Consistency ::: Length Consistency
Another existing problem is length inconsistency. The length of the sequence $\mathbf {h^s}$ is proportional to the length of the input frame $\mathbf {x}$, which is much longer than the length of $\mathbf {e^s}$. To solve this problem, we train an RNN-based seq2seq model to transform normal source sentences to noisy sentences in CTC path format, and replace standard MT with denoising MT for multi-tasking.
Specifically, we first train a CTC ASR model based on dataset $\mathcal {A} = \lbrace (\mathbf {x}_i, \mathbf {y}^s_i)\rbrace _{i=0}^{I}$, and generate a CTC-path $\mathbf {\pi }_i$ for each audio $\mathbf {x}_i$ by greedy decoding. Then we define an operation $S(\cdot )$, which converts a CTC path $\mathbf {\pi }$ to a sequence of the unique tokens $\mathbf {u}$ and a sequence of repetition times for each token $\mathbf {l}$, denoted as $S(\mathbf {\pi }) = (\mathbf {u}, \mathbf {l})$. Notably, the operation is reversible, meaning that $S^{-1} (\mathbf {u}, \mathbf {l})=\mathbf {\pi }$. We use the example $\mathbf {\pi _1}$ in Table TABREF14 and show the corresponding $\mathbf {u}$ and $\mathbf {l}$ in Table TABREF24.
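A small sketch of the $S(\cdot )$ operation and its inverse is shown below; the example path is hypothetical, not the one from Table TABREF14:

```python
from itertools import groupby

def S(pi):
    """Collapse a CTC path into (unique tokens u, repetition counts l)."""
    groups = [(tok, len(list(g))) for tok, g in groupby(pi)]
    u = [tok for tok, _ in groups]
    l = [count for _, count in groups]
    return u, l

def S_inv(u, l):
    """Reverse of S: expand each token by its repetition count."""
    return [tok for tok, count in zip(u, l) for _ in range(count)]

# Hypothetical example; "-" is the CTC blank token.
pi = ["-", "a", "a", "-", "b", "b", "b"]
assert S_inv(*S(pi)) == pi
```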
Then we build a dataset $\mathcal {P} = \lbrace (\mathbf {y^s}_i, \mathbf {u}_i, \mathbf {l}_i)\rbrace _{i=0}^{I}$ by decoding all the audio pieces in $\mathcal {A}$ and transform the resulting path by the operation $S(\cdot )$. After that, we train a seq2seq model, as shown in Figure FIGREF25, which takes $ \mathbf {y^s}_i$ as input and decodes $\mathbf {u}_i, \mathbf {l}_i$ as outputs. With the seq2seq model, a noisy MT dataset $\mathcal {M}^{\prime }=\lbrace (\mathbf {\pi }_l, \mathbf {y^t}_l)\rbrace _{l=0}^{L}$ is obtained by converting every source sentence $\mathbf {y^s}_i \in \mathcal {M}$ to $\mathbf {\pi _i}$, where $\mathbf {\pi }_i = S^{-1}(\mathbf {u}_i, \mathbf {l}_i)$. We did not use the standard seq2seq model which takes $\mathbf {y^s}$ as input and generates $\mathbf {\pi }$ directly, since there are too many blank tokens `-' in $\mathbf {\pi }$ and the model tends to generate a long sequence with only blank tokens. During MT training, we randomly sample text pairs from $\mathcal {M}^{\prime }$ and $\mathcal {M}$ according to a hyper-parameter $k$. After tuning on the validation set, about $30\%$ pairs are sampled from $\mathcal {M}^{\prime }$. In this way, the $enc_t$ is more robust toward the longer inputs given by the $enc_s$.
Experiments
We conduct experiments on the IWSLT18 speech translation task BIBREF1. Since IWSLT participators use different data pre-processing methods, we reproduce several competitive baselines based on the ESPnet BIBREF12 for a fair comparison.
Experiments ::: Dataset ::: Speech translation data:
The organizer provides a speech translation corpus extracted from TED talks (ST-TED), which consists of raw English wave files, English transcriptions, and aligned German translations. The corpus contains 272 hours of English speech with 171k segments. We split 2k segments from the corpus as a dev set, and tst2010, tst2013, tst2014, and tst2015 are used as test sets.
Speech recognition data: Aside from ST-TED, TED-LIUM2 corpus BIBREF13 is provided as speech recognition data, which contains 207 hours of English speech and 93k transcript sentences.
Text translation data: We use transcription and translation pairs in the ST-TED corpus and WIT3 as in-domain MT data, which contains 130k and 200k sentence pairs respectively. WMT2018 is used as out-of-domain training data which consists of 41M sentence pairs.
Data preprocessing: For speech data, the utterances are segmented into multiple frames with a 25 ms window size and a 10 ms step size. Then we extract 80-channel log-Mel filter bank and 3-dimensional pitch features using Kaldi BIBREF14, resulting in 83-dimensional input features. We normalize them by the mean and the standard deviation on the whole training set. The utterances with more than 3000 frames are discarded. The transcripts in ST-TED are in true-case with punctuation while in TED-LIUM2, transcripts are in lower-case and unpunctuated. Thus, we lowercase all the sentences and remove the punctuation to keep consistent. To increase the amount of training data, we perform speed perturbation on the raw signals with speed factors 0.9 and 1.1. For the text translation data, sentences longer than 80 words or shorter than 10 words are removed. Besides, we discard pairs whose length ratio between source and target sentence is smaller than 0.5 or larger than 2.0. Word tokenization is performed using the Moses scripts and both English and German words are in lower-case.
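A minimal sketch of the MT-pair filtering rule described above; whether the length bounds apply to both the source and target side is our assumption:

```python
def keep_pair(src_words, trg_words, min_len=10, max_len=80,
              min_ratio=0.5, max_ratio=2.0):
    """Filter an MT sentence pair by length and source/target length ratio."""
    ls, lt = len(src_words), len(trg_words)
    if not (min_len <= ls <= max_len and min_len <= lt <= max_len):
        return False
    ratio = ls / lt
    return min_ratio <= ratio <= max_ratio
```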
We use two different sets of vocabulary for our experiments. For the subword experiments, both English and German vocabularies are generated using sentencepiece BIBREF15 with a fixed size of 5k tokens. BIBREF9 inaguma2018speech show that increasing the vocabulary size is not helpful for ST task. For the character experiments, both English and German sentences are represented in the character level.
For evaluation, we segment each audio with the LIUM SpkDiarization tool BIBREF16 and then perform MWER segmentation with RWTH toolkit BIBREF17. We use lowercase BLEU as evaluation metric.
Experiments ::: Baseline Models and Implementation
We compare our method with following baselines.
Vanilla ST baseline: The vanilla ST BIBREF9 has only a speech encoder and a decoder. It is trained from scratch on the ST-TED corpus.
Pre-training baselines: We conduct three pre-training baseline experiments: 1) encoder pre-training, in which the ST encoder is initialized from an ASR model; 2) decoder pre-training, in which the ST decoder is initialized from an MT model; and 3) encoder-decoder pre-training, where both the encoder and decoder are pre-trained. The ASR model has the same architecture as the vanilla ST model and is trained on the mixture of the ST-TED and TED-LIUM2 corpora. The MT model has a text encoder and decoder with the same architectures as those in TCEN. It is first trained on WMT data (out-of-domain) and then fine-tuned on in-domain data.
Multi-task baselines: We also conduct three multi-task baseline experiments covering the one-to-many, many-to-one, and many-to-many settings. In the first two settings, we train the model with $\alpha _{st}=0.75$ while $\alpha _{asr}=0.25$ or $\alpha _{mt}=0.25$. For the many-to-many setting, we use $\alpha _{st}=0.6, \alpha _{asr}=0.2$ and $\alpha _{mt}=0.2$. For the MT task, we use only in-domain data.
Many-to-many+pre-training: We train a many-to-many multi-task model where the encoders and decoders are derived from pre-trained ASR and MT models. Triangle+pre-train: BIBREF18 DBLP:conf/naacl/AnastasopoulosC18 proposed a triangle multi-task strategy for speech translation. Their model solves the subnet waste issue by concatenating an ST decoder to an ASR encoder-decoder model. Notably, their ST decoder can consume representations from the speech encoder as well as the ASR decoder. For a fair comparison, the speech encoder and the ASR decoder are initialized from the pre-trained ASR model. The Triangle model is fine-tuned under their multi-task manner.
All our baselines as well as TCEN are implemented based on ESPnet BIBREF12; the RNN size is set to $d=1024$ for all models. We use a dropout of 0.3 for embeddings and encoders, and train using Adadelta with an initial learning rate of 1.0 for a maximum of 10 epochs.
For the training of TCEN, we set $\alpha _{asr}=0.2$ and $\alpha _{mt}=0.8$ in the pre-training stage, since the MT dataset is much larger than the ASR dataset. For fine-tuning, we use $\alpha _{st}=0.6, \alpha _{asr}=0.2$ and $\alpha _{mt}=0.2$, the same as in the `many-to-many' baseline.
For testing, we select the model with the best accuracy on the speech translation task on the dev set. At inference time, we use a beam size of 10, and the beam scores include length normalization with a weight of 0.2.
Experiments ::: Experimental Results
Table TABREF29 shows the results on four test sets as well as the average performance. Our method significantly outperforms the strong `many-to-many+pretrain' baseline by 3.6 and 2.2 BLEU scores respectively, indicating that the proposed method is very effective and substantially improves translation quality. Besides, both pre-training and multi-task learning can improve translation quality, and the pre-training settings (2nd-4th rows) are more effective than the multi-task settings (5th-8th rows). We observe a performance degradation in the `triangle+pretrain' baseline. Compared to our method, where the decoder receives higher-level syntactic and semantic linguistic knowledge extracted from the text encoder, their ASR decoder can only provide lower, word-level linguistic information. Besides, since their model lacks a text encoder and the architecture of their ST decoder differs from the MT decoder, their model cannot utilize the large-scale MT data in all the training stages. Interestingly, we find that the char-level models outperform the subword-level models in all settings, especially in the vanilla baseline. A similar phenomenon is observed by BIBREF6 berard2018end. A possible explanation is that learning the alignments between speech frames and subword units in another language is notoriously difficult. Our method brings more gains in the subword setting since our model is good at learning the text-to-text alignment, and the subword-level alignment is more helpful to translation quality.
Experiments ::: Discussion ::: Ablation Study
To better understand the contribution of each component, we perform an ablation study on the subword-level experiments. The results are shown in Table TABREF37. In the `-MT noise' setting, we do not add noise to source sentences for MT. In the `-weight sharing' setting, we use different parameters in the CTC classification layer and the source embedding layer. These two experiments show that both weight sharing and using noisy MT input benefit the final translation quality. Performance degrades more in `-weight sharing', indicating that semantic consistency contributes more to our model. In the `-pretrain' experiment, we remove the pre-training stage and directly update the model on the three tasks, leading to a dramatic decrease in BLEU score and indicating that pre-training is an indispensable step for end-to-end ST.
Experiments ::: Discussion ::: Learning Curve
It is interesting to investigate why our method is superior to the baselines. We find that TCEN achieves a higher final result owing to a better starting point in fine-tuning. Figure FIGREF39 provides learning curves of subword accuracy on the validation set. The x-axis denotes the fine-tuning training steps. The vanilla model starts at a low accuracy, because its networks are not pre-trained on the ASR and MT data. The trends of our model and `many-to-many+pretrain' are similar, but our model outperforms it by about five points throughout the fine-tuning process. This indicates that the gain comes from bridging the gap between pre-training and fine-tuning rather than from a better fine-tuning process.
Experiments ::: Discussion ::: Compared with a Cascaded System
Table TABREF29 compares our model with end-to-end baselines; here, we compare our model with cascaded systems. We build a cascaded system by combining the ASR model and the MT model used in the pre-training baseline. The word error rate (WER) of the ASR system and the BLEU score of the MT system are reported in the supplementary material. In addition to a simple combination of the ASR and MT systems, we also re-segment the ASR outputs before feeding them to the MT system, denoted as cascaded+re-seg. Specifically, we train a seq2seq model BIBREF19 on the MT dataset, where the source side is an unpunctuated sentence and the target side is a natural sentence. After that, we use the seq2seq model to add sentence boundaries and punctuation to the ASR outputs. Experimental results are shown in Table TABREF41. Our end-to-end model outperforms the simple cascaded model by over 2 BLEU scores, and it achieves performance comparable to the cascaded model combined with the sentence re-segmentation model.
Related Work
Early works conduct speech translation in a pipeline manner BIBREF2, BIBREF20, where the ASR output lattices are fed into an MT system to generate target sentences. HMM BIBREF21, DenseNet BIBREF22, TDNN BIBREF23 are commonly used ASR systems, while RNN with attention BIBREF19 and Transformer BIBREF10 are top choices for MT. To enhance the robustness of the NMT model towards ASR errors, BIBREF24 DBLP:conf/eacl/TsvetkovMD14 and BIBREF25 DBLP:conf/asru/ChenHHL17 propose to simulate the noise in training and inference.
To avoid error propagation and high latency issues, recent works propose translating the acoustic speech into text in the target language without yielding the source transcription BIBREF4. Since ST data is scarce, pre-training BIBREF7, multi-task learning BIBREF4, BIBREF6, curriculum learning BIBREF26, attention-passing BIBREF27, and knowledge distillation BIBREF28, BIBREF29 strategies have been explored to utilize ASR data and MT data. Specifically, BIBREF5 DBLP:conf/interspeech/WeissCJWC17 show improvements of performance by training the ST model jointly with the ASR and the MT model. BIBREF6 berard2018end observe faster convergence and better results due to pre-training and multi-task learning on a larger dataset. BIBREF7 DBLP:conf/naacl/BansalKLLG19 show that pre-training a speech encoder on one language can improve ST quality on a different source language. All of them follow the traditional multi-task training strategies. BIBREF26 DBLP:journals/corr/abs-1802-06003 propose to use curriculum learning to improve ST performance on syntactically distant language pairs. To effectively leverage transcriptions in ST data, BIBREF18 DBLP:conf/naacl/AnastasopoulosC18 augment the multi-task model so that the target decoder receives information from the source decoder, and they show improvements on low-resource speech translation. Their model consumes only ASR and ST data; in contrast, our work sufficiently utilizes the large-scale MT data to capture rich semantic knowledge. BIBREF30 DBLP:conf/icassp/JiaJMWCCALW19 use pre-trained MT and text-to-speech (TTS) synthesis models to convert weakly supervised data into ST pairs and demonstrate that an end-to-end ST model can be trained using only synthesised data.
Conclusion
This paper has investigated the end-to-end method for ST. It has discussed why there is a huge gap between pre-training and fine-tuning in previous methods. To alleviate these issues, we have proposed a method, which is capable of reusing every sub-net and keeping the role of sub-net consistent between pre-training and fine-tuning. Empirical studies have demonstrated that our model significantly outperforms baselines.
What are the baselines?
Vanilla ST baseline; encoder pre-training, in which the ST encoder is initialized from an ASR model; decoder pre-training, in which the ST decoder is initialized from an MT model; encoder-decoder pre-training, where both the encoder and decoder are pre-trained; a many-to-many multi-task model where the encoders and decoders are derived from pre-trained ASR and MT models; and Triangle+pre-train, the triangle multi-task strategy for speech translation proposed by BIBREF18 DBLP:conf/naacl/AnastasopoulosC18.
Introduction
The challenges of imbalanced classification—in which the proportion of elements in each class for a classification task significantly differ—and of the ability to generalise on dissimilar data have remained important problems in Natural Language Processing (NLP) and Machine Learning in general. Popular NLP tasks including sentiment analysis, propaganda detection, and event extraction from social media are all examples of imbalanced classification problems. In each case the number of elements in one of the classes (e.g. negative sentiment, propagandistic content, or specific events discussed on social media, respectively) is significantly lower than the number of elements in the other classes.
The recently introduced BERT language model for transfer learning BIBREF0 uses a deep bidirectional transformer architecture to produce pre-trained context-dependent embeddings. It has proven to be powerful in solving many NLP tasks and, as we find, also appears to handle imbalanced classification well, thus removing the need to use standard methods of data augmentation to mitigate this problem (see Section SECREF11 for related work and Section SECREF16 for analysis).
BERT is credited with the ability to adapt to many tasks and data with very little training BIBREF0. However, we show that BERT fails to perform well when the training and test data are significantly dissimilar, as is the case with several tasks that deal with social and news data. In these cases, the training data is necessarily a subset of past data, while the model is likely to be used on future data which deals with different topics. This work addresses this problem by incorporating cost-sensitivity (Section SECREF19) into BERT.
We test these methods by participating in the Shared Task on Fine-Grained Propaganda Detection for the 2nd Workshop on NLP for Internet Freedom, for which we achieve the second rank on sentence-level classification of propaganda, confirming the importance of cost-sensitivity when the training and test sets are dissimilar.
Introduction ::: Detecting Propaganda
The term `propaganda' derives from propagare in post-classical Latin, as in “propagation of the faith" BIBREF1, and thus has from the beginning been associated with an intentional and potentially multicast communication; only later did it become a pejorative term. It was pragmatically defined in the World War II era as “the expression of an opinion or an action by individuals or groups deliberately designed to influence the opinions or the actions of other individuals or groups with reference to predetermined ends" BIBREF2.
For the philosopher and sociologist Jacques Ellul, however, in a society with mass communication, propaganda is inevitable and thus it is necessary to become more aware of it BIBREF3; but whether or not to classify a given strip of text as propaganda depends not just on its content but on its use on the part of both addressers and addressees BIBREF1, and this fact makes the automated detection of propaganda intrinsically challenging.
Despite this difficulty, interest in automatically detecting misinformation and/or propaganda has gained significance due to the exponential growth in online sources of information combined with the speed with which information is shared today. The sheer volume of social interactions makes it impossible to manually check the veracity of all information being shared. Automation thus remains a potentially viable method of ensuring that we continue to enjoy the benefits of a connected world without the spread of misinformation through either ignorance or malicious intent.
In the task introduced by BIBREF4, we are provided with articles tagged as propaganda at the sentence and fragment (or span) level and are tasked with making predictions on a development set followed by a final held-out test set. We note this gives us access to the articles in the development and test sets but not their labels.
We participated in this task under the team name ProperGander and were placed 2nd on the sentence level classification task where we make use of our methods of incorporating cost-sensitivity into BERT. We also participated in the fragment level task and were placed 7th. The significant contributions of this work are:
We show that common (`easy') methods of data augmentation for dealing with class imbalance do not improve base BERT performance.
We provide a statistical method of establishing the similarity of datasets.
We incorporate cost-sensitivity into BERT to enable models to adapt to dissimilar datasets.
We release all our program code on GitHub and Google Colaboratory, so that other researchers can benefit from this work.
Related work ::: Propaganda detection
Most of the existing works on propaganda detection focus on identifying propaganda at the news article level, or even at the news outlet level with the assumption that each of the articles of the suspected propagandistic outlet are propaganda BIBREF5, BIBREF6.
Here we study two tasks that are more fine-grained, specifically propaganda detection at the sentence and phrase (fragment) levels BIBREF4. This fine-grained setup aims to train models that identify linguistic propaganda techniques rather than distinguishing between the article source styles.
BIBREF4 EMNLP19DaSanMartino were the first to propose this problem setup and release it as a shared task. Along with the released dataset, BIBREF4 proposed a multi-granularity neural network, which uses the deep bidirectional transformer architecture known as BERT, which features pre-trained context-dependent embeddings BIBREF0. Their system takes a joint learning approach to the sentence- and phrase-level tasks, concatenating the output representation of the less granular (sentence-level) task with the more fine-grained task using learned weights.
In this work we also take the BERT model as the basis of our approach and focus on the class imbalance as well as the lack of similarity between training and test data inherent to the task.
Related work ::: Class imbalance
A common issue for many Natural Language Processing (NLP) classification tasks is class imbalance, the situation where one of the class categories comprises a significantly larger proportion of the dataset than the other classes. It is especially prominent in real-world datasets and complicates classification when the identification of the minority class is of specific importance.
Models trained on the basis of minimising errors for imbalanced datasets tend to more frequently predict the majority class; achieving high accuracy in such cases can be misleading. Because of this, the macro-averaged F-score, chosen for this competition, is a more suitable metric as it weights the performance on each class equally.
As class imbalance is a widespread issue, multiple techniques have been developed that help alleviate it BIBREF7, BIBREF8, by either adjusting the model (e.g. changing the performance metric) or changing the data (e.g. oversampling the minority class or undersampling the majority class).
Related work ::: Class imbalance ::: Cost-sensitive learning
Cost-sensitive classification can be used when the “cost” of mislabelling one class is higher than that of mislabelling other classes BIBREF9, BIBREF10. For example, the real cost to a bank of miscategorising a large fraudulent transaction as authentic is potentially higher than miscategorising (perhaps only temporarily) a valid transaction as fraudulent. Cost-sensitive learning tackles the issue of class imbalance by changing the cost function of the model such that misclassification of training examples from the minority class carries more weight and is thus more `expensive'. This is achieved by simply multiplying the loss of each example by a certain factor. This cost-sensitive learning technique takes misclassification costs into account during model training, and does not modify the imbalanced data distribution directly.
Related work ::: Class imbalance ::: Data augmentation
Common methods that tackle the problem of class imbalance by modifying the data to create balanced datasets are undersampling and oversampling. Undersampling randomly removes instances from the majority class and is only suitable for problems with an abundance of data. Oversampling means creating more minority class instances to match the size of the majority class. Oversampling methods range from simple random oversampling, i.e. repeating the training procedure on instances from the minority class, chosen at random, to the more complex, which involves constructing synthetic minority-class samples. Random oversampling is similar to cost-sensitive learning as repeating the sample several times makes the cost of its mis-classification grow proportionally. Kolomiyets et al. kolomiyets2011model, Zhang et al. zhang2015character, and Wang and Yang wang2015s perform data augmentation using synonym replacement, i.e. replacing random words in sentences with their synonyms or nearest-neighbor embeddings, and show its effectiveness on multiple tasks and datasets. Wei et al. wei2019eda provide a great overview of `easy' data augmentation (EDA) techniques for NLP, including synonym replacement as described above, and random deletion, i.e. removing words in the sentence at random with pre-defined probability. They show the effectiveness of EDA across five text classification tasks. However, they mention that EDA may not lead to substantial improvements when using pre-trained models. In this work we test this claim by comparing performance gains of using cost-sensitive learning versus two data augmentation methods, synonym replacement and random deletion, with a pre-trained BERT model.
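For concreteness, minimal sketches of the two EDA operations compared in this work are shown below; they follow the general recipe of Wei et al. rather than their exact implementation, and the WordNet-based synonym lookup (which requires NLTK's WordNet data to be downloaded) is an assumption:

```python
import random
from nltk.corpus import wordnet   # requires: nltk.download("wordnet")

def random_deletion(words, p=0.1):
    """Drop each word independently with probability p (always keep at least one word)."""
    kept = [w for w in words if random.random() > p]
    return kept if kept else [random.choice(words)]

def synonym_replacement(words, n=1):
    """Replace up to n words with a randomly chosen WordNet synonym."""
    words = list(words)
    candidates = [i for i, w in enumerate(words) if wordnet.synsets(w)]
    random.shuffle(candidates)
    for i in candidates[:n]:
        lemmas = {l.name().replace("_", " ")
                  for s in wordnet.synsets(words[i]) for l in s.lemmas()}
        lemmas.discard(words[i])
        if lemmas:
            words[i] = random.choice(sorted(lemmas))
    return words
```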
More complex augmentation methods include back-translation BIBREF11, translational data augmentation BIBREF12, and noising BIBREF13, but these are out of the scope of this study.
Dataset
The Propaganda Techniques Corpus (PTC) dataset for the 2019 Shared Task on Fine-Grained Propaganda consists of a training set of 350 news articles, consisting of just over 16,965 total sentences, in which specifically propagandistic fragments have been manually spotted and labelled by experts. This is accompanied by a development set (or dev set) of 61 articles with 2,235 total sentences, whose labels are maintained by the shared task organisers; and two months after the release of this data, the organisers released a test set of 86 articles and 3,526 total sentences. In the training set, 4,720 ($\sim 28\%$) of the sentences have been assessed as containing propaganda, with 12,245 sentences ($\sim 72 \%$) as non-propaganda, demonstrating a clear class imbalance.
In the binary sentence-level classification (SLC) task, a model is trained to detect whether each and every sentence is either 'propaganda' or 'non-propaganda'; in the more challenging fragment-level classification (FLC) task, a model is trained to detect one of 18 possible propaganda technique types in spans of characters within sentences. These propaganda types are listed in BIBREF4 and range from those which might be recognisable at the lexical level (e.g. Name_Calling, Repetition) to those which would likely need to incorporate semantic understanding (Red_Herring, Straw_Man).
Figure FIGREF13 shows several example sentences from a sample document annotated with fragment-level classifications (FLC). The corresponding sentence-level classification (SLC) labels would indicate that sentences 3, 4, and 7 are 'propaganda' while the other sentences are `non-propaganda'.
Dataset ::: Data Distribution
One of the most interesting aspects of the data provided for this task is the notable difference between the training and the development/test sets. We emphasise that this difference is realistic and reflective of real world news data, in which major stories are often accompanied by the introduction of new terms, names, and even phrases. This is because the training data is a subset of past data while the model is to be used on future data which deals with different newsworthy topics.
We demonstrate this difference statistically by using a method for finding the similarity of corpora suggested by BIBREF14. We use the Wilcoxon signed-rank test BIBREF15 which compares the frequency counts of randomly sampled elements from different datasets to determine if those datasets have a statistically similar distribution of elements.
We implement this as follows. For each of the training, development and test sets, we extract all words (retaining the repeats) while ignoring a set of stopwords (identified through the Python Natural Language Toolkit). We then extract 10,000 samples (with replacements) for various pairs of these datasets (training, development, and test sets along with splits of each of these datasets). Finally, we use comparative word frequencies from the two sets to calculate the p-value using the Wilcoxon signed-rank test. Table TABREF15 provides the minimum and maximum p-values and their interpretations for ten such runs of each pair reported.
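A rough sketch of this similarity test is shown below; the exact sampling and counting details of our implementation may differ, and it assumes NLTK stopwords and SciPy are available:

```python
import random
from collections import Counter
from nltk.corpus import stopwords   # requires: nltk.download("stopwords")
from scipy.stats import wilcoxon

def corpus_similarity_p(corpus_a, corpus_b, n_samples=10000, seed=0):
    """Wilcoxon signed-rank test on frequency counts of words sampled from two corpora."""
    rng = random.Random(seed)
    stops = set(stopwords.words("english"))
    words_a = [w for s in corpus_a for w in s.lower().split() if w not in stops]
    words_b = [w for s in corpus_b for w in s.lower().split() if w not in stops]
    freq_a, freq_b = Counter(words_a), Counter(words_b)
    sampled = [rng.choice(words_a) for _ in range(n_samples)]   # sampling with replacement
    x = [freq_a[w] for w in sampled]
    y = [freq_b[w] for w in sampled]
    return wilcoxon(x, y).pvalue
```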
With a p-value threshold of 0.05, we show that the train, development and test sets are self-similar and also significantly different from each other. In measuring self-similarity, we split each dataset after shuffling all sentences. While this comparison is made at the sentence level (as opposed to the article level), it is consistent with the granularity used for propaganda detection, which is also at the sentence level. We also perform measurements of self-similarity after splitting the data at the article level and find that the conclusions of similarity between the sets hold with a p-value threshold of 0.001, where p-values for similarity between the training and dev/test sets are orders of magnitude lower compared to self-similarity. Since we use random sampling, we run this test 10 times and present both the maximum and minimum p-values. We include the similarity between 25% of a dataset and the remaining 75% of that set because that is the train/test ratio we use in our experiments, further described in our methodology (Section SECREF4).
This analysis shows that while all splits of each of the datasets are statistically similar, the training set (and the split of the training set that we use for experimentation) are significantly different from the development and test sets. While our analysis does show that the development and the test sets are dissimilar, we note (based on the p-values) that they are significantly more similar to each other than they are to the training set.
Methodology
We were provided with two tasks: (1) propaganda fragment-level identification (FLC) and (2) propagandistic sentence-level identification (SLC). While we develop systems for both tasks, our main focus is toward the latter. Given the differences between the training, development, and test sets, we focus on methods for generalising our models. We note that propaganda identification is, in general, an imbalanced binary classification problem as most sentences are not propagandistic.
Due to the non-deterministic nature of fast GPU computations, we run each of our models three times and report the average of these three runs through the rest of this section. When picking the model to use for our final submission, we pick the model that performs best on the development set.
When testing our models, we split the labelled training data into two non-overlapping parts: the first one, consisting of 75% of the training data is used to train models, whereas the other is used to test the effectiveness of the models. All models are trained and tested on the same split to ensure comparability. Similarly, to ensure that our models remain comparable, we continue to train on the same 75% of the training set even when testing on the development set.
Once the best model is found using these methods, we train that model on all of the training data available before then submitting the results on the development set to the leaderboard. These results are detailed in the section describing our results (Section SECREF5).
Methodology ::: Class Imbalance in Sentence Level Classification
The sentence-level classification task is an imbalanced binary classification problem that we address using BERT BIBREF0 . We use BERTBASE, uncased, which consists of 12 self-attention layers and returns a 768-dimensional vector that represents a sentence. To make use of BERT for sentence classification, we include a fully connected layer on top of the BERT self-attention layers, which classifies the sentence embedding provided by BERT into the two classes of interest (propaganda or non-propaganda).
We attempt to exploit various data augmentation techniques to address the problem of class imbalance. Table TABREF17 shows the results of our experiments for different data augmentation techniques when, after shuffling the training data, we train the model on 75% of the training data and test it on the remaining 25% of the training data and the development data.
We observe that BERT without augmentation consistently outperforms BERT with augmentation in the experiments when the model is trained on 75% of the training data and evaluated on the rest, i.e. trained and evaluated on similar data coming from the same distribution. This is consistent with observations by Wei et al. wei2019eda that contextual word embeddings do not gain from data augmentation. The fact that we shuffle the training data prior to splitting it into training and testing subsets could imply that the model is learning to associate topic words, such as `Mueller', as propaganda. However, when we perform model evaluation using the development set, which is dissimilar to the training data, we observe that synonym insertion and word dropping techniques also do not bring performance gains, while random oversampling increases performance over base BERT by 4%. Synonym insertion provides results very similar to base BERT, while random deletion harms model performance, producing lower scores. We believe that this could be attributed to the fact that synonym insertion and random word dropping involve the introduction of noise to the data, while oversampling does not. As we are working with natural language data, this type of noise can in fact change the meaning of the sentence. Oversampling, on the other hand, purely increases the importance of the minority class by repeating training on the unchanged instances.
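A minimal sketch of the random oversampling used here, duplicating minority-class instances until the classes are balanced (function names are ours):

```python
import random

def oversample_minority(examples, labels, minority_label=1, seed=0):
    """Randomly duplicate minority-class examples until the two classes are balanced."""
    rng = random.Random(seed)
    minority = [(x, y) for x, y in zip(examples, labels) if y == minority_label]
    majority = [(x, y) for x, y in zip(examples, labels) if y != minority_label]
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    balanced = minority + majority + extra
    rng.shuffle(balanced)
    return balanced
```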
So as to better understand the aspects of oversampling that contribute to these gains, we perform a class-wise performance analysis of BERT with/without oversampling. The results of these experiments (Table TABREF18) show that oversampling increases the overall recall while maintaining precision. This is achieved by significantly improving the recall of the minority class (propaganda) at the cost of the recall of the majority class.
So far we have been able to establish that a) the training and test sets are dissimilar, thus requiring us to generalise our model, b) oversampling provides a method of generalisation, and c) oversampling does this while maintaining recall on the minority (and thus more interesting) class.
Given this we explore alternative methods of increasing minority class recall without a significant drop in precision. One such method is cost-sensitive classification, which differs from random oversampling in that it provides a more continuous-valued and consistent method of weighting samples of imbalanced training data; for example, random oversampling will inevitably emphasise some training instances at the expense of others. We detail our methods of using cost-sensitive classification in the next section. Further experiments with oversampling might have provided insights into the relationships between these methods, which we leave for future exploration.
Methodology ::: Cost-sensitive Classification
As discussed in Section SECREF10, cost-sensitive classification can be performed by weighting the cost function. We increase the weight of incorrectly labelling a propagandistic sentence by altering the cost function of the training of the final fully connected layer of our model previously described in Section SECREF16. We make these changes through the use of PyTorch BIBREF16 which calculates the cross-entropy loss for a single prediction $x$, an array where the $j^{th}$ element represents the models prediction for class $j$, labelled with the class $class$ as given by Equation DISPLAY_FORM20.
The cross-entropy loss given in Equation DISPLAY_FORM20 is modified to accommodate an array $weight$, the $i^{th}$ element of which represents the weight of the $i^{th}$ class, as described in Equation DISPLAY_FORM21.
Intuitively, we increase the cost of getting the classification of an “important” class wrong and correspondingly decrease the cost of getting a less important class wrong. In our case, we increase the cost of mislabelling the minority class, which is “propaganda” (as opposed to “non-propaganda”).
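In PyTorch this amounts to passing a class-weight vector to the cross-entropy loss. Below is a minimal sketch using the 1 (non-propaganda) : 4 (propaganda) weighting that we ultimately select; the class index order is an assumption:

```python
import torch
import torch.nn as nn

# Assumed class order: index 0 = non-propaganda, index 1 = propaganda.
class_weights = torch.tensor([1.0, 4.0])
loss_fn = nn.CrossEntropyLoss(weight=class_weights)

# logits: (batch, 2) scores from the classification layer on top of BERT;
# labels: (batch,) gold class indices.
logits = torch.randn(8, 2)
labels = torch.randint(0, 2, (8,))
loss = loss_fn(logits, labels)
```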
We expect the effect of this to be similar to that of oversampling, in that it is likely to enable us to increase the recall of the minority class thus resulting in the decrease in recall of the overall model while maintaining high precision. We reiterate that this specific change to a model results in increasing the model's ability to better identify elements belonging to the minority class in dissimilar datasets when using BERT.
We explore the validity of this by performing several experiments with different weights assigned to the minority class. We note that our experiments use significantly higher weights than the weights proportional to class frequencies in the training data that are common in the literature BIBREF17. Rather than directly using the class proportions of the training set, we show that tuning weights based on performance on the development set is more beneficial. Figure FIGREF22 shows the results of these experiments, wherein we are able to maintain the precision on the subset of the training set used for testing while reducing its recall and thus generalising the model. The fact that the model is generalising on a dissimilar dataset is confirmed by the increase in the development set F1 score. We note that the gains are not infinite and that a balance must be struck based on the amount of generalisation and the corresponding loss in accuracy. The exact weight to use for the best transfer of classification accuracy is related to the dissimilarity of that other dataset and hence is to be obtained experimentally through hyperparameter search. Our experiments showed that a value of 4 is best suited for this task.
We do not include the complete results of our experiments here due to space constraints but include them along with charts and program code on our project website. Based on this exploration we find that the best weights for this particular dataset are 1 for non-propaganda and 4 for propaganda and we use this to train the final model used to submit results to the leaderboard. We also found that adding Part of Speech tags and Named Entity information to BERT embeddings by concatenating these one-hot vectors to the BERT embeddings does not improve model performance. We describe these results in Section SECREF5.
Methodology ::: Fragment-level classification (FLC)
In addition to participating in the Sentence Level Classification task, we also participate in the Fragment Level Classification task. We note that extracting fragments that are propagandistic is similar to the task of Named Entity Recognition, in that they are both span extraction tasks, and so we use a BERT-based model designed for this task. We build on the work by BIBREF18, which makes use of a Conditional Random Field stacked on top of an LSTM to predict spans. This architecture is standard amongst state-of-the-art models that perform span identification.
While the same span of text cannot have multiple named entity labels, it can have different propaganda labels. We get around this problem by picking one of the labels at random. Additionally, so as to speed up training, we only train our model on those sentences that contain some propagandistic fragment. In hindsight, we note that both these decisions were not ideal and discuss what we might have otherwise done in Section SECREF7.
Results
In this section, we show our rankings on the leaderboard on the test set. Unlike the previous exploratory sections, in which we trained our model on part of the training set, we train models described in this section on the complete training set.
Results ::: Results on the SLC task
Our best performing model, selected on the basis of a systematic analysis of the relationship between cost weights and recall, places us second amongst the 25 teams that submitted their results on this task. We present our score on the test set alongside those of comparable teams in Table TABREF25. We note that the task description paper BIBREF4 describes a method of achieving an F1 score of 60.98% on a similar task although this reported score is not directly comparable to the results on this task because of the differences in testing sets.
Results ::: Results on the FLC task
We train the model described in Section SECREF23 on the complete training set before submitting to the leaderboard. Our best performing model was placed 7th amongst the 13 teams that submitted results for this task. We present our score on the test set alongside those of comparable teams in Table TABREF27. We note that the task description paper BIBREF4 describes a method of achieving an F1 score of 22.58% on a similar task, although this reported score is not directly comparable to the results on this task.
One of the major setbacks to our method for identifying sentence fragments was the loss of training data as a result of randomly picking one label when the same fragment had multiple labels. This could have been avoided by training different models for each label and simply concatenating the results. Additionally, training on all sentences, including those that did not contain any fragments labelled as propagandistic would have likely improved our model performance. We intend to perform these experiments as part of our ongoing research.
Issues of Decontextualization in Automated Propaganda Detection
It is worth reflecting on the nature of the shared task dataset (PTC corpus) and its structural correspondence (or lack thereof) to some of the definitions of propaganda mentioned in the introduction. First, propaganda is a social phenomenon and takes place as an act of communication BIBREF19, and so it is more than a simple information-theoretic message of zeros and ones—it also incorporates an addresser and addressee(s), each in phatic contact (typically via broadcast media), ideally with a shared denotational code and contextual surround(s) BIBREF20.
As such, a dataset of decontextualised documents with labelled sentences, devoid of authorial or publisher metadata, has taken us at some remove from even a simple everyday definition of propaganda. Our models for this shared task cannot easily incorporate information about the addresser or addressee; are left to assume a shared denotational code between author and reader (one perhaps simulated with the use of pre-trained word embeddings); and they are unaware of when or where the act(s) of propagandistic communication took place. This slipperiness is illustrated in our example document (Fig. FIGREF13): note that while Sentences 3 and 7, labelled as propaganda, reflect a propagandistic attitude on the part of the journalist and/or publisher, Sentence 4—also labelled as propaganda in the training data—instead reflects a “flag-waving" propagandistic attitude on the part of U.S. congressman Jeff Flake, via the conventions of reported speech BIBREF21. While reported speech often is signaled by specific morphosyntactic patterns (e.g. the use of double-quotes and “Flake said") BIBREF22, we argue that human readers routinely distinguish propagandistic reportage from the propagandistic speech acts of its subjects, and to conflate these categories in a propaganda detection corpus may contribute to the occurrence of false positives/negatives.
Conclusions and Future Work
In this work we have presented a method of incorporating cost-sensitivity into BERT to allow for better generalisation and additionally, we provide a simple measure of corpus similarity to determine when this method is likely to be useful. We intend to extend our analysis of the ability to generalise models to less similar data by experimenting on other datasets and models. We hope that the release of program code and documentation will allow the research community to help in this experimentation while exploiting these methods.
Acknowledgements
We would like to thank Dr Leandro Minku from the University of Birmingham for his insights into and help with the statistical analysis presented in this paper.
This work was also partially supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1. Work by Elena Kochkina was partially supported by the Leverhulme Trust through the Bridges Programme and Warwick CDT for Urban Science & Progress under the EPSRC Grant Number EP/L016400/1.
Which natural language(s) are studied in this paper?
Unanswerable
Introduction
Irony is a kind of figurative language which is widely used on social media BIBREF0 . It is defined as a clash between the intended meaning of a sentence and its literal meaning BIBREF1 . As an important aspect of language, irony plays an essential role in sentiment analysis BIBREF2 , BIBREF0 and opinion mining BIBREF3 , BIBREF4 .
Although some previous studies focus on irony detection, little attention is paid to irony generation. As ironies can strengthen sentiments and express stronger emotions, we mainly focus on generating ironic sentences. Given a non-ironic sentence, we implement a neural network to transfer it to an ironic sentence and constrain the sentiment polarity of the two sentences to be the same. For example, the input is “I hate it when my plans get ruined", which is negative in sentiment polarity, and the output should be ironic and negative in sentiment as well, such as “I like it when my plans get ruined". The speaker uses “like" to be ironic and express his or her negative sentiment. At the same time, our model can preserve contents which are irrelevant to sentiment polarity and irony. According to the categories mentioned in BIBREF5 , irony can be classified into three classes: verbal irony by means of a polarity contrast, i.e. sentences containing an expression whose polarity is inverted between the intended and the literal evaluation; other types of verbal irony, i.e. sentences that show no polarity contrast between the literal and intended meaning but are still ironic; and situational irony, i.e. sentences that describe situations that fail to meet some expectations. As ironies in the latter two categories are obscure and hard to understand, we decide to focus only on ironies in the first category in this work. For example, our work can be specifically described as: given a sentence “I hate to be ignored", we train our model to generate an ironic sentence such as “I love to be ignored". Although there is “love" in the generated sentence, the speaker still expresses his or her negative sentiment by irony. We also make some explorations into the transformation from ironic sentences to non-ironic sentences at the end of our work. Because of the lack of previous work and baselines on irony generation, we implement our model based on style transfer. Our work will not only provide the first large-scale irony dataset but also establish our model as a benchmark for irony generation.
Recently, unsupervised style transfer has become a very popular topic. Many state-of-the-art studies try to solve the task with a sequence-to-sequence (seq2seq) framework. There are three main ways to build such models. The first is to learn a latent style-independent content representation and generate sentences with the content representation and another style BIBREF6 , BIBREF7 . The second is to directly transfer sentences from one style to another under the control of classifiers and reinforcement learning BIBREF8 . The third is to remove style attribute words from the input sentence and combine the remaining content with new style attribute words BIBREF9 , BIBREF10 . The first method usually obtains better performances via adversarial training with discriminators. The style-independent content representation, nevertheless, is not easily obtained BIBREF11 , which results in poor performances. The second method is suitable for complex styles which are difficult to model and describe. The model can learn the deep semantic features by itself, but sometimes the model is sensitive to parameters and hard to train. The third method succeeds in preserving content but does not work for some complex styles such as democratic and republican. Sentences with those styles usually do not have specific style attribute words. Unfortunately, due to the lack of a large irony dataset and the difficulty of modeling ironies, there has been little work trying to generate ironies based on a seq2seq framework as far as we know. Inspired by methods for style transfer, we decide to implement a specifically designed model based on unsupervised style transfer to explore irony generation.
In this paper, in order to address the lack of irony data, we first crawl over 2M tweets from Twitter to build a dataset with 262,755 ironic and 112,330 non-ironic tweets. Then, due to the lack of parallel data, we propose a novel model to transfer non-ironic sentences to ironic sentences in an unsupervised way. As the ironic style is hard to model and describe, we implement our model with the control of classifiers and reinforcement learning. Different from other studies in style transfer, the transformation from non-ironic to ironic sentences has to preserve sentiment polarity as mentioned above. Therefore, we not only design an irony reward to control irony accuracy and implement a denoising auto-encoder and back-translation to control content preservation, but also design a sentiment reward to control sentiment preservation.
Experimental results demonstrate that our model achieves a high irony accuracy with well-preserved sentiment and content. The contributions of our work are as follows:
Related Work
Style Transfer: As irony is a complicated style and hard to model with some specific style attribute words, we mainly focus on studies without editing style attribute words.
Some studies are trying to disentangle style representation from content representation. In BIBREF12 , authors leverage adversarial networks to learn separate content representations and style representations. In BIBREF13 and BIBREF6 , researchers combine variational auto-encoders (VAEs) with style discriminators.
However, some recent studies BIBREF11 reveal that the disentanglement of content and style representations may not be achievable in practice. Therefore, some other research studies BIBREF9, BIBREF10 strive to separate content and style by removing stylistic words. Nonetheless, many non-ironic sentences do not have specific stylistic words, and as a result we find it difficult to transfer non-ironic sentences to ironic sentences in this way in practice.
Besides, some other research studies do not disentangle style from content but directly learn representations of sentences. In BIBREF8, authors propose a dual reinforcement learning framework without separating content and style representations. In BIBREF7, researchers utilize a machine translation model to learn a sentence representation that preserves the meaning of the sentence while reducing its stylistic properties. In this type of method, the quality of generated sentences relies to a large extent on the performance of classifiers, and such models are usually sensitive to parameters and difficult to train. In contrast, we combine a pre-training process with reinforcement learning to build up a stable language model and design special rewards for our task.
Irony Detection: With the development of social media, irony detection becomes a more important task. Methods for irony detection can be mainly divided into two categories: methods based on feature engineering and methods based on neural networks.
As for methods based on feature engineering, in BIBREF1 the authors investigate pragmatic phenomena and various irony markers, and in BIBREF14 researchers leverage a combination of sentiment, distributional semantic, and text surface features. These models rely on hand-crafted features and are hard to implement.
When it comes to methods based on neural networks, long short-term memory (LSTM) networks BIBREF15 are widely used and very effective for irony detection. In BIBREF16, a tweet is divided into two segments and a subtract layer is implemented to calculate the difference between the two segments in order to determine whether the tweet is ironic. In BIBREF17, authors utilize a recurrent neural network with Bi-LSTM and self-attention without hand-crafted features. In BIBREF18, researchers propose a system based on a densely connected LSTM network.
Our Dataset
In this section, we describe how we build our dataset from tweets. First, we crawl over 2M tweets from Twitter using GetOldTweets-python. We crawl English tweets from 04/09/2012 to 12/18/2018. We first remove all re-tweets and use langdetect to remove all non-English sentences. Then, we remove hashtags attached at the end of the tweets because they are usually not parts of sentences and would confuse our language model. After that, we utilize Ekphrasis to process tweets. We remove URLs and restore remaining hashtags, elongated words, repeated words, and all-capitalized words. To simplify our dataset, we replace all “ INLINEFORM0 money INLINEFORM1 " and “ INLINEFORM2 time INLINEFORM3 " tokens with the “ INLINEFORM4 number INLINEFORM5 " token when using Ekphrasis, and we delete sentences whose lengths are less than 10 or greater than 40. In order to restore abbreviations, we download an abbreviation dictionary from webopedia and restore abbreviations to normal words or phrases according to the dictionary. We also remove sentences which have more than two rare words (appearing less than three times) in order to constrain the size of the vocabulary. After pre-processing, we obtain 662,530 sentences.
As neural networks have proven effective for irony detection, we decide to implement a neural classifier in order to classify the sentences into ironic and non-ironic sentences. However, the only high-quality irony dataset we can obtain is the dataset of SemEval-2018 Task 3, and it is rather small, which would cause complex models to overfit. Therefore, we implement a simple one-layer RNN with LSTM cells to classify pre-processed sentences into ironic and non-ironic sentences, because LSTM networks are widely used in irony detection. We train the model with the dataset of SemEval-2018 Task 3. After classification, we get 262,755 ironic sentences and 399,775 non-ironic sentences. According to our observation, not all non-ironic sentences are suitable to be transferred into ironic sentences. For example, “just hanging out . watching . is it monday yet" is hard to transfer because it does not have an explicit sentiment polarity. So we remove all interrogative sentences from the non-ironic sentences and keep only the sentences which have words expressing strong sentiments. We evaluate the sentiment polarity of each word with TextBlob, and we regard words with sentiment scores greater than 0.5 or less than -0.5 as words expressing strong sentiments. Finally, we build our irony dataset with 262,755 ironic sentences and 102,330 non-ironic sentences.
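As a rough Python illustration of this filtering step (the function names are ours; interrogative detection is simplified to a trailing question mark, and word-level TextBlob polarity with the 0.5 threshold stated above is assumed):

from textblob import TextBlob

STRONG_SENTIMENT = 0.5  # threshold stated above

def has_strong_sentiment(sentence):
    # True if any word carries a strong positive or negative sentiment score
    for word in sentence.split():
        polarity = TextBlob(word).sentiment.polarity
        if polarity > STRONG_SENTIMENT or polarity < -STRONG_SENTIMENT:
            return True
    return False

def filter_non_ironic(sentences):
    # keep declarative sentences with at least one strongly polar word
    return [s for s in sentences
            if not s.strip().endswith("?") and has_strong_sentiment(s)]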
[t] Irony Generation Algorithm
INLINEFORM0 pre-train with auto-encoder:
  Pre-train INLINEFORM1 , INLINEFORM2 with INLINEFORM3 using MLE based on Eq. EQREF16
  Pre-train INLINEFORM4 , INLINEFORM5 with INLINEFORM6 using MLE based on Eq. EQREF17
INLINEFORM7 pre-train with back-translation:
  Pre-train INLINEFORM8 , INLINEFORM9 , INLINEFORM10 , INLINEFORM11 with INLINEFORM12 using MLE based on Eq. EQREF19
  Pre-train INLINEFORM13 , INLINEFORM14 , INLINEFORM15 , INLINEFORM16 with INLINEFORM17 using MLE based on Eq. EQREF20
INLINEFORM0 train with RL:
  for each epoch e = 1, 2, ..., INLINEFORM1 :
    INLINEFORM2 train non-irony2irony with RL
    for INLINEFORM3 in N:
      INLINEFORM4 update INLINEFORM5 , INLINEFORM6 using INLINEFORM7 based on Eq. EQREF29
      INLINEFORM8 back-translation INLINEFORM9 INLINEFORM10 INLINEFORM11 update INLINEFORM12 , INLINEFORM13 , INLINEFORM14 , INLINEFORM15 using MLE based on Eq. EQREF19
    INLINEFORM16 train irony2non-irony with RL
    for INLINEFORM17 in I:
      INLINEFORM18 update INLINEFORM19 , INLINEFORM20 using INLINEFORM21 similar to Eq. EQREF29
      INLINEFORM22 back-translation INLINEFORM23 INLINEFORM24 INLINEFORM25 update INLINEFORM26 , INLINEFORM27 , INLINEFORM28 , INLINEFORM29 using MLE based on Eq. EQREF20
Our Method
Given two non-parallel corpora: non-ironic corpus N={ INLINEFORM0 , INLINEFORM1 , ..., INLINEFORM2 } and ironic corpus I={ INLINEFORM3 , INLINEFORM4 , ..., INLINEFORM5 }, the goal of our irony generation model is to generate an ironic sentence from a non-ironic sentence while preserving the content and sentiment polarity of the source input sentence. We implement an encoder-decoder framework where two encoders are utilized to encode ironic sentences and non-ironic sentences respectively and two decoders are utilized to decode ironic sentences and non-ironic sentences from latent representations respectively. In order to enforce a shared latent space, we share two layers on both the encoder side and the decoder side. Our model architecture is illustrated in Figure FIGREF13 . We denote irony encoder as INLINEFORM6 , irony decoder as INLINEFORM7 and non-irony encoder as INLINEFORM8 , non-irony decoder as INLINEFORM9 . Their parameters are INLINEFORM10 , INLINEFORM11 , INLINEFORM12 and INLINEFORM13 .
Our irony generation algorithm is shown in Algorithm SECREF3 . We first pre-train our model using the denoising auto-encoder and back-translation to build up language models for both styles (Section SECREF14 ). Then we use reinforcement learning to train the model to transfer sentences from one style to another (Section SECREF21 ). Meanwhile, to achieve content preservation, we apply back-translation once every INLINEFORM0 time steps.
Pretraining
In order to build up our language model and preserve content, we apply a denoising auto-encoder. To prevent the model from simply copying the input sentence, we randomly add some noise to the input sentence. Specifically, for every word in the input sentence, there is a 10% chance that we delete it, a 10% chance that we duplicate it, a 10% chance that we swap it with the next word; otherwise it remains unchanged. We first encode the input sentence INLINEFORM0 or INLINEFORM1 with the respective encoder INLINEFORM2 or INLINEFORM3 to obtain its latent representation INLINEFORM4 or INLINEFORM5 and reconstruct the input sentence with the latent representation and the respective decoder. So we can get the reconstruction loss for the auto-encoder INLINEFORM6 : DISPLAYFORM0 DISPLAYFORM1
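A minimal sketch of this noising step is given below; the helper name and the way the three corruption types are combined into one draw are our assumptions, since only the 10% probabilities are stated above:

import random

def add_noise(tokens, p=0.1):
    # corrupt a token list for the denoising auto-encoder:
    # delete, duplicate, or swap each word with probability p, otherwise keep it
    noisy, i = [], 0
    while i < len(tokens):
        r = random.random()
        if r < p:                                   # delete
            i += 1
        elif r < 2 * p:                             # duplicate
            noisy.extend([tokens[i], tokens[i]])
            i += 1
        elif r < 3 * p and i + 1 < len(tokens):     # swap with the next word
            noisy.extend([tokens[i + 1], tokens[i]])
            i += 2
        else:                                       # keep unchanged
            noisy.append(tokens[i])
            i += 1
    return noisy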
In addition to the denoising auto-encoder, we implement back-translation BIBREF19 to generate a pseudo-parallel corpus. Suppose our model takes a non-ironic sentence INLINEFORM0 as input. We first encode INLINEFORM1 with INLINEFORM2 to obtain its latent representation INLINEFORM3 and decode the latent representation with INLINEFORM4 to get a transferred sentence INLINEFORM5 . Then we encode INLINEFORM6 with INLINEFORM7 and decode its latent representation with INLINEFORM8 to reconstruct the original input sentence INLINEFORM9 . Therefore, our reconstruction loss for back-translation INLINEFORM10 is: DISPLAYFORM0
Similarly, if our model takes an ironic sentence INLINEFORM0 as input, the reconstruction loss for back-translation is: DISPLAYFORM0
Reinforcement Learning
Since the gold transferred result of input is unavailable, we cannot evaluate the quality of the generated sentence directly. Therefore, we implement reinforcement learning and elaborately design two rewards to describe the irony accuracy and sentiment preservation, respectively.
A pre-trained binary irony classifier based on CNN BIBREF20 is used to evaluate how ironic a sentence is. We denote the parameter of the classifier as INLINEFORM0 and it is fixed during the training process.
In order to facilitate the transformation, we design the irony reward as the difference between the irony score of the input sentence and that of the output sentence. Formally, when we input a non-ironic sentence INLINEFORM0 and transfer it to an ironic sentence INLINEFORM1 , our irony reward is defined as: DISPLAYFORM0
where INLINEFORM0 denotes the ironic style and INLINEFORM1 is the probability that a sentence INLINEFORM2 is ironic.
To preserve the sentiment polarity of the input sentence, we also need classifiers to evaluate the sentiment polarity of sentences. However, the sentiment analysis of ironic sentences and of non-ironic sentences is different. In the case of figurative language such as irony, sarcasm or metaphor, the sentiment polarity of the literal meaning may differ significantly from that of the intended figurative meaning BIBREF0 . As we aim to train our model to transfer sentences from non-ironic to ironic, using only one classifier is not enough. As a result, we implement two pre-trained sentiment classifiers, for non-ironic sentences and ironic sentences respectively. We denote the parameters of the sentiment classifier for ironic sentences as INLINEFORM0 and those of the sentiment classifier for non-ironic sentences as INLINEFORM1 .
A challenge when we use two classifiers to evaluate sentiment polarity is that the two classifiers, trained with different datasets, may have different score distributions. That means we cannot directly calculate the sentiment reward from the scores produced by the two classifiers. To alleviate this problem and standardize the prediction results of the two classifiers, we set a threshold for each classifier and subtract the respective threshold from the scores produced by that classifier to obtain a comparable sentiment polarity score. We choose the optimal threshold for each classifier by maximizing its classification ability on the distribution of our training data.
We denote the threshold of the ironic sentiment classifier as INLINEFORM0 and the threshold of the non-ironic sentiment classifier as INLINEFORM1 . The standardized sentiment scores are defined as INLINEFORM2 and INLINEFORM3 , where INLINEFORM4 denotes positive sentiment polarity and INLINEFORM5 is the probability that a sentence is positive in sentiment polarity.
As mentioned above, the input sentence and the generated sentence should express the same sentiment. For example, if we input the non-ironic sentence “I hate to be ignored", which is negative in sentiment polarity, the generated ironic sentence should also be negative, such as “I love to be ignored". To achieve sentiment preservation, we design the sentiment reward as one minus the absolute value of the difference between the standardized sentiment score of the input sentence and that of the generated sentence. Formally, when we input a non-ironic sentence INLINEFORM0 and transfer it to an ironic sentence INLINEFORM1 , our sentiment reward is defined as: DISPLAYFORM0
To encourage our model to focus on both the irony accuracy and the sentiment preservation, we apply the harmonic mean of irony reward and sentiment reward: DISPLAYFORM0
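The rewards could be computed roughly as follows. This is only a sketch: the sign convention of the irony reward (rewarding an increase in the irony probability of the output) and the unweighted harmonic mean are assumptions, the harmonic weight mentioned in the training details is omitted, and all function names are ours:

def irony_reward(p_irony_out, p_irony_in):
    # assumed sign convention: a more ironic output yields a positive reward
    return p_irony_out - p_irony_in

def standardized_sentiment(p_positive, threshold):
    # subtract the classifier-specific threshold from the positive probability
    return p_positive - threshold

def sentiment_reward(score_in, score_out):
    # one minus the absolute difference of standardized sentiment scores
    return 1.0 - abs(score_in - score_out)

def combined_reward(r_irony, r_senti, eps=1e-8):
    # plain harmonic mean of the two rewards
    return 2.0 * r_irony * r_senti / (r_irony + r_senti + eps)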
Policy Gradient
The policy gradient algorithm BIBREF21 is a simple but widely-used algorithm in reinforcement learning. It is used to maximize the expected reward INLINEFORM0 . The objective function to minimize is defined as: DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 is the reward of INLINEFORM2 and INLINEFORM3 is the input size.
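A bare-bones PyTorch version of this objective (assuming log_probs holds the summed token log-probabilities of each sampled output; baselines and other variance-reduction tricks are omitted) might be:

import torch

def policy_gradient_loss(log_probs, rewards):
    # REINFORCE-style objective: minimize the negative expected reward
    # log_probs: (batch,) sum of token log-probabilities of each sampled output
    # rewards:   (batch,) scalar reward of each sampled output
    return -(rewards.detach() * log_probs).mean()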
Training Details
INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 in our model are Transformers BIBREF22 with 4 layers and 2 shared layers. The word embeddings of 128 dimensions are learned during the training process. Our maximum sentence length is set as 40. The optimizer is Adam BIBREF23 and the learning rate is INLINEFORM4 . The batch size is 32 and harmonic weight INLINEFORM5 in Eq.9 is 0.5. We set the interval INLINEFORM6 as 200. The model is pre-trained for 6 epochs and trained for 15 epochs for reinforcement learning.
Irony Classifier: We implement a CNN classifier trained with our irony dataset. All the CNN classifiers we utilize in this paper use the same parameters as BIBREF20 .
Sentiment Classifier for Irony: We first implement a one-layer LSTM network to classify the ironic sentences in our dataset into positive and negative ironies. The LSTM network is trained with the dataset of SemEval-2015 Task 11 BIBREF0 , which is used for the sentiment analysis of figurative language on Twitter. Then, we use the positive and negative ironies to train the CNN sentiment classifier for irony.
Sentiment Classifier for Non-irony: Similar to the training process of the sentiment classifier for irony, we first implement a one-layer LSTM network trained with a dataset for the sentiment analysis of ordinary tweets to classify the non-ironies into positive and negative non-ironies. Then we use the positive and negative non-ironies to train the sentiment classifier for non-irony.
Baselines
We compare our model with the following state-of-the-art generative models:
BackTrans BIBREF7 : In BIBREF7 , authors propose a model using machine translation in order to preserve the meaning of the sentence while reducing stylistic properties.
Unpaired BIBREF10 : In BIBREF10 , researchers implement a method to remove emotional words and add desired sentiment controlled by reinforcement learning.
CrossAlign BIBREF6 : In BIBREF6 , authors leverage refined alignment of latent representations to perform style transfer and a cross-aligned auto-encoder is implemented.
CPTG BIBREF24 : An interpolated reconstruction loss is introduced in BIBREF24 and a discriminator is implemented to control attributes in this work.
DualRL BIBREF8 : In BIBREF8 , researchers use two reinforcement rewards simultaneously to control style accuracy and content preservation.
Evaluation Metrics
In order to evaluate sentiment preservation, we use the absolute value of the difference between the standardized sentiment score of the input sentence and that of the generated sentence, which we call the sentiment delta (senti delta). Besides, we report the sentiment accuracy (Senti ACC), which measures whether the output sentence has the same sentiment polarity as the input sentence based on our standardized sentiment classifiers. The BLEU score BIBREF25 between the input sentences and the output sentences is calculated to evaluate content preservation. In order to evaluate the overall performance of different models, we also report the geometric mean (G2) and harmonic mean (H2) of the sentiment accuracy and the BLEU score. As for the irony accuracy, we only report it in the human evaluation results because irony is very complicated and humans can evaluate its quality more reliably.
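For concreteness, the two overall scores could be computed as follows, assuming the sentiment accuracy and the BLEU score are reported on the same scale; the function name is ours:

import math

def overall_scores(senti_acc, bleu):
    # geometric (G2) and harmonic (H2) means of sentiment accuracy and BLEU
    g2 = math.sqrt(senti_acc * bleu)
    h2 = 2 * senti_acc * bleu / (senti_acc + bleu) if senti_acc + bleu > 0 else 0.0
    return g2, h2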
We first sample 50 non-ironic input sentences and the corresponding output sentences of the different models. Then, we ask four annotators who are proficient in English to evaluate the quality of the generated sentences of the different models. They are required to rank the output sentences of our model and the baselines from best to worst in terms of irony accuracy (Irony), sentiment preservation (Senti) and content preservation (Content). The best output is ranked 1 and the worst output is ranked 6, so the smaller the human evaluation value, the better the corresponding model.
Results and Discussions
Table TABREF35 shows the automatic evaluation results of the models on the transformation from non-ironic sentences to ironic sentences. Our model obtains the best result in sentiment delta. The DualRL model achieves the highest results in the other metrics, but most of its outputs are almost the same as the input sentences. It is therefore unsurprising that the DualRL system outperforms ours on these metrics, yet it does not actually transfer the non-ironic sentences to ironic sentences at all. From this perspective, we cannot view DualRL as an effective model for irony generation. In contrast, our model obtains results close to those of DualRL and achieves a balance between irony accuracy, sentiment preservation, and content preservation if we also consider the irony accuracy discussed below.
From the human evaluation results shown in Table TABREF36 , our model gets the best average rank in irony accuracy. As mentioned above, the DualRL model usually does not change the input sentence and outputs the same sentence. Therefore, it is reasonable that it obtains the best rank in sentiment and content preservation and ours is the second. This demonstrates that our model, instead of changing nothing, transfers the style of the input sentence while preserving content and sentiment at the same time.
Case Study
In this section, we present some example outputs of the different models. Table TABREF37 shows the results of the transformation from non-ironic sentences to ironic sentences. We can observe that: (1) The BackTrans, Unpaired, CrossAlign and CPTG systems tend to generate sentences which lean towards irony but do not preserve content. (2) The DualRL system preserves content and sentiment very well but often does not change the input sentence at all. (3) Our model considers both aspects and achieves a better balance among irony accuracy, sentiment preservation and content preservation.
Error Analysis
Although our model outperforms other style transfer baselines according to automatic and human evaluation results, there are still some failure cases because irony generation is still a very challenging task. We would like to share the issues we meet during our experiments and our solutions to some of them in this section.
No Change: As mentioned above, many style transfer models, such as DualRL, tend to make few changes to the input sentence and output the same sentence. This is a common issue for unsupervised style transfer systems and we also encounter it during our experiments. The main reason for the issue is that rewards for content preservation are too prominent and rewards for style accuracy cannot work well. On the other hand, in order to guarantee the readability and fluency of the output sentence, we cannot place too much emphasis on rewards for style accuracy either, because it may cause other issues such as the word repetition described below. One way to address the problem is to tune hyperparameters, and this is also what we do in this work. As for content preservation, MLE-based methods such as back-translation may not be enough because they tend to force models to generate specific words. In the future, we should design more suitable methods to control content preservation for models that do not disentangle style and content representations, such as DualRL and ours.
Word Repetition: During our experiments, we observe that some of the outputs tend to repeat the same word, as shown in Table TABREF38 . This is because the reinforcement learning rewards encourage the model to generate words which obtain high scores from the classifiers, and even back-translation cannot prevent it. Our solution is to lower the probability of decoding a word if it has already been generated in previous time steps during testing. We also try to apply this method during training but obtain worse performance, because it may limit the effect of training. Some previous studies utilize language models to control the fluency of the output sentence and we also try this method. Nonetheless, pre-training a language model on tweets and using it to generate rewards is difficult because tweets are casual and noisy; rewards from such a language model are usually inaccurate and may confuse the model. In the future, we should come up with better methods to model language fluency with consideration of irony accuracy, sentiment and content preservation, especially for tweets.
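One simple way to realize this test-time heuristic is to subtract a fixed penalty from the logits of already-generated words before sampling or beam search; the penalty value below is an arbitrary assumption, not a number used in our experiments:

def penalize_repetition(logits, generated_ids, penalty=1.5):
    # lower the score of any word already generated at a previous time step;
    # logits is a 1-D array of vocabulary scores for the current step
    for token_id in set(generated_ids):
        logits[token_id] -= penalty
    return logits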
Improper Words: As the ironic style is hard for our model to learn, it may generate some improper words which make the sentence strange. In the example shown in Table TABREF38 , the sentiment word in the input sentence is “wonderful" and the model should change it into a negative word such as “sad" to make the output sentence ironic. However, the model changes “friday" and “fifa", which are not related to ironic style. We have not found a very effective method to address this issue, and stronger models may be needed to learn ironic styles better.
Additional Experiments
In this section, we describe some additional experiments on the transformation from ironic sentences to non-ironic sentences. Sometimes ironies are hard to understand and may cause misunderstanding, which is why our task also explores the transformation from ironic sentences to non-ironic sentences.
As shown in Table TABREF46 , we also conduct automatic evaluations, and the conclusions are similar to those of the transformation from non-ironic to ironic sentences. As for the human evaluation results in Table TABREF47 , our model still achieves the second-best results in sentiment and content preservation. Nevertheless, the DualRL system and ours perform poorly in irony accuracy. The reason may be that the other four baselines tend to generate common and even disfluent sentences which are irrelevant to the input sentences and hard to identify as ironic. Annotators therefore usually mark these output sentences as non-ironic, which causes those models to obtain better irony rankings than DualRL and ours but much poorer results in sentiment and content preservation. Some examples are shown in Table TABREF52 .
Conclusion and Future Work
In this paper, we first systematically define irony generation based on style transfer. Because of the lack of irony data, we make use of Twitter to build a large-scale dataset. In order to control irony accuracy, sentiment preservation and content preservation at the same time, we design a combination of rewards for reinforcement learning and incorporate reinforcement learning with a pre-training process. Experimental results demonstrate that our model outperforms other generative models and our rewards are effective. Although our model design is effective, there are still many errors, which we analyze systematically. In the future, we are interested in exploring these directions, and our work may extend to other kinds of ironies which are more difficult to model.
Introduction
Text classification has become an indispensable task due to the rapid growth in the number of texts in digital form available online. It aims to classify different texts, also called documents, into a fixed number of predefined categories, helping to organize data and making it easier for users to find the desired information. Over the past three decades, many methods based on machine learning and statistical models have been applied to perform this task, such as latent semantic analysis (LSA), support vector machines (SVM), and multinomial naive Bayes (MNB).
The first step in utilizing such methods to categorize textual data is to convert the texts into a vector representation. One of the most popular text representation models is the bag-of-words model BIBREF0 , which represents each document in a collection as a vector in a vector space. Each dimension of the vectors represents a term (e.g., a word, a sequence of words), and its value encodes a weight, which can be how many times the term occurs in the document.
Despite showing positive results in tasks such as language modeling and classification BIBREF1 , BIBREF2 , BIBREF3 , the BOW representation has limitations: first, feature vectors are commonly very high-dimensional, resulting in sparse document representations, which are hard to model due to space and time complexity. Second, BOW does not consider the proximity of words and their position in the text and consequently cannot encode the words semantic meanings.
To solve these problems, neural networks have been employed to learn vector representations of words BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . In particular, the word2vec representation BIBREF8 has gained attention. Given a training corpus, word2vec can generate a vector for each word in the corpus that encodes its semantic information. These word vectors are distributed in such a way that words from similar contexts are represented by word vectors with high correlation, while words from different contexts are represented by word vectors with low correlation.
One crucial aspect of the word2vec representation is that arithmetic and distance calculation between two word vectors can be performed, giving information about their semantic relationship. However, rather than looking at pairs of word vectors, we are interested in studying the relationship between sets of vectors as a whole and, therefore, it is desirable to have a text representation based on a set of these word vectors.
To tackle this problem, we introduce the novel concept of word subspace. It is mathematically defined as a low dimensional linear subspace in a word vector space with high dimensionality. Given that words from texts of the same class belong to the same context, it is possible to model word vectors of each class as word subspaces and efficiently compare them in terms of similarity by using canonical angles between the word subspaces. Through this representation, most of the variability of the class is retained. Consequently, a word subspace can effectively and compactly represent the context of the corresponding text. We achieve this framework through the mutual subspace method (MSM) BIBREF9 .
The word subspace of each text class is modeled by applying PCA without data centering to the set of word vectors of the class. When modeling the word subspaces, we assume only one occurrence of each word inside the class.
However, as seen in the BOW approach, the frequency of words inside a text is an informative feature that should be considered. In order to introduce this feature in the word subspace modeling and enhance its performance, we further extend the concept of word subspace to the term-frequency (TF) weighted word subspace.
In this extension, we consider a set of weights, which encode the word frequencies, when performing the PCA. Text classification with the TF weighted word subspace can also be performed under the framework of MSM. We show the validity of our modeling through experiments on the Reuters database, an established database for natural language processing tasks. We demonstrate the effectiveness of the word subspace formulation and its extension, comparing our methods' performance to various state-of-the-art methods.
The main contributions of our work are:
The remainder of this paper is organized as follows. In Section "Related Work" , we describe the main works related to text classification. In Section "Word subspace" , we present the formulation of our proposed word subspace. In Section "Conventional text classification methods" , we explain how text classification with word subspaces is performed under the MSM framework. Then, we present the TF weighted word subspace extension in Section "TF weighted word subspace" . Evaluation experiments and their results are described in Section "Experimental Evaluation" . Further discussion is then presented in Section "Discussion" , and our conclusions are described in Section "Conclusions and Future Work" .
Related Work
In this section, we outline relevant work towards text classification. We start by describing how text data is conventionally represented using the bag-of-words model and then follow to describe the conventional methods utilized in text classification.
Text Representation with bag-of-words
The bag-of-words representation comes from the hypothesis that frequencies of words in a document can indicate the relevance of the document to a query BIBREF0 , that is, if documents and a query have similar frequencies for the same words, they might have a similar meaning. This representation is based on the vector space model (VSM), that was developed for the SMART information retrieval system BIBREF10 . In the VSM, the main idea is that documents in a collection can be represented as a vector in a vector space, where vectors close to each other represent semantically similar documents.
More formally, a document $d$ can be represented by a vector in $\mathbb {R}^{n}$ , where each dimension represents a different term. A term can be a single word, constituting the conventional bag-of-words, or combinations of $N$ words, constituting the bag-of-N-grams. If a term occurs in the document, its position in the vector will have a non-zero value, also known as term weight. Two documents in the VSM can be compared to each other by taking the cosine distance between them BIBREF1 .
There are several ways to compute the term weights. Among them, we can highlight some: Binary weights, term-frequency (TF) weights, and term-frequency inverse document-frequency (TF-IDF) weights.
Consider a corpus with documents $D = \lbrace d_i\rbrace _{i=1}^{|D|}$ and a vocabulary with all terms in the corpus $V = \lbrace w_i\rbrace _{i=1}^{|V|}$ . The term weights can be defined as:
Binary weight: If a term occurs in the document, its weight is 1. Otherwise, it is zero.
Term-frequency weight (TF): The weight of a term $w$ is defined by the number of times it occurs in the document $d$ .
$$TF(w,d) = n_d^w$$ (Eq. 8)
Inverse document-frequency: The weight of a term $w$ , given the corpus $D$ , is defined as the total number of documents $|D|$ divided by the number of documents that have the term $w$ , $|D^w|$ .
$$IDF(w | D) = \frac{|D|}{|D^w|}$$ (Eq. 10)
Term-frequency inverse document-frequency (TF-IDF): The weight of a term $w$ is defined as the product of its term-frequency and its inverse document-frequency. When considering only the TF weights, all terms have the same importance across the corpus. By using the IDF weight, words that are more common across all documents in $D$ receive a smaller weight, giving more importance to rare terms in the corpus.
$$TFIDF(w,d | D)=TF \times IDF$$ (Eq. 12)
In a very large corpus, it is common to take the logarithm of the IDF in order to dampen its effect.
$$TFIDF(w,d | D)=TF \times log_{10}(IDF)$$ (Eq. 13)
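As a small illustration of Eqs. 8-13, the log-dampened TF-IDF weight could be computed as follows (tokenized documents are assumed; the function name is ours):

import math
from collections import Counter

def tf_idf(word, document, corpus):
    # TF-IDF weight of `word` in `document` with the log-dampened IDF of Eq. 13;
    # `document` is a list of tokens, `corpus` a list of such documents
    tf = Counter(document)[word]
    docs_with_word = sum(1 for d in corpus if word in d)
    if docs_with_word == 0:
        return 0.0
    return tf * math.log10(len(corpus) / docs_with_word)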
Conventional text classification methods
Multi-variate Bernoulli (MVB) and multinomial naive Bayes (MNB) are two generative models based on the naive Bayes assumption. In other words, they assume that all attributes (e.g., the frequency of each word, the presence or absence of a word) of each text are independent of each other given the context of the class BIBREF11 .
In the MVB model, a document is represented by a vector generated by a bag-of-words with binary weights. In this case, a document can be considered an event, and the presence or the absence of the words to be the attributes of the event. On the other hand, the MNB model represents each document as a vector generated by a bag-of-words with TF weights. Here, the individual word occurrences are considered as events and the document is a collection of word events.
Both these models use the Bayes rule to classify a document. Consider that each document should be classified into one of the classes in $C=\lbrace c_j\rbrace _{j=1}^{|C|}$ . The probability of each class given the document is defined as:
$$P(c_j|d_i) = \frac{P(d_i|c_j)P(c_j)}{P(d_i)}.$$ (Eq. 16)
The prior $P(d_i)$ is the same for all classes, so to determine the class to which $d_i$ belongs to, the following equation can be used:
$$prediction(d_i) = argmax_{c_j}P(d_i|c_j)P(c_j)$$ (Eq. 17)
The prior $P(c_j)$ can be obtained by the following equation:
$$P(c_j) = \frac{1+|D_j|}{|C|+|D|},$$ (Eq. 18)
where $|D_j|$ is the number of documents in class $c_j$ .
As for the posterior $P(d_i|c_j)$ , different calculations are performed for each model. For MVB, it is defined as:
$$P(d_i|c_j) = \prod _{k=1}^{|V|}P(w_k|c_j)^{t_i^k}(1-P(w_k|c_j))^{1-t_i^k},$$ (Eq. 19)
where $w_k$ is the k-th word in the vocabulary $V$ , and $t_i^k$ is the value (0 or 1) of the k-th element of the vector of document $d_i$ .
For the MNB, it is defined as:
$$P(d_i|c_j) = P(|d_i|)|d_i|!\prod _{k=1}^{|V|}\frac{P(w_k|c_j)^{n_i^k}}{n_i^k!},$$ (Eq. 20)
where $|d_i|$ is the number of words in document $d_i$ and $n_i^k$ is the k-th element of the vector of document $d_i$ and it represents how many times word $w_k$ occurs in $d_i$ .
Finally, the posterior $P(w_k|c_j)$ can be obtained by the following equation:
$$P(w_k|c_j) = \frac{1+|D_j^k|}{|C|+|D|},$$ (Eq. 21)
where $|D_j^k|$ is the number of documents in class $c_j$ that contain the word $w_k$ .
In general, MVB tends to perform better than MNB at small vocabulary sizes whereas MNB is more efficient on large vocabularies.
Despite being robust tools for text classification, both these models depend directly on the bag-of-words features and do not naturally work with representations such as word2vec.
Latent semantic analysis (LSA), or latent semantic indexing (LSI), was proposed in BIBREF12 , and it extends the vector space model by using singular value decomposition (SVD) to find a set of underlying latent variables which spans the meaning of texts.
It is built from a term-document matrix, in which each row represents a term, and each column represents a document. This matrix can be built by concatenating the vectors of all documents in a corpus, obtained using the bag-of-words model, that is, $ {X} = [ {v}_1, {v}_2, ..., {v}_{|D|}]$ , where ${v}_i$ is the vector representation obtained using the bag-of-words model.
In this method, the term-document matrix is decomposed using the singular value decomposition,
$${X} = {U\Sigma V}^\top ,$$ (Eq. 23)
where $U$ and $V$ are orthogonal matrices and correspond to the left singular vectors and right singular vectors of $X$ , respectively. $\Sigma $ is a diagonal matrix, and it contains the square roots of the eigenvalues of $X^TX$ and $XX^T$ . LSA finds a low-rank approximation of $X$ by selecting only the $k$ largest singular values and its respective singular vectors,
$${X}_k = {U}_k{\Sigma }_k {V}_k^{\top }.$$ (Eq. 24)
To compare two documents, we project both of them into this lower dimension space and calculate the cosine distance between them. The projection ${\hat{d}}$ of document ${d}$ is obtained by the following equation:
$${\hat{d}} = {\Sigma }_k^{-1} {U}_k^\top {d}.$$ (Eq. 25)
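A compact numpy sketch of the rank-$k$ fit and the projection of Eq. 25 (assuming the term-document matrix fits in memory and using a dense SVD for clarity):

import numpy as np

def lsa_fit(X, k):
    # rank-k approximation of the term-document matrix X (terms x documents)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k], np.diag(s[:k])            # U_k, Sigma_k

def lsa_project(d, U_k, Sigma_k):
    # project a document vector d into the k-dimensional latent space (Eq. 25)
    return np.linalg.inv(Sigma_k) @ U_k.T @ d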
Despite its extensive application on text classification BIBREF13 , BIBREF14 , BIBREF15 , this method was initially proposed for document indexing and, therefore, does not encode any class information when modeling the low-rank approximation. To perform classification, 1-nearest neighbor is usually performed, placing a query document into the class of the nearest training document.
The support vector machine (SVM) was first presented in BIBREF16 and performs the separation between samples of two different classes by projecting them onto a higher-dimensional space. It was first applied to text classification by BIBREF17 and has since been successfully applied in many tasks related to natural language processing BIBREF18 , BIBREF19 .
Consider a training data set $D$ , with $n$ samples
$$D = \lbrace ({x}_i,c_i)|{x}_i\in \mathbb {R}^p, c_i \in \lbrace -1,1\rbrace \rbrace _{i=1}^{n},$$ (Eq. 27)
where $c_i$ represents the class to which ${x}_i$ belongs. Each ${x}_i$ is a $p$ -dimensional vector. The goal is to find the hyperplane that separates the points with $c_i = 1$ from the points with $c_i = -1$ . This hyperplane can be written as the set of points $x$ satisfying:
$${w} \cdot {x} - b = 0,$$ (Eq. 28)
where $\cdot $ denotes the dot product. The vector ${w}$ is perpendicular to the hyperplane. The parameter $\frac{b}{\Vert {w}\Vert }$ determines the offset of the hyperplane from the origin along the normal vector ${w}$ .
We wish to choose ${w}$ and $b$ , so they maximize the distance between the parallel hyperplanes that are as far apart as possible, while still separating the data.
If the training data are linearly separable, we can select two parallel hyperplanes such that there are no points between them and then maximize their distance. In other words, we minimize $\Vert {w}\Vert $ subject to $c_i({w}\cdot {x}_i-b) \ge 1, i=\lbrace 1,2,...,n\rbrace $ . If the training data are not linearly separable, the kernel trick can be applied, where every dot product is replaced by a non-linear kernel function.
Word subspace
All the methods mentioned above utilize BOW features to represent a document. Although this representation is simple and powerful, its main problem lies in disregarding the word semantics within a document, where context and meaning could offer many benefits to the model, such as identification of synonyms.
In our formulation, words are represented as vectors in a real-valued feature vector space $\mathbb {R}^{p}$ , by using word2vec BIBREF8 . Through this representation, it is possible to calculate the distance between two words, where words from similar contexts are represented by vectors close to each other, while words from different contexts are represented as far apart vectors. Also, this representation brings the new concept of arithmetic operations between words, where operations such as addition and subtraction carry meaning (eg., “king”-“man”+“woman”=“queen”) BIBREF20 .
Consider a set of documents which belong to the same context $D_c = \lbrace d_i\rbrace _{i=1}^{|D_c|}$ . Each document $d_i$ is represented by a set of $N_i$ words, $d_i = \lbrace w_k\rbrace _{k=1}^{N_i}$ . By considering that all words from documents of the same context belong to the same distribution, a set of words $W_c = \lbrace w_k\rbrace _{k=1}^{N_c}$ with the words in the context $c$ is obtained.
We then translate these words into word vectors using word2vec, resulting in a set of word vectors $X_c = \lbrace {x}^k_c\rbrace _{k=1}^{N_c} \in \mathbb {R}^p$ . This set of word vectors is modeled into a word subspace, which is a compact, scalable and meaningful representation of the whole set. Such a word subspace is generated by applying PCA to the set of word vectors.
First, we compute an autocorrelation matrix, ${R}_c$ :
$${R}_c = \frac{1}{N_c}\sum _{i=1}^{N_c}{x}^{i}_c{x}_c^{i^{\top }}.$$ (Eq. 29)
The orthonormal basis vectors of $m_c$ -dimensional subspace ${Y}_c$ are obtained as the eigenvectors with the $m_c$ largest eigenvalues of the matrix ${R}_c$ . We represent a subspace ${Y}_c$ by the matrix ${Y}_c \in \mathbb {R}^{p \times m_c}$ , which has the corresponding orthonormal basis vectors as its column vectors.
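A minimal numpy version of this construction, assuming the word vectors of one class are stacked row-wise (the function name is ours):

import numpy as np

def word_subspace(word_vectors, m):
    # word_vectors: (N, p) matrix whose rows are the word2vec vectors of a class
    # returns a (p, m) matrix of orthonormal basis vectors (eigenvectors of the
    # autocorrelation matrix of Eq. 29 with the m largest eigenvalues)
    X = word_vectors.T                          # p x N
    R = X @ X.T / X.shape[1]                    # autocorrelation matrix
    eigvals, eigvecs = np.linalg.eigh(R)        # ascending eigenvalues
    return eigvecs[:, ::-1][:, :m]              # top-m eigenvectors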
Text classification based on word subspace
We formulate our problem as a single label classification problem. Given a set of training documents, which we will refer as corpus, $D = \lbrace d_i\rbrace _{i=1}^{|D|}$ , with known classes $C = \lbrace c_j\rbrace _{j=1}^{|C|}$ , we wish to classify a query document $d_q$ into one of the classes in $C$ .
Text classification based on word subspace can be performed under the framework of mutual subspace method (MSM). This task involves two different stages: A learning stage, where the word subspace for each class is modeled, and a classification stage, where the word subspace for a query is modeled and compared to the word subspaces of the classes.
In the learning stage, it is assumed that all documents of the same class belong to the same context, resulting in a set of words $W_c = \lbrace w_c^k\rbrace _{k=1}^{N_c}$ . This set assumes that each word appears only once in each class. Each set $\lbrace W_c\rbrace _{c=1}^{|C|}$ is then modeled into a word subspace ${Y}_c$ , as explained in Section "Word subspace" . As the number of words in each class may vary largely, the dimension $m_c$ of each class word subspace is not set to the same value.
In the classification stage, for a query document $d_q$ , it is also assumed that each word occurs only once, generating a subspace ${Y}_q$ .
To measure the similarity between a class word subspace ${Y}_c$ and a query word subspace ${Y}_q$ , the canonical angles between the two word subspaces are used BIBREF21 . There are several methods for calculating canonical angles BIBREF22 , BIBREF23 , BIBREF24 , among which the simplest and most practical is the singular value decomposition (SVD). Consider, for example, two subspaces, one from the training data and another from the query, represented as matrices of bases, ${Y}_{c} = [{\Phi }_{1} \ldots {\Phi }_{m_c}] \in \mathbb {R}^{p \times m_c}$ and ${Y}_{q} = [{\Psi }_{1} \ldots {\Psi }_{m_q}] \in \mathbb {R}^{p \times m_q}$ , where ${\Phi }_{i}$ are the bases for ${Y}_c$ and ${\Psi }_{i}$ are the bases for ${Y}_q$ . Let the SVD of ${Y}_c^{\top }{Y}_q \in \mathbb {R}^{m_c \times m_q}$ be ${Y}_c^{\top }{Y}_q = {U \Sigma V}^{\top }$ , where ${\Sigma } = \mathrm {diag}(\kappa _1, \ldots , \kappa _{m_q})$ contains the singular values. The canonical angles $\lbrace \theta _i\rbrace $ can be obtained as $\theta _i = \cos ^{-1}(\kappa _i)$ . The similarity between the two subspaces is measured by $t$ canonical angles as follows:
$$S_{({Y}_c,{Y}_q)}[t] = \frac{1}{t}\sum _{i = 1}^{t} \cos ^{2} \theta _{i},\; 1 \le t \le m_q, \; m_q \le m_c.$$ (Eq. 30)
Fig. 1 shows the modeling and comparison of sets of words by MSM. This method can compare sets of different sizes, and naturally encodes proximity between sets with related words.
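Under these definitions, the similarity of Eq. 30 reduces to a few lines of numpy, assuming both bases are orthonormal and $t \le m_q \le m_c$:

import numpy as np

def msm_similarity(Y_c, Y_q, t):
    # Y_c: (p, m_c) and Y_q: (p, m_q) orthonormal bases; the singular values of
    # Y_c^T Y_q are the cosines of the canonical angles between the subspaces
    singular_values = np.linalg.svd(Y_c.T @ Y_q, compute_uv=False)
    cos2 = np.clip(singular_values[:t], 0.0, 1.0) ** 2
    return cos2.mean()                          # Eq. 30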
Finally, the class with the highest similarity with $d_q$ is assigned as the class of $d_q$ :
$$prediction(d_q) = argmax_c(S_{({Y}_c,{Y}_q)}).$$ (Eq. 32)
TF weighted word subspace
The word subspace formulation presented in Section "Word subspace" is a practical and compact way to represent sets of word vectors, retaining most of the variability of features. However, as seen in the BOW features, the frequency of words is relevant information that can improve the characterization of a text. To incorporate this information into the word subspace modeling, we propose an extension of the word subspace, called the term-frequency (TF) weighted word subspace.
Like the word subspace, the TF weighted word subspace is mathematically defined as a low-dimensional linear subspace in a word vector space with high dimensionality. However, a weighted version of the PCA BIBREF25 , BIBREF26 is utilized to incorporate the information given by the frequencies of words (term-frequencies). This TF weighted word subspace is equivalent to the word subspace if we consider all occurrences of the words.
Consider the set of word vectors $\lbrace {x}_c^k\rbrace _{k=1}^{N_c} \in \mathbb {R}^{p}$ , which represents each word in the context $c$ , and the set of weights $\lbrace \omega _i\rbrace _{i=1}^{N_c}$ , which represent the frequencies of the words in the context $c$ .
We incorporate these frequencies into the subspace calculation by weighting the data matrix ${X}$ as follows:
$${\widetilde{X}}={X}{\Omega }^{1/2},$$ (Eq. 33)
where ${X} \in \mathbb {R}^{p \times N_c}$ is a matrix containing the word vectors $\lbrace {x}_c^k\rbrace _{k=1}^{N_c}$ and ${\Omega }$ is a diagonal matrix containing the weights $\lbrace \omega _i\rbrace _{i=1}^{N_c}$ .
We then perform PCA by solving the SVD of the matrix ${\widetilde{X}}$ :
$${\widetilde{X}}={AMB}^{\top },$$ (Eq. 34)
where the columns of the orthogonal matrices ${A}$ and ${B}$ are, respectively, the left-singular vectors and right-singular vectors of the matrix ${\widetilde{X}}$ , and the diagonal matrix ${M}$ contains singular values of ${\widetilde{X}}$ .
Finally, the orthonormal basis vectors of the $m_c$ -dimensional TF weighted subspace ${W}$ are the column vectors in ${A}$ corresponding to the $m_c$ largest singular values in ${M}$ .
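A sketch of the TF weighted construction of Eqs. 33-34, with the term frequencies supplied as a weight vector (the function name is ours):

import numpy as np

def tf_weighted_word_subspace(word_vectors, frequencies, m):
    # word_vectors: (p, N) matrix of word2vec vectors of one class
    # frequencies:  (N,) term frequencies used as weights
    X_tilde = word_vectors @ np.diag(np.sqrt(frequencies))   # X Omega^{1/2}
    A, M, Bt = np.linalg.svd(X_tilde, full_matrices=False)
    return A[:, :m]   # left-singular vectors of the m largest singular values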
Text classification with TF weighted word subspace can also be performed under the framework of MSM. In this paper, we will refer to MSM with TF weighted word subspace as TF-MSM.
Experimental Evaluation
In this section we describe the experiments performed to demonstrate the validity of our proposed method and its extension. We used the Reuters-8 dataset without stop words from BIBREF27 aiming at single-label classification, which is a preprocessed format of the Reuters-21578. Words in the texts were considered as they appeared, without performing stemming or typo correction. This database has eight different classes with the number of samples varying from 51 to over 3000 documents, as can be seen in Table 1 .
To obtain the vector representation of words, we used a freely available word2vec model, trained by BIBREF8 , on approximately 100 billion words, which encodes the vector representation in $\mathbb {R}^{300}$ of over 3 million words from several different languages. Since we decided to focus on English words only, we filtered these vectors to about 800 thousand words, excluding all words with non-roman characters.
To show the validity of our word subspace representation for text classification and the proposed extension, we divided our experiment section into two parts: The first one aims to verify if sets of word vectors are suitable for subspace representation, and the second one puts our methods in practice in a text classification test, comparing our results with the conventional methods described in Section "Related Work" .
Evaluation of the word subspace representation
In this experiment, we modeled the word vectors from each class in the Reuters-8 database into a word subspace. The primary goal is to visualize how much of the text data can be represented by a lower dimensional subspace.
Subspace representations are very efficient in compactly represent data that is close to a normal distribution. This characteristic is due to the application of the PCA, that is optimal to find the direction with the highest variation within the data.
In PCA, the principal components give the directions of maximum variance, while their corresponding eigenvalues give the variance of the data in each of them. Therefore, by observing the distribution of the eigenvalues computed when performing PCA in the modeling of the subspaces, we can suggest if the data is suitable or not for subspace representation.
For each class, we normalized the eigenvalues by the largest one of the class. Fig. 2 shows the mean of the eigenvalues and the standard deviation among classes. It is possible to see that the first largest eigenvalues retain larger variance than the smallest ones. In fact, looking at the first 150 largest eigenvalues, we can see that they retain, on average, 86.37% of the data variance. Also, by observing the standard deviation, we can understand that the eigenvalues distribution among classes follows the same pattern, that is, most of the variance is in the first dimensions. This plot indicates that text data represented by vectors generated with word2vec is suitable for subspace representation.
Text classification experiment
In this experiment, we performed text classification among the classes in the Reuters-8 database. We compared classification using the word subspace and its weighted extension, based on MSM (to which we refer as MSM and TF-MSM, respectively), with the baselines presented in Section "Related Work" : MVB, MNB, LSA, and SVM. Since none of the baseline methods work with vector set classification, we also compared to a simple baseline for comparing sets of vectors, defined as the average of the similarities between all vector pair combinations of two given sets. For two matrices ${A}$ and ${B}$ , containing the sets of vectors $\lbrace {x}^{i}_a \rbrace _{i = 1}^{N_A}$ and $\lbrace {x}^{i}_b \rbrace _{i = 1}^{N_B}$ , respectively, where $N_A$ and $N_B$ are the numbers of words in each set, the similarity is defined as:
$$Sim_{(A,B)} = \frac{1}{N_A N_B}\sum _{i}^{N_A}\sum _{j}^{N_B}{{x}_a^i}^{\top }{x}_b^j.$$ (Eq. 41)
We refer to this baseline as similarity average (SA). For this method, we only considered one occurrence of each word in each set.
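Since Eq. 41 is simply the mean over all pairwise dot products, the SA baseline can be written in one line of numpy, assuming the word vectors of the two sets are stacked row-wise:

import numpy as np

def similarity_average(A, B):
    # A: (N_A, p) and B: (N_B, p) matrices of word vectors; Eq. 41
    return float((A @ B.T).mean())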
Different features were used, depending on the method. Classification with SA, MSM, and TF-MSM was performed using word2vec features, to which we refer as w2v. For MVB, due to its nature, only bag-of-words features with binary weights were used (binBOW). For the same reason, we only used bag-of-words features with term-frequency weights (tfBOW) with MNB. Classification with LSA and SVM is usually performed using bag-of-words features and, therefore, we tested with binBOW, tfBOW, and with the term-frequency inverse document-frequency weight, tfidfBOW. We also tested them using word2vec vectors. In this case, we considered each word vector from all documents in each class to be a single sample.
To determine the dimensions of the class subspaces and query subspace of MSM and TF-MSM, and the dimension of the approximation performed by LSA, we performed a 10-fold cross validation, wherein each fold, the data were randomly divided into train (60%), validation (20%) and test set (20%).
The results can be seen in Table 2 . The simplest baseline, SA with w2v, achieved an accuracy rate of 78.73%. This result is important because it shows the validity of the word2vec representation, performing better than more elaborate methods based on BOW, such as MVB with binBOW.
LSA with BOW features was almost 10% more accurate than SA; the best results were achieved with a 130-dimensional approximation for binary weights, 50 dimensions for TF weights, and 30 dimensions for TF-IDF weights. SVM with BOW features was about 3% more accurate than LSA, with binary weights leading to the highest accuracy rate.
It is interesting to note that despite the reasonably high accuracy rates achieved using LSA and SVM with BOW features, both methods performed poorly when using w2v features.
Among the baselines, the best method was MNB with tfBOW features, with an accuracy of 91.47%, being the only conventional method to outperform MSM. MSM with w2v had an accuracy rate of 90.62%, with the best results achieved with word subspace dimensions for the training classes ranging from 150 to 181, and for the query ranging from 3 to 217. Incorporating the frequency information in the subspace modeling resulted in higher accuracy, with TF-MSM achieving 92.01%, with dimensions of word subspaces for training classes ranging from 150 to 172, and for the query, ranging from 2 to 109. To confirm that TF-MSM is significantly more accurate than MNB, we performed a t-test to compare their results. It resulted in a p-value of 0.031, which shows that at a 95% significance level, TF-MSM has produced better results.
Discussion
Given the observation of the eigenvalues distribution of word vectors, we could see that word vectors that belong to the same context, i.e., same class, are suitable for subspace representation. Our analysis showed that half of the word vector space dimensions suffice to represent most of the variability of the data in each class of the Reuters-8 database.
The results from the text classification experiment showed that subspace-based methods performed better than the text classification methods discussed in this work. Ultimately, our proposed TF weighted word subspace with MSM surpassed all the other methods. word2vec features are reliable tools to represent the semantic meaning of the words and when treated as sets of word vectors, they are capable of representing the content of texts. However, despite the fact that word vectors can be treated separately, conventional methods such as SVM and LSA may not be suitable for text classification using word vectors.
Among the conventional methods, LSA and SVM achieved about 86% and 89%, respectively, when using bag-of-words features. Interestingly, both methods had better performance when using binary weights. For LSA, we can see that despite the slight differences in the performance, tfidfBOW required approximations with smaller dimensions. SVM had the lowest accuracy rate when using the tfidfBOW features. One possible explanation for this is that TF-IDF weights are useful when rare words and very frequent words exist in the corpus, giving higher weights for rare words and lower weights for common words. Since we removed the stop words, the most frequent words among the training documents were not considered and, therefore, using TF-IDF weights did not improve the results.
Only MNB with tfBOW performed better than MSM. This result may be because tfBOW features encode the word frequencies, while MSM only considers a single occurrence of words. When incorporating the word frequencies with our TF weighted word subspace, we achieved a higher accuracy of 92.01%, performing better than MNB at a significance level of 95%.
Conclusions and Future Work
In this paper, we proposed a new method for text classification, based on the novel concept of word subspace under the MSM framework. We also proposed the term-frequency weighted word subspace which can incorporate the frequency of words directly in the modeling of the subspace by using a weighted version of PCA.
Most conventional text classification methods are based on bag-of-words features, which are very simple to compute and have been shown to produce positive results. However, bag-of-words models are commonly high-dimensional, with a sparse representation that is computationally heavy to model. Also, bag-of-words fails to convey the semantic meaning of words inside a text. Due to these problems, neural networks started to be applied to generate vector representations of words. Although these representations can encode the semantic meaning of words, conventional methods do not work well when considering word vectors separately.
In our work, we focused on the word2vec representation, which can embed the semantic structure of words, rendering vector angles a useful metric for meaningful similarities between words. Our experiments showed that our word subspace modeling along with the MSM outperforms most of the conventional methods. Ultimately, our TF weighted subspace formulation resulted in significantly higher accuracy when compared to all conventional text classification methods discussed in this work. It is important to note that our method does not consider the order of the words in a text, resulting in a loss of context information. As future work, we wish to extend our word subspace concept in two main directions. First, we seek to encode word order, which may enrich the representation of context information. Second, we wish to model dynamic context change, enabling the analysis of large documents by maintaining a long- and short-term memory that interprets information using cues from different parts of a text.
Acknowledgment
This work is supported by JSPS KAKENHI Grant Number JP16H02842 and the Japanese Ministry of Education, Culture, Sports, Science, and Technology (MEXT) scholarship.
|
What can word subspace represent?
|
Word vectors, usually in the context of others within the same class
| 5,151
|
qasper
|
8k
|
Introduction
Wikipedia is the largest source of open and collaboratively curated knowledge in the world. Introduced in 2001, it has evolved into a reference work with around 5m pages for the English Wikipedia alone. In addition, entities and event pages are updated quickly via collaborative editing and all edits are encouraged to include source citations, creating a knowledge base which aims at being both timely as well as authoritative. As a result, it has become the preferred source of information consumption about entities and events. Moreover, this knowledge is harvested and utilized in building knowledge bases like YAGO BIBREF0 and DBpedia BIBREF1 , and used in applications like text categorization BIBREF2 , entity disambiguation BIBREF3 , entity ranking BIBREF4 and distant supervision BIBREF5 , BIBREF6 .
However, not all Wikipedia pages referring to entities (entity pages) are comprehensive: relevant information can either be missing or added with a delay. Consider the city of New Orleans and the state of Odisha which were severely affected by cyclones Hurricane Katrina and Odisha Cyclone, respectively. While Katrina finds extensive mention in the entity page for New Orleans, Odisha Cyclone which has 5 times more human casualties (cf. Figure FIGREF2 ) is not mentioned in the page for Odisha. Arguably Katrina and New Orleans are more popular entities, but Odisha Cyclone was also reported extensively in national and international news outlets. This highlights the lack of important facts in trunk and long-tail entity pages, even in the presence of relevant sources. In addition, previous studies have shown that there is an inherent delay or lag when facts are added to entity pages BIBREF7 .
To remedy these problems, it is important to identify information sources that contain novel and salient facts for a given entity page. However, not all information sources are equal. The online presence of major news outlets is an authoritative source due to active editorial control, and their articles are also a timely container of facts. In addition, their use is in line with current Wikipedia editing practice: as shown in BIBREF7 , almost 20% of current citations in all entity pages are news articles. We therefore propose news suggestion as a novel task that enhances entity pages and reduces delay while keeping them authoritative.
Existing efforts to populate Wikipedia BIBREF8 start from an entity page and then generate candidate documents about this entity using an external search engine (and then post-process them). However, such an approach lacks (a) reproducibility, since rankings vary over time with an obvious bias towards recent news, and (b) maintainability, since document acquisition for each entity has to be performed periodically. To this effect, our news suggestion considers a news article as input, and determines if it is valuable for Wikipedia. Specifically, given an input news article INLINEFORM0 and a state of Wikipedia, the news suggestion problem identifies the entities mentioned in INLINEFORM1 whose entity pages can improve upon suggesting INLINEFORM2 . Most of the works on knowledge base acceleration BIBREF9 , BIBREF10 , BIBREF11 , or Wikipedia page generation BIBREF8 rely on high quality input sources which are then utilized to extract textual facts for Wikipedia page population. In this work, we do not suggest snippets or paraphrases but rather entire articles which have a high potential importance for entity pages. These suggested news articles could consequently be used for extraction, summarization or population, either manually or automatically – all of which rely on high quality and relevant input sources.
We identify four properties of good news recommendations: salience, relative authority, novelty and placement. First, we need to identify the most salient entities in a news article. This is done to avoid pollution of entity pages with only marginally related news. Second, we need to determine whether the news is important to the entity as only the most relevant news should be added to a precise reference work. To do this, we compute the relative authority of all entities in the news article: we call an entity more authoritative than another if it is more popular or noteworthy in the real world. Entities with very high authority have many news items associated with them and only the most relevant of these should be included in Wikipedia whereas for entities of lower authority the threshold for inclusion of a news article will be lower. Third, a good recommendation should be able to identify novel news by minimizing redundancy coming from multiple news articles. Finally, addition of facts is facilitated if the recommendations are fine-grained, i.e., recommendations are made on the section level rather than the page level (placement).
Approach and Contributions. We propose a two-stage news suggestion approach to entity pages. In the first stage, we determine whether a news article should be suggested for an entity, based on the entity's salience in the news article, its relative authority and the novelty of the article to the entity page. The second stage takes into account the class of the entity for which the news is suggested and constructs section templates from entities of the same class. The generation of such templates has the advantage of suggesting and expanding entity pages that do not have a complete section structure in Wikipedia, explicitly addressing long-tail and trunk entities. Afterwards, based on the constructed template our method determines the best fit for the news article with one of the sections.
We evaluate the proposed approach on a news corpus consisting of 351,982 articles crawled from the news external references in Wikipedia from 73,734 entity pages. Given the Wikipedia snapshot at a given year (in our case [2009-2014]), we suggest news articles that might be cited in the coming years. The existing news references in the entity pages along with their reference date act as our ground-truth to evaluate our approach. In summary, we make the following contributions.
Related Work
As we suggest a new problem there is no current work addressing exactly the same task. However, our task has similarities to Wikipedia page generation and knowledge base acceleration. In addition, we take inspiration from Natural Language Processing (NLP) methods for salience detection.
Wikipedia Page Generation is the problem of populating Wikipedia pages with content coming from external sources. Sauper and Barzilay BIBREF8 propose an approach for automatically generating whole entity pages for specific entity classes. The approach is trained on already-populated entity pages of a given class (e.g. `Diseases') by learning templates about the entity page structure (e.g. diseases have a treatment section). For a new entity page, first, they extract documents via Web search using the entity title and the section title as a query, for example `Lung Cancer'+`Treatment'. As already discussed in the introduction, this has problems with reproducibility and maintainability. However, their main focus is on identifying the best paragraphs extracted from the collected documents. They rank the paragraphs via an optimized supervised perceptron model for finding the most representative paragraph that is the least similar to paragraphs in other sections. This paragraph is then included in the newly generated entity page. Taneva and Weikum BIBREF12 propose an approach that constructs short summaries for the long tail. The summaries are called `gems' and the size of a `gem' can be user defined. They focus on generating summaries that are novel and diverse. However, they do not consider any structure of entities, which is present in Wikipedia.
In contrast to BIBREF8 and BIBREF12 , we actually focus on suggesting entire documents to Wikipedia entity pages. These are authoritative documents (news), which are highly relevant for the entity, novel for the entity and in which the entity is salient. Whereas relevance in Sauper and Barzilay is implicitly computed by web page ranking we solve that problem by looking at relative authority and salience of an entity, using the news article and entity page only. As Sauper and Barzilay concentrate on empty entity pages, the problem of novelty of their content is not an issue in their work whereas it is in our case which focuses more on updating entities. Updating entities will be more and more important the bigger an existing reference work is. Both the approaches in BIBREF8 and BIBREF12 (finding paragraphs and summarization) could then be used to process the documents we suggest further. Our concentration on news is also novel.
Knowledge Base Acceleration. In this task, given specific information extraction templates, a given corpus is analyzed in order to find worthwhile mentions of an entity or snippets that match the templates. Balog BIBREF9 , BIBREF10 recommend news citations for an entity. Prior to that, the news articles are classified for their appropriateness for an entity, where, as features for the classification task, they use entity, document, entity-document and temporal features. The best performing features are those that measure similarity between an entity and the news document. West et al. BIBREF13 consider the problem of knowledge base completion through question answering and complete missing facts in Freebase based on templates, i.e. Frank_Zappa bornIn Baltimore, Maryland.
In contrast, we do not extract facts for pre-defined templates but rather suggest news articles based on their relevance to an entity. In cases of long-tail entities, we can suggest to add a novel section through our abstraction and generation of section templates at entity class level.
Entity Salience. Determining which entities are prominent or salient in a given text has a long history in NLP, sparked by the linguistic theory of Centering BIBREF14 . Salience has been used in pronoun and co-reference resolution BIBREF15 , or to predict which entities will be included in an abstract of an article BIBREF11 . Frequent features to measure salience include the frequency of an entity in a document, positioning of an entity, grammatical function or internal entity structure (POS tags, head nouns etc.). These approaches are not currently aimed at knowledge base generation or Wikipedia coverage extension but we postulate that an entity's salience in a news article is a prerequisite to the news article being relevant enough to be included in an entity page. We therefore use the salience features in BIBREF11 as part of our model. However, these features are document-internal — we will show that they are not sufficient to predict news inclusion into an entity page and add features of entity authority, news authority and novelty that measure the relations between several entities, between entity and news article as well as between several competing news articles.
Terminology and Problem Definition
We are interested in named entities mentioned in documents. An entity INLINEFORM0 can be identified by a canonical name, and can be mentioned differently in text via different surface forms. We canonicalize these mentions to entity pages in Wikipedia, a method typically known as entity linking. We denote the set of canonicalized entities extracted and linked from a news article INLINEFORM1 as INLINEFORM2 . For example, in Figure FIGREF7 , entities are canonicalized into Wikipedia entity pages (e.g. Odisha is canonicalized to the corresponding article). For a collection of news articles INLINEFORM3 , we further denote the resulting set of entities by INLINEFORM4 .
Information in an entity page is organized into sections and evolves with time as more content is added. We refer to the state of Wikipedia at a time INLINEFORM0 as INLINEFORM1 and the set of sections for an entity page INLINEFORM2 as its entity profile INLINEFORM3 . Unlike news articles, text in Wikipedia could be explicitly linked to entity pages through anchors. The set of entities explicitly referred in text from section INLINEFORM4 is defined as INLINEFORM5 . Furthermore, Wikipedia induces a category structure over its entities, which is exploited by knowledge bases like YAGO (e.g. Barack_Obama isA Person). Consequently, each entity page belongs to one or more entity categories or classes INLINEFORM6 . Now we can define our news suggestion problem below:
Definition 1 (News Suggestion Problem) Given a set of news articles INLINEFORM0 and set of Wikipedia entity pages INLINEFORM1 (from INLINEFORM2 ) we intend to suggest a news article INLINEFORM3 published at time INLINEFORM4 to entity page INLINEFORM5 and additionally to the most relevant section for the entity page INLINEFORM6 .
Approach Overview
We approach the news suggestion problem by decomposing it into two tasks:
AEP: Article–Entity placement
ASP: Article–Section placement
In this first step, for a given entity-news pair INLINEFORM0 , we determine whether the given news article INLINEFORM1 should be suggested (we will refer to this as `relevant') to entity INLINEFORM2 . To generate such INLINEFORM3 pairs, we perform the entity linking process, INLINEFORM4 , for INLINEFORM5 .
The article–entity placement task (described in detail in Section SECREF16 ) for a pair INLINEFORM0 outputs a binary label (either `non-relevant' or `relevant') and is formalized in Equation EQREF14 . DISPLAYFORM0
In the second step, we take into account all `relevant' pairs INLINEFORM0 and find the correct section for article INLINEFORM1 in entity INLINEFORM2 , respectively its profile INLINEFORM3 (see Section SECREF30 ). The article–section placement task, determines the correct section for the triple INLINEFORM4 , and is formalized in Equation EQREF15 . DISPLAYFORM0
In the subsequent sections we describe in details how we approach the two tasks for suggesting news articles to entity pages.
News Article Suggestion
In this section, we provide an overview of the news suggestion approach to Wikipedia entity pages (see Figure FIGREF7 ). The approach is split into two tasks: (i) article-entity (AEP) and (ii) article-section (ASP) placement. For a Wikipedia snapshot INLINEFORM0 and a news corpus INLINEFORM1 , we first determine which news articles should be suggested to an entity INLINEFORM2 . We will denote our approach for AEP by INLINEFORM3 . Finally, we determine the most appropriate section for the ASP task and we denote our approach with INLINEFORM4 .
In the following, we describe the process of learning the functions INLINEFORM0 and INLINEFORM1 . We introduce features for the learning process, which encode information regarding the entity salience, relative authority and novelty in the case of AEP task. For the ASP task, we measure the overall fit of an article to the entity sections, with the entity being an input from AEP task. Additionally, considering that the entity profiles INLINEFORM2 are incomplete, in the case of a missing section we suggest and expand the entity profiles based on section templates generated from entities of the same class INLINEFORM3 (see Section UID34 ).
Article–Entity Placement
In this step we learn the function INLINEFORM0 to correctly determine whether INLINEFORM1 should be suggested for INLINEFORM2 , basically a binary classification model (0=`non-relevant' and 1=`relevant'). Note that we are mainly interested in finding the relevant pairs in this task. For every news article, the number of disambiguated entities is around 30 (but INLINEFORM3 is suggested for only two of them on average). Therefore, the distribution of `non-relevant' and `relevant' pairs is skewed towards the former, and by simply choosing the `non-relevant' label we can achieve a high accuracy for INLINEFORM4 . Finding the relevant pairs is therefore a considerable challenge.
An article INLINEFORM0 is suggested to INLINEFORM1 by our function INLINEFORM2 if it fulfills the following properties. The entity INLINEFORM3 is salient in INLINEFORM4 (a central concept), therefore ensuring that INLINEFORM5 is about INLINEFORM6 and that INLINEFORM7 is important for INLINEFORM8 . Next, given the fact there might be many articles in which INLINEFORM9 is salient, we also look at the reverse property, namely whether INLINEFORM10 is important for INLINEFORM11 . We do this by comparing the authority of INLINEFORM12 (which is a measure of popularity of an entity, such as its frequency of mention in a whole corpus) with the authority of its co-occurring entities in INLINEFORM13 , leading to a feature we call relative authority. The intuition is that for an entity that has overall lower authority than its co-occurring entities, a news article is more easily of importance. Finally, if the article we are about to suggest is already covered in the entity profile INLINEFORM14 , we do not wish to suggest redundant information, hence the novelty. Therefore, the learning objective of INLINEFORM15 should fulfill the following properties. Table TABREF21 shows a summary of the computed features for INLINEFORM16 .
Salience: entity INLINEFORM0 should be a salient entity in news article INLINEFORM1
Relative Authority: the set of entities INLINEFORM0 with which INLINEFORM1 co-occurs should have higher authority than INLINEFORM2 , making INLINEFORM3 important for INLINEFORM4
Novelty: news article INLINEFORM0 should provide novel information for entity INLINEFORM1 taking into account its profile INLINEFORM2
Baseline Features. As discussed in Section SECREF2 , a variety of features that measure salience of an entity in text are available from the NLP community. We reimplemented the ones in Dunietz and Gillick BIBREF11 . This includes a variety of features, e.g. positional features, occurrence frequency and the internal POS structure of the entity and the sentence it occurs in. Table 2 in BIBREF11 gives details.
Relative Entity Frequency. Although frequency of mention and positional features play some role in the baseline features, their interaction is not modeled by a single feature, nor do the positional features encode more than sentence position. We therefore suggest a novel feature called relative entity frequency, INLINEFORM0 , that has three properties: (i) it rewards entities for occurring throughout the text instead of only in some parts of the text, measured by the number of paragraphs the entity occurs in; (ii) it rewards entities that occur more frequently in the opening paragraphs of an article, as we model INLINEFORM1 as an exponential decay function whose decay corresponds to the positional index of the news paragraph — this is inspired by the news-specific discourse structure that tends to give short summaries of the most important facts and entities in the opening paragraphs; (iii) it compares entity frequency to the frequency of its co-occurring mentions, using the weight of an entity appearing in a specific paragraph, normalized by the sum of the frequencies of other entities in INLINEFORM2 . DISPLAYFORM0
where, INLINEFORM0 represents a news paragraph from INLINEFORM1 , and with INLINEFORM2 we indicate the set of all paragraphs in INLINEFORM3 . The frequency of INLINEFORM4 in a paragraph INLINEFORM5 is denoted by INLINEFORM6 . With INLINEFORM7 and INLINEFORM8 we indicate the number of paragraphs in which entity INLINEFORM9 occurs, and the total number of paragraphs, respectively.
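The paper's exact formula is given by the (omitted) equation above; the sketch below is only one plausible reading of the three properties, with illustrative variable names, where paragraphs are represented as lists of linked entity mentions in document order.

```python
import math
from collections import Counter

def relative_entity_frequency(entity, paragraphs):
    """paragraphs: list of lists of entity mentions, in document order (illustrative)."""
    total_pars = len(paragraphs)
    pars_with_entity = sum(1 for p in paragraphs if entity in p)
    if total_pars == 0 or pars_with_entity == 0:
        return 0.0
    score = 0.0
    for idx, par in enumerate(paragraphs):
        counts = Counter(par)
        total = sum(counts.values())
        if total == 0 or counts[entity] == 0:
            continue
        decay = math.exp(-idx)                      # favour opening paragraphs
        score += decay * counts[entity] / total     # normalise by co-occurring entities
    return (pars_with_entity / total_pars) * score  # reward spread across paragraphs
```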
Relative Authority. In this case, we consider the comparative relevance of the news article to the different entities occurring in it. As an example, let us consider the meeting of the Sudanese bishop Elias Taban with Hillary Clinton. Both entities are salient for the meeting. However, in Taban's Wikipedia page, this meeting is discussed prominently with a corresponding news reference, whereas in Hillary Clinton's Wikipedia page it is not reported at all. We believe this is not just an omission in Clinton's page but mirrors the fact that for the lesser known Taban the meeting is big news whereas for the more famous Clinton these kind of meetings are a regular occurrence, not all of which can be reported in what is supposed to be a selection of the most important events for her. Therefore, if two entities co-occur, the news is more relevant for the entity with the lower a priori authority.
The a priori authority of an entity (denoted by INLINEFORM0 ) can be measured in several ways. We opt for two approaches: (i) probability of entity INLINEFORM1 occurring in the corpus INLINEFORM2 , and (ii) authority assessed through centrality measures like PageRank BIBREF16 . For the second case we construct the graph INLINEFORM3 consisting of entities in INLINEFORM4 and news articles in INLINEFORM5 as vertices. The edges are established between INLINEFORM6 and entities in INLINEFORM7 , that is INLINEFORM8 , and the out-links from INLINEFORM9 , that is INLINEFORM10 (arrows present the edge direction).
Starting from a priori authority, we proceed to relative authority by comparing the a priori authority of co-occurring entities in INLINEFORM0 . We define the relative authority of INLINEFORM1 as the proportion of co-occurring entities INLINEFORM2 that have a higher a priori authority than INLINEFORM3 (see Equation EQREF28 ). DISPLAYFORM0
As we might run the danger of not suggesting any news articles for entities with very high a priori authority (such as Clinton) due to the strict inequality constraint, we can relax the constraint such that the authority of co-occurring entities is above a certain threshold.
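A sketch of this feature is shown below, assuming a precomputed dictionary of a priori authority scores (corpus frequency or PageRank); the `margin` parameter is an illustrative way of encoding the relaxed threshold mentioned above.

```python
def relative_authority(entity, cooccurring_entities, prior_authority, margin=0.0):
    """prior_authority: dict mapping entity -> a priori authority score."""
    others = [o for o in cooccurring_entities if o != entity]
    if not others:
        return 0.0
    higher = sum(
        1 for o in others
        if prior_authority.get(o, 0.0) > prior_authority.get(entity, 0.0) - margin
    )
    return higher / len(others)
```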
News Domain Authority. The news domain authority addresses two main aspects. Firstly, if bundled together with the relative authority feature, we can ensure that dependent on the entity authority, we suggest news from authoritative sources, hence ensuring the quality of suggested articles. The second aspect is in a news streaming scenario where multiple news domains report the same event — ideally only articles coming from authoritative sources would fulfill the conditions for the news suggestion task.
The news domain authority is computed based on the number of news references in Wikipedia coming from a particular news domain INLINEFORM0 . This represents a simple prior that a news article INLINEFORM1 is from domain INLINEFORM2 in corpus INLINEFORM3 . We extract the domains by taking the base URLs from the news article URLs.
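Computing this prior only needs the base URLs of the existing references; a minimal sketch with hypothetical inputs follows.

```python
from collections import Counter
from urllib.parse import urlparse

def domain_authority(reference_urls):
    """reference_urls: URLs of news references already cited in Wikipedia."""
    domains = Counter(urlparse(url).netloc for url in reference_urls)
    total = sum(domains.values())
    return {domain: count / total for domain, count in domains.items()}

# prior = domain_authority(existing_reference_urls)           # hypothetical input
# p_article = prior.get(urlparse(article_url).netloc, 0.0)    # prior for a new article
```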
Novelty. An important feature when suggesting an article INLINEFORM0 to an entity INLINEFORM1 is the novelty of INLINEFORM2 w.r.t the already existing entity profile INLINEFORM3 . Studies BIBREF17 have shown that on collections comparable to ours (TREC GOV2) the number of duplicates can go up to INLINEFORM4 . This figure is likely higher for major events concerning highly authoritative entities on which all news media will report.
Given an entity INLINEFORM0 and the already added news references INLINEFORM1 up to year INLINEFORM2 , the novelty of INLINEFORM3 at year INLINEFORM4 is measured by the KL divergence between the language model of INLINEFORM5 and articles in INLINEFORM6 . We combine this measure with the entity overlap of INLINEFORM7 and INLINEFORM8 . The novelty value of INLINEFORM9 is given by the minimal divergence value. Low scores indicate low novelty for the entity profile INLINEFORM10 .
N(n|e) = \min_{n' \in N_{t-1}} \lbrace \lambda \, D_{KL}(\theta(n') \,\Vert\, \theta(n)) + (1-\lambda)\, \mathrm{jaccard}(\gamma(n'), \gamma(n)) \rbrace
where D_{KL} is the KL divergence of the language models \theta(n') and \theta(n), \lambda is the mixing weight (0 \le \lambda \le 1) between the language model divergence and the entity overlap of n' and n, and \gamma(\cdot) denotes the set of entities linked in an article.
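A sketch of this novelty score is given below, assuming add-alpha smoothed unigram language models and Jaccard overlap of linked entity sets; the exact mixing of the two terms follows the equation above and should be treated as an approximation of the authors' implementation.

```python
import math
from collections import Counter

def unigram_lm(tokens, vocab, alpha=0.1):
    counts = Counter(tokens)
    denom = len(tokens) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / denom for w in vocab}

def kl_divergence(p, q):
    return sum(p[w] * math.log(p[w] / q[w]) for w in p)

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def novelty(candidate_tokens, candidate_entities, cited_articles, lam=0.5):
    """cited_articles: list of (tokens, entities) for news already cited by the entity."""
    vocab = set(candidate_tokens)
    for tokens, _ in cited_articles:
        vocab |= set(tokens)
    q = unigram_lm(candidate_tokens, vocab)
    scores = [
        lam * kl_divergence(unigram_lm(tokens, vocab), q)
        + (1 - lam) * jaccard(entities, candidate_entities)
        for tokens, entities in cited_articles
    ]
    return min(scores) if scores else float("inf")  # low values = low novelty
```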
Here we introduce the evaluation setup and analyze the results for the article–entity (AEP) placement task. We only report the evaluation metrics for the `relevant' news-entity pairs. A detailed explanation on why we focus on the `relevant' pairs is provided in Section SECREF16 .
Baselines. We consider the following baselines for this task.
B1. The first baseline uses only the salience-based features by Dunietz and Gillick BIBREF11 .
B2. The second baseline assigns the value relevant to a pair INLINEFORM0 , if and only if INLINEFORM1 appears in the title of INLINEFORM2 .
Learning Models. We use Random Forests (RF) BIBREF23 . We learn the RF on all computed features in Table TABREF21 . RF builds an ensemble of decision trees, each trained on a random subset of the features, so that the trees together act as an ensemble classifier. For each ensemble, the margin function measures the average margin by which the correct class is predicted over any other class; the higher the margin score, the more robust the model.
Metrics. We compute precision P, recall R and F1 score for the relevant class. For example, precision is the number of news-entity pairs we correctly labeled as relevant compared to our ground truth divided by the number of all news-entity pairs we labeled as relevant.
The following results measure the effectiveness of our approach in three main aspects: (i) overall performance of INLINEFORM0 and comparison to baselines, (ii) robustness across the years, and (iii) optimal model for the AEP placement task.
Performance. Figure FIGREF55 shows the results for the years 2009 and 2013, where we optimized the learning objective with instances from year INLINEFORM0 and evaluate on the years INLINEFORM1 (see Section SECREF46 ). The results show the precision–recall curve. The red curve shows baseline B1 BIBREF11 , and the blue one shows the performance of INLINEFORM2 . The curve shows for varying confidence scores (high to low) the precision on labeling the pair INLINEFORM3 as `relevant'. In addition, at each confidence score we can compute the corresponding recall for the `relevant' label. For high confidence scores on labeling the news-entity pairs, the baseline B1 achieves on average a precision score of P=0.50, while INLINEFORM4 has P=0.93. We note that with the drop in the confidence score the corresponding precision and recall values drop too, and the overall F1 score for B1 is around F1=0.2, in contrast we achieve an average score of F1=0.67.
It is evident from Figure FIGREF55 that for the years 2009 and 2013, INLINEFORM0 significantly outperforms the baseline B1. We measure the significance through the t-test statistic and get a p-value of INLINEFORM1 . The improvement we achieve over B1 in absolute numbers is INLINEFORM2 P=+0.5 in terms of precision for the years between 2009 and 2014, with a similar improvement in terms of F1 score. The improvement for recall is INLINEFORM3 R=+0.4. The relative improvement over B1 for P and F1 is almost 1.8 times, while for recall we are 3.5 times better. In Table TABREF58 we show the overall scores for the evaluation metrics for B1 and INLINEFORM4 . Finally, B2 achieves much poorer performance, with average scores of P=0.21, R=0.20 and F1=0.21.
Robustness. In Table TABREF58 , we show the overall performance for the years between 2009 and 2013. An interesting observation we make is that we have a very robust performance and the results are stable across the years. If we consider the experimental setup, where for year INLINEFORM0 we optimize the learning objective with only 74k training instances and evaluate on the rest of the instances, it achieves a very good performance. We predict with F1=0.68 the remaining 469k instances for the years INLINEFORM1 .
The results are particularly promising considering the fact that the distribution between our two classes is highly skewed. On average the number of `relevant' pairs account for only around INLINEFORM0 of all pairs. A good indicator to support such a statement is the kappa (denoted by INLINEFORM1 ) statistic. INLINEFORM2 measures agreement between the algorithm and the gold standard on both labels while correcting for chance agreement (often expected due to extreme distributions). The INLINEFORM3 scores for B1 across the years is on average INLINEFORM4 , while for INLINEFORM5 we achieve a score of INLINEFORM6 (the maximum score for INLINEFORM7 is 1).
In Figure FIGREF60 we show the impact of the individual feature groups that contribute to the superior performance in comparison to the baselines. Relative entity frequency from the salience feature group models the entity salience as an exponentially decaying function based on the positional index of the paragraph where the entity appears. The performance of INLINEFORM0 with only the relative entity frequency feature from the salience group is close to that of all the features combined. The authority and novelty features account for a further improvement in terms of precision, adding roughly a 7%-10% increase. Moreover, even when these two feature groups are considered separately, each significantly outperforms the baseline B1.
Article–Section Placement
We model the ASP placement task as a successor of the AEP task. For all the `relevant' news entity pairs, the task is to determine the correct entity section. Each section in a Wikipedia entity page represents a different topic. For example, Barack Obama has the sections `Early Life', `Presidency', `Family and Personal Life' etc. However, many entity pages have an incomplete section structure. Incomplete or missing sections are due to two Wikipedia properties. First, long-tail entities miss information and sections due to their lack of popularity. Second, for all entities whether popular or not, certain sections might occur for the first time due to real world developments. As an example, the entity Germanwings did not have an `Accidents' section before this year's disaster, which was the first in the history of the airline.
Even if sections are missing for certain entities, similar sections usually occur in other entities of the same class (e.g. other airlines had disasters and therefore their pages have an accidents section). We exploit such homogeneity of section structure and construct templates that we use to expand entity profiles. The learning objective for INLINEFORM0 takes into account the following properties:
Section-templates: account for incomplete section structure for an entity profile INLINEFORM0 by constructing section templates INLINEFORM1 from an entity class INLINEFORM2
Overall fit: measures the overall fit of a news article to sections in the section templates INLINEFORM0
Given the fact that entity profiles are often incomplete, we construct section templates for every entity class. We group entities based on their class INLINEFORM0 and construct section templates INLINEFORM1 . For different entity classes, e.g. Person and Location, the section structure and the information represented in those section varies heavily. Therefore, the section templates are with respect to the individual classes in our experimental setup (see Figure FIGREF42 ). DISPLAYFORM0
Generating section templates has two main advantages. Firstly, by considering class-based profiles, we can overcome the problem of incomplete individual entity profiles and thereby are able to suggest news articles to sections that do not yet exist in a specific entity INLINEFORM0 . The second advantage is that we are able to canonicalize the sections, i.e. `Early Life' and `Early Life and Childhood' would be treated similarly.
To generate the section template INLINEFORM0 , we extract all sections from entities of a given type INLINEFORM1 at year INLINEFORM2 . Next, we cluster the entity sections, based on an extended version of k–means clustering BIBREF18 , namely x–means clustering introduced in Pelleg et al. which estimates the number of clusters efficiently BIBREF19 . As a similarity metric we use the cosine similarity computed based on the tf–idf models of the sections. Using the x–means algorithm we overcome the requirement to provide the number of clusters k beforehand. x–means extends the k–means algorithm, such that a user only specifies a range [ INLINEFORM3 , INLINEFORM4 ] that the number of clusters may reasonably lie in.
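Since x–means is not available in common toolkits such as scikit-learn, the sketch below approximates the template construction step with tf–idf vectors and k-means, selecting the cluster count from a user-supplied range via silhouette score; this stand-in is an assumption, not the paper's exact procedure.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import silhouette_score

def build_section_template(section_texts, k_range=(2, 20)):
    """section_texts: raw texts of all sections from entities of one class."""
    X = TfidfVectorizer(stop_words="english").fit_transform(section_texts)
    best_k, best_score, best_labels = None, -1.0, None
    for k in range(k_range[0], min(k_range[1], X.shape[0] - 1) + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        score = silhouette_score(X, labels, metric="cosine")
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    return best_k, best_labels   # clusters serve as canonicalized template sections
```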
The learning objective of INLINEFORM0 is to determine the overall fit of a news article INLINEFORM1 to one of the sections in a given section template INLINEFORM2 . The template is pre-determined by the class of the entity for which the news is suggested as relevant by INLINEFORM3 . In all cases, we measure how well INLINEFORM4 fits each of the sections INLINEFORM5 as well as the specific entity section INLINEFORM6 . The section profiles in INLINEFORM7 represent the aggregated entity profiles from all entities of class INLINEFORM8 at year INLINEFORM9 .
To learn INLINEFORM0 we rely on a variety of features that consider several similarity aspects as shown in Table TABREF31 . For the sake of simplicity we do not make the distinction in Table TABREF31 between the individual entity section and class-based section similarities, INLINEFORM1 and INLINEFORM2 , respectively. Bear in mind that an entity section INLINEFORM3 might be present at year INLINEFORM4 but not at year INLINEFORM5 (see for more details the discussion on entity profile expansion in Section UID69 ).
Topic. We use topic similarities to ensure (i) that the content of INLINEFORM0 fits topic-wise with a specific section text and (ii) that it has a similar topic to previously referred news articles in that section. In a pre-processing stage we compute the topic models for the news articles, entity sections INLINEFORM1 and the aggregated class-based sections in INLINEFORM2 . The topic models are computed using LDA BIBREF20 . We only computed a single topic per article/section as we are only interested in topic term overlaps between article and sections. We distinguish two main features: the first feature measures the overlap of topic terms between INLINEFORM3 and the entity section INLINEFORM4 and INLINEFORM5 , and the second feature measures the overlap of the topic model of INLINEFORM6 against referred news articles in INLINEFORM7 at time INLINEFORM8 .
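As a rough illustration of the topic-overlap features, the sketch below fits a single-topic LDA model per text (mirroring the single topic per article/section mentioned above) and compares the resulting top topic terms; the parameter values and the per-text fitting are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def top_topic_terms(text, n_terms=20):
    vec = CountVectorizer(stop_words="english")
    counts = vec.fit_transform([text])
    lda = LatentDirichletAllocation(n_components=1, random_state=0).fit(counts)
    terms = np.array(vec.get_feature_names_out())
    order = lda.components_[0].argsort()[::-1][:n_terms]
    return set(terms[order])

def topic_overlap(article_text, section_text):
    a, s = top_topic_terms(article_text), top_topic_terms(section_text)
    return len(a & s) / len(a | s) if (a | s) else 0.0
```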
Syntactic. These features represent a mechanism for conveying the importance of a specific text snippet, solely based on the frequency of specific POS tags (i.e. NNP, CD etc.), as commonly used in text summarization tasks. Following the same intuition as in BIBREF8 , we weigh the importance of articles by the count of specific POS tags. We expect that for different sections, the importance of POS tags will vary. We measure the similarity of POS tags in a news article against the section text. Additionally, we consider bi-gram and tri-gram POS tag overlap. This exploits similarity in syntactical patterns between the news and section text.
Lexical. As lexical features, we measure the similarity of INLINEFORM0 against the entity section text INLINEFORM1 and the aggregate section text INLINEFORM2 . Further, we distinguish between the overall similarity of INLINEFORM3 and that of the different news paragraphs ( INLINEFORM4 which denotes the paragraphs of INLINEFORM5 up to the 5th paragraph). A higher similarity on the first paragraphs represents a more confident indicator that INLINEFORM6 should be suggested to a specific section INLINEFORM7 . We measure the similarity based on two metrics: (i) the KL-divergence between the computed language models and (ii) cosine similarity of the corresponding paragraph text INLINEFORM8 and section text.
Entity-based. Another feature set we consider is the overlap of named entities and their corresponding entity classes. For different entity sections, we expect to find a particular set of entity classes that will correlate with the section, e.g. `Early Life' contains mostly entities related to family, school, universities etc.
Frequency. Finally, we gather statistics about the number of entities, paragraphs, news article length, top– INLINEFORM0 entities and entity classes, and the frequency of different POS tags. Here we try to capture patterns of articles that are usually cited in specific sections.
Evaluation Plan
In this section we outline the evaluation plan to verify the effectiveness of our learning approaches. To evaluate the news suggestion problem we are faced with two challenges.
What comprises the ground truth for such a task?
How do we construct training and test splits, given that entity pages consist of text added at different points in time?
Consider the ground truth challenge. Evaluating whether an arbitrary news article should be included in Wikipedia is both subjective and difficult for a human if she is not an expert. An invasive approach, proposed by Sauper and Barzilay BIBREF8 , adds content directly to Wikipedia and expects the editors or other users to redact irrelevant content over a period of time. The limitation of such an evaluation technique is that content added to long-tail entities might not be evaluated by informed users or editors in the experiment time frame, and it is hard to estimate how much time the added content should be left on the entity page. A more non-invasive approach could involve crowdsourcing of entity and news article pairs in an IR-style relevance assessment setup. The problem with such an approach is again finding knowledgeable users or experts for long-tail entities. Thus the notion of relevance of a news recommendation is challenging to evaluate in a crowd setup.
We take a slightly different approach by making the assumption that the news articles already present in Wikipedia entity pages are relevant. To this extent, we extract a dataset comprising all news articles referenced in entity pages (details in Section SECREF40 ). At the expense of not evaluating the space of news articles absent from Wikipedia, we succeed in (i) avoiding restrictive assumptions about the quality of human judgments, (ii) not being invasive and not polluting Wikipedia, and (iii) deriving a reusable test bed for quicker experimentation.
The second challenge of construction of training and test set separation is slightly easier and is addressed in Section SECREF46 .
Datasets
The datasets we use for our experimental evaluation are directly extracted from the Wikipedia entity pages and their revision history. The generated data represents one of the contributions of our paper. The datasets are the following:
Entity Classes. We focus on a manually predetermined set of entity classes for which we expect to have news coverage. The number of analyzed entity classes is 27, including INLINEFORM0 entities with at least one news reference. The entity classes were selected from the DBpedia class ontology. Figure FIGREF42 shows the number of entities per class for the years (2009-2014).
News Articles. We extract all news references from the collected Wikipedia entity pages. The extracted news references are associated with the sections in which they appear. In total there were INLINEFORM0 news references, and after crawling we end up with INLINEFORM1 successfully crawled news articles. The details of the news article distribution, and the number of entities and sections from which they are referred are shown in Table TABREF44 .
Article-Entity Ground-truth. The dataset comprises of the news and entity pairs INLINEFORM0 . News-entity pairs are relevant if the news article is referenced in the entity page. Non-relevant pairs (i.e. negative training examples) consist of news articles that contain an entity but are not referenced in that entity's page. If a news article INLINEFORM1 is referred from INLINEFORM2 at year INLINEFORM3 , the features are computed taking into account the entity profiles at year INLINEFORM4 .
Article-Section Ground-truth. The dataset consists of the triple INLINEFORM0 , where INLINEFORM1 , where we assume that INLINEFORM2 has already been determined as relevant. We therefore have a multi-class classification problem where we need to determine the section of INLINEFORM3 where INLINEFORM4 is cited. Similar to the article-entity ground truth, here too the features compute the similarity between INLINEFORM5 , INLINEFORM6 and INLINEFORM7 .
Data Pre-Processing
We POS-tag the news articles and entity profiles INLINEFORM0 with the Stanford tagger BIBREF21 . For entity linking the news articles, we use TagMe! BIBREF22 with a confidence score of 0.3. On a manual inspection of a random sample of 1000 disambiguated entities, the accuracy is above 0.9. On average, the number of entities per news article is approximately 30. For entity linking the entity profiles, we simply follow the anchor text that refers to Wikipedia entities.
Train and Testing Evaluation Setup
We evaluate the generated supervised models for the two tasks, AEP and ASP, by splitting the train and testing instances. It is important to note that for the pairs INLINEFORM0 and the triple INLINEFORM1 , the news article INLINEFORM2 is referenced at time INLINEFORM3 by entity INLINEFORM4 , while the features take into account the entity profile at time INLINEFORM5 . This avoids any `overlapping' content between the news article and the entity page, which could affect the learning task of the functions INLINEFORM6 and INLINEFORM7 . Table TABREF47 shows the statistics of train and test instances. We learn the functions at year INLINEFORM8 and test on instances for the years greater than INLINEFORM9 . Please note that we do not show the performance for year 2014 as we do not have data for 2015 for evaluation.
Article-Section Placement
Here we show the evaluation setup for ASP task and discuss the results with a focus on three main aspects, (i) the overall performance across the years, (ii) the entity class specific performance, and (iii) the impact on entity profile expansion by suggesting missing sections to entities based on the pre-computed templates.
Baselines. To the best of our knowledge, we are not aware of any comparable approach for this task. Therefore, the baselines we consider are the following:
S1: Pick the section from template INLINEFORM0 with the highest lexical similarity to INLINEFORM1 : S1 INLINEFORM2
S2: Place the news into the most frequent section in INLINEFORM0
Learning Models. We use Random Forests (RF) BIBREF23 and Support Vector Machines (SVM) BIBREF24 . The models are optimized taking into account the features in Table TABREF31 . In contrast to the AEP task, here the scale of the number of instances allows us to learn the SVM models. The SVM model is optimized using the INLINEFORM0 loss function and uses the Gaussian kernels.
Metrics. We compute precision P as the ratio of news for which we pick a section INLINEFORM0 from INLINEFORM1 and INLINEFORM2 conforms to the one in our ground-truth (see Section SECREF40 ). The definition of recall R and F1 score follows from that of precision.
Figure FIGREF66 shows the overall performance and a comparison of our approach (when INLINEFORM0 is optimized using SVM) against the best performing baseline S2. As the number of training instances for the ASP task increases, the performance is monotonically non-decreasing. For the year 2009, we optimize the learning objective of INLINEFORM1 with around 8% of the total instances, and evaluate on the rest. The performance on average is around P=0.66 across all classes. Even though for many classes the performance is already stable (as we will see in the next section), for some classes we improve further. If we take into account the years between 2010 and 2012, we have an increase of INLINEFORM2 P=0.17, with around 70% of instances used for training and the remainder for evaluation. For the remaining years the total improvement is INLINEFORM3 P=0.18 in contrast to the performance at year 2009.
On the other hand, the baseline S1 has an average precision of P=0.12. The performance across the years varies slightly, with the year 2011 having the highest average precision of P=0.13. Always picking the most frequent section as in S2, as shown in Figure FIGREF66 , results in an average precision of P=0.17, with a uniform distribution across the years.
Here we show the performance of INLINEFORM0 decomposed for the different entity classes. Specifically we analyze the 27 classes in Figure FIGREF42 . In Table TABREF68 , we show the results for a range of years (we omit showing all years due to space constraints). For illustration purposes only, we group them into four main classes ( INLINEFORM1 Person, Organization, Location, Event INLINEFORM2 ) and into the specific sub-classes shown in the second column in Table TABREF68 . For instance, the entity classes OfficeHolder and Politician are aggregated into Person–Politics.
It is evident that in the first year the performance is lower in contrast to the later years. This is due to the fact that as we proceed, we can better generalize and accurately determine the correct fit of an article INLINEFORM0 into one of the sections from the pre-computed templates INLINEFORM1 . The results are already stable for the year range INLINEFORM2 . For a few Person sub-classes, e.g. Politics, Entertainment, we achieve an F1 score above 0.9. These additionally represent classes with a sufficient number of training instances for the years INLINEFORM3 . The lowest F1 score is for the Criminal and Television classes. However, this is directly correlated with the insufficient number of instances.
The baseline approaches for the ASP task perform poorly. S1, based on lexical similarity, has a varying performance for different entity classes. Its best performance is achieved for the class Person – Politics, with P=0.43. This highlights the importance of our feature choice and shows that ASP cannot be treated as a linear function where the maximum similarity yields the best results: for different entity classes, different features and combinations of features are necessary. Considering that S2 is the overall best performing baseline, through our approach INLINEFORM0 we achieve a significant improvement of over INLINEFORM1 P=+0.64.
The models we learn are very robust and obtain high accuracy, fulfilling our pre-condition for accurate news suggestions into the entity sections. We measure the robustness of INLINEFORM0 through the INLINEFORM1 statistic. In this case, we have a model with roughly 10 labels (corresponding to the number of sections in a template INLINEFORM2 ). The score we achieve shows that our model predicts with high confidence with INLINEFORM3 .
The last analysis is the impact we have on expanding entity profiles INLINEFORM0 with new sections. Figure FIGREF70 shows the ratio of sections for which we correctly suggest an article INLINEFORM1 to the right section in the section template INLINEFORM2 . The ratio here corresponds to sections that are not present in the entity profile at year INLINEFORM3 , that is INLINEFORM4 . However, given the generated templates INLINEFORM5 , we can expand the entity profile INLINEFORM6 with a new section at time INLINEFORM7 . In detail, in the absence of a section at time INLINEFORM8 , our model trains well on similar sections from the section template INLINEFORM9 ; hence we can accurately predict the section and, in this case, suggest its addition to the entity profile. As expected, the expansion rate decreases in later years as the entity profiles become more `complete'.
This is particularly interesting for expanding the entity profiles of long-tail entities as well as updating entities with real-world emerging events that are added constantly. In many cases such missing sections are present in one of the entities of the respective entity class INLINEFORM0 . An obvious case is the example from Section SECREF16 , where the `Accidents' section is rather common for entities of type Airline but non-existent for some specific entity instances, i.e., the Germanwings airline.
Through our ASP approach INLINEFORM0 , we are able to expand both long-tail and trunk entities. We distinguish between the two types of entities by simply measuring their section text length. The real distribution in the ground truth (see Section SECREF40 ) is 27% and 73% are long-tail and trunk entities, respectively. We are able to expand the entity profiles for both cases and all entity classes without a significant difference, with the only exception being the class Creative Work, where we expand significantly more trunk entities.
Conclusion and Future Work
In this work, we have proposed an automated approach for the novel task of suggesting news articles to Wikipedia entity pages to facilitate Wikipedia updating. The process consists of two stages. In the first stage, article–entity placement, we suggest news articles to entity pages by considering three main factors, such as entity salience in a news article, relative authority and novelty of news articles for an entity page. In the second stage, article–section placement, we determine the best fitting section in an entity page. Here, we remedy the problem of incomplete entity section profiles by constructing section templates for specific entity classes. This allows us to add missing sections to entity pages. We carry out an extensive experimental evaluation on 351,983 news articles and 73,734 entities coming from 27 distinct entity classes. For the first stage, we achieve an overall performance with P=0.93, R=0.514 and F1=0.676, outperforming our baseline competitors significantly. For the second stage, we show that we can learn incrementally to determine the correct section for a news article based on section templates. The overall performance across different classes is P=0.844, R=0.885 and F1=0.860.
In the future, we will enhance our work by extracting facts from the suggested news articles. Our results suggest that the news content cited in entity pages comes from the first paragraphs. However, challenging tasks such as the canonicalization and chronological ordering of facts still remain.
|
What baseline model is used?
|
For Article-Entity placement, they consider two baselines: the first one using only salience-based features, and the second baseline checks if the entity appears in the title of the article.
For Article-Section Placement, they consider two baselines: the first picks the section with the highest lexical similarity to the article, and the second one picks the most frequent section.
| 7,891
|
qasper
|
8k
|
Introduction
Automatically answering questions, especially in the open-domain setting (i.e., where minimal or no contextual knowledge is explicitly provided), requires bringing to bear considerable amount of background knowledge and reasoning abilities. For example, knowing the answers to the two questions in Figure FIGREF1 requires identifying a specific ISA relation (i.e., that cooking is a type of learned behavior) as well as recalling the definition of a concept (i.e., that global warming is defined as a worldwide increase in temperature). In the multiple-choice setting, which is the variety of question-answering (QA) that we focus on in this paper, there is also pragmatic reasoning involved in selecting optimal answer choices (e.g., while greenhouse effect might in some other context be a reasonable answer to the second question in Figure FIGREF1, global warming is a preferable candidate).
Recent successes in QA, driven largely by the creation of new resources BIBREF2, BIBREF3, BIBREF4, BIBREF5 and advances in model pre-training BIBREF6, BIBREF7, raise a natural question: do state-of-the-art multiple-choice QA (MCQA) models that excel at standard tasks really have basic knowledge and reasoning skills?
Most existing MCQA datasets are constructed through either expensive crowd-sourcing BIBREF8 or hand engineering effort, in the former case making it possible to collect large amounts of data at the cost of losing systematic control over the semantics of the target questions. Hence, doing a controlled experiment to answer such a question for QA is difficult given a lack of targeted challenge datasets.
Having definitive empirical evidence of model competence on any given phenomenon requires constructing a wide range of systematic tests. For example, in measuring competence of definitions, not only do we want to see that the model can handle individual questions such as Figure FIGREF1.1 inside of benchmark tasks, but that it can answer a wider range of questions that exhaustively cover a broad set of concepts and question perturbations (i.e., systematic adjustments to how the questions are constructed). The same applies to ISA reasoning; not only is it important to recognize in the question in Figure FIGREF1.1 that cooking is a learned behavior, but also that cooking is a general type of behavior or, through a few more inferential steps, a type of human activity.
In this paper, we look at systematically constructing such tests by exploiting the vast amounts of structured information contained in various types of expert knowledge such as knowledge graphs and lexical taxonomies. Our general methodology works as illustrated in Figure FIGREF1: given any MCQA model trained on a set of benchmark tasks, we systematically generate a set of synthetic dataset probes (i.e., MCQA renderings of the target information) from information in expert knowledge sources. We then use these probes to ask two empirical questions: 1) how well do models trained on benchmark tasks perform on these probing tasks and; 2) can such models be re-trained to master new challenges with minimal performance loss on their original tasks?
While our methodology is amenable to any knowledge source and set of models/benchmark tasks, we focus on probing state-of-the-art transformer models BIBREF7, BIBREF9 in the domain of science MCQA. For sources of expert knowledge, we use WordNet, a comprehensive lexical ontology, and other publicly available dictionary resources. We devise probes that measure model competence in definition and taxonomic knowledge in different settings (including hypernymy, hyponymy, and synonymy detection, and word sense disambiguation). This choice is motivated by the fact that the science domain is considered particularly challenging for QA BIBREF10, BIBREF11, BIBREF12, and existing science benchmarks are known to involve widespread use of such knowledge (see BIBREF1, BIBREF13 for analysis), which is also arguably fundamental to more complex forms of reasoning.
We show that accurately probing QA models via synthetic datasets is not straightforward, as unexpected artifacts can easily arise in such data. This motivates our carefully constructed baselines and close data inspection to ensure probe quality.
Our results confirm that transformer-based QA models have a remarkable ability to recognize certain types of knowledge captured in our probes—even without additional fine-tuning. Such models can even outperform strong task-specific models trained directly on our probing tasks (e.g., on definitions, our best model achieves 77% test accuracy without specialized training, as opposed to 51% for a task-specific LSTM-based model). We also show that the same models can be effectively re-fine-tuned on small samples (even 100 examples) of probe data, and that high performance on the probes tends to correlate with a smaller drop in the model's performance on the original QA task.
Our comprehensive assessment reveals several interesting nuances to the overall positive trend. For example, the performance of even the best QA models degrades substantially on our hyponym probes (by 8-15%) when going from 1-hop links to 2-hops. Further, the accuracy of even our best models on the WordNetQA probe drops by 14-44% under our cluster-based analysis, which assesses whether a model knows several facts about each individual concept, rather than just being good at answering isolated questions. State-of-the-art QA models thus have much room to improve even in some fundamental building blocks, namely definitions and taxonomic hierarchies, of more complex forms of reasoning.
Related Work
We follow recent work on constructing challenge datasets for probing neural models, which has primarily focused on the task of natural language inference (NLI) BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18. Most of this work looks at constructing data through adversarial generation methods, which have also been found useful for creating stronger models BIBREF19. There has also been work on using synthetic data of the type we consider in this paper BIBREF20, BIBREF21, BIBREF22. We closely follow the methodology of BIBREF22, who use hand-constructed linguistic fragments to probe NLI models and study model re-training using a variant of the inoculation by fine-tuning strategy of BIBREF23. In contrast, we focus on probing open-domain MCQA models (see BIBREF24 for a related study in the reading comprehension setting) as well as constructing data from much larger sources of structured knowledge.
Our main study focuses on probing the BERT model and fine-tuning approach of BIBREF7, and other variants thereof, which are all based on the transformer architecture of BIBREF25. Related to our efforts, there have been recent studies into the types of relational knowledge contained in large-scale knowledge models BIBREF26, BIBREF27, BIBREF28, which, similar to our work, probe models using structured knowledge sources. This prior work, however, primarily focuses on unearthing the knowledge contained in the underlying language models as is without further training, using simple (single token) cloze-style probing tasks and templates (similar to what we propose in Section SECREF3). In contrast, we focus on understanding the knowledge contained in language models after they have been trained for a QA end-task using benchmark datasets in which such knowledge is expected to be widespread. Further, our evaluation is done before and after these models are fine-tuned on our probe QA tasks, using a more complex set of QA templates and target inferences.
The use of lexical resources and knowledge graphs such as WordNet to construct datasets has a long history, and has recently appeared in work on adversarial attacks BIBREF14, BIBREF29 and general task construction BIBREF30, BIBREF31. In the area of MCQA, there is related work on constructing questions from tuples BIBREF32, BIBREF3, both of which involve standard crowd annotation to elicit question-answer pairs (see also BIBREF33, BIBREF34). In contrast to this work, we focus on generating data in an entirely automatic fashion, which obviates the need for expensive annotation and gives us the flexibility to construct much larger datasets that control a rich set of semantic aspects of the target questions.
Dataset Probes and Construction
Our probing methodology starts by constructing challenge datasets (Figure FIGREF1, yellow box) from a target set of knowledge resources. Each of our probing datasets consists of multiple-choice questions that include a question $\textbf {q}$ and a set of answer choices or candidates $\lbrace a_{1},...a_{N}\rbrace $. This section describes in detail the 5 different datasets we build, which are drawn from two sources of expert knowledge, namely WordNet BIBREF35 and the GNU Collaborative International Dictionary of English (GCIDE). We describe each resource in turn, and explain how the resulting dataset probes, which we call WordNetQA and DictionaryQA, are constructed.
For convenience, we will describe each source of expert knowledge as a directed, edge-labeled graph $G$. The nodes of this graph are $\mathcal {V} = \mathcal {C} \cup \mathcal {W} \cup \mathcal {S} \cup \mathcal {D}$, where $\mathcal {C}$ is a set of atomic concepts, $\mathcal {W}$ a set of words, $\mathcal {S}$ a set of sentences, and $\mathcal {D}$ a set of definitions (see Table TABREF4 for details for WordNet and GCIDE). Each edge of $G$ is directed from an atomic concept in $\mathcal {C}$ to another node in $\mathcal {V}$, and is labeled with a relation, such as hypernym or isa$^\uparrow $, from a set of relations $\mathcal {R}$ (see Table TABREF4).
When defining our probe question templates, it will be useful to view $G$ as a set of (relation, source, target) triples $\mathcal {T} \subseteq \mathcal {R} \times \mathcal {C} \times \mathcal {V}$. Due to their origin in an expert knowledge source, such triples preserve semantic consistency. For instance, when the relation in a triple is def, the corresponding edge maps a concept in $\mathcal {C}$ to a definition in $\mathcal {D}$.
To construct probe datasets, we rely on two heuristic functions, defined below for each individual probe: $\textsc {gen}_{\mathcal {Q}}(\tau )$, which generates gold question-answer pairs $(\textbf {q},\textbf {a})$ from a set of triples $\tau \subseteq \mathcal {T}$ and question templates $\mathcal {Q}$, and $\textsc {distr}(\tau ^{\prime })$, which generates distractor answer choices $\lbrace a^{\prime }_{1},...a^{\prime }_{N-1} \rbrace $ based on another set of triples $\tau ^{\prime }$ (where usually $\tau \subset \tau ^{\prime }$). For brevity, we will use $\textsc {gen}(\tau )$ to denote $\textsc {gen}_{\mathcal {Q}}(\tau )$, leaving question templates $\mathcal {Q}$ implicit.
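To make this setup concrete, the following sketch shows one way such heuristic functions could be realized; the Triple type, the template slot name, and the sampling strategy are illustrative assumptions rather than the exact implementation used here.

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    relation: str   # e.g., "def", "isa_up", "isa_down", "example"
    source: str     # an atomic concept in C (e.g., a synset identifier)
    target: str     # a node in V: a definition, word, sentence, or concept

def gen(triples, template):
    """Render gold (question, answer) pairs from definition triples using a
    question template with a named slot for the source concept."""
    pairs = []
    for t in triples:
        if t.relation == "def":
            question = template.format(concept=t.source)
            pairs.append((question, t.target))   # the gloss is the gold answer
    return pairs

def distr(triples_prime, gold, n=4):
    """Sample n distractor answers from a (usually larger) triple set,
    excluding the gold answer; semantic closeness is controlled by how
    triples_prime is chosen (e.g., sister or hyponym glosses)."""
    pool = [t.target for t in triples_prime if t.target != gold]
    return random.sample(pool, min(n, len(pool)))
```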
Dataset Probes and Construction ::: WordNetQA
WordNet is an English lexical database consisting of around 117k concepts, which are organized into groups of synsets that each contain a gloss (i.e., a definition of the target concept), a set of representative English words (called lemmas), and, in around 33k synsets, example sentences. In addition, many synsets have ISA links to other synsets that express complex taxonomic relations. Figure FIGREF6 shows an example and Table TABREF4 summarizes how we formulate WordNet as a set of triples $\mathcal {T}$ of various types. These triples together represent a directed, edge-labeled graph $G$. Our main motivation for using WordNet, as opposed to a resource such as ConceptNet BIBREF36, is the availability of glosses ($\mathcal {D}$) and example sentences ($\mathcal {S}$), which allows us to construct natural language questions that contextualize the types of concepts we want to probe.
Dataset Probes and Construction ::: WordNetQA ::: Example Generation $\textsc {gen}(\tau )$.
We build 4 individual datasets based on semantic relations native to WordNet (see BIBREF37): hypernymy (i.e., generalization or ISA reasoning up a taxonomy, ISA$^\uparrow $), hyponymy (ISA$^{\downarrow }$), synonymy, and definitions. To generate a set of questions in each case, we employ a number of rule templates $\mathcal {Q}$ that operate over tuples. A subset of such templates is shown in Table TABREF8. The templates were designed to mimic naturalistic questions we observed in our science benchmarks.
For example, suppose we wish to create a question $\textbf {q}$ about the definition of a target concept $c \in \mathcal {C}$. We first select a question template from $\mathcal {Q}$ that first introduces the concept $c$ and its lemma $l \in \mathcal {W}$ in context using the example sentence $s \in \mathcal {S}$, and then asks to identify the corresponding WordNet gloss $d \in \mathcal {D}$, which serves as the gold answer $\textbf {a}$. The same is done for ISA reasoning; each question about a hypernym/hyponym relation between two concepts $c \rightarrow ^{\uparrow /\downarrow } c^{\prime } \in \mathcal {T}_{i}$ (e.g., $\texttt {dog} \rightarrow ^{\uparrow /\downarrow } \texttt {animal/terrier}$) first introduces a context for $c$ and then asks for an answer that identifies $c^{\prime }$ (which is also provided with a gloss so as to contain all available context).
In the latter case, the rules $(\texttt {isa}^{r},c,c^{\prime }) \in \mathcal {T}_i$ in Table TABREF8 cover only direct ISA links from $c$ in direction $r \in \lbrace \uparrow ,\downarrow \rbrace $. In practice, for each $c$ and direction $r$, we construct tests that cover the set HOPS$(c,r)$ of all direct as well as derived ISA relations of $c$:
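One formulation consistent with this description, treating HOPS as the transitive closure of the direct ISA triples (the precise notation here is illustrative), is:

$$\textsc {hops}(c,r) = \lbrace (\texttt {isa}^{r},c,c^{\prime }) \in \mathcal {T}_{i}\rbrace \, \cup \, \lbrace (\texttt {isa}^{r},c,c^{\prime \prime }) \mid (\texttt {isa}^{r},c^{\prime },c^{\prime \prime }) \in \textsc {hops}(c^{\prime },r), \; (\texttt {isa}^{r},c,c^{\prime }) \in \mathcal {T}_{i} \rbrace$$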
This allows us to evaluate the extent to which models are able to handle complex forms of reasoning that require several inferential steps or hops.
Dataset Probes and Construction ::: WordNetQA ::: Distractor Generation: $\textsc {distr}(\tau ^{\prime })$.
An example of how distractors are generated is shown in Figure FIGREF6, which relies on similar principles as above. For each concept $c$, we choose 4 distractor answers that are close in the WordNet semantic space. For example, when constructing hypernymy tests for $c$ from the set hops$(c,\uparrow )$, we build distractors by drawing from $\textsc {hops}(c,\downarrow )$ (and vice versa), as well as from the $\ell $-deep sister family of $c$, defined as follows. The 1-deep sister family is simply $c$'s siblings or sisters, i.e., the other children $\tilde{c} \ne c$ of the parent node $c^{\prime }$ of $c$. For $\ell > 1$, the $\ell $-deep sister family also includes all descendants of each $\tilde{c}$ up to $\ell -1$ levels deep, denoted $\textsc {hops}_{\ell -1}(\tilde{c},\downarrow )$. Formally:
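One way to write this formally (identifying each derived ISA triple with its target concept; the notation is illustrative) is:

$$\textsc {sisters}_{\ell }(c) \;=\; \bigcup _{\tilde{c} \ne c \,:\, (\texttt {isa}^{\uparrow },\tilde{c},c^{\prime }) \in \mathcal {T}_{i}} \big ( \lbrace \tilde{c} \rbrace \cup \textsc {hops}_{\ell -1}(\tilde{c},\downarrow ) \big ), \quad \text{where } (\texttt {isa}^{\uparrow },c,c^{\prime }) \in \mathcal {T}_{i}.$$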
For definitions and synonyms we build distractors from all of these sets (with a similar restriction on the depth of sister distractors as noted above). In doing this, we can systematically investigate model performance on a wide range of distractor sets.
Dataset Probes and Construction ::: WordNetQA ::: Perturbations and Semantic Clusters
Based on how we generate data, for each concept $c$ (i.e., atomic WordNet synset) and probe type (i.e., definitions, hypernymy, etc.), we have a wide variety of questions related to $c$ that manipulate 1) the complexity of reasoning that is involved (e.g., the number of inferential hops), and 2) the types of distractors (or distractor perturbations) that are employed. We call such sets semantic clusters. As we describe in the next section, semantic clusters allow us to devise new types of evaluation that reveal whether models have comprehensive and consistent knowledge of target concepts (e.g., evaluating whether a model can correctly answer several questions associated with a concept, as opposed to a few disjoint instances).
Details of the individual datasets are shown in Table TABREF12. From these sets, we follow BIBREF22 in allocating a maximum of 3k examples for training and reserve the rest for development and testing. Since we are interested in probing, having large held-out sets allows us to do detailed analysis and cluster-based evaluation.
Dataset Probes and Construction ::: DictionaryQA
The DictionaryQA dataset is created from the GCIDE dictionary, which is a comprehensive open-source English dictionary built largely from the Webster's Revised Unabridged Dictionary BIBREF38. Each entry consists of a word, its part-of-speech, its definition, and an optional example sentence (see Table TABREF14). Overall, 33k entries (out of a total of 155k) contain example sentences/usages. As with the WordNet probes, we focus on this subset so as to contextualize each word being probed. In contrast to WordNet, GCIDE does not have ISA relations or explicit synsets, so we take each unique entry to be a distinct sense. We then use the dictionary entries to create a probe that centers around word-sense disambiguation, as described below.
Dataset Probes and Construction ::: DictionaryQA ::: Example and Distractor Generation.
To generate gold questions and answers, we use the same generation templates for definitions exemplified in Table TABREF8 for WordNetQA. To generate distractors, we simply take alternative definitions for the target words that represent a different word sense (e.g., the alternative definitions of gift shown in Table TABREF14), as well as randomly chosen definitions if needed to create a 5-way multiple choice question. As above, we reserve a maximum of 3k examples for training. Since we have only 9k examples in total in this dataset (see WordSense in Table TABREF12), we also reserve 3k each for development and testing.
We note that initial attempts to build this dataset through standard random splitting gave rise to certain systematic biases that were exploited by the choice-only baseline models described in the next section, and hence inflated overall model scores. After several efforts at filtering we found that, among other factors, using definitions from entries without example sentences as distractors (e.g., the first two entries in Table TABREF14) had a surprising correlation with such biases. This suggests that possible biases involving differences between dictionary entries with and without examples can taint the resulting automatically generated MCQA dataset (for more discussion on the pitfalls involved with automatic dataset construction, see Section SECREF5).
Probing Methodology and Modeling
Given the probes above, we now can start to answer the empirical questions posed at the beginning. Our main focus is on looking at transformer-based MCQA models trained in the science domain (using the benchmarks shown in Table TABREF21). In this section, we provide details of MCQA and the target models, as well as several baselines that we use to sanity check our new datasets. To evaluate model competence, we look at a combination of model performance after science pre-training and after additional model fine-tuning using the lossless inoculation strategy of BIBREF22 (Section SECREF22). In Section SECREF24, we also discuss a cluster-level accuracy metric for measuring performance over semantic clusters.
Probing Methodology and Modeling ::: Task Definition and Modeling
Given a dataset $D =\lbrace (\textbf {q}^{(d)}, \lbrace a_{1}^{(d)},..., a_{N}^{(d)}\rbrace ) \rbrace _{d}^{\mid D \mid }$ consisting of pairs of question stems $\textbf {q}$ and answer choices $a_{i}$, the goal is to identify the answer $a_{i^{*}}$ that correctly answers each $\textbf {q}$. Throughout this paper, we look at 5-way multiple-choice problems (i.e., $N=5$).
Probing Methodology and Modeling ::: Task Definition and Modeling ::: Question+Answer Encoder.
To model this, our investigation centers around the use of the transformer-based BIBREF25 BERT encoder and fine-tuning approach of BIBREF7 (see also BIBREF6). For each question and individual answer pair $q^{(j)}_{a_{i}}$, we assume the following rendering of this input:
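Presumably this is the standard BERT sentence-pair packing (the exact layout of separator tokens is an assumption):

$$q^{(j)}_{a_{i}} = \texttt {[CLS]} \; q^{(j)} \; \texttt {[SEP]} \; a_{i} \; \texttt {[SEP]}$$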
which is run through the pre-trained BERT encoder to generate a representation for $ q^{(j)}_{a_{i}}$ using the hidden state representation for CLS (i.e., the classifier token) $\textbf {c}_{i}$:
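Following standard practice, this step presumably takes the form:

$$\mathbf {c}^{(j)}_{i} = \textbf {BERT}_{\texttt {[CLS]}}\big (q^{(j)}_{a_{i}}\big ) \in \mathbb {R}^{H}$$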
The probability of a given answer $p^{(j)}_{i}$ is then computed as $p^{(j)}_{i} \propto e^{\textbf {v}\cdot \textbf {c}^{(j)}_{i}}$, which uses an additional set of classification parameters $\textbf {v} \in \mathbb {R}^{H}$ that are optimized (along with the full transformer network) by taking the final loss of the probability of each correct answer $p_{i^{*}}$ over all answer choices:
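Presumably the loss is the standard negative log-likelihood over the softmax-normalized answer scores:

$$\mathcal {L} = - \sum _{j} \log p^{(j)}_{i^{*}}, \qquad p^{(j)}_{i} = \frac{e^{\mathbf {v}\cdot \mathbf {c}^{(j)}_{i}}}{\sum _{i^{\prime }=1}^{N} e^{\mathbf {v}\cdot \mathbf {c}^{(j)}_{i^{\prime }}}}$$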
We specifically use BERT-large uncased with whole-word masking, as well as the RoBERTa-large model from BIBREF9, which is a more robustly trained version of the original BERT model. Our system uses the implementations provided in AllenNLP BIBREF39 and Huggingface BIBREF40.
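For concreteness, the scoring scheme above can be sketched with the Huggingface library as follows; the checkpoint name is a real public model, but the helper function, classification vector, and training details are illustrative assumptions rather than the exact system used here.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Minimal sketch of [CLS]-based answer scoring; v plays the role of the
# classification parameters described above (illustrative, not the real code).
name = "bert-large-uncased-whole-word-masking"
tokenizer = AutoTokenizer.from_pretrained(name)
encoder = AutoModel.from_pretrained(name)
v = torch.nn.Parameter(torch.randn(encoder.config.hidden_size))

def answer_logits(question, choices):
    """Return one logit per answer choice, computed as v . c_i over [CLS]."""
    logits = []
    for choice in choices:
        inputs = tokenizer(question, choice, return_tensors="pt", truncation=True)
        cls = encoder(**inputs).last_hidden_state[:, 0, :]   # c_i for [CLS]
        logits.append((cls * v).sum(dim=-1))
    return torch.cat(logits)   # softmax over these gives p_i for the 5 choices

# Training would minimize cross-entropy of the gold answer index over the logits.
```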
Probing Methodology and Modeling ::: Task Definition and Modeling ::: Baselines and Sanity Checks.
When creating synthetic datasets, it is important to ensure that systematic biases, or annotation artifacts BIBREF41, are not introduced into the resulting probes and that the target datasets are sufficiently challenging (or good, in the sense of BIBREF42). To test for this, we use several of the MCQA baseline models first introduced in BIBREF0, which take inspiration from the LSTM-based models used in BIBREF43 for NLI and various partial-input baselines based on these models.
Following the notation from BIBREF0, for any given sequence $s$ of tokens in $\lbrace q^{(j)}, a_{1}^{(j)},...,a_{N}^{(j)}\rbrace $ in $D$, an encoding of $s$ is given as $h_{s}^{(j)} = \textbf {BiLSTM}(\textsc {embed}(s)) \in \mathbb {R}^{|s| \times 2h}$ (where $h$ is the dimension of the hidden state in each directional network, and embed$(\cdot )$ is an embedding function that assigns token-level embeddings to each token in $s$). A contextual representation for each $s$ is then built by applying an element-wise max operation over $h_{s}$ as follows:
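Written out, this presumably amounts to a per-dimension max over the token positions:

$$r^{(j)}_{s} = \max \big (h^{(j)}_{s,1}, \dots , h^{(j)}_{s,|s|}\big ) \in \mathbb {R}^{2h}$$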
With these contextual representations, different baseline models can be constructed. For example, a Choice-Only model, which is a variant of the well-known hypothesis-only baseline used in NLI BIBREF46, scores each choice $c_{i}$ in the following way:
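This presumably takes the form

$$\alpha ^{(j)}_{i} = \mathbf {W}^{T} r^{(j)}_{c_{i}} \in \mathbb {R}$$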
for $\textbf {W}^{T} \in \mathbb {R}^{2h}$ independently of the question and assigns a probability to each answer $p_{i}^{(j)} \propto e^{\alpha _{i}^{(j)}}$.
A slight variant of this model, the Choice-to-choice model, tries to single out a given answer choice relative to other choices by scoring all choice pairs $\alpha _{i,i^{\prime }}^{(j)} = \textsc {Att}(r^{(j)}_{c_{i}},r^{(j)}_{c_{i^{\prime }}}) \in \mathbb {R}$ using a learned attention mechanism Att and finding the choice with the minimal similarity to other options (for full details, see their original paper). In using these partial-input baselines, which we train directly on each target probe, we can check whether systematic biases related to answer choices were introduced into the data creation process.
A Question-to-choice model, in contrast, uses the contextual representations for each question and individual choice and an attention model Att to get a score $\alpha ^{(j)}_{q,i} = \textsc {Att}(r^{(j)}_{q},r^{(j)}_{c_{i}}) \in \mathbb {R}$ as above. Here we also experiment with using ESIM BIBREF47 to generate the contextual representations $r$, as well as a simpler VecSimilarity model that measures the average vector similarity between question and answer tokens: $\alpha ^{(j)}_{q,i} = \textsc {Sim}(\textsc {embed}(q^{(j)}),\textsc {embed}(c^{(j)}_{i}))$. In contrast to the models above, these sets of baselines are used to check for artifacts between questions and answers that are not captured in the partial-input baselines (see discussion in BIBREF49) and to ensure that the overall MCQA tasks are sufficiently difficult for our transformer models.
Probing Methodology and Modeling ::: Inoculation and Pre-training
Using the various models introduced above, we train these models on benchmark tasks in the science domain and look at model performance on our probes with and without additional training on samples of probe data, building on the idea of inoculation from BIBREF23. Model inoculation is the idea of continuing to train models on new challenge tasks (in our cases, separately for each probe) using only a small amount of examples. Unlike in ordinary fine-tuning, the goal is not to learn an entirely re-purposed model, but to improve on (or vaccinate against) particular phenomena (e.g., our synthetic probes) that potentially deviate from a model's original training distribution (but that nonetheless might involve knowledge already contained in the model).
In the variant proposed in BIBREF22, for each pre-trained (science) model and architecture $M_{a}$ we continue training the model on $k$ new probe examples (with a maximum of $k=$ 3k) under a set of different hyper-parameter configurations $j \in \lbrace 1, ..., J\rbrace $ and identify, for each $k$, the model $M_{*}^{a,k}$ with the best aggregate performance $S$ on the original (orig) and new task:
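A formulation consistent with this description (the aggregation function, here a simple average of the two accuracies, is an assumption) is:

$$M^{a,k}_{*} = \underset{j \in \lbrace 1,\dots ,J\rbrace }{\arg \max }\; S\big (M^{a,k}_{j}\big ), \qquad S(M) = \tfrac{1}{2}\big (\text{acc}_{\textsc {orig}}(M) + \text{acc}_{\textsc {new}}(M)\big )$$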
As in BIBREF22, we found all models to be especially sensitive to different learning rates, and performed comprehensive hyper-parameter searches that also manipulate the number of iterations and random seeds used.
Using this methodology, we can see how much exposure to new data it takes for a given model to master a new task, and whether there are phenomena that stress particular models (e.g., lead to catastrophic forgetting of the original task). Given the restrictions on the number of fine-tuning examples, our assumption is that when models are able to maintain good performance on their original task during inoculation, the quickness with which they are able to learn the inoculated task provides evidence of prior competence, which is precisely what we aim to probe. To measure the effect on original-task performance, we define a model's inoculation cost as the difference in the performance of this model on its original task before and after inoculation.
We pre-train on an aggregated training set of the benchmark science exams detailed in Table TABREF21, and create an aggregate development set of around 4k science questions for evaluating overall science performance and inoculation costs. To handle the mismatch between the number of answer choices in these sets, we make all sets 5-way by adding empty answers as needed. We also experiment with a slight variant of inoculation, called add-some inoculation, which involves balancing the inoculation training sets with naturalistic science questions. We reserve the MCQL dataset in Table TABREF21 for this purpose, and experiment with balancing each probe example with a science example (x1 matching) and adding twice as many science questions (x2 matching, up to 3k) for each new example.
Probing Methodology and Modeling ::: Evaluating Model Competence
The standard way to evaluate our MCQA models is by looking at the overall accuracy of the correct answer prediction, or what we call instance-level accuracy (as in Table TABREF25). Given the nature of our data and the existence of semantic clusters as detailed in Section SECREF11 (i.e., sets of questions and answers under different distractor choices and inference complexity), we also measure a model's cluster-level (or strict cluster) accuracy, which requires correctly answering all questions in a cluster. Example semantic clusters are shown in Table TABREF30; in the first case, there are 6 ISA$^\uparrow $ questions (including perturbations) about the concept trouser.n.01 (e.g., involving knowing that trousers are a type of consumer good and garment/clothing), which a model must answer in order to receive full credit.
Our cluster-based analysis is motivated by the idea that if a model truly knows the meaning of a given concept, such as the concept of trousers, then it should be able to answer arbitrary questions about this concept without sensitivity to varied distractors. While our strict cluster metric is simplistic, it takes inspiration from work on visual QA BIBREF53, and allows us to evaluate how consistent and robust models are across our different probes, and to get insight into whether errors are concentrated on a small set of concepts or widespread across clusters.
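As a concrete illustration, the strict cluster metric can be computed as follows; the function and argument names are illustrative assumptions.

```python
from collections import defaultdict

def strict_cluster_accuracy(predictions, gold, cluster_ids):
    """Fraction of clusters in which every question is answered correctly.
    cluster_ids[i] identifies the semantic cluster of question i."""
    flags = defaultdict(list)
    for pred, ans, cid in zip(predictions, gold, cluster_ids):
        flags[cid].append(pred == ans)
    solved = sum(all(f) for f in flags.values())
    return solved / len(flags)
```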
Results and Findings
In this section, we provide the results of the empirical questions first introduced in Figure FIGREF1, starting with the results of our baseline models.
Results and Findings ::: Are our Probes Sufficiently Challenging?
As shown in Table TABREF25, most of our partial-input baselines (i.e., Choice-Only and Choice-to-Choice models) failed to perform well on our dataset probes across a wide range of models, showing that such probes are generally immune from biases relating to how distractors were generated. As already discussed in Section SECREF13, however, initial versions of the DictionaryQA dataset had unforeseen biases partly related to whether distractors were sampled from entries without example sentences, which resulted in high Choice-Only-GloVe scores ranging around 56% accuracy before a filtering step was applied to remove these distractors.
We had similar issues with the hypernymy probe which, even after a filtering step that used our Choice-to-Choice-GloVe model, still leads to high results on the BERT and RoBERTa choice-only models. Given that several attempts were made to entirely de-duplicate the different splits (both in terms of gold answers and distractor types), the source of these biases is not at all obvious, which shows how easy it is for unintended biases in expert knowledge to appear in the resulting datasets and the importance of having rigorous baselines. We also note the large gap in some cases between the BERT and RoBERTa versus GloVe choice-only models, which highlights the need for having partial-input baselines that use the best available models.
Using a more conventional set of Task-Specific QA models (i.e., the LSTM-based Question-to-Choice models trained directly on the probes), we can see that results are not particularly strong on any of the datasets, suggesting that our probes are indeed sufficiently challenging and largely immune from overt artifacts. The poor performance of the VecSimilarity model (which uses pre-trained Word2Vec embeddings without additional training) provides additional evidence that elementary lexical matching strategies are insufficient for solving any of the probing tasks.
Results and Findings ::: How well do pre-trained MCQA models do?
Science models that use non-transformer based encoders, such as the ESIM model with GloVe and ELMO, perform poorly across all probes, in many cases scoring near random chance, showing limits to how well they generalize from science to other tasks even with pre-trained GloVe and ELMO embeddings. In sharp contrast, the transformer models have mixed results, the most striking result being the RoBERTa models on the definitions and synonymy probes (achieving a test accuracy of 77% and 61%, respectively), which outperform several of the task-specific LSTM models trained directly on the probes. At first glance, this suggests that RoBERTa, which generally far outpaces even BERT across most probes, has high competence of definitions and synonyms even without explicit training on our new tasks.
Given the controlled nature of our probes, we can get a more detailed view of how well the science models are performing across different reasoning and distractor types, as shown in the first column of Figure FIGREF28 for ESIM and RoBERTa. The ESIM science model without training has uniformly poor performance across all categories, whereas the performance of RoBERTa is more varied. Across all datasets and number of hops (i.e., the rows in the heat maps), model performance for RoBERTa is consistently highest among examples with random distractors (i.e., the first column), and lowest in cases involving distractors that are closest in WordNet space (e.g., sister and ISA, or up/down, distractors of distance $k^{\prime }=1$). This is not surprising, given that, in the first case, random distractors are likely to be the easiest category (and the opposite for distractors close in space), but suggests that RoBERTa might only be getting the easiest cases correct.
Model performance also clearly degrades for hypernymy and hyponymy across all models as the number of hops $k$ increases (see red dashed boxes). For example, accuracy on problems that involve hyponym reasoning with sister distractors of distance $k^{\prime }=1$ (i.e., the second column) degrades from 47% to 15% when the number of hops $k$ increases from 1 to 4. This general tendency persists even after additional fine-tuning, as we discuss next, and gives evidence that models are limited in their capacity for certain types of multi-hop inferences.
As discussed by BIBREF26, the choice of generation templates can have a significant effect on model performance. The results so far should therefore be regarded as a lower bound on model competence. It is possible that model performance is high for definitions, for example, because the associated templates best align with the science training distribution (which we know little about). For this reason, the subsequent inoculation step is important—it gives the model an opportunity to learn about our target templates and couple this learned knowledge with its general knowledge acquired during pre-training and science training (which is, again, what we aim to probe).
Results and Findings ::: Can Models Be Effectively Inoculated?
Model performance after additional fine-tuning, or inoculation, is shown in the last 3 rows of Table TABREF25, along with learning curves shown in Figure FIGREF29 for a selection of probes and models. In the former case, the performance represents the model (and inoculation amount) with the highest aggregate performance over the old task and new probe. Here we again see the transformer-based models outperform non-transformer models, and that better models correlate with lower inoculation costs. For example, when inoculating on synonymy, the cost for ESIM is around 7% reduced accuracy on its original task, as opposed to $< 1$% and around 1% for BERT and RoBERTa, respectively. This shows the high capacity for transformer models to absorb new tasks with minimal costs, as also observed in BIBREF22 for NLI.
As shown in Figure FIGREF29, transformer models tend to learn most tasks fairly quickly while keeping constant scores on their original tasks (i.e., the flat dashed lines observed in plots 1-4), which gives evidence of high competence. In both cases, add-some inoculation proves to be a cheap and easy way to 1) improve scores on the probing tasks (i.e., the solid black and blue lines in plot 1) and; 2) minimize loss on science (e.g., the blue and black dashed lines in plots 2-4). The opposite is the case for ESIM (plots 5-6); models are generally unable to simultaneously learn individual probes without degrading on their original task, and adding more science data during inoculation confuses models on both tasks.
As shown in Figure FIGREF28, RoBERTa is able to significantly improve performance across most categories even after inoculation with a mere 100 examples (the middle plot), which again provides strong evidence of prior competence. As an example, RoBERTa improves on 2-hop hyponymy inference with random distractors by 18% (from 59% to 77%). After 3k examples, the model has high performance on virtually all categories (the same score increases from 59% to 87%); however, results still tend to degrade as a function of hop and distractor complexity, as discussed above.
Despite the high performance of our transformer models after inoculation, model performance on most probes (with the exception of Definitions) averages around 80% for our best models. This suggests that there is still considerable room for improvement, especially for synonymy and word sense, which is a topic that we discuss more in Section SECREF6.
Results and Findings ::: Are Models Consistent across Clusters?
Table TABREF32 shows cluster-level accuracies for the different WordNetQA probes. As with performance across the different inference/distractor categories, these results are mixed. For some probes, such as definitions, our best models appear to be rather robust; e.g., our RoBERTa model has a cluster accuracy of $75\%$, meaning that it can answer all questions perfectly for 75% of the target concepts and that errors are concentrated on a small minority (25%) of concepts. On synonymy and hypernymy, both BERT and RoBERTa appear robust on the majority of concepts, showing that errors are similarly concentrated. In contrast, our best model on hyponymy has an accuracy of 36%, meaning that its errors are spread across many concepts, thus suggesting less robustness.
Table TABREF30 shows a selection of semantic clusters involving ISA reasoning, as well as the model performance over different answers (shown symbolically) and perturbations. For example, in the second case, the cluster is based around the concept/synset oppose.v.06 and involves 4 inferences and a total of 24 questions (i.e., inferences with perturbations). Our weakest model, ESIM, answers only 5 out of 24 questions correctly, whereas RoBERTa gets 21/24. In the other cases, RoBERTa gets all clusters correct, whereas BERT and ESIM get none of them correct.
We emphasize that these results only provide a crude look into model consistency and robustness. Recalling again the details in Table TABREF12, probes differ in terms of average size of clusters. Hyponymy, in virtue of having many more questions per cluster, might simply be a much more difficult dataset. In addition, such a strict evaluation does not take into account potential errors inside of clusters, which is an important issue that we discuss in the next section. We leave addressing such issues and coming up with more insightful cluster-based metrics for future work.
Discussion and Conclusion
We presented several new challenge datasets and a novel methodology for automatically building such datasets from knowledge graphs and taxonomies. We used these to probe state-of-the-art open-domain QA models (centering around models based on variants of BERT). While our general methodology is amenable to any target knowledge resource or QA model/domain, we focus on probing definitions and ISA knowledge using open-source dictionaries and MCQA models trained in the science domain.
We find, consistent with recent probing studies BIBREF26, that transformer-based models have a remarkable ability to answer questions that involve complex forms of relational knowledge, both with and without explicit exposure to our new target tasks. In the latter case, a newer RoBERTa model trained only on benchmark science tasks is able to outperform several task-specific LSTM-based models trained directly on our probing data. When re-trained on small samples (e.g., 100 examples) of probing data using variations of the lossless inoculation strategy from BIBREF22, RoBERTa is able to master many aspects of our probes with virtually no performance loss on its original QA task.
These positive results suggest that transformer-based models, especially models additionally fine-tuned on small samples of synthetic data, can be used in place of task-specific models used for querying relational knowledge, as has already been done for targeted tasks such as word sense disambiguation BIBREF54. Since models seem to already contain considerable amounts of relational knowledge, our simple inoculation strategy, which tries to nudge models to bring out this knowledge explicitly, could serve as a cheaper alternative to recent attempts to build architectures that explicitly incorporate structured knowledge BIBREF55; we see many areas where our inoculation strategy could be improved for such purposes, including having more complex loss functions that manage old and new information, as well as using techniques that take into account network plasticity BIBREF56.
The main appeal of using automatically generated datasets is the ability to systematically manipulate and control the complexity of target questions, which allows for more controlled experimentation and new forms of evaluation. Despite the positive results described above, results that look directly at the effect of different types of distractors and the complexity of reasoning show that our best models, even after additional fine-tuning, struggle with certain categories of hard distractors and multi-hop inferences. For some probes, our cluster-based analysis also reveals that errors are widespread across concept clusters, suggesting that models are not always consistent and robust. These results, taken together with our findings about the vulnerability of synthetic datasets to systematic biases, suggest that there is much room for improvement and that the positive results should be taken with a grain of salt. Developing better ways to evaluate semantic clusters and model robustness would be a step in this direction.
We emphasize that using synthetic versus naturalistic QA data comes with important trade-offs. While we are able to generate large amounts of systematically controlled data at virtually no cost or need for manual annotation, it is much harder to validate the quality of such data at such a scale and such varying levels of complexity. Conversely, with benchmark QA datasets, it is much harder to perform the type of careful manipulations and cluster-based analyses we report here. While we assume that the expert knowledge we employ, in virtue of being hand-curated by human experts, is generally correct, we know that such resources are fallible and error-prone. Initial crowd-sourcing experiments that look at validating samples of our data show high agreement across probes and that human scores correlate with the model trends across the probe categories. More details of these studies are left for future work.
Introduction
One of the most fundamental topics in natural language processing is how best to derive high-level representations from constituent parts, as natural language meanings are a function of their constituent parts. How best to construct a sentence representation from distributed word embeddings is an example domain of this larger issue. Even though sequential neural models such as recurrent neural networks (RNN) BIBREF0 and their variants including Long Short-Term Memory (LSTM) BIBREF1 and Gated Recurrent Unit (GRU) BIBREF2 have become the de-facto standard for condensing sentence-level information from a sequence of words into a fixed vector, there have been many lines of research towards better sentence representation using other neural architectures, e.g. convolutional neural networks (CNN) BIBREF3 or self-attention based models BIBREF4 .
From a linguistic point of view, the underlying tree structure—as expressed by its constituency and dependency trees—of a sentence is an integral part of its meaning. Inspired by this fact, some recursive neural network (RvNN) models are designed to reflect the syntactic tree structure, achieving impressive results on several sentence-level tasks such as sentiment analysis BIBREF5 , BIBREF6 , machine translation BIBREF7 , natural language inference BIBREF8 , and discourse relation classification BIBREF9 .
However, some recent works BIBREF10 , BIBREF11 have proposed latent tree models, which learn to construct task-specific tree structures without explicit supervision, bringing into question the value of linguistically-motivated recursive neural models. Witnessing the surprising performance of the latent tree models on some sentence-level tasks, there arises a natural question: Are linguistic tree structures the optimal way of composing sentence representations for NLP tasks?
In this paper, we demonstrate that linguistic priors are in fact useful for devising effective neural models for sentence representations, showing that our novel architecture based on constituency trees and their tag information obtains superior performance on several sentence-level tasks, including sentiment analysis and natural language inference.
A chief novelty of our approach is that we introduce a small separate tag-level tree-LSTM to control the composition function of the existing word-level tree-LSTM, which is in charge of extracting helpful syntactic signals for meaningful semantic composition of constituents by considering both the structures and linguistic tags of constituency trees simultaneously. In addition, we demonstrate that applying a typical LSTM to preprocess the leaf nodes of a tree-LSTM greatly improves the performance of the tree models. Moreover, we propose a clustered tag set to replace the existing tags on the assumption that the original syntactic tags are too fined-grained to be useful in neural models.
In short, our contributions in this work are threefold, corresponding to the components described above: structure-aware tag representations produced by a separate tag-level tree-LSTM, a leaf-LSTM that contextualizes the inputs to the word-level tree-LSTM, and a coarse-grained clustered tag set.
Related Work
Recursive neural networks (RvNN) are a kind of neural architecture which model sentences by exploiting syntactic structure. While earlier RvNN models proposed utilizing diverse composition functions, including feed-forward neural networks BIBREF12 , matrix-vector multiplication BIBREF5 , and tensor computation BIBREF6 , tree-LSTMs BIBREF13 remain the standard for several sentence-level tasks.
Even though classic RvNNs have demonstrated superior performance on a variety of tasks, their inflexibility, i.e. their inability to handle dynamic compositionality for different syntactic configurations, is a considerable weakness. For instance, it would be desirable if our model could distinguish e.g. adjective-noun composition from that of verb-noun or preposition-noun composition, as models failing to make such a distinction ignore real-world syntactic considerations such as `-arity' of function words (i.e. types), and the adjunct/argument distinction.
To enable dynamic compositionality in recursive neural networks, many previous works BIBREF14 , BIBREF15 , BIBREF16 , BIBREF9 , BIBREF17 , BIBREF18 , BIBREF19 have proposed various methods.
One main direction of research leverages tag information, which is produced as a by-product of parsing. In detail, BIBREF16 ( BIBREF16 ) suggested TG-RNN, a model employing different composition functions according to POS tags, and TE-RNN/TE-RNTN, models which leverage tag embeddings as additional inputs for the existing tree-structured models. Despite the novelty of utilizing tag information, the explosion of the number of parameters (in the case of the TG-RNN) and the limited performance of the original models (in the case of the TE-RNN/TE-RNTN) have prevented these models from being widely adopted. Meanwhile, BIBREF9 ( BIBREF9 ) and BIBREF18 ( BIBREF18 ) proposed models based on a tree-LSTM which also uses the tag vectors to control the gate functions of the tree-LSTM. In spite of their impressive results, a limitation is that the trained tag embeddings are too simple to reflect the rich information which tags provide in different syntactic structures. To alleviate this problem, we introduce structure-aware tag representations in the next section.
Another way of building dynamic compositionality into RvNNs is to take advantage of a meta-network (or hyper-network). Inspired by recent works on dynamic parameter prediction, DC-TreeLSTMs BIBREF17 dynamically create the parameters for compositional functions in a tree-LSTM. Specifically, the model has two separate tree-LSTM networks whose architectures are similar, but the smaller of the two is utilized to calculate the weights of the bigger one. A possible problem for this model is that it may be trained such that the role of each tree-LSTM becomes ambiguous, since both networks share the same input, i.e. word information. Therefore, we design two disentangled tree-LSTMs in our model so that one focuses on extracting useful features from only syntactic information while the other composes semantic units with the aid of the features. Furthermore, our model reduces the complexity of computation by utilizing typical tree-LSTM frameworks instead of computing the weights for each example.
Finally, some recent works BIBREF10 , BIBREF11 have proposed latent tree-structured models that learn how to formulate tree structures from only sequences of tokens, without the aid of syntactic trees or linguistic information. The latent tree models have the advantage of being able to find the optimized task-specific order of composition rather than a sequential or syntactic one. In experiments, we compare our model with not only syntactic tree-based models but also latent tree models, demonstrating that modeling with explicit linguistic knowledge can be an attractive option.
Model
In this section, we introduce a novel RvNN architecture, called SATA Tree-LSTM (Structure-Aware Tag Augmented Tree-LSTM). This model is similar to typical Tree-LSTMs, but provides dynamic compositionality by augmenting a separate tag-level tree-LSTM which produces structure-aware tag representations for each node in a tree. In other words, our model has two independent tree-structured modules based on the same constituency tree, one of which (word-level tree-LSTM) is responsible for constructing sentence representations given a sequence of words as usual, while the other (tag-level tree-LSTM) provides supplementary syntactic information to the former.
In section 3.1, we first review tree-LSTM architectures. Then in section 3.2, we introduce a tag-level tree-LSTM and structure-aware tag representations. In section 3.3, we discuss an additional technique to boost the performance of tree-structured models, and in section 3.4, we describe the entire architecture of our model in detail.
Tree-LSTM
The LSTM BIBREF1 architecture was first introduced as an extension of the RNN architecture to mitigate the vanishing and exploding gradient problems. In addition, several works have discovered that applying the LSTM cell into tree structures can be an effective means of modeling sentence representations.
To be formal, the composition function of the cell in a tree-LSTM can be formulated as follows:
$$ \begin{bmatrix} \mathbf {i} \\ \mathbf {f}_l \\ \mathbf {f}_r \\ \mathbf {o} \\ \mathbf {g} \end{bmatrix} = \begin{bmatrix} \sigma \\ \sigma \\ \sigma \\ \sigma \\ \tanh \end{bmatrix} \Bigg ( \mathbf {W} \begin{bmatrix} \mathbf {h}_l \\ \mathbf {h}_r \\ \end{bmatrix} + \mathbf {b} \Bigg )$$ (Eq. 8)
$$ \mathbf {c} = \mathbf {f}_l \odot \mathbf {c}_l + \mathbf {f}_r \odot \mathbf {c}_r + \mathbf {i} \odot \mathbf {g}\\$$ (Eq. 9)
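The hidden state then follows the standard output-gated update (presumably the equation referenced below as equation 10):

$$ \mathbf {h} = \mathbf {o} \odot \tanh {(\mathbf {c})}$$ (Eq. 10)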
where $\mathbf {h}, \mathbf {c} \in \mathbb {R}^{d}$ indicate the hidden state and cell state of the LSTM cell, and $\mathbf {h}_l, \mathbf {h}_r, \mathbf {c}_l, \mathbf {c}_r \in \mathbb {R}^{d}$ the hidden states and cell states of a left and right child. $\mathbf {g} \in \mathbb {R}^{d}$ is the newly composed input for the cell and $\mathbf {i}, \mathbf {f}_{l}, \mathbf {f}_{r}, \mathbf {o} \in \mathbb {R}^{d}$ represent an input gate, two forget gates (left, right), and an output gate respectively. $\mathbf {W} \in \mathbb {R}^{5d\times 2d}$ and $\mathbf {b} \in \mathbb {R}^{5d}$ are trainable parameters. $\sigma $ corresponds to the sigmoid function, $\tanh $ to the hyperbolic tangent, and $\odot $ to element-wise multiplication.
Note the equations assume that there are only two children for each node, i.e. binary or binarized trees, following the standard in the literature. While RvNN models can be constructed on any tree structure, in this work we only consider constituency trees as inputs.
In spite of the obvious upside that recursive models have in being so flexible, they are known for being difficult to fully utilize with batch computations as compared to other neural architectures because of the diversity of structure found across sentences. To alleviate this problem, BIBREF8 ( BIBREF8 ) proposed the SPINN model, which brings a shift-reduce algorithm to the tree-LSTM. As SPINN simplifies the process of constructing a tree into only two operations, i.e. shift and reduce, it can support more effective parallel computations while enjoying the advantages of tree structures. For efficiency, our model also starts from our own SPINN re-implementation, whose function is exactly the same as that of the tree-LSTM.
Structure-aware Tag Representation
In most previous works using linguistic tag information BIBREF16 , BIBREF9 , BIBREF18 , tags are usually represented as simple low-dimensional dense vectors, similar to word embeddings. This approach seems reasonable in the case of POS tags that are attached to the corresponding words, but phrase-level constituent tags (e.g. NP, VP, ADJP) vary greatly in size and shape, making them less amenable to uniform treatment. For instance, even the same phrase tags within different syntactic contexts can vary greatly in size and internal structure, as the case of NP tags in Figure 1 shows. Here, the NP consisting of DT[the]-NN[stories] has a different internal structure than the NP consisting of NP[the film 's]-NNS[shortcomings].
One way of deriving structure-aware tag representations from the original tag embeddings is to introduce a separate tag-level tree-LSTM which accepts the typical tag embeddings at each node of a tree and outputs the computed structure-aware tag representations for the nodes. Note that the module concentrates on extracting useful syntactic features by considering only the tags and structures of the trees, excluding word information.
Formally, we denote a tag embedding for the tag attached to each node in a tree as $\textbf {e} \in \mathbb {R}^{d_\text{T}}$ . Then, the function of each cell in the tag tree-LSTM is defined in the following way. Leaf nodes are defined by the following:
$$ \begin{bmatrix} \hat{\mathbf {c}} \\ \hat{\mathbf {h}} \\ \end{bmatrix} = \tanh {\left(\mathbf {U}_\text{T} \mathbf {e} + \mathbf {a}_\text{T}\right)}$$ (Eq. 13)
while non-leaf nodes are defined by the following:
$$ \begin{bmatrix} \hat{\mathbf {i}} \\ \hat{\mathbf {f}}_l \\ \hat{\mathbf {f}}_r \\ \hat{\mathbf {o}} \\ \hat{\mathbf {g}} \end{bmatrix} = \begin{bmatrix} \sigma \\ \sigma \\ \sigma \\ \sigma \\ \tanh \end{bmatrix} \Bigg ( \mathbf {W_\text{T}} \begin{bmatrix} \hat{\mathbf {h}}_l \\ \hat{\mathbf {h}}_r \\ \mathbf {e} \\ \end{bmatrix} + \mathbf {b}_\text{T} \Bigg )$$ (Eq. 14)
$$ \hat{\mathbf {c}} = \hat{\mathbf {f}}_l \odot \hat{\mathbf {c}}_l + \hat{\mathbf {f}}_r \odot \hat{\mathbf {c}}_r + \hat{\mathbf {i}} \odot \hat{\mathbf {g}}\\$$ (Eq. 15)
where $\hat{\mathbf {h}}, \hat{\mathbf {c}} \in \mathbb {R}^{d_\text{T}}$ represent the hidden state and cell state of each node in the tag tree-LSTM. We regard the hidden state ( $\hat{\mathbf {h}}$ ) as a structure-aware tag representation for the node. $ \mathbf {U}_\text{T} \in \mathbb {R}^{2d_\text{T} \times d_\text{T}}, \textbf {a}_\text{T} \in \mathbb {R}^{2d_\text{T}}, \mathbf {W}_\text{T} \in \mathbb {R}^{5d_\text{T} \times 3d_\text{T}}$ , and $\mathbf {b}_\text{T} \in \mathbb {R}^{5d_\text{T}}$ are trainable parameters. The rest of the notation follows equations 8 , 9 , and 10 . In case of leaf nodes, the states are computed by a simple non-linear transformation. Meanwhile, the composition function in a non-leaf node absorbs the tag embedding ( $\mathbf {e}$ ) as an additional input as well as the hidden states of the two children nodes. The benefit of revising tag representations according to the internal structure is that the derived embedding is a function of the corresponding makeup of the node, rather than a monolithic, categorical tag.
With regard to the tags themselves, we conjecture that the taxonomy of the tags currently in use in many NLP systems is too complex to be utilized effectively in deep neural models, considering the specificity of many tag sets and the limited amount of data with which to train. Thus, we cluster POS (word-level) tags into 12 groups following the universal POS tagset BIBREF20 and phrase-level tags into 11 groups according to criteria analogous to the case of words, resulting in 23 tag categories in total. In this work, we use the revised coarse-grained tags instead of the original ones. For more details, we refer readers to the supplemental materials.
Leaf-LSTM
An inherent shortcoming of RvNNs relative to sequential models is that each intermediate representation in a tree is unaware of its external context until all the information is gathered together at the root node. In other words, each composition process is prone to be locally optimized rather than globally optimized.
To mitigate this problem, we propose using a leaf-LSTM following the convention of some previous works BIBREF21 , BIBREF7 , BIBREF11 , which is a typical LSTM that accepts a sequence of words in order. Instead of leveraging word embeddings directly, we can use each hidden state and cell state of the leaf-LSTM as input tokens for leaf nodes in a tree-LSTM, anticipating the proper contextualization of the input sequence.
Formally, we denote a sequence of words in an input sentence as $w_{1:n}$ ( $n$ : the length of the sentence), and the corresponding word embeddings as $\mathbf {x}_{1:n}$ . Then, the operation of the leaf-LSTM at time $t$ can be formulated as,
$$ \begin{bmatrix} \tilde{\mathbf {i}} \\ \tilde{\mathbf {f}} \\ \tilde{\mathbf {o}} \\ \tilde{\mathbf {g}} \end{bmatrix} = \begin{bmatrix} \sigma \\ \sigma \\ \sigma \\ \tanh \end{bmatrix} \Bigg ( \mathbf {W}_\text{L} \begin{bmatrix} \tilde{\mathbf {h}}_{t-1} \\ \mathbf {x}_t \\ \end{bmatrix} + \mathbf {b}_\text{L} \Bigg )$$ (Eq. 18)
$$ \tilde{\mathbf {c}}_t = \tilde{\mathbf {f}} \odot \tilde{\mathbf {c}}_{t-1} + \tilde{\mathbf {i}} \odot \tilde{\mathbf {g}}\\$$ (Eq. 19)
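with the hidden state presumably given by the standard update

$$ \tilde{\mathbf {h}}_t = \tilde{\mathbf {o}} \odot \tanh {(\tilde{\mathbf {c}}_t)}$$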
where $\mathbf {x}_t \in \mathbb {R}^{d_w}$ indicates an input word vector and $\tilde{\mathbf {h}}_t$ , $\tilde{\mathbf {c}}_t \in \mathbb {R}^{d_h}$ represent the hidden and cell state of the LSTM at time $t$ ( $\tilde{\mathbf {h}}_{t-1}$ corresponds to the hidden state at time $t$ -1). $\mathbf {W}_\text{L}$ and $\mathbf {b}_\text{L} $ are learnable parameters. The remaining notation follows that of the tree-LSTM above.
In experiments, we demonstrate that introducing a leaf-LSTM fares better at processing the input words of a tree-LSTM compared to using a feed-forward neural network. We also explore the possibility of a bidirectional setting in an ablation study.
SATA Tree-LSTM
In this section, we define SATA Tree-LSTM (Structure-Aware Tag Augmented Tree-LSTM, see Figure 2 ) which joins a tag-level tree-LSTM (section 3.2), a leaf-LSTM (section 3.3), and the original word tree-LSTM together.
As above we denote a sequence of words in an input sentence as $w_{1:n}$ and the corresponding word embeddings as $\mathbf {x}_{1:n}$ . In addition, a tag embedding for the tag attached to each node in a tree is denoted by $\textbf {e} \in \mathbb {R}^{d_\text{T}}$ . Then, we derive the final sentence representation for the input sentence with our model in two steps.
First, we compute structure-aware tag representations ( $\hat{\mathbf {h}}$ ) for each node of a tree using the tag tree-LSTM (the right side of Figure 2 ) as follows:
$$ \begin{bmatrix} \hat{\mathbf {c}} \\ \hat{\mathbf {h}} \\ \end{bmatrix} = {\left\lbrace \begin{array}{ll} \text{Tag-Tree-LSTM}(\mathbf {e}) & \text{if a leaf node} \\ \text{Tag-Tree-LSTM}(\hat{\mathbf {h}}_l, \hat{\mathbf {h}}_r, \mathbf {e}) & \text{otherwise} \end{array}\right.}$$ (Eq. 23)
where Tag-Tree-LSTM indicates the module we described in section 3.2.
Second, we combine semantic units recursively on the word tree-LSTM in a bottom-up fashion. For leaf nodes, we leverage the Leaf-LSTM (the bottom-left of Figure 2 , explained in section 3.3) to compute $\tilde{\mathbf {c}}_{t}$ and $\tilde{\mathbf {h}}_{t}$ in sequential order, with the corresponding input $\mathbf {x}_t$ .
$$ \begin{bmatrix} \tilde{\mathbf {c}}_{t} \\ \tilde{\mathbf {h}}_{t} \\ \end{bmatrix} = \text{Leaf-LSTM}(\tilde{\textbf {h}}_{t-1}, \textbf {x}_t)$$ (Eq. 24)
Then, the $\tilde{\mathbf {c}}_{t}$ and $\tilde{\mathbf {h}}_{t}$ can be utilized as input tokens to the word tree-LSTM, with the left (right) child of the target node corresponding to the $t$ th word in the input sentence.
$$ \begin{bmatrix} \check{\textbf {c}}_{\lbrace l, r\rbrace } \\ \check{\textbf {h}}_{\lbrace l, r\rbrace } \end{bmatrix} = \begin{bmatrix} \tilde{\textbf {c}}_{t} \\ \tilde{\textbf {h}}_{t} \end{bmatrix}$$ (Eq. 25)
In the non-leaf node case, we calculate phrase representations for each node in the word tree-LSTM (the upper-left of Figure 2 ) recursively as follows:
$$ \check{\mathbf {g}} = \tanh {\left( \mathbf {U}_\text{w} \begin{bmatrix} \check{\mathbf {h}}_l \\ \check{\mathbf {h}}_r \\ \end{bmatrix} + \mathbf {a}_\text{w} \right)}$$ (Eq. 26)
$$ \begin{bmatrix} \check{\mathbf {i}} \\ \check{\mathbf {f}}_l \\ \check{\mathbf {f}}_r \\ \check{\mathbf {o}} \end{bmatrix} = \begin{bmatrix} \sigma \\ \sigma \\ \sigma \\ \sigma \end{bmatrix} \Bigg ( \mathbf {W_\text{w}} \begin{bmatrix} \check{\mathbf {h}}_l \\ \check{\mathbf {h}}_r \\ \hat{\mathbf {h}} \\ \end{bmatrix} + \mathbf {b}_\text{w} \Bigg )$$ (Eq. 27)
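The cell and hidden states are then presumably computed in the same manner as in the basic tree-LSTM above:

$$ \check{\mathbf {c}} = \check{\mathbf {f}}_{l} \odot \check{\mathbf {c}}_{l} + \check{\mathbf {f}}_{r} \odot \check{\mathbf {c}}_{r} + \check{\mathbf {i}} \odot \check{\mathbf {g}}, \qquad \check{\mathbf {h}} = \check{\mathbf {o}} \odot \tanh {(\check{\mathbf {c}})}$$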
where $\check{\mathbf {h}}$ , $\check{\mathbf {c}} \in \mathbb {R}^{d_h}$ represent the hidden and cell state of each node in the word tree-LSTM. $\mathbf {U}_\text{w} \in \mathbb {R}^{d_h \times 2d_h}$ , $\mathbf {W}_\text{w} \in \mathbb {R}^{4d_h \times \left(2d_h+d_\text{T}\right)}$ , $\mathbf {a}_\text{w} \in \mathbb {R}^{d_h}$ , $\mathbf {b}_\text{w} \in \mathbb {R}^{4d_h}$ are learned parameters. The remaining notation follows those of the previous sections. Note that the structure-aware tag representations ( $\hat{\mathbf {h}}$ ) are only utilized to control the gate functions of the word tree-LSTM in the form of additional inputs, and are not involved in the semantic composition ( $\check{\mathbf {g}}$ ) directly.
Finally, the hidden state of the root node ( $\check{\mathbf {h}}_\text{root}$ ) in the word-level tree-LSTM becomes the final sentence representation of the input sentence.
Quantitative Analysis
One of the most basic approaches to evaluate a sentence encoder is to measure the classification performance with the sentence representations made by the encoder. Thus, we conduct experiments on the following five datasets. (Summary statistics for the datasets are reported in the supplemental materials.)
MR: A group of movie reviews with binary (positive / negative) classes. BIBREF22
SST-2: Stanford Sentiment Treebank BIBREF6 . Similar to MR, but each review is provided in the form of a binary parse tree whose nodes are annotated with numeric sentiment values. For SST-2, we only consider binary (positive / negative) classes.
SST-5: Identical to SST-2, but the reviews are grouped into fine-grained (very negative, negative, neutral, positive, very positive) classes.
SUBJ: Sentences grouped as being either subjective or objective (binary classes). BIBREF23
TREC: A dataset which groups questions into six different question types (classes). BIBREF24
As a preprocessing step, we construct parse trees for the sentences in the datasets using the Stanford PCFG parser BIBREF25 . Because syntactic tags are by-products of constituency parsing, we do not need further preprocessing.
To classify the sentence given our sentence representation ( $\check{\mathbf {h}}_\text{root}$ ), we use one fully-connected layer with a ReLU activation, followed by a softmax classifier. The final predicted probability distribution of the class $y$ given the sentence $w_{1:n}$ is defined as follows,
$$\mathbf {s} = \text{ReLU}(\mathbf {W}_\text{s} \check{\mathbf {h}}_\text{root}+ \mathbf {b}_\text{s})$$ (Eq. 37)
$$p(y|w_{1:n}) = \text{softmax}(\mathbf {W}_\text{c}\mathbf {s} + \mathbf {b}_\text{c})$$ (Eq. 38)
where $\textbf {s} \in \mathbb {R}^{d_\text{s}}$ is the computed task-specific sentence representation for the classifier, and $\textbf {W}_\text{s} \in \mathbb {R}^{d_\text{s} \times d_h}$ , $\textbf {W}_\text{c} \in \mathbb {R}^{d_\text{c} \times d_s}$ , $\textbf {b}_\text{s} \in \mathbb {R}^{d_s}$ , $\textbf {b}_\text{c} \in \mathbb {R}^{d_c}$ are trainable parameters. As an objective function, we use the cross entropy of the predicted and true class distributions.
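A minimal sketch of this classification head, with illustrative dimensions (the actual values are task-specific):

```python
import torch
import torch.nn as nn

d_h, d_s, d_c = 300, 300, 5          # illustrative dimensions (e.g., SST-5 has 5 classes)
classifier = nn.Sequential(
    nn.Linear(d_h, d_s), nn.ReLU(),  # Eq. 37
    nn.Linear(d_s, d_c),             # Eq. 38 produces class logits
)
h_root = torch.randn(32, d_h)        # stand-in batch of root representations
labels = torch.randint(0, d_c, (32,))
loss = nn.CrossEntropyLoss()(classifier(h_root), labels)  # softmax + cross entropy objective
```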
The results of the experiments on the five datasets are shown in table 1 . In this table, we report the test accuracy (%) of our model and various other models on each dataset. To account for the effects of random initialization, we report the best results obtained from several runs with fixed hyper-parameters.
Compared with the previous syntactic tree-based models as well as other neural models, our SATA Tree-LSTM shows superior or competitive performance on all tasks. Specifically, our model achieves new state-of-the-art results within the tree-structured model class on 4 out of 5 sentence classification tasks—SST-2, SST-5, MR, and TREC. The model shows its strength, in particular, when the datasets provide phrase-level supervision to facilitate tree structure learning (i.e. SST-2, SST-5). Moreover, the numbers we report for SST-5 and TREC are competitive to the existing state-of-the-art results including ones from structurally pre-trained models such as ELMo BIBREF26 , proving our model's superiority. Note that the SATA Tree-LSTM also outperforms the recent latent tree-based model, indicating that modeling a neural model with explicit linguistic knowledge can be an attractive option.
On the other hand, a remaining concern is that our SATA Tree-LSTM is not robust to random seeds when the size of a dataset is relatively small, since tag embeddings are randomly initialized rather than pre-trained, unlike the word embeddings. This observation suggests that pre-trained tag embeddings are a promising direction for future research.
To estimate the performance of our model beyond the tasks requiring only one sentence at a time, we conduct an experiment on the Stanford Natural Language Inference BIBREF34 dataset, each example of which consists of two sentences, the premise and the hypothesis. Our objective given the data is to predict the correct relationship between the two sentences among three options— contradiction, neutral, or entailment.
We use the siamese architecture to encode both the premise ( $p_{1:m}$ ) and hypothesis ( $h_{1:n}$ ) following the standard of sentence-encoding models in the literature. (Specifically, $p_{1:m}$ is encoded as $\check{\mathbf {h}}_\text{root}^p \in \mathbb {R}^{d_h}$ and $h_{1:n}$ is encoded as $\check{\mathbf {h}}_\text{root}^h \in \mathbb {R}^{d_h}$ with the same encoder.) Then, we leverage some heuristics BIBREF35 , followed by one fully-connected layer with a ReLU activation and a softmax classifier. Specifically,
$$\mathbf {z} = \left[ \check{\mathbf {h}}_\text{root}^p; \check{\mathbf {h}}_\text{root}^h; | \check{\mathbf {h}}_\text{root}^p - \check{\mathbf {h}}_\text{root}^h |; \check{\mathbf {h}}_\text{root}^p \odot \check{\mathbf {h}}_\text{root}^h \right]$$ (Eq. 41)
$$\mathbf {s} = \text{ReLU}(\mathbf {W}_\text{s} \mathbf {z} + \mathbf {b}_\text{s})$$ (Eq. 42)
where $\textbf {z} \in \mathbb {R}^{4d_h}$ , $\textbf {s} \in \mathbb {R}^{d_s}$ are intermediate features for the classifier and $\textbf {W}_\text{s} \in \mathbb {R}^{d_\text{s} \times 4d_h}$ , $\textbf {W}_\text{c} \in \mathbb {R}^{d_\text{c} \times d_s}$ , $\textbf {b}_\text{s} \in \mathbb {R}^{d_s}$ , $\textbf {b}_\text{c} \in \mathbb {R}^{d_c}$ are again trainable parameters.
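The heuristic feature construction of Eq. (41) can be sketched as follows; the dimensionalities and tensors are illustrative:

```python
import torch

def match_features(h_p, h_s):
    """Heuristic matching features of Eq. (41): concatenation, absolute
    difference and element-wise product of premise/hypothesis encodings."""
    return torch.cat([h_p, h_s, (h_p - h_s).abs(), h_p * h_s], dim=-1)

h_p = torch.randn(32, 300)    # premise encodings (stand-ins)
h_s = torch.randn(32, 300)    # hypothesis encodings (stand-ins)
z = match_features(h_p, h_s)  # shape (32, 1200), fed to the ReLU layer and softmax classifier
```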
Our experimental results on the SNLI dataset are shown in table 2 . In this table, we report the test accuracy and the number of trainable parameters for each model. Our SATA Tree-LSTM again demonstrates competitive performance against neural models built on syntactic trees and latent trees, as well as non-tree models. (Latent Syntax Tree-LSTM: BIBREF10 ( BIBREF10 ), Tree-based CNN: BIBREF35 ( BIBREF35 ), Gumbel Tree-LSTM: BIBREF11 ( BIBREF11 ), NSE: BIBREF36 ( BIBREF36 ), Reinforced Self-Attention Network: BIBREF4 ( BIBREF4 ), Residual stacked encoders: BIBREF37 ( BIBREF37 ), BiLSTM with generalized pooling: BIBREF38 ( BIBREF38 ).) Note that the number of learned parameters in our model is also comparable to those of other sophisticated models, showing the efficiency of our model.
Even though our model has proven its mettle, the effect of tag information seems relatively weak in the case of SNLI, which contains a large amount of data compared to the others. One possible explanation is that neural models may learn some syntactic rules from large amounts of text when the text size is large enough, reducing the necessity of external linguistic knowledge. We leave the exploration of the effectiveness of tags relative to data size for future work.
Here we go over the settings common across our models during experimentation. For more task-specific details, refer to the supplemental materials.
For our input embeddings, we used 300-dimensional 840B GloVe vectors BIBREF39 as pre-trained word embeddings, and tag representations were randomly sampled from the uniform distribution over [-0.005, 0.005]. Tag vectors are updated during training, while whether the word embeddings are fine-tuned depends on the task. Our models were trained using the Adam BIBREF40 or Adadelta BIBREF41 optimizer, depending on the task. For regularization, weight decay is added to the loss function (except for SNLI), following BIBREF42 ( BIBREF42 ), and dropout BIBREF43 is applied to the word embeddings and task-specific classifiers. Moreover, batch normalization BIBREF44 is adopted for the classifiers. By default, all the weights in the model are initialized following BIBREF45 ( BIBREF45 ) and the biases are set to 0. The total norm of the gradients of the parameters is clipped so as not to exceed 5 during training.
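A schematic sketch of these shared settings follows; the stand-in model, batch, and weight-decay value are illustrative, and only the gradient-clipping threshold of 5 is taken from the description above.

```python
import torch
import torch.nn as nn

model = nn.Linear(300, 5)  # stand-in for the full SATA Tree-LSTM plus classifier
optimizer = torch.optim.Adadelta(model.parameters(), weight_decay=1e-5)  # or torch.optim.Adam, per task
x, y = torch.randn(32, 300), torch.randint(0, 5, (32,))

optimizer.zero_grad()
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)  # total grad norm clipped to 5
optimizer.step()
```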
Our best models for each dataset were chosen by validation accuracy in cases where a validation set was provided as a part of the dataset. Otherwise, we perform a grid search on probable hyper-parameter settings, or run 10-fold cross-validation in cases where even a test set does not exist.
Ablation Study
In this section, we design an ablation study on the core modules of our model to explore their effectiveness. The dataset used in this experiment is SST-2. To conduct the experiment, we only replace the target module with other candidates while maintaining the other settings. To be specific, we focus on two modules, the leaf-LSTM and structure-aware tag embeddings (tag-level tree-LSTM). In the first case, the leaf-LSTM is replaced with a fully-connected layer with a $\tanh $ activation or Bi-LSTM. In the second case, we replace the structure-aware tag embeddings with naive tag embeddings or do not employ them at all.
The experimental results are depicted in Figure 3 . As the chart shows, our model outperforms all the other options we have considered. In detail, the left part of the chart shows that the leaf-LSTM is the most effective option compared to its competitors. Note that the sequential leaf-LSTM is superior or competitive to the bidirectional leaf-LSTM when both have a comparable number of parameters. We conjecture this may be because a backward LSTM does not add useful knowledge when the structure of the sentence is already known. In conclusion, we use the uni-directional LSTM as the leaf module because of its simplicity and remarkable performance.
Meanwhile, the right part of the figure demonstrates that our newly introduced structure-aware embeddings have a real impact on improving the model performance. Interestingly, employing the naive tag embeddings made no difference in terms of the test accuracy, even though the absolute validation accuracy increased (not reported in the figure). This result supports our assumption that tag information should be considered in the structure.
Qualitative Analysis
In previous sections, we have numerically demonstrated that our model is effective in encouraging useful composition of semantic units. Here, we directly investigate the computed representations for each node of a tree, showing that the remarkable performance of our model is mainly due to the gradual and recursive composition of the intermediate representations on the syntactic structure.
To observe the phrase-level embeddings at a glance, we draw a scatter plot in which a point represents the corresponding intermediate representation. We utilize PCA (Principal Component Analysis) to project the representations into a two-dimensional vector space. As a target parse tree, we reuse the one seen in Figure 1 . The result is shown in Figure 4 .
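A minimal sketch of how such a projection can be produced (the node vectors below are random stand-ins for the actual intermediate representations):

```python
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

node_vecs = np.random.randn(12, 300)              # stand-in for the tree-node representations
coords = PCA(n_components=2).fit_transform(node_vecs)
plt.scatter(coords[:, 0], coords[:, 1])
for idx, (x, y) in enumerate(coords):
    plt.annotate(str(idx), (x, y))                # annotate each point, e.g. with its phrase
plt.show()
```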
From this figure, we confirm that the intermediate representations have a hierarchy in the semantic space, which is very similar to that of the parse tree. In other words, as many tree-structured models pursue, we can see the tendency of constructing the representations from the low-level (the bottom of the figure) to the high-level (the top-left and top-right of the figure), integrating the meaning of the constituents recursively. An interesting thing to note is that the final sentence representation is near that of the phrase `, the stories are quietly moving.' rather than that of `Despite the film's shortcomings', catching the main meaning of the sentence.
Conclusion
We have proposed a novel RvNN architecture to fully utilize linguistic priors. A newly introduced tag-level tree-LSTM demonstrates that it can effectively control the composition function of the corresponding word-level tree-LSTM. In addition, the proper contextualization of the input word vectors results in significant performance improvements on several sentence-level tasks. For future work, we plan to explore a new way of exploiting dependency trees effectively, similar to the case of constituency trees.
Acknowledgments
We thank anonymous reviewers for their constructive and fruitful comments. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF2016M3C4A7952587).
Which baselines did they compare against?
Various tree structured neural networks including variants of Tree-LSTM, Tree-based CNN, RNTN, and non-tree models including variants of LSTMs, CNNs, residual, and self-attention based networks
Introduction
Knowledge Base Question Answering (KBQA) systems answer questions by obtaining information from KB tuples BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . For an input question, these systems typically generate a KB query, which can be executed to retrieve the answers from a KB. Figure 1 illustrates the process used to parse two sample questions in a KBQA system: (a) a single-relation question, which can be answered with a single $<$ head-entity, relation, tail-entity $>$ KB tuple BIBREF6 , BIBREF7 , BIBREF2 ; and (b) a more complex case, where some constraints need to be handled for multiple entities in the question. The KBQA system in the figure performs two key tasks: (1) entity linking, which links $n$ -grams in questions to KB entities, and (2) relation detection, which identifies the KB relation(s) a question refers to.
The main focus of this work is to improve the relation detection subtask and further explore how it can contribute to the KBQA system. Although general relation detection methods are well studied in the NLP community, such studies usually do not take the end task of KBQA into consideration. As a result, there is a significant gap between general relation detection studies and KB-specific relation detection. First, in most general relation detection tasks, the number of target relations is limited, normally smaller than 100. In contrast, in KBQA even a small KB, like Freebase2M BIBREF2 , contains more than 6,000 relation types. Second, relation detection for KBQA often becomes a zero-shot learning task, since some test instances may have unseen relations in the training data. For example, the SimpleQuestions BIBREF2 data set has 14% of the golden test relations not observed in golden training tuples. Third, as shown in Figure 1 (b), for some KBQA tasks like WebQuestions BIBREF0 , we need to predict a chain of relations instead of a single relation. This increases the number of target relation types and the sizes of candidate relation pools, further increasing the difficulty of KB relation detection. Owing to these reasons, KB relation detection is significantly more challenging compared to general relation detection tasks.
This paper improves KB relation detection to cope with the problems mentioned above. First, in order to deal with the unseen relations, we propose to break the relation names into word sequences for question-relation matching. Second, noticing that original relation names can sometimes help to match longer question contexts, we propose to build both relation-level and word-level relation representations. Third, we use deep bidirectional LSTMs (BiLSTMs) to learn different levels of question representations in order to match the different levels of relation information. Finally, we propose a residual learning method for sequence matching, which makes the model training easier and results in more abstract (deeper) question representations, thus improving hierarchical matching.
In order to assess how the proposed improved relation detection could benefit the KBQA end task, we also propose a simple KBQA implementation composed of two-step relation detection. Given an input question and a set of candidate entities retrieved by an entity linker based on the question, our proposed relation detection model plays a key role in the KBQA process: (1) Re-ranking the entity candidates according to whether they connect to highly confident relations detected from the raw question text by the relation detection model. This step is important for dealing with the ambiguities normally present in entity linking results. (2) Finding the core relation (chains) for each topic entity selected from a much smaller candidate entity set after re-ranking. The above steps are followed by an optional constraint detection step, when the question cannot be answered by single relations (e.g., multiple entities in the question). Finally the highest scored query from the above steps is used to query the KB for answers.
Our main contributions include: (i) An improved relation detection model by hierarchical matching between questions and relations with residual learning; (ii) We demonstrate that the improved relation detector enables our simple KBQA system to achieve state-of-the-art results on both single-relation and multi-relation KBQA tasks.
Background: Different Granularity in KB Relations
Previous research BIBREF4 , BIBREF20 formulates KB relation detection as a sequence matching problem. However, while the questions are natural word sequences, how to represent relations as sequences remains a challenging problem. Here we give an overview of two types of relation sequence representations commonly used in previous work.
(1) Relation Name as a Single Token (relation-level). In this case, each relation name is treated as a unique token. The problem with this approach is that it suffers from low relation coverage due to the limited amount of training data, and thus cannot generalize well to a large number of open-domain relations. For example, in Figure 1 , when treating relation names as single tokens, it will be difficult to match the questions to relation names “episodes_written” and “starring_roles” if these names do not appear in training data – their relation embeddings $\mathbf {h}^r$ will be random vectors and thus not comparable to the question embeddings $\mathbf {h}^q$ .
(2) Relation as Word Sequence (word-level). In this case, the relation is treated as a sequence of words from the tokenized relation name. It has better generalization, but suffers from the lack of global information from the original relation names. For example in Figure 1 (b), when doing only word-level matching, it is difficult to rank the target relation “starring_roles” higher compared to the incorrect relation “plays_produced”. This is because the incorrect relation contains word “plays”, which is more similar to the question (containing word “play”) in the embedding space. On the other hand, if the target relation co-occurs with questions related to “tv appearance” in training, by treating the whole relation as a token (i.e. relation id), we could better learn the correspondence between this token and phrases like “tv show” and “play on”.
The two types of relation representation contain different levels of abstraction. As shown in Table 1 , the word-level focuses more on local information (words and short phrases), while the relation-level focuses more on global information (long phrases and skip-grams) but suffers from data sparsity. Since both these levels of granularity have their own pros and cons, we propose a hierarchical matching approach for KB relation detection: for a candidate relation, our approach matches the input question to both word-level and relation-level representations to get the final ranking score. Section "Improved KB Relation Detection" gives the details of our proposed approach.
Improved KB Relation Detection
This section describes our hierarchical sequence matching with residual learning approach for relation detection. In order to match the question to different aspects of a relation (with different abstraction levels), we address the following three problems in learning question/relation representations.
Relation Representations from Different Granularity
We provide our model with both types of relation representation: word-level and relation-level. Therefore, the input relation becomes $\mathbf {r}=\lbrace r^{word}_1,\cdots ,r^{word}_{M_1}\rbrace \cup \lbrace r^{rel}_1,\cdots ,r^{rel}_{M_2}\rbrace $ , where the first $M_1$ tokens are words (e.g. {episode, written}), and the last $M_2$ tokens are relation names, e.g., {episode_written} or {starring_roles, series} (when the target is a chain like in Figure 1 (b)). We transform each token above to its word embedding then use two BiLSTMs (with shared parameters) to get their hidden representations $[\mathbf {B}^{word}_{1:M_1}:\mathbf {B}^{rel}_{1:M_2}]$ (each row vector $\mathbf {\beta }_i$ is the concatenation between forward/backward representations at $i$ ). We initialize the relation sequence LSTMs with the final state representations of the word sequence, as a back-off for unseen relations. We apply one max-pooling on these two sets of vectors and get the final relation representation $\mathbf {h}^r$ .
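A possible PyTorch sketch of this relation encoder, assuming pre-tokenized word-level and relation-level token ids; the module names and pooling details are illustrative:

```python
import torch
import torch.nn as nn

class RelationEncoder(nn.Module):
    """Sketch: a shared BiLSTM over relation words and relation-name tokens,
    followed by max-pooling over all hidden states to obtain h_r."""
    def __init__(self, d_emb, d_h, vocab_size):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_emb)
        self.bilstm = nn.LSTM(d_emb, d_h, bidirectional=True, batch_first=True)

    def forward(self, word_ids, rel_ids):
        # 1) encode the tokenized relation words
        B_word, state = self.bilstm(self.emb(word_ids))
        # 2) encode the relation-name tokens, initialized with the word-level
        #    final state as a back-off for unseen relations
        B_rel, _ = self.bilstm(self.emb(rel_ids), state)
        # 3) max-pool over both sets of hidden vectors to get h_r
        h_r, _ = torch.cat([B_word, B_rel], dim=1).max(dim=1)
        return h_r
```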
Different Abstractions of Questions Representations
From Table 1 , we can see that different parts of a relation could match different contexts of question texts. Usually relation names could match longer phrases in the question and relation words could match short phrases. Yet different words might match phrases of different lengths.
As a result, we hope the question representations could also comprise vectors that summarize various lengths of phrase information (different levels of abstraction), in order to match relation representations of different granularity. We deal with this problem by applying deep BiLSTMs on questions. The first-layer of BiLSTM works on the word embeddings of question words $\mathbf {q}=\lbrace q_1,\cdots ,q_N\rbrace $ and gets hidden representations $\mathbf {\Gamma }^{(1)}_{1:N}=[\mathbf {\gamma }^{(1)}_1;\cdots ;\mathbf {\gamma }^{(1)}_N]$ . The second-layer BiLSTM works on $\mathbf {\Gamma }^{(1)}_{1:N}$ to get the second set of hidden representations $\mathbf {\Gamma }^{(2)}_{1:N}$ . Since the second BiLSTM starts with the hidden vectors from the first layer, intuitively it could learn more general and abstract information compared to the first layer.
Note that the first(second)-layer of question representations does not necessarily correspond to the word(relation)-level relation representations; instead, either layer of question representations could potentially match either level of relation representations. This raises the difficulty of matching between different levels of relation/question representations; the following section gives our proposal to deal with this problem.
Hierarchical Matching between Relation and Question
Now we have question contexts of different lengths encoded in $\mathbf {\Gamma }^{(1)}_{1:N}$ and $\mathbf {\Gamma }^{(2)}_{1:N}$ . Unlike the standard usage of deep BiLSTMs that employs the representations in the final layer for prediction, here we expect that two layers of question representations can be complementary to each other and both should be compared to the relation representation space (Hierarchical Matching). This is important for our task since each relation token can correspond to phrases of different lengths, mainly because of syntactic variations. For example in Table 1 , the relation word written could be matched to either the same single word in the question or a much longer phrase be the writer of.
We could perform the above hierarchical matching by computing the similarity between each layer of $\mathbf {\Gamma }$ and $\mathbf {h}^r$ separately and taking the (weighted) sum of the two scores. However, this does not give a significant improvement (see Table 2 ). Our analysis in Section "Relation Detection Results" shows that this naive method suffers from training difficulty, evidenced by the fact that the converged training loss of this model is much higher than that of a single-layer baseline model. This is mainly because (1) deep BiLSTMs do not guarantee that the two levels of question hidden representations are comparable; the training usually falls into local optima where one layer has good matching scores and the other always has a weight close to 0, and (2) the training of deeper architectures is itself more difficult.
To overcome the above difficulties, we adopt the idea from Residual Networks BIBREF23 for hierarchical matching by adding shortcut connections between two BiLSTM layers. We proposed two ways of such Hierarchical Residual Matching: (1) Connecting each $\mathbf {\gamma }^{(1)}_i$ and $\mathbf {\gamma }^{(2)}_i$ , resulting in a $\mathbf {\gamma }^{\prime }_i=\mathbf {\gamma }^{(1)}_i + \mathbf {\gamma }^{(2)}_i$ for each position $i$ . Then the final question representation $\mathbf {h}^q$ becomes a max-pooling over all $\mathbf {\gamma }^{\prime }_i$ , $1 \le i \le N$ . (2) Applying max-pooling on $\mathbf {\Gamma }^{(1)}_{1:N}$ and $\mathbf {\Gamma }^{(2)}_{1:N}$ to get $\mathbf {h}^{(1)}_{\max }$ and $\mathbf {h}^{(2)}_{\max }$ , respectively, then setting $\mathbf {h}^q = \mathbf {h}^{(1)}_{\max } + \mathbf {h}^{(2)}_{\max }$ . Finally we compute the matching score of $\mathbf {r}$ given $\mathbf {q}$ as $s_{\mathrm {rel}}(\mathbf {r};\mathbf {q}) = \cos (\mathbf {h}^q, \mathbf {h}^r)$ .
Intuitively, the proposed method should benefit from hierarchical training since the second layer is fitting the residues from the first layer of matching, so the two layers of representations are more likely to be complementary to each other. This also ensures the vector spaces of two layers are comparable and makes the second-layer training easier.
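A minimal sketch of the pooled variant of hierarchical residual matching, assuming cosine similarity as the scoring function (as reconstructed above); tensor names are illustrative:

```python
import torch
import torch.nn.functional as F

def residual_match(gamma1, gamma2, h_r):
    """Pooled hierarchical residual matching: max-pool each BiLSTM layer over
    time, sum the pooled vectors, and score the relation by cosine similarity."""
    h1 = gamma1.max(dim=1).values   # (batch, d) pooled first-layer question states
    h2 = gamma2.max(dim=1).values   # (batch, d) pooled second-layer question states
    h_q = h1 + h2                   # shortcut (residual) combination
    return F.cosine_similarity(h_q, h_r, dim=-1)

gamma1, gamma2 = torch.randn(8, 20, 200), torch.randn(8, 20, 200)
h_r = torch.randn(8, 200)
print(residual_match(gamma1, gamma2, h_r).shape)  # (8,) one score per question-relation pair
```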
During training we adopt a ranking loss to maximize the margin between the gold relation $\mathbf {r}^+$ and other relations $\mathbf {r}^-$ in the candidate pool $R$ .
$$l_{\mathrm {rel}} = \max \lbrace 0, \gamma - s_{\mathrm {rel}}(\mathbf {r}^+; \mathbf {q}) + s_{\mathrm {rel}}(\mathbf {r}^-; \mathbf {q})\rbrace \nonumber $$ (Eq. 12)
where $\gamma $ is a constant parameter. Fig 2 summarizes the above Hierarchical Residual BiLSTM (HR-BiLSTM) model.
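Eq. (12) can be sketched as follows; the margin value is illustrative:

```python
import torch

def relation_ranking_loss(score_pos, score_neg, gamma=0.5):
    """Hinge-style ranking loss of Eq. (12): push the gold relation's score
    above each negative candidate's score by at least the margin gamma."""
    return torch.clamp(gamma - score_pos + score_neg, min=0).mean()

pos = torch.tensor([0.9, 0.8])   # scores of gold relations (stand-ins)
neg = torch.tensor([0.6, 0.7])   # scores of sampled negative relations
print(relation_ranking_loss(pos, neg))
```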
Another way to perform hierarchical matching is to rely on an attention mechanism, e.g. BIBREF24 , to find the correspondence between different levels of representations. This performs worse than the HR-BiLSTM (see Table 2 ).
KBQA Enhanced by Relation Detection
This section describes our KBQA pipeline system. We make minimal efforts beyond the training of the relation detection model, making the whole system easy to build.
Following previous work BIBREF4 , BIBREF5 , our KBQA system takes an existing entity linker to produce the top- $K$ linked entities, $EL_K(q)$ , for a question $q$ (“initial entity linking”). Then we generate the KB queries for $q$ following the four steps illustrated in Algorithm "KBQA Enhanced by Relation Detection" .
Algorithm: KBQA with two-step relation detection. Input: question $q$ and the top- $K$ entity candidates $EL_K(q)$ from the initial entity linker. Output: top query tuple $(\hat{e},\hat{r}, \lbrace (c, r_c)\rbrace )$ .
1. Entity Re-Ranking (first-step relation detection): use the raw question text as input for a relation detector to score all relations in the KB that are associated to the entities in $EL_K(q)$ ; use the relation scores to re-rank $EL_K(q)$ and generate a shorter list $EL^{\prime }_{K^{\prime }}(q)$ containing the top- $K^{\prime }$ entity candidates (Section "Entity Re-Ranking" ).
2. Relation Detection: detect relation(s) using the reformatted question text in which the topic entity is replaced by a special token $<$ e $>$ (Section "Relation Detection" ).
3. Query Generation: combine the scores from steps 1 and 2, and select the top pair $(\hat{e},\hat{r})$ (Section "Query Generation" ).
4. Constraint Detection (optional): compute the similarity between $q$ and any neighbor entity $c$ of the entities along $(\hat{e},\hat{r})$ (connected by a relation $r_c$ ), and add the high-scoring $c$ and $r_c$ to the query (Section "Constraint Detection" ).
Compared to previous approaches, the main difference is that we have an additional entity re-ranking step after the initial entity linking. We have this step because we have observed that entity linking sometimes becomes a bottleneck in KBQA systems. For example, on SimpleQuestions the best reported linker could only get 72.7% top-1 accuracy on identifying topic entities. This is usually due to the ambiguities of entity names, e.g. in Fig 1 (a), there are a TV writer and a baseball player named “Mike Kelley”, who are impossible to distinguish with entity name matching alone.
Having observed that different entity candidates usually connect to different relations, here we propose to help entity disambiguation in the initial entity linking with relations detected in questions.
Sections "Entity Re-Ranking" and "Relation Detection" elaborate how our relation detection help to re-rank entities in the initial entity linking, and then those re-ranked entities enable more accurate relation detection. The KBQA end task, as a result, benefits from this process.
Entity Re-Ranking
In this step, we use the raw question text as input for a relation detector to score all relations in the KB with connections to at least one of the entity candidates in $EL_K(q)$ . We call this step relation detection on entity set since it does not work on a single topic entity as in the usual setting. We use the HR-BiLSTM as described in Sec. "Improved KB Relation Detection" . For each question $q$ , after generating a score $s_{rel}(r;q)$ for each relation using HR-BiLSTM, we use the top $l$ best scoring relations ( $R^{l}_q$ ) to re-rank the original entity candidates. Concretely, for each entity $e$ and its associated relations $R_e$ , given the original entity linker score $s_{linker}$ , and the score of the most confident relation $r\in R_q^{l} \cap R_e$ , we sum these two scores to re-rank the entities:
$$s_{\mathrm {rerank}}(e;q) =& \alpha \cdot s_{\mathrm {linker}}(e;q) \nonumber \\ + & (1-\alpha ) \cdot \max _{r \in R_q^{l} \cap R_e} s_{\mathrm {rel}}(r;q).\nonumber $$ (Eq. 15)
Finally, we select the top $K^{\prime } < K$ entities according to the score $s_{\mathrm {rerank}}$ to form the re-ranked list $EL_{K^{\prime }}^{\prime }(q)$ .
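A plain-Python sketch of this re-ranking step; the data structures, weights, and the fallback for entities with no overlapping relation are illustrative assumptions rather than the exact implementation:

```python
def rerank_entities(candidates, top_relations, alpha=0.6, k_prime=3):
    """Sketch of entity re-ranking (Eq. 15). `candidates` maps an entity to
    (s_linker, {relation: s_rel}); entities with no relation among the top
    detected ones are assumed to keep only their weighted linker score."""
    scored = []
    for e, (s_linker, rel_scores) in candidates.items():
        overlap = [s for r, s in rel_scores.items() if r in top_relations]
        s = alpha * s_linker + (1 - alpha) * max(overlap) if overlap else alpha * s_linker
        scored.append((s, e))
    scored.sort(key=lambda x: x[0], reverse=True)
    return [e for _, e in scored[:k_prime]]

candidates = {
    "mike_kelley_tv_writer": (0.7, {"episodes_written": 0.9, "profession": 0.8}),
    "mike_kelley_baseball": (0.8, {"batting_stats": 0.1}),
}
print(rerank_entities(candidates, top_relations={"episodes_written", "profession"}))
```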
We use the same example in Fig 1 (a) to illustrate the idea. Given the input question in the example, a relation detector is very likely to assign high scores to relations such as “episodes_written”, “author_of” and “profession”. Then, according to the connections of entity candidates in KB, we find that the TV writer “Mike Kelley” will be scored higher than the baseball player “Mike Kelley”, because the former has the relations “episodes_written” and “profession”. This method can be viewed as exploiting entity-relation collocation for entity linking.
Relation Detection
In this step, for each candidate entity $e \in EL_K^{\prime }(q)$ , we use the question text as the input to a relation detector to score all the relations $r \in R_e$ that are associated to the entity $e$ in the KB. Because we have a single topic entity input in this step, we do the following question reformatting: we replace the candidate $e$ 's entity mention in $q$ with a token “ $<$ e $>$ ”. This helps the model better distinguish the relative position of each word compared to the entity. We use the HR-BiLSTM model to predict the score of each relation $r \in R_e$ : $s_{rel} (r;e,q)$ .
Query Generation
Finally, the system outputs the $<$ entity, relation (or core-chain) $>$ pair $(\hat{e}, \hat{r})$ according to:
$$s(\hat{e}, \hat{r}; q) =& \max _{e \in EL_{K^{\prime }}^{^{\prime }}(q), r \in R_e} \left( \beta \cdot s_{\mathrm {rerank}}(e;q) \right. \nonumber \\ &\left.+ (1-\beta ) \cdot s_{\mathrm {rel}} (r;e,q) \right), \nonumber $$ (Eq. 19)
where $\beta $ is a hyperparameter to be tuned.
Constraint Detection
Similar to BIBREF4 , we adopt an additional constraint detection step based on text matching. Our method can be viewed as entity-linking on a KB sub-graph. It contains two steps: (1) Sub-graph generation: given the top scored query generated by the previous 3 steps, for each node $v$ (answer node or the CVT node like in Figure 1 (b)), we collect all the nodes $c$ connected to $v$ by any relation $r_c$ , and generate a sub-graph associated with the original query. (2) Entity-linking on sub-graph nodes: we compute a matching score between each $n$ -gram in the input question (without overlapping the topic entity) and the entity name of $c$ (except for the node in the original query) by taking into account the maximum overlapping sequence of characters between them (see Appendix A for details and B for special rules dealing with date/answer type constraints). If the matching score is larger than a threshold $\theta $ (tuned on training set), we will add the constraint entity $c$ (and $r_c$ ) to the query by attaching it to the corresponding node $v$ on the core-chain.
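A sketch of the character-overlap matcher described above; the normalization by the entity-name length is an assumption (the exact scoring is given in Appendix A), and the example strings are illustrative:

```python
from difflib import SequenceMatcher

def constraint_match_score(ngram, entity_name):
    """Score a question n-gram against a candidate constraint entity's name by
    the length of their longest common character sequence, normalized by the
    entity name length (normalization assumed)."""
    m = SequenceMatcher(None, ngram.lower(), entity_name.lower())
    longest = m.find_longest_match(0, len(ngram), 0, len(entity_name)).size
    return longest / max(len(entity_name), 1)

# constraints whose score exceeds the tuned threshold theta are attached to the query
print(constraint_match_score("plays in family guy", "Family Guy"))
```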
Experiments
Task Introduction & Settings
We use the SimpleQuestions BIBREF2 and WebQSP BIBREF25 datasets. Each question in these datasets is labeled with the gold semantic parse. Hence we can directly evaluate relation detection performance independently as well as evaluate on the KBQA end task.
SimpleQuestions (SQ): It is a single-relation KBQA task. The KB we use consists of a Freebase subset with 2M entities (FB2M) BIBREF2 , in order to compare with previous research. Yin et al. (2016) also evaluated their relation extractor on this data set and released their proposed question-relation pairs, so we run our relation detection model on their data set. For the KBQA evaluation, we also start with their entity linking results. Therefore, our results can be compared with their reported results on both tasks.
WebQSP (WQ): A multi-relation KBQA task. We use the entire Freebase KB for evaluation purposes. Following Yih et al. (2016), we use S-MART BIBREF26 entity-linking outputs. In order to evaluate the relation detection models, we create a new relation detection task from the WebQSP data set. For each question and its labeled semantic parse: (1) we first select the topic entity from the parse; and then (2) select all the relations and relation chains (length $\le $ 2) connected to the topic entity, and set the core-chain labeled in the parse as the positive label and all the others as the negative examples.
We tune the following hyper-parameters on development sets: (1) the size of hidden states for LSTMs ({50, 100, 200, 400}); (2) learning rate ({0.1, 0.5, 1.0, 2.0}); (3) whether the shortcut connections are between hidden states or between max-pooling results (see Section "Hierarchical Matching between Relation and Question" ); and (4) the number of training epochs.
For both the relation detection experiments and the second-step relation detection in KBQA, we have entity replacement first (see Section "Relation Detection" and Figure 1 ). All word vectors are initialized with 300- $d$ pretrained word embeddings BIBREF27 . The embeddings of relation names are randomly initialized, since existing pre-trained relation embeddings (e.g. TransE) usually support limited sets of relation names. We leave the usage of pre-trained relation embeddings to future work.
Relation Detection Results
Table 2 shows the results on two relation detection tasks. The AMPCNN result is from BIBREF20 , which yielded state-of-the-art scores by outperforming several attention-based methods. We re-implemented the BiCNN model from BIBREF4 , where both questions and relations are represented with the word hash trick on character tri-grams. The baseline BiLSTM with relation word sequence appears to be the best baseline on WebQSP and is close to the previous best result of AMPCNN on SimpleQuestions. Our proposed HR-BiLSTM outperformed the best baselines on both tasks by margins of 2-3% (p $<$ 0.001 and 0.01 compared to the best baseline BiLSTM w/ words on SQ and WQ respectively).
Note that using only relation names instead of words results in a weaker baseline BiLSTM model. The model yields a significant performance drop on SimpleQuestions (91.2% to 88.9%). However, the drop is much smaller on WebQSP, and it suggests that unseen relations have a much bigger impact on SimpleQuestions.
The bottom of Table 2 shows ablation results of the proposed HR-BiLSTM. First, hierarchical matching between questions and both relation names and relation words yields improvement on both datasets, especially for SimpleQuestions (93.3% vs. 91.2/88.8%). Second, residual learning helps hierarchical matching compared to weighted-sum and attention-based baselines (see Section "Hierarchical Matching between Relation and Question" ). For the attention-based baseline, we tried the model from BIBREF24 and its one-way variations, where the one-way model gives better results. Note that residual learning significantly helps on WebQSP (80.65% to 82.53%), while it does not help as much on SimpleQuestions. On SimpleQuestions, even removing the deep layers only causes a small drop in performance. WebQSP benefits more from residual and deeper architecture, possibly because in this dataset it is more important to handle larger scope of context matching.
Finally, on WebQSP, replacing BiLSTM with CNN in our hierarchical matching framework results in a large performance drop. Yet on SimpleQuestions the gap is much smaller. We believe this is because the LSTM relation encoder can better learn the composition of chains of relations in WebQSP, as it is better at dealing with longer dependencies.
Next, we present empirical evidence showing why our HR-BiLSTM model achieves the best scores. We use WebQSP for analysis purposes. First, we have the hypothesis that training of the weighted-sum model usually falls into local optima, since deep BiLSTMs do not guarantee that the two levels of question hidden representations are comparable. This is evidenced by the fact that during training one layer usually gets a weight close to 0 and is thus ignored. For example, one run gives us weights of -75.39/0.14 for the two layers (we take the exponential for the final weighted sum). It also gives much lower training accuracy (91.94%) compared to HR-BiLSTM (95.67%), suffering from training difficulty.
Second, compared to our deep BiLSTM with shortcut connections, we have the hypothesis that for KB relation detection, training deep BiLSTMs is more difficult without shortcut connections. Our experiments suggest that a deeper BiLSTM does not always result in better training accuracy. In the experiments a two-layer BiLSTM converges to 94.99%, even lower than the 95.25% achieved by a single-layer BiLSTM. Since under our setting the two-layer model contains the single-layer model as a special case (so it could potentially fit the training data better), this result suggests that the deep BiLSTM without shortcut connections might suffer more from training difficulty.
Finally, we hypothesize that HR-BiLSTM is more than a combination of two BiLSTMs with residual connections, because it encourages the hierarchical architecture to learn different levels of abstraction. To verify this, we replace the deep BiLSTM question encoder with two single-layer BiLSTMs (both on words) with shortcut connections between their hidden states. This decreases test accuracy to 76.11%. It gives similar training accuracy compared to HR-BiLSTM, indicating a more serious over-fitting problem. This demonstrates that the residual and deep structures both contribute to the good performance of HR-BiLSTM.
KBQA End-Task Results
Table 3 compares our system with two published baselines (1) STAGG BIBREF4 , the state-of-the-art on WebQSP and (2) AMPCNN BIBREF20 , the state-of-the-art on SimpleQuestions. Since these two baselines are specially designed/tuned for one particular dataset, they do not generalize well when applied to the other dataset. In order to highlight the effect of different relation detection models on the KBQA end-task, we also implemented another baseline that uses our KBQA system but replaces HR-BiLSTM with our implementation of AMPCNN (for SimpleQuestions) or the char-3-gram BiCNN (for WebQSP) relation detectors (second block in Table 3 ).
Compared to the baseline relation detector (3rd row of results), our method, which includes an improved relation detector (HR-BiLSTM), improves the KBQA end task by 2-3% (4th row). Note that in contrast to previous KBQA systems, our system does not use joint-inference or feature-based re-ranking step, nevertheless it still achieves better or comparable results to the state-of-the-art.
The third block of the table details two ablation tests for the proposed components in our KBQA systems: (1) Removing the entity re-ranking step significantly decreases the scores. Since the re-ranking step relies on the relation detection models, this shows that our HR-BiLSTM model contributes to the good performance in multiple ways. Appendix C gives the detailed performance of the re-ranking step. (2) In contrast to the conclusion in BIBREF4 , constraint detection is crucial for our system. This is probably because our joint performance on topic entity and core-chain detection is more accurate (77.5% top-1 accuracy), leaving a huge potential (77.5% vs. 58.0%) for the constraint detection module to improve.
Finally, like STAGG, which uses multiple relation detectors (see Yih et al. (2015) for the three models used), we also try to use the top-3 relation detectors from Section "Relation Detection Results" . As shown on the last row of Table 3 , this gives a significant performance boost, resulting in a new state-of-the-art result on SimpleQuestions and a result comparable to the state-of-the-art on WebQSP.
Conclusion
KB relation detection is a key step in KBQA and is significantly different from general relation extraction tasks. We propose a novel KB relation detection model, HR-BiLSTM, that performs hierarchical matching between questions and KB relations. Our model outperforms the previous methods on KB relation detection tasks and allows our KBQA system to achieve state-of-the-art results. For future work, we will investigate the integration of our HR-BiLSTM into end-to-end systems. For example, our model could be integrated into the decoder in BIBREF31 , to provide better sequence prediction. We will also investigate new emerging datasets like GraphQuestions BIBREF32 and ComplexQuestions BIBREF30 to handle more characteristics of general QA.
What is te core component for KBQA?
answer questions by obtaining information from KB tuples
Introduction
Distributed word representations, commonly referred to as word embeddings BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , serve as elementary building blocks in the course of algorithm design for an expanding range of applications in natural language processing (NLP), including named entity recognition BIBREF4 , BIBREF5 , parsing BIBREF6 , sentiment analysis BIBREF7 , BIBREF8 , and word-sense disambiguation BIBREF9 . Although the empirical utility of word embeddings as an unsupervised method for capturing the semantic or syntactic features of a certain word as it is used in a given lexical resource is well-established BIBREF10 , BIBREF11 , BIBREF12 , an understanding of what these features mean remains an open problem BIBREF13 , BIBREF14 and as such word embeddings mostly remain a black box. It is desirable to be able to develop insight into this black box and be able to interpret what it means, while retaining the utility of word embeddings as semantically-rich intermediate representations. Other than the intrinsic value of this insight, this would not only allow us to explain and understand how algorithms work BIBREF15 , but also set a ground that would facilitate the design of new algorithms in a more deliberate way.
Recent approaches to generating word embeddings (e.g. BIBREF0 , BIBREF2 ) are rooted linguistically in the field of distributed semantics BIBREF16 , where words are taken to assume meaning mainly by their degree of interaction (or lack thereof) with other words in the lexicon BIBREF17 , BIBREF18 . Under this paradigm, dense, continuous vector representations are learned in an unsupervised manner from a large corpus, using the word cooccurrence statistics directly or indirectly, and such an approach is shown to result in vector representations that mathematically capture various semantic and syntactic relations between words BIBREF0 , BIBREF2 , BIBREF3 . However, the dense nature of the learned embeddings obfuscate the distinct concepts encoded in the different dimensions, which renders the resulting vectors virtually uninterpretable. The learned embeddings make sense only in relation to each other and their specific dimensions do not carry explicit information that can be interpreted. However, being able to interpret a word embedding would illuminate the semantic concepts implicitly represented along the various dimensions of the embedding, and reveal its hidden semantic structures.
In the literature, researchers tackled interpretability problem of the word embeddings using different approaches. Several researchers BIBREF19 , BIBREF20 , BIBREF21 proposed algorithms based on non-negative matrix factorization (NMF) applied to cooccurrence variant matrices. Other researchers suggested to obtain interpretable word vectors from existing uninterpretable word vectors by applying sparse coding BIBREF22 , BIBREF23 , by training a sparse auto-encoder to transform the embedding space BIBREF24 , by rotating the original embeddings BIBREF25 , BIBREF26 or by applying transformations based on external semantic datasets BIBREF27 .
Although the above-mentioned approaches provide better interpretability as measured by a particular method such as the word intrusion test, the improved interpretability usually comes at a cost in performance on benchmark tests such as word similarity or word analogy. One possible explanation for this performance decrease is that the proposed transformations from the original embedding space distort the underlying semantic structure constructed by the original embedding algorithm. Therefore, it can be claimed that a method that learns dense and interpretable word embeddings without inflicting any damage to the underlying semantic learning mechanism is the key to achieving both high-performing and interpretable word embeddings.
Especially after the introduction of the word2vec algorithm by Mikolov BIBREF0 , BIBREF1 , there has been a growing interest in algorithms that generate improved word representations under some performance metric. Significant effort is spent on appropriately modifying the objective functions of the algorithms in order to incorporate knowledge from external resources, with the purpose of increasing the performance of the resulting word representations BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 , BIBREF37 . Inspired by the line of work reported in these studies, we propose to use modified objective functions for a different purpose: learning more interpretable dense word embeddings. By doing this, we aim to incorporate semantic information from an external lexical resource into the word embedding so that the embedding dimensions are aligned along predefined concepts. This alignment is achieved by introducing a modification to the embedding learning process. In our proposed method, which is built on top of the GloVe algorithm BIBREF2 , the cost function for any one of the words of concept word-groups is modified by the introduction of an additive term to the cost function. Each embedding vector dimension is first associated with a concept. For a word belonging to any one of the word-groups representing these concepts, the modified cost term favors an increase for the value of this word's embedding vector dimension corresponding to the concept that the particular word belongs to. For words that do not belong to any one of the word-groups, the cost term is left untouched. Specifically, Roget's Thesaurus BIBREF38 , BIBREF39 is used to derive the concepts and concept word-groups to be used as the external lexical resource for our proposed method. We quantitatively demonstrate the increase in interpretability by using the measure given in BIBREF27 , BIBREF40 as well as demonstrating qualitative results. We also show that the semantic structure of the original embedding has not been harmed in the process since there is no performance loss with standard word-similarity or word-analogy tests.
The paper is organized as follows. In Section SECREF2 , we discuss previous studies related to our work under two main categories: interpretability of word embeddings and joint-learning frameworks where the objective function is modified. In Section SECREF3 , we present the problem framework and provide the formulation within the GloVe BIBREF2 algorithm setting. In Section SECREF4 where our approach is proposed, we motivate and develop a modification to the original objective function with the aim of increasing representation interpretability. In Section SECREF5 , experimental results are provided and the proposed method is quantitatively and qualitatively evaluated. Additionally, in Section SECREF5 , results demonstrating the extent to which the original semantic structure of the embedding space is affected are presented by using word-analogy and word-similarity tests. We conclude the paper in Section SECREF6 .
Related Work
Methodologically, our work is related to prior studies that aim to obtain “improved” word embeddings using external lexical resources, under some performance metric. Previous work in this area can be divided into two main categories: works that i) modify the word embedding learning algorithm to incorporate lexical information, ii) operate on pre-trained embeddings with a post-processing step.
Among works that follow the first approach, BIBREF28 extend the Skip-Gram model by incorporating the word similarity relations extracted from the Paraphrase Database (PPDB) and WordNet BIBREF29 , into the Skip-Gram predictive model as an additional cost term. In BIBREF30 , the authors extend the CBOW model by considering two types of semantic information, termed relational and categorical, to be incorporated into the embeddings during training. For the former type of semantic information, the authors propose the learning of explicit vectors for the different relations extracted from a semantic lexicon such that the word pairs that satisfy the same relation are distributed more homogeneously. For the latter, the authors modify the learning objective such that some weighted average distance is minimized for words under the same semantic category. In BIBREF31 , the authors represent the synonymy and hypernymy-hyponymy relations in terms of inequality constraints, where the pairwise similarity rankings over word triplets are forced to follow an order extracted from a lexical resource. Following their extraction from WordNet, the authors impose these constraints in the form of an additive cost term to the Skip-Gram formulation. Finally, BIBREF32 builds on top of the GloVe algorithm by introducing a regularization term to the objective function that encourages the vector representations of similar words as dictated by WordNet to be similar as well.
Turning our attention to the post-processing approach for enriching word embeddings with external lexical knowledge, BIBREF33 has introduced the retrofitting algorithm that acts on pre-trained embeddings such as Skip-Gram or GloVe. The authors propose an objective function that aims to balance out the semantic information captured in the pre-trained embeddings with the constraints derived from lexical resources such as WordNet, PPDB and FrameNet. One of the models proposed in BIBREF34 extends the retrofitting approach to incorporate the word sense information from WordNet. Similarly, BIBREF35 creates multi-sense embeddings by gathering the word sense information from a lexical resource and learning to decompose the pre-trained embeddings into a convex combination of sense embeddings. In BIBREF36 , the authors focus on improving word embeddings for capturing word similarity, as opposed to mere relatedness. To this end, they introduce the counter-fitting technique which acts on the input word vectors such that synonymous words are attracted to one another whereas antonymous words are repelled, where the synonymy-antonymy relations are extracted from a lexical resource. More recently, the ATTRACT-REPEL algorithm proposed by BIBREF37 improves on counter-fitting by a formulation which imparts the word vectors with external lexical information in mini-batches.
Most of the studies discussed above ( BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF36 , BIBREF37 ) report performance improvements in benchmark tests such as word similarity or word analogy, while BIBREF29 uses a different analysis method (mean reciprocal rank). In sum, the literature is rich with studies aiming to obtain word embeddings that perform better under specific performance metrics. However, less attention has been directed to the issue of interpretability of the word embeddings. In the literature, the problem of interpretability has been tackled using different approaches. BIBREF19 proposed non-negative matrix factorization (NMF) for learning sparse, interpretable word vectors from co-occurrence variant matrices, where the resulting vector space is called non-negative sparse embeddings (NNSE). However, since NMF methods require maintaining a global matrix for learning, they suffer from memory and scaling issues. This problem has been addressed in BIBREF20 where an online method of learning interpretable word embeddings from corpora using a modified version of the skip-gram model BIBREF0 is proposed. As a different approach, BIBREF21 combined text-based similarity information among words with brain activity based similarity information to improve interpretability using joint non-negative sparse embedding (JNNSE).
A common alternative approach for learning interpretable embeddings is to learn transformations that map pre-trained state-of-the-art embeddings to new interpretable semantic spaces. To obtain sparse, higher dimensional and more interpretable vector spaces, BIBREF22 and BIBREF23 use sparse coding on conventional dense word embeddings. However, these methods learn the projection vectors that are used for the transformation from the word embeddings without supervision. For this reason, labels describing the corresponding semantic categories cannot be provided. An alternative approach was proposed in BIBREF25 , where orthogonal transformations were utilized to increase interpretability while preserving the performance of the underlying embedding. However, BIBREF25 has also shown that the total interpretability of an embedding is kept constant under any orthogonal transformation and can only be redistributed across the dimensions. Rotation algorithms based on exploratory factor analysis (EFA), which preserve the performance of the original word embeddings while improving their interpretability, were proposed in BIBREF26 . BIBREF24 proposed to deploy a sparse auto-encoder using pre-trained dense word embeddings to improve interpretability. More detailed investigation of the semantic structure and interpretability of word embeddings can be found in BIBREF27 , where a metric was proposed to quantitatively measure the degree of interpretability already present in the embedding vector spaces.
Except for BIBREF21 , BIBREF27 and our proposed method, the previous works on interpretability mentioned above do not need external resources, which has both advantages and disadvantages: methods that do not use external resources require fewer resources, but they also lack the aid of the information extracted from such resources.
Problem Description
For the task of unsupervised word embedding extraction, we operate on a discrete collection of lexical units (words) that is part of an input corpus with a given number of tokens, sourced from a vocabulary of size $V$ . In the setting of distributional semantics, the objective of a word embedding algorithm is to maximize some aggregate utility over the entire corpus so that some measure of “closeness” is maximized for pairs of vector representations of words which, on average, appear in proximity to one another. In the GloVe algorithm BIBREF2 , which we base our improvements upon, the following objective function is considered: $$J = \sum _{i,j=1}^{V} f(X_{ij}) \left( \mathbf {w}_i^T \tilde{\mathbf {w}}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2$$ (Eq. 6)
In ( EQREF6 ), $\mathbf {w}_i$ and $\tilde{\mathbf {w}}_j$ stand for the word and context vector representations, respectively, for words $i$ and $j$ , while $X_{ij}$ represents the (possibly weighted) cooccurrence count for the word pair $(i,j)$ . Intuitively, ( EQREF6 ) represents the requirement that if some word $j$ occurs often enough in the context (or vicinity) of another word $i$ , then the corresponding word representations should have a large enough inner product in keeping with their large $X_{ij}$ value, up to the bias terms $b_i$ and $\tilde{b}_j$ ; and vice versa. $f(X_{ij})$ in ( EQREF6 ) is used as a discounting factor that prohibits rare cooccurrences from disproportionately influencing the resulting embeddings.
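As a sketch, the per-cooccurrence GloVe cost can be written as follows; the weighting function f uses the standard parameters from the original GloVe paper, which are not restated in this text and are therefore assumptions here:

```python
import numpy as np

def glove_loss(w_i, w_j, b_i, b_j, x_ij, x_max=100.0, alpha=0.75):
    """Per-cooccurrence GloVe cost (Eq. 6); f is the usual capped power weighting."""
    f = (x_ij / x_max) ** alpha if x_ij < x_max else 1.0
    return f * (w_i @ w_j + b_i + b_j - np.log(x_ij)) ** 2

w_i, w_j = np.random.randn(300), np.random.randn(300)  # word and context vectors (stand-ins)
print(glove_loss(w_i, w_j, 0.0, 0.0, x_ij=25.0))
```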
The objective ( EQREF6 ) is minimized using stochastic gradient descent by iterating over the matrix of cooccurrence records $X_{ij}$ . In the GloVe algorithm, for a given word $i$ , the final word representation is taken to be the average of the two intermediate vector representations obtained from ( EQREF6 ); i.e., $(\mathbf {w}_i + \tilde{\mathbf {w}}_i)/2$ . In the next section, we detail the enhancements made to ( EQREF6 ) for the purposes of enhanced interpretability, using the aforementioned framework as our basis.
Imparting Interpretability
Our approach falls into a joint-learning framework where the distributional information extracted from the corpus is allowed to fuse with the external, lexicon-based information. Word-groups extracted from Roget's Thesaurus are directly mapped to individual dimensions of the word embeddings. Specifically, the vector representations of words that belong to a particular group are encouraged to have deliberately increased values in the particular dimension that corresponds to the word-group under consideration. This can be achieved by modifying the objective function of the embedding algorithm to partially influence the distribution of vector representations across their dimensions over an input vocabulary. To do this, we propose the following modification to the GloVe objective in ( EQREF6 ):

$J = \sum_{i,j=1}^{V} f(X_{ij}) \Big[ \big( w_i^\top \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \big)^2 + k \Big( \sum_{l=1}^{D} \mathbb{1}_{\lbrace i \in F_l \rbrace}\, g(w_{i,l}) + \sum_{l=1}^{D} \mathbb{1}_{\lbrace j \in F_l \rbrace}\, g(\tilde{w}_{j,l}) \Big) \Big]$ ( SECREF4 )

In ( SECREF4 ), $F_l$ denotes the set of indices of the elements of the $l$-th concept word-group, which we wish to assign to the vector dimension $l$. The objective ( SECREF4 ) is designed as a mixture of two individual cost terms: the original GloVe cost term, along with a second term that encourages the embedding vectors of a given concept word-group to achieve deliberately increased values along the associated dimension $l$. The relative weight of the second term is controlled by the parameter $k$. The simultaneous minimization of both objectives ensures that words that are similar to, but not included in, one of these concept word-groups are also "nudged" towards the associated dimension $l$. The trained word vectors are thus encouraged to form a distribution where the individual vector dimensions align with certain semantic concepts represented by a collection of concept word-groups, one assigned to each vector dimension. To facilitate this behaviour, ( SECREF4 ) introduces a monotone decreasing function $g$ defined as INLINEFORM9
which serves to increase the total cost incurred if the value of the $l$-th dimension of the two vector representations $w_i$ and $\tilde{w}_i$ of a concept word $i$ with $i \in F_l$ fails to be large enough. $g$ is also shown in Fig. FIGREF7 .
The objective ( SECREF4 ) is minimized using stochastic gradient descent over the cooccurrence records $X_{ij}$. Intuitively, the terms added in ( SECREF4 ) relative to ( EQREF6 ) have the effect of selectively applying a positive step-type input to the original descent updates of ( EQREF6 ) for concept words along their respective vector dimensions, which pushes those dimension values in the positive direction. The parameter $k$ in ( SECREF4 ) allows the magnitude of this influence to be adjusted as needed.
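A sketch of how the additional term can be attached to the per-pair loss is given below. Since the exact functional form of $g$ is not reproduced in this section, the sigmoid-shaped monotone decreasing function used here is only a stand-in illustration, and the default weight value is likewise illustrative rather than the value used in the experiments.

```python
import numpy as np

def g_penalty(x, steepness=4.0, center=0.5):
    # Stand-in for the monotone decreasing function g: the cost grows when the
    # dimension value x of a concept word fails to be large enough.
    return 1.0 / (1.0 + np.exp(steepness * (x - center)))

def pair_loss(wi, wj, bi, bj, x_ij, fx_ij, groups_i, groups_j, k=0.1):
    # wi, wj: d-dimensional word / context vectors of the pair (i, j)
    # groups_i, groups_j: dimension indices l such that i (resp. j) is in F_l
    glove_term = (wi @ wj + bi + bj - np.log(x_ij)) ** 2
    concept_term = sum(g_penalty(wi[l]) for l in groups_i) + \
                   sum(g_penalty(wj[l]) for l in groups_j)
    return fx_ij * (glove_term + k * concept_term)
```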
In the next section, we demonstrate the feasibility of this approach by experiments with an example collection of concept word-groups extracted from Roget's Thesaurus.
Experiments and Results
We first identified 300 concepts, one for each dimension of the 300-dimensional vector representation, by employing Roget's Thesaurus. This thesaurus follows a tree structure which starts with a Root node that contains all the words and phrases in the thesaurus. The root node is successively split into Classes and Sections, which are then (optionally) split into Subsections of various depths, finally ending in Categories, which constitute the smallest unit of word/phrase collections in the structure. The actual words and phrases descend from these Categories and make up the leaves of the tree structure. We note that a given word typically appears in multiple categories, corresponding to the different senses of the word. We constructed concept word-groups from Roget's Thesaurus as follows. We first filtered out the multi-word phrases and the relatively obscure terms from the thesaurus; the obscure terms were identified by checking them against a vocabulary extracted from Wikipedia. We then obtained 300 word-groups as the result of a partitioning operation applied to the subtree that ends with categories as its leaves. The partition boundaries, and hence the resulting word-groups, can be chosen in many different ways. In our proposed approach, we determine this partitioning by traversing the tree structure from the root node in breadth-first order, employing a parameter INLINEFORM0 for the maximum size of a node. Here, the size of a node is defined as the number of unique words that descend from that node. During the traversal, if the size of a given node is less than this threshold, we designate the words that ultimately descend from that node as a concept word-group. Otherwise, if the node has children, we discard the node and queue up all its children for further consideration. If the node does not have any children, on the other hand, it is truncated to the INLINEFORM1 elements with the highest frequency-ranks, and the resulting words are designated as a concept word-group. We note that the choice of INLINEFORM2 greatly affects the resulting collection of word-groups: excessively large values result in few word-groups that greatly overlap with one another, while overly small values result in numerous tiny word-groups that fail to adequately represent a concept. We experimentally determined that an INLINEFORM3 value of 452 results in a healthy number of relatively large word-groups (113 groups with size INLINEFORM4 100), while keeping the overlap among the resulting word-groups small (with average overlap size not exceeding 3 words). A total of 566 word-groups were thus obtained. The 259 smallest word-groups (with size INLINEFORM5 38) were discarded to bring the number of word-groups down to 307. Out of these, the 7 groups with the lowest median frequency-rank were further discarded, yielding the final 300 concept word-groups used in the experiments. We present some of the resulting word-groups in Table TABREF9 .
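The breadth-first partitioning described above can be sketched as follows. This assumes each thesaurus node exposes its children and the set of unique words descending from it; the attribute names and the frequency-rank tie-breaking function are illustrative, not part of the original description.

```python
from collections import deque

def partition_thesaurus(root, max_size, freq_rank):
    # Traverse the tree in breadth-first order; a node whose word set is small
    # enough becomes a concept word-group, otherwise its children are queued.
    groups, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        words = node.descendant_words          # unique words under this node
        if len(words) < max_size:
            groups.append(set(words))
        elif node.children:
            queue.extend(node.children)
        else:
            # Childless but oversized node: keep the top-ranked words by frequency.
            top = sorted(words, key=freq_rank)[:max_size]
            groups.append(set(top))
    return groups
```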
Using the concept word-groups, we trained the GloVe algorithm with the proposed modification given in Section SECREF4 on a snapshot of English Wikipedia measuring 8GB in size, with the stop-words filtered out. With the parameters given in Table TABREF10, this resulted in a vocabulary size of 287,847. For the weighting parameter in Eq. SECREF4, we used a value of INLINEFORM0 . The algorithm was trained for 20 iterations. The GloVe algorithm without any modifications was also trained as a baseline with the same parameters. In addition to the original GloVe algorithm, we compare our proposed method with previous studies that aim to obtain interpretable word vectors. We train the improved projected gradient model proposed in BIBREF20 to obtain word vectors (called OIWE-IPG) using the same corpus we use to train GloVe and our proposed method. Applying the methods proposed in BIBREF23, BIBREF26, BIBREF24 to our baseline GloVe embeddings, we obtain SOV, SPINE and Parsimax (orthogonal) word representations, respectively. We train all the models with their proposed parameters. However, in BIBREF26, the authors show results for a relatively small vocabulary of 15,000 words. When we trained their model on our baseline GloVe embeddings with a large vocabulary of size 287,847, the resulting vectors performed significantly worse on word similarity tasks compared to the results presented in their paper. In addition, the Parsimax (orthogonal) word vectors obtained using the method in BIBREF26 are nearly identical to the baseline vectors (i.e., the learned orthogonal transformation matrix is very close to the identity). Therefore, Parsimax (orthogonal) yields almost the same results as the baseline vectors in all evaluations. We evaluate the interpretability of the resulting embeddings qualitatively and quantitatively. We also test the performance of the embeddings on word similarity and word analogy tests.
In our experiments, the vocabulary size is close to 300,000, while only 16,242 unique words from the vocabulary are present in the concept groups. Furthermore, only the dimensions that correspond to the concept groups of a word are updated by the additional cost term. Given that these concept words can belong to multiple concept groups (2 on average), only 33,319 parameters are updated. There are 90 million individual parameters for the 300,000 word vectors of size 300; of these, only approximately 33,000 are updated by the additional cost term.
Qualitative Evaluation for Interpretability
In Fig. FIGREF13 , we demonstrate the particular way in which the proposed algorithm ( SECREF4 ) influences the vector representation distributions. Specifically, we consider, for illustration, the 32nd dimension values for the original GloVe algorithm and our modified version, restricting the plots to the top-1000 words with respect to their frequency ranks for clarity of presentation. In Fig. FIGREF13 , the words in the horizontal axis are sorted in descending order with respect to the values at the 32nd dimension of their word embedding vectors coming from the original GloVe algorithm. The dimension values are denoted with blue and red/green markers for the original and the proposed algorithms, respectively. Additionally, the top-50 words that achieve the greatest 32nd dimension values among the considered 1000 words are emphasized with enlarged markers, along with text annotations. In the presented simulation of the proposed algorithm, the 32nd dimension values are encoded with the concept JUDGMENT, which is reflected as an increase in the dimension values for words such as committee, academy, and article. We note that these words (red) are not part of the pre-determined word-group for the concept JUDGMENT, in contrast to words such as award, review and account (green) which are. This implies that the increase in the corresponding dimension values seen for these words is attributable to the joint effect of the first term in ( SECREF4 ) which is inherited from the original GloVe algorithm, in conjunction with the remaining terms in the proposed objective expression ( SECREF4 ). This experiment illustrates that the proposed algorithm is able to impart the concept of JUDGMENT on its designated vector dimension above and beyond the supplied list of words belonging to the concept word-group for that dimension. We also present the list of words with the greatest dimension value for the dimensions 11, 13, 16, 31, 36, 39, 41, 43 and 79 in Table TABREF11 . These dimensions are aligned/imparted with the concepts that are given in the column headers. In Table TABREF11 , the words that are highlighted with green denote the words that exist in the corresponding word-group obtained from Roget's Thesaurus (and are thus explicitly forced to achieve increased dimension values), while the red words denote the words that achieve increased dimension values by virtue of their cooccurrence statistics with the thesaurus-based words (indirectly, without being explicitly forced). This again illustrates that a semantic concept can indeed be coded to a vector dimension provided that a sensible lexical resource is used to guide semantically related words to the desired vector dimension via the proposed objective function in ( SECREF4 ). Even the words that do not appear in, but are semantically related to, the word-groups that we formed using Roget's Thesaurus, are indirectly affected by the proposed algorithm. They also reflect the associated concepts at their respective dimensions even though the objective functions for their particular vectors are not modified. This point cannot be overemphasized. Although the word-groups extracted from Roget's Thesaurus impose a degree of supervision to the process, the fact that the remaining words in the entire vocabulary are also indirectly affected makes the proposed method a semi-supervised approach that can handle words that are not in these chosen word-groups. A qualitative example of this result can be seen in the last column of Table TABREF11 . 
It is interesting to note the appearance of words such as guerilla, insurgency, mujahideen, Wehrmacht and Luftwaffe, in addition to the more obvious and straightforward army, soldiers and troops, none of which are present in the associated word-group WARFARE.
Most of the dimensions we investigated exhibit behaviour similar to the ones presented in Table TABREF11. Thus, generally speaking, the entries in Table TABREF11 are representative of the great majority. However, we have also specifically looked for dimensions that make less sense and identified a few that are relatively less satisfactory. These less satisfactory examples are given in Table TABREF14. They are also interesting in that they shed light on the limitations posed by polysemy and the existence of very rare outlier words.
Quantitative Evaluation for Interpretability
One of the main goals of this study is to improve the interpretability of dense word embeddings by aligning the dimensions with predefined concepts from a suitable lexicon. A quantitative measure is required to reliably evaluate the achieved improvement. One of the methods proposed to measure interpretability is the word intrusion test BIBREF41. However, this method is expensive to apply, since it requires evaluations from multiple human evaluators for each embedding dimension. In this study, we use a semantic category-based approach, based on the method and category dataset (SEMCAT) introduced in BIBREF27, to quantify interpretability. Specifically, we apply a modified version of the approach presented in BIBREF40 in order to account for possible sub-groupings within the categories of SEMCAT. Interpretability is quantified using the Interpretability Score (IS) given below:
DISPLAYFORM0
In ( EQREF17 ), INLINEFORM0 and INLINEFORM1 represent the interpretability scores in the positive and negative directions of the INLINEFORM2 dimension ( INLINEFORM3 , where INLINEFORM4 is the number of dimensions in the embedding space) for the INLINEFORM5 category ( INLINEFORM6 , where INLINEFORM7 is the number of categories in SEMCAT, INLINEFORM8 ), respectively. INLINEFORM9 is the set of words in the INLINEFORM10 category in SEMCAT and INLINEFORM11 is the number of words in INLINEFORM12 . INLINEFORM13 corresponds to the minimum number of words required to construct a semantic category (i.e., to represent a concept). INLINEFORM14 represents the set of INLINEFORM15 words that have the highest ( INLINEFORM16 ) or lowest ( INLINEFORM17 ) values in the INLINEFORM18 dimension of the embedding space. INLINEFORM19 is the intersection operator and INLINEFORM20 is the cardinality operator (number of elements) for the intersecting set. In ( EQREF17 ), INLINEFORM21 gives the interpretability score for the INLINEFORM22 dimension and INLINEFORM23 gives the average interpretability score of the embedding space.
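A sketch of how such a per-dimension interpretability score can be computed is shown below. This is a simplified, single-level version: for each dimension and each SEMCAT category, it measures the overlap between the category's words and the words ranking highest or lowest along that dimension; the sub-grouping refinement of BIBREF40 and the minimum-category-size check are omitted.

```python
import numpy as np

def interpretability_scores(E, vocab, categories, lam=5):
    # E: |V| x D embedding matrix; vocab: list of |V| words
    # categories: dict mapping category name -> iterable of words
    # lam: multiple of the category size used when taking top/bottom words
    idx = {w: i for i, w in enumerate(vocab)}
    D = E.shape[1]
    scores = np.zeros(D)
    for d in range(D):
        order = np.argsort(E[:, d])            # ascending along dimension d
        best = 0.0
        for words in categories.values():
            members = {idx[w] for w in words if w in idx}
            n = len(members)
            if n == 0:
                continue
            top = set(order[-lam * n:])         # positive direction
            bottom = set(order[:lam * n])       # negative direction
            overlap = max(len(members & top), len(members & bottom))
            best = max(best, 100.0 * overlap / n)
        scores[d] = best
    return scores, scores.mean()                # per-dimension IS and average IS
```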
Fig. FIGREF18 presents the measured average interpretability scores across dimensions for the original GloVe embeddings, for the proposed method and for the other four methods we compare against, along with a randomly generated embedding. Results are calculated for the parameters INLINEFORM0 and INLINEFORM1 . Our proposed method significantly improves interpretability for all INLINEFORM2 compared to the original GloVe approach, and is second only to SPINE in increasing interpretability. However, as we will experimentally demonstrate in the next subsection, in doing so, SPINE almost entirely destroys the underlying semantic structure of the word embeddings, which is the primary function of a word embedding.
The proposed method and the interpretability measurements are both based on utilizing concepts represented by word-groups. Therefore, higher interpretability scores can be expected for those dimensions whose imparted concepts are also contained in SEMCAT. However, by design, the word-groups they use come from different, independent sources: the interpretability measurements use SEMCAT, while our proposed method utilizes Roget's Thesaurus.
Intrinsic Evaluation of the Embeddings
It is necessary to show that the semantic structure of the original embedding has not been damaged or distorted as a result of aligning the dimensions with given concepts, and that no substantial performance is sacrificed relative to the original GloVe. To check this, we evaluate the performance of the proposed embeddings on word similarity BIBREF42 and word analogy BIBREF0 tests. We compare the results with the original embeddings and the three alternatives, excluding Parsimax BIBREF26, since orthogonal transformations do not affect the performance of the original embeddings on these tests.
The word similarity test measures the correlation between word similarity scores obtained from human evaluation (i.e., true similarities) and from word embeddings (usually using cosine similarity). In other words, this test quantifies how well the embedding space reflects human judgements of the similarities between different words. The correlation scores for 13 different similarity test sets are reported in Table TABREF20. We observe that, far from showing a reduction in performance, the obtained scores indicate an almost uniform improvement in the correlation values for the proposed algorithm, which outperforms all the alternatives on almost all test sets. Categories from Roget's Thesaurus are groupings of words that are similar in some sense which the original embedding algorithm may fail to capture. These test results signify that the semantic information injected into the algorithm by the additional cost term is significant enough to result in a measurable improvement. It should also be noted that the scores obtained by SPINE are unacceptably low on almost all tests, indicating that it has achieved its interpretability performance at the cost of losing its semantic functions.
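The word similarity evaluation referred to here is the standard procedure sketched below: cosine similarities from the embedding are correlated (Spearman) with the human-annotated scores of each test set. This is a generic illustration, not the exact evaluation script used in the experiments.

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate_similarity(emb, pairs):
    # emb: dict word -> vector; pairs: list of (word1, word2, human_score)
    model_scores, human_scores = [], []
    for w1, w2, gold in pairs:
        if w1 in emb and w2 in emb:
            v1, v2 = emb[w1], emb[w2]
            cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
            model_scores.append(cos)
            human_scores.append(gold)
    return spearmanr(model_scores, human_scores).correlation
```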
The word analogy test was introduced in BIBREF1 and seeks answers to questions of the form "X is to Y what Z is to ?" by applying simple arithmetic operations to the vectors of words X, Y and Z. We present precision scores for the word analogy tests in Table TABREF21. It can be seen that the alternative approaches that aim to improve interpretability perform poorly on the word analogy tests. However, our proposed method achieves performance comparable to the original GloVe embeddings. Our method outperforms GloVe on the semantic analogy test set and in the overall results, while GloVe performs slightly better on the syntactic test set. This comparable performance is mainly due to the fact that the cost function of our proposed method includes the original GloVe objective.
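The analogy test follows the usual vector-arithmetic procedure, sketched below: the answer to "X is to Y what Z is to ?" is the vocabulary word whose vector is closest to y − x + z, with the three query words excluded. Again, this is a generic illustration rather than the authors' evaluation code.

```python
import numpy as np

def solve_analogy(emb_matrix, word2id, id2word, x, y, z):
    # emb_matrix: |V| x d row-normalized embeddings
    query = emb_matrix[word2id[y]] - emb_matrix[word2id[x]] + emb_matrix[word2id[z]]
    query /= np.linalg.norm(query)
    sims = emb_matrix @ query
    for w in (x, y, z):                       # exclude the query words themselves
        sims[word2id[w]] = -np.inf
    return id2word[int(np.argmax(sims))]
```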
To investigate the effect of the additional cost term on the performance improvement in the semantic analogy test, we present Table TABREF22. In particular, we present results for the cases where i) all questions in the dataset are considered, ii) only the questions that contain at least one concept word are considered, and iii) only the questions that consist entirely of concept words are considered. We note specifically that for the last case, only a subset of the questions under the semantic category family.txt ended up being included. We observe that for all three scenarios, our proposed algorithm results in an improvement in the precision scores. However, the greatest performance increase is seen in the last scenario, which underscores the extent to which the semantic features captured by embeddings can be improved with a reasonable selection of the lexical resource from which the concept word-groups are derived.
Conclusion
We presented a novel approach to impart interpretability to word embeddings. We achieved this by encouraging different dimensions of the vector representation to align with predefined concepts, through the addition of a cost term to the optimization objective of the GloVe algorithm that favors a selective increase, along each dimension, for a pre-specified set of concept words.
We demonstrated the efficacy of this approach through qualitative and quantitative evaluations of interpretability. We also showed via standard word-analogy and word-similarity tests that the semantic coherence of the original vector space is preserved, and even slightly improved. We have also performed and reported quantitative comparisons with several other methods for both interpretability increase and preservation of semantic coherence. Upon joint inspection of Fig. FIGREF18 and Tables TABREF20, TABREF21, and TABREF22, it should be noted that our proposed method achieves both objectives simultaneously: increased interpretability and preservation of the intrinsic semantic structure.
An important point is that, while words already included in the concept word-groups are expected to be aligned together, since their dimensions are directly updated by the proposed cost term, words not in these groups were also observed to align in a meaningful manner, without any direct modification to their cost function. This indicates that the cost term we added works productively with the original cost function of GloVe to handle words that are not included in the original concept word-groups but are semantically related to them. The underlying mechanism can be explained as follows. While the external lexical resource we introduce contains a relatively small number of words compared to the total vocabulary, these words and the categories they represent have been carefully chosen and, in a sense, "densely span" all the words in the language. By "span", we mean they cover most of the concepts and ideas in the language without leaving too many uncovered areas; by "densely", we mean all areas are covered with sufficient strength. In other words, this subset of words constitutes a sufficiently strong skeleton, or scaffold. Recall that GloVe works to align, or bring closer, related groups of words, which will include words from the lexical resource. So the joint action of aligning words with the predefined categories (introduced by us) and aligning related words (handled by GloVe) allows words not in the lexical groups to also be aligned meaningfully. We may say that the non-included words are "pulled along" with the included words by virtue of the "strings" or "glue" provided by GloVe. In numbers, the desired effect is achieved by manipulating less than 0.05% of the parameters of the entire set of word vectors. Thus, while a degree of supervision comes from the external lexical resource, the rest of the vocabulary is also aligned indirectly, in an unsupervised way. This may be the reason why, unlike earlier approaches, our method is able to increase interpretability without destroying the underlying semantic structure, and consequently without sacrificing performance in benchmark tests.
Upon inspecting the 2nd column of Table TABREF14, where qualitative results for the concept TASTE are presented, another insight regarding the learning mechanism of our proposed approach can be gained. Here it seems understandable that our proposed approach, along with GloVe, brought together the words taste and polish, after which the words Polish and, for instance, Warsaw were brought together by GloVe. These examples are interesting in that they shed light on how GloVe works and on the limitations posed by polysemy. It should be underlined that the present approach is not totally incapable of handling polysemy, but it cannot do so perfectly. Since related words are being clustered, a sufficiently well-connected word that does not meaningfully belong with a group will be appropriately "pulled away" from it by several words, against the less effective, inappropriate pull of a particular word. Even though polish with a lowercase "p" belongs where it is, it attracts Warsaw to itself through polysemy, and this is not meaningful. Perhaps because Warsaw is not a sufficiently well-connected word, it ends up being dragged along, whereas words with greater connectedness to a concept group might have better resisted such inappropriate attractions.
In this study, we used the GloVe algorithm as the underlying dense word embedding scheme to demonstrate our approach. However, we stress that our approach can be extended to other word embedding algorithms whose learning routine consists of iterations over cooccurrence records, by making suitable adjustments to the objective function. Since the word2vec model is also based on the cooccurrences of words within a sliding window over a large corpus, we expect that our approach can also be applied to word2vec after suitable adjustments, which we consider immediate future work. Although the semantic concepts are currently encoded in only one direction (positive) within the embedding dimensions, it might be beneficial to pursue future work that also encodes opposite concepts, such as good and bad, in the two opposite directions of the same dimension.
The proposed methodology can also be helpful in computational cross-lingual studies, where the similarities are explored across the vector spaces of different languages BIBREF43 , BIBREF44 .
Question: Do they report results only on English data?

Answer: Yes.
Introduction
The concept of message passing over graphs has been around for many years BIBREF0, BIBREF1, as well as that of graph neural networks (GNNs) BIBREF2, BIBREF3. However, GNNs have only recently started to be closely investigated, following the advent of deep learning. Some notable examples include BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12. These approaches are known as spectral. Their similarity with message passing (MP) was observed by BIBREF9 and formalized by BIBREF13 and BIBREF14.
The MP framework is based on the core idea of recursive neighborhood aggregation. That is, at every iteration, the representation of each vertex is updated based on messages received from its neighbors. All spectral GNNs can be described in terms of the MP framework.
GNNs have been applied with great success to bioinformatics and social network data, for node classification, link prediction, and graph classification. However, only a few studies have focused on the application of the MP framework to representation learning on text. This paper proposes one such application. More precisely, we represent documents as word co-occurrence networks, and develop an expressive MP GNN tailored to document understanding, the Message Passing Attention network for Document understanding (MPAD). We also propose several hierarchical variants of MPAD. Evaluation on 10 document classification datasets shows that our architectures learn representations that are competitive with the state of the art. Furthermore, ablation experiments shed light on the impact of various architectural choices.
In what follows, we first provide some background about the MP framework (in sec. SECREF2), thoroughly describe and explain MPAD (sec. SECREF3), present our experimental framework (sec. SECREF4), report and interpret our results (sec. SECREF5), and provide a review of the relevant literature (sec. SECREF6).
Message Passing Neural Networks
BIBREF13 proposed an MP framework under which many of the recently introduced GNNs can be reformulated. MP consists of an aggregation phase followed by a combination phase BIBREF14. More precisely, let $G(V,E)$ be a graph, and let us consider $v \in V$. At time $t+1$, a message vector $\mathbf {m}_v^{t+1}$ is computed from the representations of the neighbors $\mathcal {N}(v)$ of $v$:

$\mathbf{m}_v^{t+1} = \textsc{AGGREGATE}^{t+1}\big(\lbrace \mathbf{h}_u^{t} : u \in \mathcal{N}(v) \rbrace \big)$

The new representation $\mathbf {h}^{t+1}_v$ of $v$ is then computed by combining its current feature vector $\mathbf {h}^{t}_v$ with the message vector $\mathbf {m}_v^{t+1}$:

$\mathbf{h}_v^{t+1} = \textsc{COMBINE}^{t+1}\big(\mathbf{h}_v^{t}, \mathbf{m}_v^{t+1}\big)$
Messages are passed for $T$ time steps. Each step is implemented by a different layer of the MP network. Hence, iterations correspond to network depth. The final feature vector $\mathbf {h}_v^T$ of $v$ is based on messages propagated from all the nodes in the subtree of height $T$ rooted at $v$. It captures both the topology of the neighborhood of $v$ and the distribution of the vertex representations in it.
If a graph-level feature vector is needed, e.g., for classification or regression, a READOUT pooling function, which must be invariant to permutations, is applied:

$\mathbf{h}_G = \textsc{READOUT}\big(\lbrace \mathbf{h}_v^{T} : v \in V \rbrace \big)$
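In code, the generic framework amounts to the loop sketched below. This is a schematic outline with placeholder AGGREGATE / COMBINE / READOUT callables and an assumed graph interface; it is meant only to make the structure of the $T$ iterations explicit.

```python
def message_passing(graph, h0, aggregate, combine, readout, T):
    # graph.nodes: iterable of nodes; graph.neighbors(v): incoming neighbors of v
    # h0: dict node -> initial feature vector
    h = dict(h0)
    history = [dict(h)]
    for _ in range(T):
        new_h = {}
        for v in graph.nodes:
            m = aggregate([h[u] for u in graph.neighbors(v)])   # message
            new_h[v] = combine(h[v], m)                         # update
        h = new_h
        history.append(dict(h))
    return readout(h.values()), history      # graph-level vector + all states
```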
Next, we present the MP network we developed for document understanding.
Message Passing Attention network for Document understanding (MPAD) ::: Word co-occurrence networks
We represent a document as a statistical word co-occurrence network BIBREF18, BIBREF19 with a sliding window of size 2 overspanning sentences. Let us denote that graph $G(V,E)$. Each unique word in the preprocessed document is represented by a node in $G$, and an edge is added between two nodes if they are found together in at least one instantiation of the window. $G$ is directed and weighted: edge directions and weights respectively capture text flow and co-occurrence counts.
$G$ is a compact representation of its document. In $G$, immediate neighbors are consecutive words in the same sentence. That is, paths of length 2 correspond to bigrams. Paths of length more than 2 can correspond either to traditional $n$-grams or to relaxed $n$-grams, that is, words that never appear in the same sentence but co-occur with the same word(s). Such nodes are linked through common neighbors.
Master node. Inspired by BIBREF3, our $G$ also includes a special document node, linked to all other nodes via unit weight bi-directional edges. In what follows, let us denote by $n$ the number of nodes in $G$, including the master node.
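A sketch of the graph construction is given below, using NetworkX for illustration. Tokenization and preprocessing are assumed to have been done, the window is applied within each sentence for simplicity, and the master-node label is a hypothetical placeholder.

```python
import networkx as nx

def build_cooccurrence_graph(sentences, master="<DOC>"):
    # sentences: list of lists of (preprocessed) tokens
    G = nx.DiGraph()
    for tokens in sentences:
        G.add_nodes_from(tokens)                      # one node per unique word
        for a, b in zip(tokens, tokens[1:]):          # sliding window of size 2
            w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
            G.add_edge(a, b, weight=w)                # direction follows text flow
    # Special document (master) node, linked to every word in both directions.
    for v in list(G.nodes):
        G.add_edge(master, v, weight=1)
        G.add_edge(v, master, weight=1)
    return G
```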
Message Passing Attention network for Document understanding (MPAD) ::: Message passing
We formulate our AGGREGATE function as:

$\mathbf{M}^{t+1} = \textsc{MLP}^{t+1}\big(\mathbf{D}^{-1}\mathbf{A}\mathbf{H}^{t}\big)$

where $\mathbf {H}^t \in \mathbb {R}^{n \times d}$ contains the node features ($d$ is a hyperparameter), and $\mathbf {A} \in \mathbb {R}^{n \times n}$ is the adjacency matrix of $G$. Since $G$ is directed, $\mathbf {A}$ is asymmetric. Also, $\mathbf {A}$ has a zero diagonal, as we choose not to consider the feature of the node itself, only those of its incoming neighbors, when updating its representation. Since $G$ is weighted, the $i^{th}$ row of $\mathbf{A}$ contains the weights of the edges incoming to node $v_i$. $\mathbf {D} \in \mathbb {R}^{n \times n}$ is the diagonal in-degree matrix of $G$. MLP denotes a multi-layer perceptron, and $\mathbf {M}^{t+1} \in \mathbb {R}^{n \times d}$ is the message matrix.
The use of a MLP was motivated by the observation that for graph classification, MP neural nets with 1-layer perceptrons are inferior to their MLP counterparts BIBREF14. Indeed, 1-layer perceptrons are not universal approximators of multiset functions. Note that like in BIBREF14, we use a different MLP at each layer.
Renormalization. The rows of $\mathbf {D}^{-1}\mathbf {A}$ sum to 1. This is equivalent to the renormalization trick of BIBREF9, but using only the in-degrees. That is, instead of computing a weighted sum of the incoming neighbors' feature vectors, we compute a weighted average of them. The coefficients are proportional to the strength of co-occurrence between words. One should note that by averaging, we lose the ability to distinguish between different neighborhood structures in some special cases, that is, we lose injectivity. Such cases include neighborhoods in which all nodes have the same representations, and neighborhoods of different sizes containing various representations in equal proportions BIBREF14. As suggested by the results of an ablation experiment, averaging is better than summing in our application (see subsection SECREF30). Note that instead of simply summing/averaging, we also tried using GAT-like attention BIBREF11 in early experiments, without obtaining better results.
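A sketch of this aggregation step (message computation with in-degree renormalization followed by an MLP) is shown below. It is an illustrative PyTorch module, not the authors' implementation; the two-layer MLP and its nonlinearity are assumptions.

```python
import torch
import torch.nn as nn

class Aggregate(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, A, H):
        # A: n x n weighted adjacency matrix (zero diagonal), H: n x d node features
        deg = A.sum(dim=1, keepdim=True).clamp(min=1e-12)   # in-degrees
        M = self.mlp((A / deg) @ H)                         # weighted average, then MLP
        return M
```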
As for our COMBINE function, we use the Gated Recurrent Unit BIBREF20, BIBREF21. Omitting biases for readability, we have, in the standard GRU form:

$\mathbf{Z}^{t+1} = \sigma \big(\mathbf{M}^{t+1}\mathbf{W}_Z + \mathbf{H}^{t}\mathbf{U}_Z\big)$

$\mathbf{R}^{t+1} = \sigma \big(\mathbf{M}^{t+1}\mathbf{W}_R + \mathbf{H}^{t}\mathbf{U}_R\big)$

$\tilde{\mathbf{H}}^{t+1} = \tanh \big(\mathbf{M}^{t+1}\mathbf{W}_H + (\mathbf{R}^{t+1} \odot \mathbf{H}^{t})\mathbf{U}_H\big)$

$\mathbf{H}^{t+1} = (1 - \mathbf{Z}^{t+1}) \odot \mathbf{H}^{t} + \mathbf{Z}^{t+1} \odot \tilde{\mathbf{H}}^{t+1}$
where the $\mathbf {W}$ and $\mathbf {U}$ matrices are trainable weight matrices not shared across time steps, $\sigma (\mathbf {x}) = 1/(1+\exp (-\mathbf {x}))$ is the sigmoid function, and $\mathbf {R}$ and $\mathbf {Z}$ are the parameters of the reset and update gates. The reset gate controls the amount of information from the previous time step (in $\mathbf {H}^t$) that should propagate to the candidate representations, $\tilde{\mathbf {H}}^{t+1}$. The new representations $\mathbf {H}^{t+1}$ are finally obtained by linearly interpolating between the previous and the candidate ones, using the coefficients returned by the update gate.
Interpretation. Updating node representations through a GRU should in principle allow nodes to encode a combination of local and global signals (low and high values of $t$, resp.), by allowing them to remember about past iterations. In addition, we also explicitly consider node representations at all iterations when reading out (see Eq. DISPLAY_FORM18).
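The combine step can be sketched as a GRU cell applied row-wise to the message and previous-state matrices. This is an illustrative version under the standard GRU parameterization; a separate instance would be used at each time step, as described above.

```python
import torch
import torch.nn as nn

class GRUCombine(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.W_z, self.U_z = nn.Linear(d, d, bias=False), nn.Linear(d, d, bias=False)
        self.W_r, self.U_r = nn.Linear(d, d, bias=False), nn.Linear(d, d, bias=False)
        self.W_h, self.U_h = nn.Linear(d, d, bias=False), nn.Linear(d, d, bias=False)

    def forward(self, H, M):
        # H: previous node states (n x d), M: incoming messages (n x d)
        Z = torch.sigmoid(self.W_z(M) + self.U_z(H))            # update gate
        R = torch.sigmoid(self.W_r(M) + self.U_r(H))            # reset gate
        H_cand = torch.tanh(self.W_h(M) + self.U_h(R * H))      # candidate states
        return (1 - Z) * H + Z * H_cand                         # interpolation
```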
Message Passing Attention network for Document understanding (MPAD) ::: Readout
After passing messages and performing updates for $T$ iterations, we obtain a matrix $\mathbf {H}^T \in \mathbb {R}^{n \times d}$ containing the final vertex representations. Let $\hat{G}$ be graph $G$ without the special document node, and matrix $\mathbf {\hat{H}}^T \in \mathbb {R}^{(n-1) \times d}$ be the corresponding representation matrix (i.e., $\mathbf {H}^T$ without the row of the document node).
We use as our READOUT function the concatenation of self-attention applied to $\mathbf {\hat{H}}^T$ with the final document node representation. More precisely, we apply a global self-attention mechanism BIBREF22 to the rows of $\mathbf {\hat{H}}^T$. As shown in Eq. DISPLAY_FORM17, $\mathbf {\hat{H}}^T$ is first passed to a dense layer parameterized by matrix $\mathbf {W}_A^T \in \mathbb {R}^{d \times d}$. An alignment vector $\mathbf {a}$ is then derived by comparing, via dot products, the rows of the output of the dense layer $\mathbf {Y}^T \in \mathbb {R}^{(n-1) \times d}$ with a trainable vector $\mathbf {v}^T \in \mathbb {R}^d$ (initialized randomly) and normalizing with a softmax. The normalized alignment coefficients are finally used to compute the attentional vector $\mathbf {u}^T \in \mathbb {R}^d$ as a weighted sum of the final representations $\mathbf {\hat{H}}^T$.
Note that we tried with multiple context vectors, i.e., with a matrix $\mathbf {V}^T$ instead of a vector $\mathbf {v}^T$, like in BIBREF22, but results were not convincing, even when adding a regularization term to the loss to favor diversity among the rows of $\mathbf {V}^T$.
Master node skip connection. $\mathbf {h}_G^T \in \mathbb {R}^{2d}$ is obtained by concatenating $\mathbf {u}^T$ and the final master node representation. That is, the master node vector bypasses the attention mechanism. This is equivalent to a skip or shortcut connection BIBREF23. The reason behind this choice is that we expect the special document node to learn a high-level summary about the document, such as its size, vocabulary, etc. (more details are given in subsection SECREF30). Therefore, by making the master node bypass the attention layer, we directly inject global information about the document into its final representation.
Multi-readout. BIBREF14, inspired by Jumping Knowledge Networks BIBREF12, recommend using not only the final representations when performing readout, but also those of the earlier steps. Indeed, as one iterates, node features capture more and more global information. However, retaining more local, intermediary information might be useful too. Thus, instead of applying the readout function only to $t=T$, we apply it to all time steps and concatenate the results, finally obtaining $\mathbf {h}_G \in \mathbb {R}^{T \times 2d}$:

$\mathbf{h}_G = \textsc{CONCAT}\big(\mathbf{h}_G^{1}, \dots, \mathbf{h}_G^{T}\big)$
In effect, with this modification, we take into account features based on information aggregated from subtrees of different heights (from 1 to $T$), corresponding to local and global features.
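Putting the readout together, a sketch of the attention pooling with the master-node skip connection is shown below. It is illustrative only: the dense layer is kept linear since no nonlinearity is specified here, and the multi-readout variant simply concatenates the output of this module over all $T$ iterations.

```python
import torch
import torch.nn as nn

class AttentionReadout(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.dense = nn.Linear(d, d)
        self.context = nn.Parameter(torch.randn(d))   # trainable context vector

    def forward(self, H, master_idx=0):
        # H: n x d final node features, one row being the master (document) node
        mask = torch.ones(H.size(0), dtype=torch.bool)
        mask[master_idx] = False
        H_words = H[mask]                              # word nodes only
        scores = torch.softmax(self.dense(H_words) @ self.context, dim=0)
        u = scores @ H_words                           # attentional summary
        return torch.cat([u, H[master_idx]])           # skip connection, size 2d
```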
Message Passing Attention network for Document understanding (MPAD) ::: Hierarchical variants of MPAD
Through the successive MP iterations, it could be argued that MPAD implicitly captures some soft notion of the hierarchical structure of documents (words $\rightarrow $ bigrams $\rightarrow $ compositions of bigrams, etc.). However, it might be beneficial to explicitly capture document hierarchy. Hierarchical architectures have brought significant improvements to many NLP tasks, such as language modeling and generation BIBREF24, BIBREF25, sentiment and topic classification BIBREF26, BIBREF27, and spoken language understanding BIBREF28, BIBREF29. Inspired by this line of research, we propose several hierarchical variants of MPAD, detailed in what follows. In all of them, we represent each sentence in the document as a word co-occurrence network, and obtain an embedding for it by applying MPAD as previously described.
MPAD-sentence-att. Here, the sentence embeddings are simply combined through self-attention.
MPAD-clique. In this variant, we build a complete graph where each node represents a sentence. We then feed that graph to MPAD, where the feature vectors of the nodes are initialized with the sentence embeddings previously obtained.
MPAD-path. This variant is similar to the clique one, except that instead of a complete graph, we build a path according to the natural flow of the text. That is, two nodes are linked by a directed edge if the two sentences they represent follow each other in the document.
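The three hierarchical variants differ only in the sentence-level structure built on top of the word-level MPAD embeddings, as the sketch below illustrates (assuming sentence embeddings have already been produced; node and variant names are illustrative).

```python
import networkx as nx

def sentence_graph(num_sentences, variant):
    # Nodes are sentences; the word-level MPAD embeddings become node features.
    G = nx.DiGraph()
    G.add_nodes_from(range(num_sentences))
    if variant == "clique":
        G.add_edges_from((i, j) for i in range(num_sentences)
                                 for j in range(num_sentences) if i != j)
    elif variant == "path":
        G.add_edges_from((i, i + 1) for i in range(num_sentences - 1))
    # The "sentence-att" variant skips the graph entirely and pools the
    # sentence embeddings with self-attention.
    return G
```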
Experiments ::: Datasets
We evaluate the quality of the document embeddings learned by MPAD on 10 document classification datasets, covering the topic identification, coarse and fine sentiment analysis and opinion mining, and subjectivity detection tasks. We briefly introduce the datasets next. Their statistics are reported in Table TABREF21.
(1) Reuters. This dataset contains stories collected from the Reuters news agency in 1987. Following common practice, we used the ModApte split and considered only the 10 classes with the highest number of positive training examples. We also removed documents belonging to more than one class, and then removed the classes left with no documents (2 classes).
(2) BBCSport BIBREF30 contains documents from the BBC Sport website corresponding to 2004-2005 sports news articles.
(3) Polarity BIBREF31 features positive and negative labeled snippets from Rotten Tomatoes.
(4) Subjectivity BIBREF32 contains movie review snippets from Rotten Tomatoes (subjective sentences), and Internet Movie Database plot summaries (objective sentences).
(5) MPQA BIBREF33 is made of positive and negative phrases, annotated as part of the summer 2002 NRRC Workshop on Multi-Perspective Question Answering.
(6) IMDB BIBREF34 is a collection of highly polarized movie reviews from IMDB (positive and negative). There are at most 30 reviews for each movie.
(7) TREC BIBREF35 consists of questions that are classified into 6 different categories.
(8) SST-1 BIBREF36 contains the same snippets as Polarity. The authors used the Stanford Parser to parse the snippets and split them into multiple sentences. They then used Amazon Mechanical Turk to annotate the resulting phrases according to their polarity (very negative, negative, neutral, positive, very positive).
(9) SST-2 BIBREF36 is the same as SST-1 but with neutral reviews removed and snippets classified as positive or negative.
(10) Yelp2013 BIBREF26 features reviews obtained from the 2013 Yelp Dataset Challenge.
Experiments ::: Baselines
We evaluate MPAD against multiple state-of-the-art baseline models, including hierarchical ones, to enable fair comparison with the hierarchical MPAD variants.
doc2vec BIBREF37. Doc2vec (or paragraph vector) is an extension of word2vec that learns vectors for documents in a fully unsupervised manner. Document embeddings are then fed to a logistic regression classifier.
CNN BIBREF38. The convolutional neural network architecture, well-known in computer vision, is applied to text. There is one spatial dimension and the word embeddings are used as channels (depth dimensions).
DAN BIBREF39. The Deep Averaging Network passes the unweighted average of the embeddings of the input words through multiple dense layers and a final softmax.
Tree-LSTM BIBREF40 is a generalization of the standard LSTM architecture to constituency and dependency parse trees.
DRNN BIBREF41. Recursive neural networks are stacked and applied to parse trees.
LSTMN BIBREF42 is an extension of the LSTM model where the memory cell is replaced by a memory network which stores word representations.
C-LSTM BIBREF43 combines convolutional and recurrent neural networks. The region embeddings provided by a CNN are fed to a LSTM.
SPGK BIBREF44 also models documents as word co-occurrence networks. It computes a graph kernel that compares shortest paths extracted from the word co-occurrence networks and then uses a SVM to categorize documents.
WMD BIBREF45 is an application of the well-known Earth Mover's Distance to text. A k-nearest neighbor classifier is used.
S-WMD BIBREF46 is a supervised extension of the Word Mover's Distance.
Semantic-CNN BIBREF47. Here, a CNN is applied to semantic units obtained by clustering words in the embedding space.
LSTM-GRNN BIBREF26 is a hierarchical model where sentence embeddings are obtained with a CNN and a GRU-RNN is fed the sentence representations to obtain a document vector.
HN-ATT BIBREF27 is another hierarchical model, where the same encoder architecture (a bidirectional GRU-RNN) is used for both sentences and documents, with different parameters. A self-attention mechanism is applied to the RNN annotations at each level.
Experiments ::: Model configuration and training
We preprocess all datasets using the code of BIBREF38. On Yelp2013, we also replace all tokens appearing strictly less than 6 times with a special UNK token, like in BIBREF27. We then build a directed word co-occurrence network from each document, with a window of size 2.
We use two MP iterations ($T$=2) for the basic MPAD, and two MP iterations at each level, for the hierarchical variants. We set $d$ to 64, except on IMDB and Yelp on which $d=128$, and use a two-layer MLP. The final graph representations are passed through a softmax for classification. We train MPAD in an end-to-end fashion by minimizing the cross-entropy loss function with the Adam optimizer BIBREF48 and an initial learning rate of 0.001.
To regulate potential differences in magnitude, we apply batch normalization after concatenating the feature vector of the master node with the self-attentional vector, that is, after the skip connection (see subsection SECREF16). To prevent overfitting, we use dropout BIBREF49 with a rate of 0.5. We select the best epoch, capped at 200, based on the validation accuracy. When cross-validation is used (see 3rd column of Table TABREF21), we construct a validation set by randomly sampling 10% of the training set of each fold.
On all datasets except Yelp2013, we use the publicly available 300-dimensional pre-trained Google News vectors ($D$=300) BIBREF50 to initialize the node representations $\mathbf {H}^0$. On Yelp2013, we follow BIBREF27 and learn our own word vectors from the training and validation sets with the gensim implementation of word2vec BIBREF51.
MPAD was implemented in Python 3.6 using the PyTorch library BIBREF52. All experiments were run on a single machine consisting of a 3.4 GHz Intel Core i7 CPU with 16 GB of RAM and an NVidia GeForce Titan Xp GPU.
Results and ablations ::: Results
Experimental results are shown in Table TABREF28. For the baselines, the best scores reported in each original paper are shown. MPAD reaches the best performance on 7 out of 10 datasets, and is a close second elsewhere. Moreover, the 7 datasets on which MPAD ranks first differ widely in training set size, number of categories, and prediction task (topic, sentiment, subjectivity), which indicates that MPAD can perform well in different settings.
MPAD vs. hierarchical variants. On 9 datasets out of 10, one or more of the hierarchical variants outperform the vanilla MPAD architecture, highlighting the benefit of explicitly modeling the hierarchical nature of documents.
However, on Subjectivity, standard MPAD outperforms all hierarchical variants. On TREC, it reaches the same accuracy. We hypothesize that in some cases, using a different graph to separately encode each sentence might be worse than using one single graph to directly encode the document. Indeed, in the single document graph, some words that never appear in the same sentence can be connected through common neighbors, as was explained in subsection SECREF7. So, this way, some notion of cross-sentence context is captured while learning representations of words, bigrams, etc. at each MP iteration. This creates better informed representations, resulting in a better document embedding. With the hierarchical variants, on the other hand, each sentence vector is produced in isolation, without any contextual information about the other sentences in the document. Therefore, the final sentence embeddings might be of lower quality, and as a group might also contain redundant/repeated information. When the sentence vectors are finally combined into a document representation, it is too late to take context into account.
Results and ablations ::: Ablation studies
To understand the impact of some hyperparameters on performance, we conducted additional experiments on the Reuters, Polarity, and IMDB datasets, with the non-hierarchical version of MPAD. Results are shown in Table TABREF29.
Number of MP iterations. First, we varied the number of message passing iterations from 1 to 4. We can clearly see in Table TABREF29 that having more iterations improves performance. We attribute this to the fact that we are reading out at each iteration from 1 to $T$ (see Eq. DISPLAY_FORM18), which enables the final graph representation to encode a mixture of low-level and high-level features. Indeed, in initial experiments involving readout at $t$=$T$ only, setting $T\ge 2$ always decreased performance, despite the GRU-based updates (Eq. DISPLAY_FORM14). These results are consistent with those of BIBREF53 and BIBREF9, both of which also read out only at $t$=$T$. We hypothesize that node features at $T\ge 2$ are too diffuse to be entirely relied upon during readout. More precisely, initially at $t$=0, node representations capture information about words; at $t$=1, about their 1-hop neighborhood (bigrams); at $t$=2, about compositions of bigrams, etc. Thus, node features quickly become general and diffuse. In such cases, also considering the lower-level, more precise features of the earlier iterations when reading out may be necessary.
Undirected edges. On Reuters, using an undirected graph leads to better performance, while on Polarity and IMDB, it is the opposite. This can be explained by the fact that Reuters is a topic classification task, for which the presence or absence of some patterns is important, but not necessarily the order in which they appear, while Polarity and IMDB are sentiment analysis tasks. To capture sentiment, modeling word order is crucial, e.g., in detecting negation.
No master node. Removing the master node deteriorates performance across all datasets, clearly showing the value of having such a node. We hypothesize that since the special document node is connected to all other nodes, it is able to encode during message passing a summary of the document.
No renormalization. Here, we do not use the renormalization trick of BIBREF9 during MP (see subsection SECREF10). That is, Eq. DISPLAY_FORM11 becomes $\mathbf {M}^{t+1} = \textsc {MLP}^{t+1}\big (\mathbf {A}\mathbf {H}^{t}\big )$. In other words, instead of computing a weighted average of the incoming neighbors' feature vectors, we compute a weighted sum of them. Unlike the mean, which captures distributions, the sum captures structural information BIBREF14. As shown in Table TABREF29, using the sum instead of the mean decreases performance everywhere, suggesting that in our application, capturing the distribution of neighbor representations is more important than capturing their structure. We hypothesize that this is the case because statistical word co-occurrence networks tend to have similar structural properties, regardless of the topic, polarity, sentiment, etc. of the corresponding documents.
Neighbors-only. In this experiment, we replaced the GRU combine function (see Eq. DISPLAY_FORM14) with the identity function. That is, we simply have $\mathbf {H}^{t+1}$=$\mathbf {M}^{t+1}$. Since $\mathbf {A}$ has zero diagonal, by doing so, we completely ignore the previous feature of the node itself when updating its representation. That is, the update is based entirely on its neighbors. Except on Reuters (almost no change), performance always suffers, stressing the need to take into account the root node during updates, not only its neighborhood.
Related work
In what follows, we offer a brief review of relevant studies, ranked by increasing order of similarity with our work.
BIBREF9, BIBREF54, BIBREF11, BIBREF10 conduct some node classification experiments on citation networks, where nodes are scientific papers, i.e., textual data. However, text is only used to derive node feature vectors. The external graph structure, which plays a central role in determining node labels, is completely unrelated to text.
On the other hand, BIBREF55, BIBREF7 experiment on traditional document classification tasks. They both build $k$-nearest neighbor similarity graphs based on the Gaussian diffusion kernel. More precisely, BIBREF55 build one single graph where nodes are documents and distance is computed in the BoW space. Node features are then used for classification. Closer to our work, BIBREF7 represent each document as a graph. All document graphs are derived from the same underlying structure. Only node features, corresponding to the entries of the documents' BoW vectors, vary. The underlying, shared structure is that of a $k$-NN graph where nodes are vocabulary terms and similarity is the cosine of the word embedding vectors. BIBREF7 then perform graph classification. However they found performance to be lower than that of a naive Bayes classifier.
BIBREF56 use a GNN for hierarchical classification into a large taxonomy of topics. This task differs from traditional document classification. The authors represent documents as unweighted, undirected word co-occurrence networks with word embeddings as node features. They then use the spatial GNN of BIBREF15 to perform graph classification.
The work closest to ours is probably that of BIBREF53. The authors adopt the semi-supervised node classification approach of BIBREF9. They build one single undirected graph from the entire dataset, with both word and document nodes. Document-word edges are weighted by TF-IDF and word-word edges are weighted by pointwise mutual information derived from co-occurrence within a sliding window. There are no document-document edges. The GNN is trained based on the cross-entropy loss computed only for the labeled nodes, that is, the documents in the training set. When the final node representations are obtained, one can use that of the test documents to classify them and evaluate prediction performance.
There are significant differences between BIBREF53 and our work. First, our approach is inductive, not transductive. Indeed, while the node classification approach of BIBREF53 requires all test documents at training time, our graph classification model is able to perform inference on new, never-seen documents. The downside of representing documents as separate graphs, however, is that we lose the ability to capture corpus-level dependencies. Also, our directed graphs capture word ordering, which is ignored by BIBREF53. Finally, the approach of BIBREF53 requires computing the PMI for every word pair in the vocabulary, which may be prohibitive on datasets with very large vocabularies. On the other hand, the complexity of MPAD does not depend on vocabulary size.
Conclusion
We have proposed an application of the message passing framework to NLP, the Message Passing Attention network for Document understanding (MPAD). Experiments conducted on 10 standard text classification datasets show that our architecture is competitive with the state-of-the-art. By processing weighted, directed word co-occurrence networks, MPAD is sensitive to word order and word-word relationship strength. To explicitly capture the hierarchical structure of documents, we also propose three hierarchical variants of MPAD, that we show bring improvements over the vanilla architecture.
Acknowledgments
We thank the NVidia corporation for the donation of a GPU as part of their GPU grant program.
Question: Which component is the least impactful?

Answer: Based on the table results provided, changing directed to undirected edges had the least impact, with a maximum absolute difference of 0.33 points across all three datasets.
Introduction
Attempts at constructing human-like dialogue agents have met significant difficulties, such as maintaining conversation consistency BIBREF0. This is largely due to the inability of dialogue agents to engage the user emotionally because of an inconsistent personality BIBREF1. Many agents use personality models that attempt to map personality attributes into lower-dimensional spaces (e.g., the Big Five BIBREF2). However, these represent human personality at a very high level and lack depth. They prevent linking specific and detailed personality traits to characters, and constructing large datasets where dialogue is traceable back to these traits.
For this reason, we propose Human Level Attributes (HLAs), which we define as characteristics of fictional characters representative of their profile and identity. We base HLAs on tropes collected from TV Tropes BIBREF3, which are determined by viewers' impressions of the characters. See Figure FIGREF1 for an example. Based on the hypothesis that profile and identity contribute effectively to language style BIBREF4, we propose that modeling conversation with HLAs is a means for constructing a dialogue agent with stable human-like characteristics. By collecting dialogues from a variety of characters along with this HLA information, we present a novel labelling of this dialogue data where it is traceable back to both its context and associated human-like qualities.
We also propose a system called ALOHA (Artificial Learning On Human Attributes) as a novel method of incorporating HLAs into dialogue agents. ALOHA maps characters to a latent space based on their HLAs, determines which are most similar in profile and identity, and recovers language styles of specific characters. We test the performance of ALOHA in character language style recovery against four baselines, demonstrating outperformance and system stability. We also run a human evaluation supporting our results. Our major contributions are: (1) We propose HLAs as personality aspects of fictional characters from the audience's perspective based on tropes; (2) We provide a large dialogue dataset traceable back to both its context and associated human-like attributes; (3) We propose a system called ALOHA that is able to recommend responses linked to specific characters. We demonstrate that ALOHA, combined with the proposed dataset, outperforms baselines. ALOHA also shows stable performance regardless of the character's identity, genre of the show, and context of the dialogue. We plan to release all of ALOHA's data and code.
Related Work
Task completion chatbots (TCC), or task-oriented chatbots, are dialogue agents used to fulfill specific purposes, such as helping customers book airline tickets, or a government inquiry system. Examples include the AIML based chatbot BIBREF5 and DIVA Framework BIBREF6. While TCC are low cost, easily configurable, and readily available, they are restricted to working well for particular domains and tasks.
Open-domain chatbots are more generic dialogue systems. An example is the Poly-encoder from BIBREF7. It outperforms the Bi-encoder BIBREF8, BIBREF9 and matches the performance of the Cross-encoder BIBREF10, BIBREF11 while maintaining reasonable computation time. It performs strongly on downstream language understanding tasks involving pairwise comparisons, and demonstrates state-of-the-art results on the ConvAI2 challenge BIBREF12. Feed Yourself BIBREF13 is an open-domain dialogue agent with a self-feeding model. When the conversation goes well, the dialogue becomes part of the training data, and when the conversation does not, the agent asks for feedback. Lastly, Kvmemnn BIBREF14 is a key-value memory network with a knowledge base that uses a key-value retrieval mechanism to train over multiple domains simultaneously. We use all three of these models as baselines for comparison. While these can handle a greater variety of tasks, they do not respond with text that aligns with particular human-like characteristics.
BIBREF15 li2016persona defines persona (composite of elements of identity) as a possible solution at the word level, using backpropagation to align responses via word embeddings. BIBREF16 bartl2017retrieval uses sentence embeddings and a retrieval model to achieve higher accuracy on dialogue context. BIBREF17 liu2019emotion applies emotion states of sentences as encodings to select appropriate responses. BIBREF18 pichl2018alquist uses knowledge aggregation and hierarchy of sub-dialogues for high user engagement. However, these agents represent personality at a high-level and lack detailed human qualities. LIGHT BIBREF19 models adventure game characters' dialogues, actions, and emotions. It focuses on the agent identities (e.g. thief, king, servant) which includes limited information on realistic human behaviours. BIBREF20 pasunuru2018game models online soccer games as dynamic visual context. BIBREF21 wang2016learning models user dialogue to complete tasks involving certain configurations of blocks. BIBREF22 antol2015vqa models open-ended questions, but is limited to visual contexts. BIBREF23 bordes2016learning tracks user dialogues but is goal-oriented. BIBREF24 ilinykh2019meetup tracks players' dialogues and movements in a visual environment, and is grounded on navigation tasks. All of these perform well in their respective fictional environments, but are not a strong representation of human dialogue in reality.
Methodology ::: Human Level Attributes (HLA)
We collect HLA data from TV Tropes BIBREF3, a knowledge-based website dedicated to pop culture, containing information on a plethora of characters from a variety of sources. Similar to Wikipedia, its content is provided and edited collaboratively by a massive user-base. These attributes are determined by human viewers and their impressions of the characters, and are correlated with human-like characteristics. We believe that TV Tropes is better for our purpose of fictional character modeling than data sources used in works such as BIBREF25 shuster2019engaging because TV Tropes' content providers are rewarded for correctly providing content through community acknowledgement.
TV Tropes defines tropes as attributes of storytelling that the audience recognizes and understands. We use tropes as HLAs to calculate correlations with specific target characters. We collect data on numerous characters from a variety of TV shows, movies, and anime. We filter and keep characters with at least five HLAs, as those with fewer are not complex enough to be modeled correctly, for reasons such as lack of data. This eliminates 5.86% of total characters, leaving 45,821 characters and 12,815 unique HLAs, resulting in 945,519 total character-HLA pairs. Each collected character has 20.64 HLAs on average. See Figure FIGREF1 for an example character and their HLAs.
Methodology ::: Overall Task
Our task is the following, where $t$ denotes “target":
Given a target character $c_t$ with HLA set $H_t$, recover the language style of $c_t$ without any dialogue of $c_t$ provided.
For example, if Sheldon Cooper from The Big Bang Theory is $c_t$, then $H_t$ is the set of HLA on the left side of Figure FIGREF1.
We define the language style of a character as its diction, tone, and speech patterns. It is a character specific language model refined from a general language model. We must learn to recover the language style of $c_t$ without its dialogue as our objective is to imitate human-like qualities, and hence the model must understand the language styles of characters based on their traits. If we feed $c_t$'s dialogue during training, the model will likely not effectively learn to imitate language styles based on HLAs, but based on the correlation between text in the training and testing dialogues BIBREF26.
We define character space as the character representations within the HLA latent space (see Figure FIGREF4), and the set $C = \lbrace c_1,c_2,...,c_n\rbrace $ as the set of all characters. We define Observation (OBS) as the input that is fed into any dialogue model. This can be a single or multiple lines of dialogue along with other information. The goal of the dialogue model is to find the best response to this OBS.
Methodology ::: ALOHA
We propose a three-component system called ALOHA to solve the task (see Figure FIGREF6). The first component, Character Space Module (CSM), generates the character space and calculates confidence levels using singular value decomposition BIBREF27 between characters $c_j$ (for $j = 1$ to $n$ where $j \ne t$) and $c_t$ in the HLA-oriented neighborhood.
The second component, Character Community Module (CCM), ranks the similarity between our target character $c_t$ with any other character $c_j$ by the relative distance between them in the character space.
The third component, Language Style Recovery Module (LSRM), recovers the language style of $c_t$ without its dialogue by training the BERT bi-ranker model BIBREF28 to rank responses from similar characters. Our results demonstrate higher accuracy at retrieving the ground truth response from $c_t$. Our system is also able to pick responses which are correct both in context as well as character space.
Hence, the overall process for ALOHA works as follows. First, given a set of characters, determine the character space using the CSM. Next, given a specific target character, determine the positive community and negative set of associated characters using the CCM. Lastly, using the positive community and negative set determined above along with a dialogue dataset, recover the language style of the target.
Methodology ::: Character Space Module (CSM)
CSM learns how to rank characters. We can measure the interdependencies between the HLA variables BIBREF29 and rank the similarity between the TV show characters. We use implicit feedback instead of neighborhood models (e.g. cosine similarity) because it can compute latent factors to transform both characters and HLAs into the same latent space, making them directly comparable.
We define a matrix $P$ that contains binary values, with $P_{u,i} = 1$ if character $u$ has HLA $i$ in our dataset, and $P_{u,i} = 0$ otherwise. We define a constant $\alpha $ that measures our confidence in observing various character-HLA pairs as positive. $\alpha $ controls how much the model penalizes the error if the ground truth is $P_{u,i} = 1$. If $P_{u,i} = 1$ and the model guesses incorrectly, we penalize by $\alpha $ times the loss. But if $P_{u,i} = 0$ and the model guesses a value greater than 0, we do not penalize as $\alpha $ has no impact. This is because $P_{u,i} = 0$ can either represent a true negative or be due to a lack of data, and hence is less reliable for penalization. See Equation DISPLAY_FORM8. We find that using $\alpha =20$ provides decent results.
We further define two dense vectors $X_u$ and $Y_i$. We call $X_u$ the “latent factors for character $u$", and $Y_i$ the “latent factors for HLA $i$". The dot product of these two vectors produces a value ($X_u^TY_i$) that approximates $P_{u,i}$ (see Figure FIGREF9). This is analogous to factoring the matrix $P$ into two separate matrices, where one contains the latent factors for characters, and the other contains the latent factors for HLAs. We find that $X_u$ and $Y_i$ being 36-dimensional produces decent results. To bring $X_u^TY_i$ as close as possible to $P_{u,i}$, we minimize the following loss function using the Conjugate Gradient Method BIBREF30:
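Assuming the standard confidence-weighted implicit-feedback objective of BIBREF29 hu2008collaborative, with confidence $c_{u,i} = 1 + \alpha P_{u,i}$ (the confidence definition is our assumption), the loss has the form:

$\min_{X,Y} \sum_{u,i} c_{u,i} \left( P_{u,i} - X_u^T Y_i \right)^2 + \lambda \left( \sum_u \Vert X_u \Vert^2 + \sum_i \Vert Y_i \Vert^2 \right)$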
The first term penalizes differences between the model's prediction ($X_u^TY_i$) and the actual value ($P_{u,i}$). The second term is an L2 regularizer to reduce overfitting. We find $\lambda = 100$ provides decent results for 500 iterations (see Section SECREF26).
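As an illustration, a minimal NumPy sketch of this factorization is shown below. The paper minimizes the loss with the Conjugate Gradient Method BIBREF30 for 500 iterations; for brevity this sketch substitutes closed-form alternating least squares updates, and the function and variable names are ours rather than from the released code.

```python
import numpy as np

def fit_csm(P, n_factors=36, alpha=20.0, lam=100.0, n_iters=15, seed=0):
    """Confidence-weighted factorization of the binary character-HLA matrix P.

    P: (n_characters, n_hlas) array with P[u, i] = 1 if character u has HLA i.
    Returns latent factors X (characters) and Y (HLAs) so that X @ Y.T approximates P.
    """
    rng = np.random.default_rng(seed)
    n_chars, n_hlas = P.shape
    X = 0.01 * rng.standard_normal((n_chars, n_factors))
    Y = 0.01 * rng.standard_normal((n_hlas, n_factors))
    C = 1.0 + alpha * P                      # confidence: observed pairs are penalized more
    reg = lam * np.eye(n_factors)

    for _ in range(n_iters):
        for u in range(n_chars):             # fix Y, solve for each character's factors
            A = Y.T @ (C[u][:, None] * Y) + reg     # Y^T C_u Y + lambda I
            b = Y.T @ (C[u] * P[u])                 # Y^T C_u p_u
            X[u] = np.linalg.solve(A, b)
        for i in range(n_hlas):              # fix X, solve for each HLA's factors
            A = X.T @ (C[:, i][:, None] * X) + reg
            b = X.T @ (C[:, i] * P[:, i])
            Y[i] = np.linalg.solve(A, b)
    return X, Y
```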
Methodology ::: Character Community Module (CCM)
CCM aims to divide characters (other than $c_t$) into a positive community and a negative set. We define this positive community as characters that are densely connected internally to $c_t$ within the character space, and the negative set as the remaining characters. We can then sample dialogue from characters in the negative set to act as the distractors (essentially negative samples) during LSRM training.
As community finding is an ill-defined problem BIBREF31, we choose to treat CCM as a simple undirected, unweighted graph. We use the values learned in the CSM for $X_u$ and $Y_i$ for various values of $u$ and $i$, which approximate the matrix $P$. Similar to BIBREF29 hu2008collaborative, we can calculate the correlation between two rows (and hence two characters).
We then employ a two-level connection representation by ranking all characters against each other in terms of their correlation with $c_t$. For the first level, the set $S^{FL}$ contains the top 10% (4582) most highly correlated characters with $c_t$ out of the 45,820 total other characters that we have HLA data for. For the second level, for each character $s_i$ in $S^{FL}$, we determine the 30 most heavily correlated characters with $s_i$ as set $S^{SL}_i$. The positive set $S^{pos}$ consists of the characters which are present in at least 10 $S^{SL}_i$ sets. We call this value 10 the minimum frequency. All other characters in our dialogue dataset make up the negative set $S^{neg}$. These act as our positive community and negative set, respectively. See Algorithm 1 in Appendix A for details, and Figure FIGREF11 for an example.
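The sketch below illustrates this two-level procedure, assuming character-to-character similarity is computed as the correlation between rows of the learned factor matrix $X$; the function and argument names are hypothetical, not taken from the released code.

```python
import numpy as np

def ccm_split(X, target_idx, dialogue_char_idxs,
              first_level_frac=0.10, second_level_k=30, min_freq=10):
    """Split characters into a positive community and a negative set for a target character."""
    corr = np.corrcoef(X)                        # (n_chars, n_chars) row-wise correlations
    n_chars = X.shape[0]
    others = [j for j in range(n_chars) if j != target_idx]

    # First level: top 10% of characters most correlated with the target.
    n_first = int(first_level_frac * len(others))
    first_level = sorted(others, key=lambda j: corr[target_idx, j], reverse=True)[:n_first]

    # Second level: for each first-level character, its 30 most correlated characters.
    counts = {}
    for s in first_level:
        second = [j for j in np.argsort(-corr[s]) if j != s][:second_level_k]
        for j in second:
            counts[j] = counts.get(j, 0) + 1

    # Positive community: characters appearing in at least `min_freq` second-level sets.
    positive = {int(j) for j, c in counts.items() if c >= min_freq}
    positive.discard(target_idx)
    negative = {j for j in dialogue_char_idxs if j not in positive and j != target_idx}
    return positive, negative
```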
Methodology ::: Language Style Recovery Module (LSRM)
LSRM creates a dialogue agent that aligns with observed characteristics of human characters by using the positive character community and negative set determined in the CCM, along with a dialogue dataset, to recover the language style of $c_t$ without its dialogue. We use the BERT bi-ranker model from the Facebook ParlAI framework BIBREF32, where the model has the ability to retrieve the best response out of 20 candidate responses. BIBREF12, BIBREF19, BIBREF0 choose 20 candidate responses, and for comparison purposes, we do the same.
Methodology ::: Language Style Recovery Module (LSRM) ::: BERT
BIBREF28 is first trained on massive amounts of unlabeled text data. It jointly conditions on text to both the left and the right, which provides a deep bi-directional representation of sentence inference. BERT has been shown to perform well on a wide range of tasks by simply fine-tuning one additional layer. We are interested in its ability to predict the next sentence, called Next Sentence Prediction. We perform further fine-tuning on BERT for our target character language style retrieval task to produce our LSRM model by optimizing both the encoding layers and the additional layer. We use BERT to create vector representations for the OBS and for each candidate response. By passing the first output of BERT's 12 layers through an additional linear layer, these representations can be obtained as 768-dimensional sentence-level embeddings. The model uses the dot product between these embeddings to score candidate responses and is trained using the ranking loss.
Methodology ::: Language Style Recovery Module (LSRM) ::: Candidate response selection
is similar to the procedure from previous work done on grounded dialogue agents BIBREF0, BIBREF19. Along with the ground truth response, we randomly sample 19 distractor responses from other characters from a uniform distribution of characters, and call this process uniform character sampling. Based on our observations, this random sampling provides multiple context correct responses. Hence, the BERT bi-ranker model is trained by learning to choose context correct responses, and the model learns to recover a domain-general language model that includes training on every character. This results in a Uniform Model that can select context correct responses, but not responses corresponding to a target character with specific HLAs.
We then fine-tune on the above model to produce our LSRM model with a modification: we randomly sample the 19 distractor responses from only the negative character set instead. We choose the responses that have similar grammatical structures and semantics to the ground truth response, and call this process negative character sampling. This guides the model away from the language style of these negative characters to improve performance at retrieving responses for target characters with specific HLAs. Our results demonstrate higher accuracy at retrieving the correct response from character $c_t$, which is the ground truth.
Experiment ::: Dialogue Dataset
To train the Uniform Model and LSRM, we collect dialogues from 327 major characters (a subset of the 45,821 characters we have HLA data for) in 38 TV shows from various existing sources of clean data on the internet, resulting in a total of 1,042,647 dialogue lines. We use a setup similar to the Persona-Chat dataset BIBREF0 and Cornell Movie-Dialogs Corpus BIBREF33, as our collected dialogues are also paired in terms of valid conversations. See Figure FIGREF1 for an example of these dialogue lines.
Experiment ::: HLA Observation Guidance (HLA-OG)
We define HLA Observation Guidance (HLA-OG) as explicitly passing a small subset of the most important HLAs of a given character as part of the OBS rather than just an initial line of dialogue. This is adapted from the process used in BIBREF0 zhang2018personalizing and BIBREF10 wolf2019transfertransfo which we call Persona Profiling. Specifically, we pass four HLAs that are randomly drawn from the top 40 most important HLAs of the character. We use HLA-OG during training of the LSRM and testing of all models. This is because the baselines (see Section SECREF31) already follow a similar process (Persona Profiling) for training. For the Uniform Model, we train using Next Sentence Prediction (see Section SECREF12). For testing, HLA-OG is necessary as it provides information about which HLAs the models should attempt to imitate in their response selection. Just passing an initial line of dialogue replicates a typical dialogue response task without HLAs. See Table TABREF19. Further, we also test our LSRM by explicitly passing four HLAs of `none' along with the initial line of dialogue as the OBS (No HLA-OG in Table TABREF19).
Experiment ::: Training Details ::: BERT bi-ranker
is trained by us on the Persona-Chat dataset for the ConvAI2 challenge. Similar to BIBREF0 zhang2018personalizing, we cap the length of the OBS at 360 tokens and the length of each candidate response at 72 tokens. We use a batch size of 64, a learning rate of 5e-5, and perform warm-up updates for 100 iterations. We optimize with SGD using Nesterov's accelerated gradient BIBREF34, and the learning rate scheduler is set to a decay of 0.4 and to reduce on plateau.
Experiment ::: Training Details ::: Uniform Model
is produced by finetuning the BERT bi-ranker on the dialogue data discussed in Section SECREF15 using uniform character sampling. We use the same hyperparameters as the BERT bi-ranker along with half-precision operations (i.e. float16 operations) to increase batch size as recommended BIBREF7.
Experiment ::: Training Details ::: LSRM
is produced by finetuning on the Uniform Model discussed above using negative character sampling. We use the same hyperparameters as the BERT bi-ranker along with half-precision operations (i.e. float16 operations) to increase batch size as recommended.
Evaluation ::: CSM Evaluation
We begin by evaluating the ability of the CSM component of our system to correctly generate the character space. To do so, during training, 30% of the character-HLA pairs (which are either 0 or 1) are masked, and this is used as a validation set (see Figure FIGREF9). For each character $c$, the model generates a list of the 12,815 unique HLAs ranked similarly to BIBREF29 hu2008collaborative for $c$. We look at the recall of our CSM model, which measures the percentage of total ground truth HLAs (over all characters $c$) present within the top N ranked HLAs for all $c$ by our model. That is:
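Written out, this recall is (our reconstruction, consistent with the description above):

$Recall@N = \frac{\sum_{c} | HLA_{c}^{gt} \cap HLA_{c}^{tN} |}{\sum_{c} | HLA_{c}^{gt} |}$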
where $HLA_{c}^{gt}$ are the ground truth HLAs for $c$, and $HLA_{c}^{tN}$ are the top N ranked HLAs by the model for $c$. We use $N = 100$, and our model achieves 25.08% recall.
To inspect the CSM performance, we use the T-distributed Stochastic Neighbor Embedding (t-SNE) BIBREF35 to reduce each high-dimensionality data point to two-dimensions via Kullback-Leibler Divergence BIBREF36. This allows us to map our character space into two-dimensions, where similar characters from our embedding space have higher probability of being mapped close by. We sampled characters from four different groups or regions. As seen in Figure FIGREF4, our learned character space effectively groups these characters, as similar characters are adjacent to one another in four regions.
Evaluation ::: Automatic Evaluation Setup ::: Five-Fold Cross Validation
is used for training and testing of the Uniform Model and LSRM. The folds are divided randomly by the TV shows in our dialogue data. We use the dialogue data for 80% of these shows as the four-folds for training, and the dialogue data for the remaining 20% as the fifth-fold for validation/testing. The dialogue data used is discussed in Section SECREF15. This ensures no matter how our data is distributed, each part of it is tested, allowing our evaluation to be more robust to different characters. See Appendix C for five-fold cross validation details and statistics.
Evaluation ::: Automatic Evaluation Setup ::: Five Evaluation Characters
are chosen, one from each of the five testing sets above. Each is a well-known character from a separate TV show, and acts as a target character $c_t$ for evaluation of every model. We choose Sheldon Cooper from The Big Bang Theory, Jean-Luc Picard from Star Trek, Monica Geller from Friends, Gil Grissom from CSI, and Marge Simpson from The Simpsons. We choose characters of significantly different identities and profiles (intelligent scientist, ship captain, outgoing friend, police leader, and responsible mother, respectively) from shows of a variety of genres to ensure that we can successfully recover the language styles of various types of characters. We choose well-known characters because humans require knowledge on the characters they are evaluating (see Section SECREF40).
For each of these five evaluation characters, all the dialogue lines from the character act as the ground truth responses. The initial dialogue lines are the corresponding dialogue lines to which these ground truth responses are responding. For each initial dialogue line, we randomly sample 19 other candidate responses from the associated testing set using uniform character sampling. Note that this is for evaluation, and hence we use the same uniform character sampling method for all models including ALOHA. The use of negative character sampling is only in ALOHA's training.
Evaluation ::: Baselines
We compare against four dialogue system baselines: Kvmemnn, Feed Yourself, Poly-encoder, and a BERT bi-ranker baseline trained on the Persona-Chat dataset using the same training hyperparameters (including learning rate scheduler and length capping settings) described in Section SECREF20. For the first three models, we use the provided pretrained (on Persona-Chat) models. We evaluate all four on our five evaluation characters discussed in Section SECREF28.
Evaluation ::: Key Evaluation Metrics ::: Hits@n/N
is the accuracy of the correct ground truth response being within the top $n$ ranked candidate responses out of $N$ total candidates. We measure Hits@1/20, Hits@5/20, and Hits@10/20.
Evaluation ::: Key Evaluation Metrics ::: Mean Rank
is the average rank that a model assigns the ground truth response among the 20 total candidates.
Evaluation ::: Key Evaluation Metrics ::: Mean Reciprocal Rank (MRR)
BIBREF37 looks at the mean of the multiplicative inverses of the rank of each correct answer out of a sample of queries $Q$:
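In its standard form, this is:

$MRR = \frac{1}{|Q|} \sum_{i=1}^{|Q|} \frac{1}{rank_i}$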
where $rank_i$ refers to the rank position of the correct response for the $i$-th query, and $|Q|$ refers to the total number of queries in $Q$.
Evaluation ::: Key Evaluation Metrics ::: $F_1$-score
equals $2 * \frac{precision*recall}{precision+recall}$. For dialogue, precision is the fraction of words in the chosen response contained in the ground truth response, and recall is the fraction of words in the ground truth response contained in the chosen response.
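A direct implementation of this word-overlap definition, treating each response as a set of lowercased words (whether the original evaluation lowercases or de-duplicates tokens is our assumption), might look like:

```python
def dialogue_f1(chosen: str, ground_truth: str) -> float:
    """Word-overlap F1 between a chosen response and the ground truth response."""
    chosen_words = set(chosen.lower().split())
    truth_words = set(ground_truth.lower().split())
    if not chosen_words or not truth_words:
        return 0.0
    overlap = chosen_words & truth_words
    precision = len(overlap) / len(chosen_words)
    recall = len(overlap) / len(truth_words)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```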
Evaluation ::: Key Evaluation Metrics ::: BLEU
BIBREF38 generally indicates how close two pieces of text are in content and structure, with higher values indicating greater similarity. We report our final BLEU scores as the average scores of 1 to 4-grams.
Evaluation ::: Human Evaluation Setup
We conduct a human evaluation with 12 participants, 8 male and 4 female, who are affiliated project researchers aged 20-39 at the University of [ANON]. We choose the same five evaluation characters as in Section SECREF28. To control bias, each participant evaluates one or two characters. For each character, we randomly select 10 testing samples (each includes an initial line of dialogue along with 20 candidate responses, one of which is the ground truth) from the same testing data for the automatic evaluation discussed in Section SECREF28.
These ten samples make up a single questionnaire presented in full to each participant evaluating the corresponding character, and the participant is asked to select the single top response they think the character would most likely respond with for each of the ten initial dialogue lines. See Figure FIGREF41 for an example. We mask any character names within the candidate responses to prevent human participants from using names to identify which show the response is from.
Each candidate is prescreened to ensure they have sufficient knowledge of the character to be a participant. We ask three prescreening questions where the participant has to identify an image, relationship, and occupation of the character. All 12 of our participants passed the prescreening.
Results and Analysis ::: Evaluation Results
Table TABREF44 shows average results of our automatic and human evaluations. Table TABREF45 shows average Hits@1/20 scores by evaluation character. See Appendix F for detailed evaluation results. ALOHA is the model with HLA-OG during training and testing, and ALOHA (No HLA-OG) is the model with HLA-OG during training but tested with the four HLAs in the OBS marked as `none' (see Section SECREF17). See Appendix G for demo interactions between a human, BERT bi-ranker baseline, and ALOHA for all five evaluation characters.
Results and Analysis ::: Evaluation Challenges
The evaluation of our task (retrieving the language style of a specific character) is challenging and hence the five-fold cross validation is necessary for the following reasons:
The ability to choose a context correct response without attributes of specific characters may be hard to separate from our target metric, which is the ability to retrieve the correct response of a target character by its HLAs. However, from manual observation, we noticed that in the 20 chosen candidate responses, there are typically numerous context correct responses, but only one ground truth for the target character (for an example, see Figure FIGREF41). Hence, a model that only chooses dialogue based on context is distinguishable from one that learns HLAs.
Retrieving responses for the target character depends on the other candidate responses. For example, dialogue retrieval performance for Grissom from CSI, which is a crime/police context, is higher than other evaluation characters (see Table TABREF45), potentially due to other candidate responses not falling within the same crime/police context.
Results and Analysis ::: Performance: ALOHA vs. Humans
As observed from Table TABREF44, ALOHA has a performance relatively close to humans. Human Hits@1/20 scores have a mean of 40.67% and a median over characters of 40%. The limited human evaluation sample size limits what can be inferred, but it indicates that the problem is solved to the extent that ALOHA is able to perform relatively close to humans on average. Notice that even humans do not perform extremely well, demonstrating that this task of character based dialogue retrieval is more difficult than typical dialogue retrieval tasks BIBREF19, BIBREF12.
Looking more closely at each character from Table TABREF45, we can see that human evaluation scores are higher for Sheldon and Grissom. This may be due to these characters having more distinct personalities, making them more memorable.
We also look at Pearson correlation values of the Hits@1/20 scores across the five evaluation characters. For human versus Uniform Model, this is -0.4694, demonstrating that the Uniform Model, without knowledge of HLAs, fails to imitate human impressions. For human versus ALOHA, this is 0.4250, demonstrating that our system is able to retrieve character responses somewhat similarly to human impressions. Lastly, for human versus the difference in scores between ALOHA and Uniform Model, this is 0.7815. The difference between ALOHA and the Uniform Model, which is based on the additional knowledge of the HLAs, is hence shown to improve upon the Uniform Model similarly to human impressions. This demonstrates that HLAs are indeed an accurate method of modeling human impressions of character attributes, and also demonstrates that our system, ALOHA, is able to effectively use these HLAs to improve upon dialogue retrieval performance.
Results and Analysis ::: Performance: ALOHA vs. Baselines
ALOHA, combined with the HLAs and dialogue dataset, achieves a significant improvement on the target character language style retrieval task compared to the baseline open-domain chatbot models. As observed from Table TABREF44, ALOHA achieves a significant boost in Hits@n/N accuracy and other metrics for retrieving the correct response of five diverse characters with different identities (see Section SECREF28).
Results and Analysis ::: Performance: ALOHA vs. Uniform Model
We observe a noticeable improvement in performance between ALOHA and the Uniform Model in recovering the language styles of specific characters that is consistent across all five folds (see Tables TABREF44 and TABREF45), indicating that lack of knowledge of HLAs limits the ability of the model to successfully recover the language style of specific characters. We claim that, to the best of our knowledge, we have made the first step in using HLA-based character dialogue clustering to improve upon personality learning for chatbots.
ALOHA demonstrates an accuracy boost for all five evaluation characters, showing that the system is robust and stable and has the ability to recover the dialogue styles of fictional characters regardless of the character's profile and identity, genre of the show, and context of the dialogue.
Results and Analysis ::: Performance: HLA-OG
As observed from Table TABREF44, ALOHA performs slightly better overall compared to ALOHA (No HLA-OG). Table TABREF45 shows that this slight performance increase is consistent across four of the five evaluation characters. In the case of Sheldon, the HLA-OG model performs a bit worse. This is possibly due to the large number of Sheldon's HLAs (217) compared to the other four evaluation characters (average of 93.75), along with the limited amount of HLAs we are using for guidance due to the models' limited memory. In general, HLA Observation Guidance during testing appears to improve upon the performance of ALOHA, but this improvement is minimal.
Conclusion and Future Work
We proposed Human Level Attributes (HLAs) as a novel approach to model human-like attributes of characters, and collected a large volume of dialogue data for various characters with complete and robust profiles. We also proposed and evaluated a system, ALOHA, that uses HLAs to recommend tailored responses traceable to specific characters, and demonstrated its outperformance of the baselines and ability to effectively recover language styles of various characters, showing promise for learning character or personality styles. ALOHA was also shown to be stable regardless of the character's identity, genre of show, and context of dialogue.
Potential directions for future work include training ALOHA with a multi-turn response approach BIBREF0 that tracks dialogue over multiple responses, as we could not acquire multi-turn dialogue data for TV shows. Another possibility is modeling the dialogue counterpart (e.g. the dialogue of other characters speaking to the target character). Further, performing semantic text exchange on the chosen response with a model such as SMERTI BIBREF39 may improve the ability of ALOHA to converse with humans. This is because the response may be context and HLA correct, but incorrect semantically (e.g. the response may say the weather is sunny when it is actually rainy). HLA-aligned generative models are another area of exploration. Typically, generative models produce text that is less fluent, but further work in this area may lead to better results. Lastly, a more diverse and larger participant pool is required due to the limited size of our human evaluation.
|
How big is the difference in performance between proposed model and baselines?
|
Metric difference between ALOHA and the best baseline score:
Hits@1/20: +0.061 (0.3642 vs 0.3032)
MRR: +0.0572 (0.5114 vs 0.4542)
F1: -0.0484 (0.3901 vs 0.4385)
BLEU: +0.0474 (0.2867 vs 0.2393)
Introduction
People are increasingly using social networking platforms such as Twitter, Facebook, and YouTube to communicate their opinions and share information. Although the interactions among users on these platforms can lead to constructive conversations, they have been increasingly exploited for the propagation of abusive language and the organization of hate-based activities BIBREF0, BIBREF1, especially due to the mobility and anonymous environment of these online platforms. Violence attributed to online hate speech has increased worldwide. For example, in the UK there has been a significant increase in hate speech towards the immigrant and Muslim communities following the UK's decision to leave the EU and the Manchester and London attacks. The US has also seen a marked increase in hate speech and related crime following the Trump election. Therefore, governments and social network platforms confronting the trend must have tools to detect aggressive behavior in general, and hate speech in particular, as these forms of online aggression not only poison the social climate of the online communities that experience it, but can also provoke physical violence and serious harm BIBREF1.
Recently, the problem of online abusive detection has attracted scientific attention. Proof of this is the creation of the third Workshop on Abusive Language Online or Kaggle’s Toxic Comment Classification Challenge that gathered 4,551 teams in 2018 to detect different types of toxicities (threats, obscenity, etc.). In the scope of this work, we mainly focus on the term hate speech as abusive content in social media, since it can be considered a broad umbrella term for numerous kinds of insulting user-generated content. Hate speech is commonly defined as any communication criticizing a person or a group based on some characteristics such as gender, sexual orientation, nationality, religion, race, etc. Hate speech detection is not a stable or simple target because misclassification of regular conversation as hate speech can severely affect users’ freedom of expression and reputation, while misclassification of hateful conversations as unproblematic would maintain the status of online communities as unsafe environments BIBREF2.
To detect online hate speech, a large number of scientific studies have been dedicated to the task, using Natural Language Processing (NLP) in combination with Machine Learning (ML) and Deep Learning (DL) methods BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF0. Although supervised machine learning-based approaches have used different text mining-based features such as surface features, sentiment analysis, lexical resources, linguistic features, knowledge-based features, or user-based and platform-based metadata BIBREF8, BIBREF9, BIBREF10, they necessitate a well-defined feature extraction approach. The trend now seems to be changing direction, with deep learning models being used for both feature extraction and the training of classifiers. These newer models apply deep learning approaches such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory networks (LSTMs) BIBREF6, BIBREF0 to enhance the performance of hate speech detection models; however, they still suffer from a lack of labelled data or an inability to generalize.
Here, we propose a transfer learning approach for hate speech understanding using a combination of the unsupervised pre-trained model BERT BIBREF11 and some new supervised fine-tuning strategies. As far as we know, it is the first time that such exhaustive fine-tuning strategies are proposed along with a generative pre-trained language model to transfer learning to low-resource hate speech languages and improve performance of the task. In summary:
We propose a transfer learning approach using the pre-trained language model BERT learned on English Wikipedia and BookCorpus to enhance hate speech detection on publicly available benchmark datasets. Toward that end, for the first time, we introduce new fine-tuning strategies to examine the effect of different embedding layers of BERT in hate speech detection.
Our experimental results show that using the pre-trained BERT model and fine-tuning it on the downstream task by leveraging the syntactical and contextual information of all of BERT's transformers outperforms previous works in terms of precision, recall, and F1-score. Furthermore, examining the results shows the ability of our model to detect some biases in the process of collecting or annotating datasets. This can be a valuable clue for using the pre-trained BERT model to debias hate speech datasets in future studies.
Previous Works
Here, the existing body of knowledge on online hate speech and offensive language and transfer learning is presented.
Online Hate Speech and Offensive Language: Researchers have been studying hate speech on social media platforms such as Twitter BIBREF9, Reddit BIBREF12, BIBREF13, and YouTube BIBREF14 in the past few years. The features used in traditional machine learning approaches are the main aspect distinguishing different methods, and surface-level features such as bag of words, word-level and character-level $n$-grams, etc. have proven to be the most predictive features BIBREF3, BIBREF4, BIBREF5. Apart from features, different algorithms such as Support Vector Machines BIBREF15, Naive Bayes BIBREF1, and Logistic Regression BIBREF5, BIBREF9 have been applied for classification purposes. Waseem et al. BIBREF5 provided a test with a list of criteria based on work in Gender Studies and Critical Race Theory (CRT) that can annotate a corpus of more than $16k$ tweets as racism, sexism, or neither. To classify tweets, they used a logistic regression model with different sets of features, such as word and character $n$-grams up to 4, gender, length, and location. They found that their best model relies on character $n$-grams as the most indicative features, and that using location or length is detrimental. Davidson et al. BIBREF9 collected a $24K$ corpus of tweets containing hate speech keywords and labelled the corpus as hate speech, offensive language, or neither by using crowd-sourcing, and extracted different features such as $n$-grams, some tweet-level metadata such as the number of hashtags, mentions, retweets, and URLs, Part Of Speech (POS) tagging, etc. Their experiments with different multi-class classifiers showed that Logistic Regression with L2 regularization performs best at this task. Malmasi et al. BIBREF15 proposed an ensemble-based system that uses some linear SVM classifiers in parallel to distinguish hate speech from general profanity in social media.
As one of the first attempts in neural network models, Djuric et al. BIBREF16 proposed a two-step method including a continuous bag of words model to extract paragraph2vec embeddings and a binary classifier trained along with the embeddings to distinguish between hate speech and clean content. Badjatiya et al. BIBREF0 investigated three deep learning architectures, FastText, CNN, and LSTM, in which they initialized the word embeddings with either random or GloVe embeddings. Gambäck et al. BIBREF6 proposed a hate speech classifier based on CNN model trained on different feature embeddings such as word embeddings and character $n$-grams. Zhang et al. BIBREF7 used a CNN+GRU (Gated Recurrent Unit network) neural network model initialized with pre-trained word2vec embeddings to capture both word/character combinations (e. g., $n$-grams, phrases) and word/character dependencies (order information). Waseem et al. BIBREF10 brought a new insight to hate speech and abusive language detection tasks by proposing a multi-task learning framework to deal with datasets across different annotation schemes, labels, or geographic and cultural influences from data sampling. Founta et al. BIBREF17 built a unified classification model that can efficiently handle different types of abusive language such as cyberbullying, hate, sarcasm, etc. using raw text and domain-specific metadata from Twitter. Furthermore, researchers have recently focused on the bias derived from the hate speech training datasets BIBREF18, BIBREF2, BIBREF19. Davidson et al. BIBREF2 showed that there were systematic and substantial racial biases in five benchmark Twitter datasets annotated for offensive language detection. Wiegand et al. BIBREF19 also found that classifiers trained on datasets containing more implicit abuse (tweets with some abusive words) are more affected by biases rather than once trained on datasets with a high proportion of explicit abuse samples (tweets containing sarcasm, jokes, etc.).
Transfer Learning: Pre-trained vector representations of words (embeddings), extracted from vast amounts of text data, have been applied to almost every language-based task with promising results. Two of the most frequently used context-independent neural embeddings are word2vec and GloVe, extracted from shallow neural networks. The year 2018 was an inflection point for different NLP tasks thanks to remarkable breakthroughs: Universal Language Model Fine-Tuning (ULMFiT) BIBREF20, Embeddings from Language Models (ELMo) BIBREF21, OpenAI's Generative Pre-trained Transformer (GPT) BIBREF22, and Google's BERT model BIBREF11. Howard et al. BIBREF20 proposed ULMFiT, which can be applied to any NLP task by pre-training a universal language model on a general-domain corpus and then fine-tuning the model on target task data using discriminative fine-tuning. Peters et al. BIBREF21 used a bi-directional LSTM trained on a specific task to present context-sensitive representations of words in word embeddings by looking at the entire sentence. Radford et al. BIBREF22 and Devlin et al. BIBREF11 generated two transformer-based language models, OpenAI GPT and BERT respectively. OpenAI GPT BIBREF22 is a unidirectional language model, while BERT BIBREF11 is the first deeply bidirectional, unsupervised language representation, pre-trained using only a plain text corpus. BERT has two novel prediction tasks: Masked LM and Next Sentence Prediction. The pre-trained BERT model significantly outperformed ELMo and OpenAI GPT in a series of downstream tasks in NLP BIBREF11. Identifying hate speech and offensive language is a complicated task due to the lack of undisputed labelled data BIBREF15 and the inability of surface features to capture the subtle semantics in text. To address this issue, we use the pre-trained language model BERT for hate speech classification and try to fine-tune it for this specific task by leveraging information from different transformer encoders.
Methodology
Here, we analyze the BERT transformer model on the hate speech detection task. BERT is a multi-layer bidirectional transformer encoder trained on the English Wikipedia and the Book Corpus, containing 2,500M and 800M tokens respectively, and comes in two sizes named BERTbase and BERTlarge. BERTbase contains an encoder with 12 layers (transformer blocks), 12 self-attention heads, and 110 million parameters, whereas BERTlarge has 24 layers, 16 attention heads, and 340 million parameters. Extracted embeddings from BERTbase have 768 hidden dimensions BIBREF11. Since the BERT model is pre-trained on general corpora while our hate speech detection task deals with social media content, a crucial step is to analyze the contextual information extracted from BERT's pre-trained layers and then fine-tune it using annotated datasets. By fine-tuning we update weights using a labelled dataset that is new to an already trained model. As input and output, BERT takes a sequence of tokens of maximum length 512 and produces a representation of the sequence in a 768-dimensional vector. BERT inserts at most two special tokens into each input sequence, [CLS] and [SEP]. The [CLS] token is the first token of the input sequence and holds the special classification embedding; we take the final hidden state of [CLS] as the representation of the whole sequence in the hate speech classification task. The [SEP] token separates segments, and we do not use it in our classification task. To perform the hate speech detection task, we use the BERTbase model to classify each tweet as Racism, Sexism, or Neither in one dataset, and as Hate, Offensive, or Neither in the other. To do so, we focus on fine-tuning the pre-trained BERTbase parameters. By fine-tuning, we mean training a classifier with different layers of 768 dimensions on top of the pre-trained BERTbase transformer to minimize task-specific parameters.
Methodology ::: Fine-Tuning Strategies
Different layers of a neural network can capture different levels of syntactic and semantic information. The lower layers of the BERT model may contain more general information whereas the higher layers contain task-specific information BIBREF11, and we can fine-tune them with different learning rates. Here, four different fine-tuning approaches are implemented that exploit pre-trained BERTbase transformer encoders for our classification task. More information about these transformer encoders' architectures is presented in BIBREF11. In the fine-tuning phase, the model is initialized with the pre-trained parameters and then fine-tuned using the labelled datasets. The different fine-tuning approaches for the hate speech detection task are depicted in Figure FIGREF8, in which $X_{i}$ is the vector representation of token $i$ in a tweet sample, and are explained in more detail as follows:
1. BERT based fine-tuning: In the first approach, which is shown in Figure FIGREF8, very few changes are applied to BERTbase. In this architecture, only the [CLS] token output provided by BERT is used. The [CLS] output, which is equivalent to the [CLS] token output of the 12th transformer encoder, a vector of size 768, is given as input to a fully connected network without a hidden layer. The softmax activation function is applied to this output layer for classification.
2. Insert nonlinear layers: Here, the first architecture is upgraded and a more robust classifier is provided: instead of using a fully connected network without a hidden layer, a fully connected network with two hidden layers of size 768 is used. The first two layers use the Leaky ReLU activation function with negative slope = 0.01, while the final layer, as in the first architecture, uses the softmax activation function, as shown in Figure FIGREF8.
3. Insert Bi-LSTM layer: Unlike previous architectures that only use [CLS] as the input for the classifier, in this architecture all outputs of the latest transformer encoder are used in such a way that they are given as inputs to a bidirectional recurrent neural network (Bi-LSTM) as shown in Figure FIGREF8. After processing the input, the network sends the final hidden state to a fully connected network that performs classification using the softmax activation function.
4. Insert CNN layer: In this architecture, shown in Figure FIGREF8, the outputs of all transformer encoders are used instead of only the output of the latest transformer encoder. The output vectors of each transformer encoder are concatenated to produce a matrix. The convolutional operation is performed with a window of size (3, hidden size of BERT, which is 768 in the BERTbase model), and the maximum value is generated for each transformer encoder by applying max pooling on the convolution output. By concatenating these values, a vector is generated which is given as input to a fully connected network. By applying softmax to the output of this network, the classification operation is performed.
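A rough PyTorch sketch of this fourth architecture is shown below. It uses the current Hugging Face transformers API rather than the pytorch-pretrained-bert library used in our implementation, and the pooling arrangement (one max-pooled value per encoder) reflects our reading of the description above, so it should be treated as an approximation rather than the exact code.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class BertCnnClassifier(nn.Module):
    """BERTbase with a CNN head over the token outputs of all 12 transformer encoders."""

    def __init__(self, n_classes=3, hidden=768, n_layers=12):
        super().__init__()
        self.bert = BertModel.from_pretrained(
            "bert-base-uncased", output_hidden_states=True)
        # A single convolution with a (3, hidden) window slides over each encoder's token sequence.
        self.conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=(3, hidden))
        self.classifier = nn.Linear(n_layers, n_classes)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # out.hidden_states holds 13 tensors (embedding layer + 12 encoders), each (B, T, 768).
        pooled = []
        for layer_out in out.hidden_states[1:]:
            feat = self.conv(layer_out.unsqueeze(1))       # (B, 1, T - 2, 1)
            feat = feat.squeeze(3).squeeze(1)              # (B, T - 2)
            pooled.append(feat.max(dim=1).values)          # max over the sequence -> (B,)
        features = torch.stack(pooled, dim=1)              # (B, 12)
        return self.classifier(features)                   # softmax is applied in the loss
```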
Experiments and Results
We first introduce datasets used in our study and then investigate the different fine-tuning strategies for hate speech detection task. We also include the details of our implementation and error analysis in the respective subsections.
Experiments and Results ::: Dataset Description
We evaluate our method on two widely-studied datasets provided by Waseem and Hovy BIBREF5 and Davidson et al. BIBREF9. Waseem and Hovy BIBREF5 collected $16k$ tweets based on an initial ad-hoc approach that searched common slurs and terms related to religious, sexual, gender, and ethnic minorities. They annotated their dataset manually as racism, sexism, or neither. To extend this dataset, Waseem BIBREF23 also provided another dataset containing $6.9k$ tweets annotated by both expert and crowdsourcing users as racism, sexism, neither, or both. Since both datasets partially overlap and used the same strategy in the definition of hateful content, we merged these two datasets following Waseem et al. BIBREF10 to make our imbalanced data a bit larger. Davidson et al. BIBREF9 used the Twitter API to accumulate 84.4 million tweets from 33,458 Twitter users containing particular terms from a pre-defined lexicon of hate speech words and phrases, called Hatebase.org. To annotate the collected tweets as Hate, Offensive, or Neither, they randomly sampled $25k$ tweets and asked users of the CrowdFlower crowdsourcing platform to label them. The detailed distribution of the different classes in both datasets is provided in Subsection SECREF15.
Experiments and Results ::: Pre-Processing
We find mentions of users, numbers, hashtags, URLs and common emoticons and replace them with the tokens <user>, <number>, <hashtag>, <url>, <emoticon>. We also find elongated words and convert them into their short and standard format; for example, converting yeeeessss to yes. For hashtags that concatenate several tokens without spaces between them, we replace them with their textual counterparts; for example, we convert the hashtag "#notsexist" to "not sexist". All punctuation marks, unknown uni-codes and extra delimiting characters are removed, but we keep all stop words because our model trains on the sequence of words in a text directly. We also convert all tweets to lower case.
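A sketch of these substitutions with regular expressions is given below; the exact patterns, the emoticon inventory, and the hashtag segmentation method are not specified above, so the ones here are illustrative (in particular, splitting concatenated hashtags such as "notsexist" into separate words is omitted).

```python
import re

def preprocess_tweet(text: str) -> str:
    """Approximate normalization of a tweet before BERT tokenization."""
    text = text.lower()
    text = re.sub(r"https?://\S+|www\.\S+", " <url> ", text)
    text = re.sub(r"@\w+", " <user> ", text)
    text = re.sub(r"[:;=8][\-o\*']?[\)\(\]\[dpb/\\]", " <emoticon> ", text)
    text = re.sub(r"#(\w+)", r" <hashtag> \1 ", text)
    text = re.sub(r"\b\d+\b", " <number> ", text)
    text = re.sub(r"(\w)\1{2,}", r"\1", text)        # elongated words: yeeeessss -> yes
    text = re.sub(r"[^\w<>\s]", " ", text)           # drop punctuation and stray symbols
    return " ".join(text.split())
```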
Experiments and Results ::: Implementation and Results Analysis
For the implementation of our neural network, we used the pytorch-pretrained-bert library containing the pre-trained BERT model, text tokenizer, and pre-trained WordPiece. As the implementation environment, we use the Google Colaboratory tool, which is a free research tool with a Tesla K80 GPU and 12G RAM. Based on our experiments, we trained our classifier with a batch size of 32 for 3 epochs. The dropout probability is set to 0.1 for all layers. The Adam optimizer is used with a learning rate of 2e-5. As input, we tokenized each tweet with the BERT tokenizer, which removes invalid characters, splits on punctuation, and lowercases the words. Based on the original BERT BIBREF11, we split words into subword units using WordPiece tokenization. As tweets are short texts, we set the maximum sequence length to 64; any shorter or longer sequence is padded with zero values or truncated to the maximum length.
We consider 80% of each dataset as training data to update the weights in the fine-tuning phase, 10% as validation data to measure the out-of-sample performance of the model during training, and 10% as test data to measure the out-of-sample performance after training. To prevent overfitting, we use stratified sampling to select 0.8, 0.1, and 0.1 portions of tweets from each class (racism/sexism/neither or hate/offensive/neither) for train, validation, and test. The class distributions of the train, validation, and test datasets are shown in Table TABREF16.
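For instance, such a stratified 80/10/10 split can be obtained with two calls to scikit-learn's train_test_split (variable names are illustrative):

```python
from sklearn.model_selection import train_test_split

# texts: list of preprocessed tweets; labels: parallel list of class labels.
train_x, rest_x, train_y, rest_y = train_test_split(
    texts, labels, test_size=0.2, stratify=labels, random_state=42)
val_x, test_x, val_y, test_y = train_test_split(
    rest_x, rest_y, test_size=0.5, stratify=rest_y, random_state=42)
```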
As can be seen from Tables TABREF16(classdistributionwaseem) and TABREF16(classdistributiondavidson), we are dealing with imbalanced datasets with varying class distributions. Since hate speech and offensive language are real phenomena, we did not perform oversampling or undersampling techniques to adjust the class distributions and tried to keep the datasets as realistic as possible. We evaluate the effect of different fine-tuning strategies on the performance of our model. Table TABREF17 summarizes the obtained results for the fine-tuning strategies along with the official baselines. We use Waseem and Hovy BIBREF5, Davidson et al. BIBREF9, and Waseem et al. BIBREF10 as baselines and compare the results with our different fine-tuning strategies using the pre-trained BERTbase model. The evaluation results are reported on the test dataset and on three different metrics: precision, recall, and weighted-average F1-score. We consider the weighted-average F1-score as the most robust metric versus class imbalance, which gives insight into the performance of our proposed models. According to Table TABREF17, the F1-scores of all BERT-based fine-tuning strategies except BERT + nonlinear classifier on top of BERT are higher than the baselines. Using the pre-trained BERT model as initial embeddings and fine-tuning the model with a fully connected linear classifier (BERTbase) outperforms previous baselines, yielding F1-scores of 81% and 91% for the Waseem and Davidson datasets respectively. Inserting a CNN into the pre-trained BERT model for fine-tuning on the downstream task provides the best results, with F1-scores of 88% and 92% for the Waseem and Davidson datasets, clearly exceeding the baselines. Intuitively, it makes sense that combining all pre-trained BERT layers with a CNN yields better results, since the model then uses all the information included in the different layers of pre-trained BERT during the fine-tuning phase. This information contains both syntactical and contextual features coming from the lower to the higher layers of BERT.
Experiments and Results ::: Error Analysis
Although we obtain very interesting results in terms of recall, the precision of the model shows the proportion of false detections. To better understand this phenomenon, in this section we perform a deeper analysis of the model's errors. We investigate the test datasets and their confusion matrices resulting from the BERTbase + CNN model as the best fine-tuning approach; these are depicted in Figures FIGREF19 and FIGREF19. According to Figure FIGREF19 for the Waseem dataset, it is obvious that the model can separate sexism from racism content properly. Only two samples belonging to the racism class are misclassified as sexism and none of the sexism samples are misclassified as racism. A large majority of the errors come from misclassifying hateful categories (racism and sexism) as hateless (neither) and vice versa. 0.9% and 18.5% of all racism samples are misclassified as sexism and neither respectively, whereas the figures are 0% and 12.7% for sexism samples. Almost 12% of neither samples are misclassified as racism or sexism. As Figure FIGREF19 makes clear for the Davidson dataset, the majority of errors are related to the hate class, where the model misclassified hate content as offensive in 63% of the cases. However, only 2.6% and 7.9% of offensive and neither samples are misclassified respectively.
To better understand the items mislabeled by our model, we manually inspected a subset of the data and record some of them in Tables TABREF20 and TABREF21. Considering words such as "daughters", "women", and "burka" in tweets with IDs 1 and 2 in Table TABREF20, it can be understood that our BERT-based classifier is confused by the contextual semantics between these words in the samples and misclassifies them as sexism because they are mainly associated with femininity. In some cases containing implicit abuse (like subtle insults), such as tweets with IDs 5 and 7, our model cannot capture the hateful/offensive content and therefore misclassifies them. It should be noted that even for a human it is difficult to discriminate this kind of implicit abuse.
By examining more samples, and with respect to recent studies BIBREF2, BIBREF24, BIBREF19, it is clear that many errors are due to biases from data collection BIBREF19 and rules of annotation BIBREF24, and not the classifier itself. Since Waseem et al. BIBREF5 created a small ad-hoc set of keywords and Davidson et al. BIBREF9 used a large crowdsourced dictionary of keywords (Hatebase lexicon) to sample tweets for training, they included some biases in the collected data. Especially for the Davidson dataset, some tweets with specific language (written in African American Vernacular English) and geographic restriction (United States of America) are oversampled, such as tweets containing disparaging words like "nigga", "faggot", "coon", or "queer", resulting in high rates of misclassification. However, these misclassifications do not confirm the low performance of our classifier, because annotators tended to annotate many samples containing disrespectful words as hate or offensive without any presumption about the social context of the tweeters, such as the speaker's identity or dialect, whereas the samples were in fact just offensive or even neither. Tweets with IDs 6, 8, and 10 are samples containing offensive words and slurs which are not hateful or offensive in all cases, and their writers used this type of language in their daily communications. Given these pieces of evidence, by considering the content of tweets, we can see in tweets with IDs 3, 4, and 9 that our BERT-based classifier can discriminate tweets in which neither and implicit hatred content exist. One explanation of this observation may be the pre-trained general knowledge that exists in our model. Since the pre-trained BERT model is trained on general corpora, it has learned general knowledge from normal textual data without any purposely hateful or offensive language. Therefore, despite the bias in the data, our model can differentiate hate and offensive samples accurately by leveraging the knowledge-aware language understanding that it has, and this can be the main reason for the high misclassification of hate samples as offensive (in reality they are more similar to offensive than to hate when considering the social context, geolocation, and dialect of the tweeters).
Conclusion
Conflating hatred content with offensive or harmless language causes online automatic hate speech detection tools to flag user-generated content incorrectly. Not addressing this problem may bring about severe negative consequences for both platforms and users, such as a decline in platforms' reputations or user abandonment. Here, we propose a transfer learning approach leveraging the pre-trained language model BERT to enhance the performance of a hate speech detection system and to generalize it to new datasets. To that end, we introduce new fine-tuning strategies to examine the effect of different layers of BERT in the hate speech detection task. The evaluation results indicate that our model outperforms previous works by exploiting the syntactical and contextual information embedded in different transformer encoder layers of the BERT model using a CNN-based fine-tuning strategy. Furthermore, examining the results shows the ability of our model to detect some biases in the process of collecting or annotating datasets. This can be a valuable clue for using the pre-trained BERT model to alleviate bias in hate speech datasets in future studies, by investigating a mixture of the contextual information embedded in BERT's layers and a set of features associated with the different types of biases in the data.
|
What evidence do the authors present that the model can capture some biases in data annotation and collection?
|
The authors showed a few tweets where neither and implicit hatred content exist but the model was able to discriminate them.
| 4,119
|
qasper
|
8k
|
Introduction
Despite the rapid progress of deep learning techniques on diverse supervised learning tasks, these models remain brittle to subtle shifts in the data distribution. Even when the permissible changes are confined to barely-perceptible perturbations, training robust models remains an open challenge. Following the discovery that imperceptible attacks could cause image recognition models to misclassify examples BIBREF0 , a veritable sub-field has emerged in which authors iteratively propose attacks and countermeasures.
For all the interest in adversarial computer vision, these attacks are rarely encountered outside of academic research. However, adversarial misspellings constitute a longstanding real-world problem. Spammers continually bombard email servers, subtly misspelling words in efforts to evade spam detection while preserving the emails' intended meaning BIBREF1 , BIBREF2 . As another example, programmatic censorship on the Internet has spurred communities to adopt similar methods to communicate surreptitiously BIBREF3 .
In this paper, we focus on adversarially-chosen spelling mistakes in the context of text classification, addressing the following attack types: dropping, adding, and swapping internal characters within words. These perturbations are inspired by psycholinguistic studies BIBREF4 , BIBREF5 which demonstrated that humans can comprehend text altered by jumbling internal characters, provided that the first and last characters of each word remain unperturbed.
First, in experiments addressing both BiLSTM and fine-tuned BERT models, comprising four different input formats: word-only, char-only, word+char, and word-piece BIBREF6 , we demonstrate that an adversary can degrade a classifier's performance to that achieved by random guessing. This requires altering just two characters per sentence. Such modifications might flip words either to a different word in the vocabulary or, more often, to the out-of-vocabulary token UNK. Consequently, adversarial edits can degrade a word-level model by transforming the informative words to UNK. Intuitively, one might suspect that word-piece and character-level models would be less susceptible to spelling attacks as they can make use of the residual word context. However, our experiments demonstrate that character and word-piece models are in fact more vulnerable. We show that this is due to the adversary's effective capacity for finer grained manipulations on these models. While against a word-level model, the adversary is mostly limited to UNK-ing words, against a word-piece or character-level model, each character-level add, drop, or swap produces a distinct input, providing the adversary with a greater set of options.
Second, we evaluate first-line techniques including data augmentation and adversarial training, demonstrating that they offer only marginal benefits here: e.g., a BERT model achieving $90.3$ accuracy on a sentiment classification task is degraded to $64.1$ by an adversarially-chosen 1-character swap in the sentence, which can only be restored to $69.2$ by adversarial training.
Third (our primary contribution), we propose a task-agnostic defense, attaching a word recognition model that predicts each word in a sentence given a full sequence of (possibly misspelled) inputs. The word recognition model's outputs form the input to a downstream classification model. Our word recognition models build upon the RNN-based semi-character word recognition model due to BIBREF7 . While our word recognizers are trained on domain-specific text from the task at hand, they often predict UNK at test time, owing to the small domain-specific vocabulary. To handle unobserved and rare words, we propose several backoff strategies including falling back on a generic word recognizer trained on a larger corpus. Incorporating our defenses, BERT models subject to 1-character attacks are restored to $88.3$, $81.1$, and $78.0$ accuracy for swap, drop, and add attacks respectively, as compared to $69.2$, $63.6$, and $50.0$ for adversarial training.
Fourth, we offer a detailed qualitative analysis, demonstrating that a low word error rate alone is insufficient for a word recognizer to confer robustness on the downstream task. Additionally, we find that it is important that the recognition model supply few degrees of freedom to an attacker. We provide a metric to quantify this notion of sensitivity in word recognition models and study its relation to robustness empirically. Models with low sensitivity and word error rate are most robust.
Related Work
Several papers address adversarial attacks on NLP systems. Changes to text, whether word- or character-level, are all perceptible, raising some questions about what should rightly be considered an adversarial example BIBREF8 , BIBREF9 . BIBREF10 address the reading comprehension task, showing that by appending distractor sentences to the end of stories from the SQuAD dataset BIBREF11 , they could cause models to output incorrect answers. Inspired by this work, BIBREF12 demonstrate an attack that breaks entailment systems by replacing a single word with either a synonym or its hypernym. Recently, BIBREF13 investigated the problem of producing natural-seeming adversarial examples, noting that adversarial examples in NLP are often ungrammatical BIBREF14 .
In related work on character-level attacks, BIBREF8 , BIBREF15 explored gradient-based methods to generate string edits to fool classification and translation systems, respectively. While their focus is on efficient methods for generating adversaries, ours is on improving the worst case adversarial performance. Similarly, BIBREF9 studied how synthetic and natural noise affects character-level machine translation. They considered structure invariant representations and adversarial training as defenses against such noise. Here, we show that an auxiliary word recognition model, which can be trained on unlabeled data, provides a strong defense.
Spelling correction BIBREF16 is often viewed as a sub-task of grammatical error correction BIBREF17 , BIBREF18 . Classic methods rely on a source language model and a noisy channel model to find the most likely correction for a given word BIBREF19 , BIBREF20 . Recently, neural techniques have been applied to the task BIBREF7 , BIBREF21 , which model the context and orthography of the input together. Our work extends the ScRNN model of BIBREF7 .
Robust Word Recognition
To tackle character-level adversarial attacks, we introduce a simple two-stage solution, placing a word recognition model ( $W$ ) before the downstream classifier ( $C$ ). Under this scheme, all inputs are classified by the composed model $C \circ W$ . This modular approach, with $W$ and $C$ trained separately, offers several benefits: (i) we can deploy the same word recognition model for multiple downstream classification tasks/models; and (ii) we can train the word recognition model with larger unlabeled corpora.
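A minimal sketch of this composition is given below; the `WordRecognizer` and `Classifier` interfaces are hypothetical, since the text only requires that $W$ produces corrected words which $C$ then consumes.

```python
# Sketch of the two-stage defense C ∘ W. The WordRecognizer and Classifier
# interfaces are assumptions for illustration; W may be trained on unlabeled
# text and reused across downstream tasks.

class ComposedModel:
    def __init__(self, word_recognizer, classifier):
        self.word_recognizer = word_recognizer   # W: corrects (possibly misspelled) words
        self.classifier = classifier             # C: any downstream classifier

    def predict(self, sentence: str):
        corrected = self.word_recognizer.correct(sentence)   # stage 1: word recognition
        return self.classifier.predict(corrected)            # stage 2: classification
```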
Against adversarial mistakes, two important factors govern the robustness of this combined model: $W$ 's accuracy in recognizing misspelled words and $W$ 's sensitivity to adversarial perturbations on the same input. We discuss these aspects in detail below.
ScRNN with Backoff
We now describe semi-character RNNs for word recognition, explain their limitations, and suggest techniques to improve them.
Inspired by the psycholinguistic studies BIBREF5 , BIBREF4 , BIBREF7 proposed a semi-character based RNN (ScRNN) that processes a sentence of words with misspelled characters, predicting the correct words at each step. Let $s = \lbrace w_1, w_2, \dots , w_n\rbrace $ denote the input sentence, a sequence of constituent words $w_i$ . Each input word ( $w_i$ ) is represented by concatenating (i) a one hot vector of the first character ( $\mathbf {w_{i1}}$ ); (ii) a one hot representation of the last character ( $\mathbf {w_{il}}$ , where $l$ is the length of word $w_i$ ); and (iii) a bag of characters representation of the internal characters ( $\sum _{j=2}^{l-1}\mathbf {w_{ij}})$ . ScRNN treats the first and the last characters individually, and is agnostic to the ordering of the internal characters. Each word, represented accordingly, is then fed into a BiLSTM cell. At each sequence step, the training target is the correct corresponding word (output dimension equal to vocabulary size), and the model is optimized with cross-entropy loss.
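To make this input representation concrete, the following sketch builds the concatenated vector for a single word; the alphabet is an illustrative assumption (the experiments use the 66 characters observed in the corpus).

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz'-"          # illustrative; not the paper's 66-character set
CHAR2IDX = {c: i for i, c in enumerate(ALPHABET)}

def semi_character_vector(word: str) -> np.ndarray:
    """One-hot first character + one-hot last character + bag of internal characters."""
    n = len(ALPHABET)
    first, last, internal = np.zeros(n), np.zeros(n), np.zeros(n)
    word = word.lower()
    if word:
        if word[0] in CHAR2IDX:
            first[CHAR2IDX[word[0]]] = 1.0
        if word[-1] in CHAR2IDX:
            last[CHAR2IDX[word[-1]]] = 1.0
        for ch in word[1:-1]:                      # order-agnostic bag over internal characters
            if ch in CHAR2IDX:
                internal[CHAR2IDX[ch]] += 1.0
    return np.concatenate([first, last, internal])

# Jumbling internal characters leaves the representation unchanged:
# np.array_equal(semi_character_vector("beautiful"), semi_character_vector("baeutiful"))  # True
```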
While BIBREF7 demonstrate strong word recognition performance, a drawback of their evaluation setup is that they only attack and evaluate on the subset of words that are a part of their training vocabulary. In such a setting, the word recognition performance is unreasonably dependent on the chosen vocabulary size. In principle, one can design models to predict (correctly) only a few chosen words, and ignore the remaining majority and still reach 100% accuracy. For the adversarial setting, rare and unseen words in the wild are particularly critical, as they provide opportunities for the attackers. A reliable word-recognizer should handle these cases gracefully. Below, we explore different ways to back off when the ScRNN predicts UNK (a frequent outcome for rare and unseen words):
Pass-through: word-recognizer passes on the (possibly misspelled) word as is.
Backoff to neutral word: Alternatively, noting that passing UNK-predicted words through unchanged exposes the downstream model to potentially corrupted text, we consider backing off to a neutral word like `a', which has a similar distribution across classes.
Backoff to background model: We also consider falling back upon a more generic word recognition model trained upon a larger, less-specialized corpus whenever the foreground word recognition model predicts UNK. Figure 1 depicts this scenario pictorially.
Empirically, we find that the background model (by itself) is less accurate, because of the large number of words it is trained to predict. Thus, it is best to train a precise foreground model on an in-domain corpus and focus on frequent words, and then to resort to a general-purpose background model for rare and unobserved words. Next, we delineate our second consideration for building robust word-recognizers.
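The three strategies can be summarized by the following sketch; the foreground prediction and the `bg_model` interface are assumptions for illustration, not the exact implementation.

```python
UNK = "<unk>"
NEUTRAL_WORD = "a"    # class-neutral replacement used by the "neutral" backoff

def recover_word(raw_word, fg_prediction, bg_model=None, mode="pass-through"):
    """Resolve a single word given the foreground model's prediction."""
    if fg_prediction != UNK:
        return fg_prediction
    if mode == "pass-through":
        return raw_word                        # pass the (possibly misspelled) word unchanged
    if mode == "neutral":
        return NEUTRAL_WORD                    # back off to a neutral word
    if mode == "background" and bg_model is not None:
        bg_prediction = bg_model.predict(raw_word)   # hypothetical background-model interface
        return bg_prediction if bg_prediction != UNK else raw_word
    return raw_word
```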
Model Sensitivity
In computer vision, an important factor determining the success of an adversary is the norm constraint on the perturbations allowed to an image ( $|| \bf x - \bf x^{\prime }||_{\infty } < \epsilon $ ). Higher values of $\epsilon $ lead to a higher chance of mis-classification for at least one $\bf x^{\prime }$ . Defense methods such as quantization BIBREF22 and thermometer encoding BIBREF23 try to reduce the space of perturbations available to the adversary by making the model invariant to small changes in the input.
In NLP, we often get such invariance for free, e.g., for a word-level model, most of the perturbations produced by our character-level adversary lead to an UNK at its input. If the model is robust to the presence of these UNK tokens, there is little room for an adversary to manipulate it. Character-level models, on the other hand, despite their superior performance in many tasks, do not enjoy such invariance. This lack of invariance can be exploited by an attacker. Thus, to limit the number of different inputs to the classifier, we wish to reduce the number of distinct word recognition outputs that an attacker can induce, not just the number of words on which the model is “fooled”. We denote this property of a model as its sensitivity.
We can quantify this notion for a word recognition system $W$ as the expected number of unique outputs it assigns to a set of adversarial perturbations. Given a sentence $s$ from the set of sentences $\mathcal {S}$ , let $A(s) = {s_1}^{\prime } , {s_2}^{\prime }, \dots , {s_n}^{\prime }$ denote the set of $n$ perturbations to it under attack type $A$ , and let $V$ be the function that maps strings to an input representation for the downstream classifier. For a word level model, $V$ would transform sentences to a sequence of word ids, mapping OOV words to the same UNK ID. Whereas, for a char (or word+char, word-piece) model, $V$ would map inputs to a sequence of character IDs. Formally, sensitivity is defined as
$$S_{W,V}^A=\mathbb {E}_{s}\left[\frac{\#_{u}(V \circ W({s_1}^{\prime }), \dots , V \circ W({s_n}^{\prime }))}{n}\right] ,$$ (Eq. 12)
where $V \circ W (s_i)$ returns the input representation (of the downstream classifier) for the output string produced by the word-recognizer $W$ using $s_i$ and $\#_{u}(\cdot )$ counts the number of unique arguments.
Intuitively, we expect a high value of $S_{W, V}^A$ to lead to a lower robustness of the downstream classifier, since the adversary has more degrees of freedom to attack the classifier. Thus, when using word recognition as a defense, it is prudent to design a low sensitivity system with a low error rate. However, as we will demonstrate, there is often a trade-off between sensitivity and error rate.
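A direct way to estimate this quantity for one sentence is sketched below; `word_recognizer` ($W$) and `to_input_repr` ($V$) are assumed callables, and the corpus-level sensitivity is the average of this value over sentences $s$.

```python
def sensitivity(perturbed_sentences, word_recognizer, to_input_repr):
    """Fraction of unique downstream inputs induced by the perturbations of one sentence (Eq. 12)."""
    unique_reprs = set()
    for s_prime in perturbed_sentences:                    # A(s) = {s'_1, ..., s'_n}
        corrected = word_recognizer(s_prime)               # W(s'_i)
        unique_reprs.add(tuple(to_input_repr(corrected)))  # V ∘ W(s'_i), made hashable
    return len(unique_reprs) / len(perturbed_sentences)
```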
Synthesizing Adversarial Attacks
Suppose we are given a classifier $C: \mathcal {S} \rightarrow \mathcal {Y}$ which maps natural language sentences $s \in \mathcal {S}$ to a label from a predefined set $y \in \mathcal {Y}$ . An adversary for this classifier is a function $A$ which maps a sentence $s$ to its perturbed versions $\lbrace s^{\prime }_1, s^{\prime }_2, \ldots , s^{\prime }_{n}\rbrace $ such that each $s^{\prime }_i$ is close to $s$ under some notion of distance between sentences. We define the robustness of classifier $C$ to the adversary $A$ as:
$$R_{C,A} = \mathbb {E}_s \left[\min _{s^{\prime } \in A(s)} \mathbb {1}[C(s^{\prime }) = y]\right],$$ (Eq. 14)
where $y$ represents the ground truth label for $s$ . In practice, a real-world adversary may only be able to query the classifier a few times, hence $R_{C,A}$ represents the worst-case adversarial performance of $C$ . Methods for generating adversarial examples, such as HotFlip BIBREF8 , focus on efficient algorithms for searching the $\min $ above. Improving $R_{C,A}$ would imply better robustness against all these methods.
We explore adversaries which perturb sentences with four types of character-level edits:
(1) Swap: swapping two adjacent internal characters of a word. (2) Drop: removing an internal character of a word. (3) Keyboard: substituting an internal character with an adjacent character on the QWERTY keyboard. (4) Add: inserting a new character internally in a word. In line with the psycholinguistic studies BIBREF5 , BIBREF4 , to ensure that the perturbations do not affect human ability to comprehend the sentence, we only allow the adversary to edit the internal characters of a word, and not edit stopwords or words shorter than 4 characters.
For 1-character attacks, we try all possible perturbations listed above until we find an adversary that flips the model prediction. For 2-character attacks, we greedily fix the edit which had the least confidence among 1-character attacks, and then try all the allowed perturbations on the remaining words. Higher order attacks can be performed in a similar manner. The greedy strategy reduces the computation required to obtain higher order attacks, but also means that the robustness score is an upper bound on the true robustness of the classifier.
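The allowed edits can be illustrated with the sketch below, which applies one random edit per call; the adversary described above instead enumerates all such edits and keeps the one that flips the prediction (or, greedily, the one with the least confidence). The QWERTY-neighbour table is a small illustrative subset, not the full mapping.

```python
import random
import string

QWERTY_NEIGHBORS = {"a": "qwsz", "s": "awedxz", "e": "wsdr", "i": "ujko",
                    "o": "iklp", "n": "bhjm", "t": "rfgy"}   # partial, for illustration

def perturb(word: str, attack: str) -> str:
    """Apply one internal-character edit; words shorter than 4 characters are left unchanged."""
    if len(word) < 4:
        return word
    chars = list(word)
    if attack == "swap":                                   # swap two adjacent internal characters
        i = random.randrange(1, len(chars) - 2)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    elif attack == "drop":                                 # remove an internal character
        del chars[random.randrange(1, len(chars) - 1)]
    elif attack == "keyboard":                             # substitute a keyboard-adjacent character
        i = random.randrange(1, len(chars) - 1)
        chars[i] = random.choice(QWERTY_NEIGHBORS.get(chars[i], string.ascii_lowercase))
    elif attack == "add":                                  # insert a new internal character
        chars.insert(random.randrange(1, len(chars)), random.choice(string.ascii_lowercase))
    return "".join(chars)
```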
Experiments and Results
In this section, we first discuss our experiments on the word recognition systems.
Word Error Correction
Data: We evaluate the spell correctors from § "Robust Word Recognition" on movie reviews from the Stanford Sentiment Treebank (SST) BIBREF24 . The SST dataset consists of 8544 movie reviews, with a vocabulary of over 16K words. As a background corpus, we use the IMDB movie reviews BIBREF25 , which contain 54K movie reviews, and a vocabulary of over 78K words. The two datasets do not share any reviews in common. The spell-correction models are evaluated on their ability to correct misspellings. The test setting consists of reviews where each word (with length $\ge 4$ , barring stopwords) is attacked by one of the attack types (from swap, add, drop and keyboard attacks). In the all attack setting, we mix all attacks by randomly choosing one for each word. This most closely resembles a real world attack setting.
In addition to our word recognition models, we also compare to After The Deadline (ATD), an open-source spell corrector. We found ATD to be the best freely-available corrector. We refer the reader to BIBREF7 for comparisons of ScRNN to other anonymized commercial spell checkers.
For the ScRNN model, we use a single-layer Bi-LSTM with a hidden dimension size of 50. The input representation consists of 198 dimensions, which is thrice the number of unique characters (66) in the vocabulary. We cap the vocabulary size at 10K words, whereas we use the entire vocabulary of 78470 words when we back off to the background model. For training these networks, we corrupt the movie reviews according to all attack types, i.e., applying one of the 4 attack types to each word, and trying to reconstruct the original words via cross entropy loss.
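A compact PyTorch rendering of this configuration might look as follows; padding, batching, and the exact meaning of the 50-unit hidden size (taken here as per direction) are assumptions of the sketch.

```python
import torch.nn as nn

class ScRNN(nn.Module):
    """Single-layer BiLSTM over 198-dim semi-character inputs, softmax over a 10K-word vocabulary."""
    def __init__(self, input_dim=198, hidden_dim=50, vocab_size=10000):
        super().__init__()
        self.rnn = nn.LSTM(input_dim, hidden_dim, num_layers=1,
                           bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, x):            # x: (batch, seq_len, 198)
        h, _ = self.rnn(x)           # h: (batch, seq_len, 100)
        return self.out(h)           # per-step logits; trained with cross-entropy against correct words
```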
We calculate the word error rates (WER) of each of the models for different attacks and present our findings in Table 2 . Note that ATD incorrectly predicts $11.2$ words for every 100 words (in the `all' setting), whereas, all of the backoff variations of the ScRNN reconstruct better. The most accurate variant involves backing off to the background model, resulting in a low error rate of $6.9\%$ , leading to the best performance on word recognition. This is a $32\%$ relative error reduction compared to the vanilla ScRNN model with a pass-through backoff strategy. We can attribute the improved performance to the fact that there are $5.25\%$ words in the test corpus that are unseen in the training corpus, and are thus only recoverable by backing off to a larger corpus. Notably, only training on the larger background corpus does worse, at $8.7\%$ , since the distribution of word frequencies is different in the background corpus compared to the foreground corpus.
Robustness to adversarial attacks
We use sentiment analysis and paraphrase detection as downstream tasks, as for these two tasks, 1-2 character edits do not change the output labels.
For sentiment classification, we systematically study the effect of character-level adversarial attacks on two architectures and four different input formats. The first architecture encodes the input sentence into a sequence of embeddings, which are then sequentially processed by a BiLSTM. The first and last states of the BiLSTM are then used by the softmax layer to predict the sentiment of the input. We consider three input formats for this architecture: (1) Word-only: where the input words are encoded using a lookup table; (2) Char-only: where the input words are encoded using a separate single-layered BiLSTM over their characters; and (3) Word $+$ Char: where the input words are encoded using a concatenation of (1) and (2) .
The second architecture uses the fine-tuned BERT model BIBREF26 , with an input format of word-piece tokenization. This model has recently set a new state-of-the-art on several NLP benchmarks, including the sentiment analysis task we consider here. All models are trained and evaluated on the binary version of the sentence-level Stanford Sentiment Treebank BIBREF24 dataset with only positive and negative reviews.
We also consider the task of paraphrase detection. Here too, we make use of the fine-tuned BERT BIBREF26 , which is trained and evaluated on the Microsoft Research Paraphrase Corpus (MRPC) BIBREF27 .
Two common methods for dealing with adversarial examples include: (1) data augmentation (DA) BIBREF28 ; and (2) adversarial training (Adv) BIBREF29 . In DA, the trained model is fine-tuned after augmenting the training set with an equal number of examples randomly attacked with a 1-character edit. In Adv, the trained model is fine-tuned with additional adversarial examples (selected at random) that produce incorrect predictions from the current-state classifier. The process is repeated iteratively, generating and adding newer adversarial examples from the updated classifier model, until the adversarial accuracy on dev set stops improving.
In Table 3 , we examine the robustness of the sentiment models under each attack and defense method. In the absence of any attack or defense, BERT (a word-piece model) performs the best ( $90.3\%$ ) followed by word+char models ( $80.5\%$ ), word-only models ( $79.2\%$ ) and then char-only models ( $70.3\%$ ). However, even single-character attacks (chosen adversarially) can be catastrophic, resulting in a significantly degraded performance of $46\%$ , $57\%$ , $59\%$ and $33\%$ , respectively under the `all' setting.
Intuitively, one might suppose that word-piece and character-level models would be more robust to such attacks given that they can make use of the remaining context. However, we find that they are in fact more susceptible. To see why, note that the word `beautiful' can only be altered in a few ways for word-only models, either leading to an UNK or an existing vocabulary word, whereas word-piece and character-only models treat each unique character combination differently. This provides more variations that an attacker can exploit. Following similar reasoning, add and key attacks pose a greater threat than swap and drop attacks. The robustness of different models can be ordered as word-only $>$ word+char $>$ char-only $\sim $ word-piece, and the efficacy of different attacks as add $>$ key $>$ drop $>$ swap.
Next, we scrutinize the effectiveness of defense methods when facing adversarially chosen attacks. As Table 3 clearly shows, DA and Adv are not effective in this case. We observed that despite a low training error, these models were not able to generalize to attacks on newer words at test time. The ATD spell corrector is the most effective on keyboard attacks, but performs poorly on other attack types, particularly the add attack strategy.
The ScRNN model with pass-through backoff offers better protection, bringing back the adversarial accuracy within $5\%$ range for the swap attack. It is also effective under other attack classes, and can mitigate the adversarial effect in word-piece models by $21\%$ , character-only models by $19\%$ , and in word, and word+char models by over $4.5\%$ . This suggests that the direct training signal of word error correction is more effective than the indirect signal of sentiment classification available to DA and Adv for model robustness.
We observe additional gains by using background models as a backoff alternative, because of their lower word error rate (WER), especially under the swap and drop attacks. However, these gains do not consistently translate to all other settings, as a lower WER is necessary but not sufficient. Besides a lower error rate, we find that a solid defense should furnish the attacker with the fewest options to attack, i.e., it should have low sensitivity.
As we shall see in § "Understanding Model Sensitivity", the backoff neutral variation has the lowest sensitivity due to mapping UNK predictions to a fixed neutral word. Thus, it results in the highest robustness on most of the attack types for all four model classes.
Table 4 shows the accuracy of BERT on 200 examples from the dev set of the MRPC paraphrase detection task under various attack and defense settings. We re-trained the ScRNN model variants on the MRPC training set for these experiments. Again, we find that simple 1-2 character attacks can bring down the accuracy of BERT significantly ( $89\%$ to $31\%$ ). Word recognition models can provide an effective defense, with both our pass-through and neutral variants recovering most of the accuracy. While the neutral backoff model is effective on 2-char attacks, it hurts performance in the no attack setting, since it incorrectly modifies certain correctly spelled entity names. Since the two variants are already effective, we did not train a background model for this task.
Understanding Model Sensitivity
To study model sensitivity, for each sentence, we perturb one randomly-chosen word and replace it with all possible perturbations under a given attack type. The resulting set of perturbed sentences is then fed to the word recognizer (whose sensitivity is to be estimated). As described in equation 12 , we count the number of unique predictions from the output sentences. Two corrections are considered unique if they are mapped differently by the downstream classifier.
The neutral backoff variant has the lowest sensitivity (Table 5 ). This is expected, as it returns a fixed neutral word whenever the ScRNN predicts an UNK, therefore reducing the number of unique outputs it predicts. Open vocabulary (i.e. char-only, word+char, word-piece) downstream classifiers consider every unique combination of characters differently, whereas word-only classifiers internally treat all out of vocabulary (OOV) words alike. Hence, for char-only, word+char, and word-piece models, the pass-through version is more sensitive than the background variant, as it passes words as is (and each combination is considered uniquely). However, for word-only models, pass-through is less sensitive as all the OOV character combinations are rendered identical.
Ideally, a preferred defense is one with low sensitivity and word error rate. In practice, however, we see that a low error rate often comes at the cost of sensitivity. We see this trade-off in Figure 2 , where we plot WER and sensitivity on the two axes, and depict the robustness when using different backoff variants. Generally, sensitivity is the more dominant factor out of the two, as the error rates of the considered variants are reasonably low.
We verify whether the sentiment of the reviews is preserved under char-level attacks. In a human study with 50 attacked (and subsequently misclassified) and 50 unchanged reviews, 48 and 49 reviews, respectively, were judged to preserve the sentiment.
Conclusion
As character and word-piece inputs become commonplace in modern NLP pipelines, it is worth highlighting the vulnerability they add. We show that minimally-doctored attacks can bring down accuracy of classifiers to random guessing. We recommend word recognition as a safeguard against this and build upon RNN-based semi-character word recognizers. We discover that when used as a defense mechanism, the most accurate word recognition models are not always the most robust against adversarial attacks. Additionally, we highlight the need to control the sensitivity of these models to achieve high robustness.
Acknowledgements
The authors are grateful to Graham Neubig, Eduard Hovy, Paul Michel, Mansi Gupta, and Antonios Anastasopoulos for suggestions and feedback.
|
What is a semicharacter architecture?
|
A semi-character based RNN (ScRNN) treats the first and last characters individually, and is agnostic to the ordering of the internal characters
| 4,186
|
qasper
|
8k
|
Introduction
Entity linking (EL), mapping entity mentions in texts to a given knowledge base (KB), serves as a fundamental role in many fields, such as question answering BIBREF0 , semantic search BIBREF1 , and information extraction BIBREF2 , BIBREF3 . However, this task is non-trivial because entity mentions are usually ambiguous. As shown in Figure FIGREF1 , the mention England refers to three entities in KB, and an entity linking system should be capable of identifying the correct entity as England cricket team rather than England and England national football team.
Entity linking is typically broken down into two main phases: (i) candidate generation obtains a set of referent entities in KB for each mention, and (ii) named entity disambiguation selects the possible candidate entity by solving a ranking problem. The key challenge lies in the ranking model that computes the relevance between candidates and the corresponding mentions based on the information both in texts and KBs BIBREF4 . In terms of the features used for ranking, we classify existing EL models into two groups: local models to resolve mentions independently relying on textual context information from the surrounding words BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , and global (collective) models, which are the main focus of this paper, that encourage the target entities of all mentions in a document to be topically coherent BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 .
Global models usually build an entity graph based on KBs to capture coherent entities for all identified mentions in a document, where the nodes are entities, and edges denote their relations. The graph provides highly discriminative semantic signals (e.g., entity relatedness) that are unavailable to local model BIBREF15 . For example (Figure FIGREF1 ), an EL model seemly cannot find sufficient disambiguation clues for the mention England from its surrounding words, unless it utilizes the coherence information of consistent topic “cricket" among adjacent mentions England, Hussain, and Essex. Although the global model has achieved significant improvements, its limitation is threefold:
To mitigate the first limitation, recent EL studies introduce neural network (NN) models due to their strong feature abstraction and generalization ability. In such models, words/entities are represented by low-dimensional vectors in a continuous space, and features for mentions as well as candidate entities are automatically learned from data BIBREF4 . However, existing NN-based methods for EL are either local models BIBREF16 , BIBREF17 , or merely use word/entity embeddings for feature extraction and rely on other modules for collective disambiguation, and thus cannot fully utilize the power of NN models for collective EL BIBREF18 , BIBREF19 , BIBREF20 .
The second drawback of the global approach has been alleviated through approximate optimization techniques, such as PageRank/random walks BIBREF21 , graph pruning BIBREF22 , ranking SVMs BIBREF23 , or loopy belief propagation (LBP) BIBREF18 , BIBREF24 . However, these methods are not differentiable and thus difficult to be integrated into neural network models (the solution for the first limitation).
To overcome the third issue of inadequate training data, BIBREF17 has explored a massive amount of hyperlinks in Wikipedia, but these potential annotations for EL contain much noise, which may distract a naive disambiguation model BIBREF6 .
In this paper, we propose a novel Neural Collective Entity Linking model (NCEL), which performs global EL by combining deep neural networks with Graph Convolutional Networks (GCN) BIBREF25 , BIBREF26 , allowing flexible encoding of entity graphs. It integrates both local contextual information and the global interdependence of mentions in a document, and is efficiently trainable in an end-to-end fashion. In particular, we introduce an attention mechanism to robustly model local contextual information by selecting informative words and filtering out noise. On the other hand, we apply GCNs to improve the discriminative signals of candidate entities by exploiting the rich structure underlying the correct entities. To alleviate the cost of global computation, we propose to convolve over the subgraph of adjacent mentions. Thus, the overall coherence is achieved in a chain-like way via a sliding window over the document. To the best of our knowledge, this is the first effort to develop a unified model for neural collective entity linking.
In experiments, we first verify the efficiency of NCEL by theoretically comparing its time complexity with other collective alternatives. Afterwards, we train our neural model using collected Wikipedia hyperlinks instead of dataset-specific annotations, and perform evaluations on five publicly available benchmarks. The results show that NCEL consistently outperforms various baselines with a favorable generalization ability. Finally, we further present the performance on a challenging dataset WW BIBREF19 as well as qualitative results, investigating the effectiveness of each key module.
Preliminaries and Framework
We denote INLINEFORM0 as a set of entity mentions in a document INLINEFORM1 , where INLINEFORM2 is either a word INLINEFORM3 or a mention INLINEFORM4 . INLINEFORM5 is the entity graph for document INLINEFORM6 derived from the given knowledge base, where INLINEFORM7 is a set of entities, INLINEFORM8 denotes the relatedness between INLINEFORM9 and higher values indicate stronger relations. Based on INLINEFORM10 , we extract a subgraph INLINEFORM11 for INLINEFORM12 , where INLINEFORM13 denotes the set of candidate entities for INLINEFORM14 . Note that we don't include the relations among candidates of the same mention in INLINEFORM15 because these candidates are mutually exclusive in disambiguation.
Formally, we define the entity linking problem as follows: Given a set of mentions INLINEFORM0 in a document INLINEFORM1 , and an entity graph INLINEFORM2 , the goal is to find an assignment INLINEFORM3 .
To collectively find the best assignment, NCEL aims to improve the discriminability of candidates' local features by using entity relatedness within a document via GCN, which is capable of learning a function of features on the graph through shared parameters over all nodes. Figure FIGREF10 shows the framework of NCEL including three main components:
Example As shown in Figure FIGREF10 , for the current mention England, we utilize its surrounding words as local contexts (e.g., surplus), and adjacent mentions (e.g., Hussian) as global information. Collectively, we utilize the candidates of England INLINEFORM0 as well as those entities of its adjacencies INLINEFORM1 to construct feature vectors for INLINEFORM2 and the subgraph of relatedness as inputs of our neural model. Let darker blue indicate higher probability of being predicted, the correct candidate INLINEFORM3 becomes bluer due to its bluer neighbor nodes of other mentions INLINEFORM4 . The dashed lines denote entity relations that have indirect impacts through the sliding adjacent window , and the overall structure shall be achieved via multiple sub-graphs by traversing all mentions.
Before introducing our model, we first describe the component of candidate generation.
Candidate Generation
Similar to previous work BIBREF24 , we use the prior probability INLINEFORM0 of entity INLINEFORM1 conditioned on mention INLINEFORM2 both as a local feature and to generate candidate entities: INLINEFORM3 . We compute INLINEFORM4 based on statistics of mention-entity pairs from: (i) Wikipedia page titles, redirect titles and hyperlinks, (ii) the dictionary derived from a large Web Corpus BIBREF27 , and (iii) the YAGO dictionary with a uniform distribution BIBREF22 . We take the maximal prior if a mention-entity pair occurs in multiple resources. In experiments, to optimize for memory and run time, we keep only the top INLINEFORM5 entities based on INLINEFORM6 . In the following two sections, we present the key components of NCEL, namely feature extraction and the neural network for collective entity linking.
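A sketch of this candidate generation step is given below; the per-resource probability tables and the top-k value are placeholders for the statistics described above.

```python
from collections import defaultdict

def build_prior(resource_priors):
    """resource_priors: list of dicts mapping (mention, entity) -> p(e|m), one dict per resource."""
    prior = defaultdict(float)
    for table in resource_priors:
        for pair, p in table.items():
            prior[pair] = max(prior[pair], p)     # keep the maximal prior across resources
    return prior

def generate_candidates(mention, prior, top_k=10):
    """Return the top-k candidate entities for a mention, ranked by prior probability."""
    scored = [(e, p) for (m, e), p in prior.items() if m == mention]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]
```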
Feature Extraction
The main goal of NCEL is to find a solution for collective entity linking using an end-to-end neural model, rather than to improve the measurements of local textual similarity or global mention/entity relatedness. Therefore, we use joint embeddings of words and entities at the sense level BIBREF28 to represent mentions and their contexts for feature extraction. In this section, we give a brief description of our embeddings, followed by the features used in the neural model.
Learning Joint Embeddings of Word and Entity
Following BIBREF28 , we use Wikipedia articles, hyperlinks, and entity outlinks to jointly learn word/mention and entity embeddings in a unified vector space, so that similar words/mentions and entities have similar vectors. To address the ambiguity of words/mentions, BIBREF28 represents each word/mention with multiple vectors, and each vector denotes a sense referring to an entity in KB. The quality of the embeddings is verified on both textual similarity and entity relatedness tasks.
Formally, each word/mention has a global embedding INLINEFORM0 , and multiple sense embeddings INLINEFORM1 . Each sense embedding INLINEFORM2 refers to an entity embedding INLINEFORM3 , while the difference between INLINEFORM4 and INLINEFORM5 is that INLINEFORM6 models the co-occurrence information of an entity in texts (via hyperlinks) and INLINEFORM7 encodes the structured entity relations in KBs. More details can be found in the original paper.
Local Features
Local features focus on how compatible the entity is with the text in which it is mentioned (i.e., the mention and the context words). Except for the prior probability (Section SECREF9 ), we define two types of local features for each candidate entity INLINEFORM0 :
String Similarity Similar to BIBREF16 , we define string based features as follows: the edit distance between mention's surface form and entity title, and boolean features indicating whether they are equivalent, whether the mention is inside, starts with or ends with entity title and vice versa.
Compatibility We also measure the compatibility of INLINEFORM0 with the mention's context words INLINEFORM1 by computing their similarities based on joint embeddings: INLINEFORM2 and INLINEFORM3 , where INLINEFORM4 is the context embedding of INLINEFORM5 conditioned on candidate INLINEFORM6 and is defined as the average sum of word global vectors weighted by attentions: INLINEFORM7
where INLINEFORM0 is the INLINEFORM1 -th word's attention from INLINEFORM2 . In this way, we automatically select informative words by assigning higher attention weights, and filter out irrelevant noise through small weights. The attention INLINEFORM3 is computed as follows: INLINEFORM4
where INLINEFORM0 is the similarity measurement, and we use cosine similarity in the presented work. We concatenate the prior probability, string based similarities, compatibility similarities and the embeddings of contexts as well as the entity as the local feature vectors.
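The attention-weighted compatibility can be sketched as follows; a softmax normalization of the cosine similarities is assumed here, and in the model the similarity is computed against both the candidate's sense embedding and its entity embedding.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def compatibility(candidate_vec, context_word_vecs):
    """Similarity between a candidate embedding and the attention-weighted context embedding."""
    sims = np.array([cosine(candidate_vec, w) for w in context_word_vecs])
    attn = np.exp(sims) / np.exp(sims).sum()               # assumed softmax normalization
    context_vec = np.sum(attn[:, None] * np.array(context_word_vecs), axis=0)
    return cosine(candidate_vec, context_vec)
```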
Global Features
The key idea of collective EL is to utilize the topical coherence throughout the entire document. The consistency assumption behind it is that all mentions in a document shall be on the same topic. However, this leads to exhaustive computations if the number of mentions is large. Based on the observation that the consistency attenuates along with the distance between two mentions, we argue that the adjacent mentions might be sufficient for supporting the assumption efficiently.
Formally, we define neighbor mentions as INLINEFORM0 adjacent mentions before and after current mention INLINEFORM1 : INLINEFORM2 , where INLINEFORM3 is the pre-defined window size. Thus, the topical coherence at document level shall be achieved in a chain-like way. As shown in Figure FIGREF10 ( INLINEFORM4 ), mentions Hussain and Essex, a cricket player and the cricket club, provide adequate disambiguation clues to induce the underlying topic “cricket" for the current mention England, which impacts positively on identifying the mention surrey as another cricket club via the common neighbor mention Essex.
A degraded case happens if INLINEFORM0 is large enough to cover the entire document, and the mentions used for global features become the same as the previous work, such as BIBREF21 . In experiments, we heuristically found a suitable INLINEFORM1 which is much smaller than the total number of mentions. The benefits of efficiency are in two ways: (i) to decrease time complexity, and (ii) to trim the entity graph into a fixed size of subgraph that facilitates computation acceleration through GPUs and batch techniques, which will be discussed in Section SECREF24 .
Given neighbor mentions INLINEFORM0 , we extract two types of vectorial global features and structured global features for each candidate INLINEFORM1 :
Neighbor Mention Compatibility Suppose neighbor mentions are topically coherent; a candidate entity shall then also be compatible with the neighbor mentions if it has a high compatibility score with the current mention, and otherwise not. That is, we extract the vectorial global features by computing the similarities between INLINEFORM0 and all neighbor mentions: INLINEFORM1 , where INLINEFORM2 is the mention embedding obtained by averaging the global vectors of words in its surface form: INLINEFORM3 , where INLINEFORM4 are the tokenized words of mention INLINEFORM5 .
Subgraph Structure The above features reflect the consistent semantics in texts (i.e., mentions). We now extract structured global features using the relations in KB, which facilitates the inference among candidates to find the most topical coherent subset. For each document, we obtain the entity graph INLINEFORM0 by taking candidate entities of all mentions INLINEFORM1 as nodes, and using entity embeddings to compute their similarities as edges INLINEFORM2 . Then, we extract the subgraph structured features INLINEFORM3 for each entity INLINEFORM4 for efficiency.
Formally, we define the subgraph as: INLINEFORM0 , where INLINEFORM1 . For example (Figure FIGREF1 ), for entity England cricket team, the subgraph contains the relation from it to all candidates of neighbor mentions: England cricket team, Nasser Hussain (rugby union), Nasser Hussain, Essex, Essex County Cricket Club and Essex, New York. To support batch-wise acceleration, we represent INLINEFORM2 in the form of adjacency table based vectors: INLINEFORM3 , where INLINEFORM4 is the number of candidates per mention.
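The adjacency-table representation of a candidate's subgraph can be built as in the sketch below; `entity_emb` is an assumed lookup from entity id to its KB embedding, with cosine similarity standing in for the relatedness measure.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def subgraph_vectors(candidate, neighbor_candidate_lists, entity_emb):
    """One relatedness score from `candidate` to every candidate of every neighbor mention."""
    e = entity_emb[candidate]
    rows = [[cosine(e, entity_emb[c]) for c in neighbor_candidates]
            for neighbor_candidates in neighbor_candidate_lists]   # one row per neighbor mention
    return np.array(rows)   # shape: (num_neighbor_mentions, candidates_per_mention)
```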
Finally, for each candidate INLINEFORM0 , we concatenate local features and neighbor mention compatibility scores as the feature vector INLINEFORM1 , and construct the subgraph structure representation INLINEFORM2 as the inputs of NCEL.
Neural Collective Entity Linking
NCEL incorporates GCN into a deep neural network to utilize structured graph information for collective feature abstraction, but differs from a conventional GCN in the way it applies the graph. Instead of the entire graph, only a subset of nodes is "visible" to each node in our proposed method, and the overall structured information is then reached in a chain-like way. Fixing the size of the subset, NCEL is further sped up by batch techniques and GPUs, and is efficient on large-scale data.
Graph Convolutional Network
GCNs are a type of neural network model that deals with structured data. They take a graph as input and output labels for each node. As a simplification of spectral graph convolutions, the main idea of BIBREF26 is similar to a propagation model: to enhance the features of a node according to its neighbor nodes. The formulation is as follows: INLINEFORM0
where INLINEFORM0 is a normalized adjacent matrix of the input graph with self-connection, INLINEFORM1 and INLINEFORM2 are the hidden states and weights in the INLINEFORM3 -th layer, and INLINEFORM4 is a non-linear activation, such as ReLu.
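A single propagation step can be sketched numerically as below; row normalization of the adjacency matrix is used here for simplicity (and matches the normalization NCEL applies to its subgraphs), whereas the original GCN uses a symmetric normalization.

```python
import numpy as np

def gcn_layer(A, H, W):
    """A: (n, n) adjacency with self-loops; H: (n, d_in) node features; W: (d_in, d_out) weights."""
    A_hat = A / A.sum(axis=1, keepdims=True)    # normalize so each row sums to one
    return np.maximum(0.0, A_hat @ H @ W)       # ReLU(A_hat · H · W)
```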
Model Architecture
As shown in Figure FIGREF10 , NCEL identifies the correct candidate INLINEFORM0 for the mention INLINEFORM1 by using vectorial features as well as structured relatedness with candidates of neighbor mentions INLINEFORM2 . Given feature vector INLINEFORM3 and subgraph representation INLINEFORM4 of each candidate INLINEFORM5 , we stack them as inputs for mention INLINEFORM6 : INLINEFORM7 , and the adjacent matrix INLINEFORM8 , where INLINEFORM9 denotes the subgraph with self-connection. We normalize INLINEFORM10 such that all rows sum to one, denoted as INLINEFORM11 , avoiding the change in the scale of the feature vectors.
Given INLINEFORM0 and INLINEFORM1 , the goal of NCEL is to find the best assignment: INLINEFORM2
where INLINEFORM0 is the output variable of candidates, and INLINEFORM1 is a probability function as follows: INLINEFORM2
where INLINEFORM0 is the score function parameterized by INLINEFORM1 . NCEL learns the mapping INLINEFORM2 through a neural network including three main modules: an encoder, a sub-graph convolution network (sub-GCN), and a decoder. Next, we introduce them in turn.
Encoder The function of this module is to integrate different features by a multi-layer perceptron (MLP): INLINEFORM0
where INLINEFORM0 is the hidden states of the current mention, INLINEFORM1 and INLINEFORM2 are trainable parameters and bias. We use ReLu as the non-linear activation INLINEFORM3 .
Sub-Graph Convolution Network Similar to GCN, this module learns to abstract features from the hidden state of the mention itself as well as its neighbors. Suppose INLINEFORM0 is the hidden states of the neighbor INLINEFORM1 , we stack them to expand the current hidden states of INLINEFORM2 as INLINEFORM3 , such that each row corresponds to that in the subgraph adjacent matrix INLINEFORM4 . We define sub-graph convolution as: INLINEFORM5
where INLINEFORM0 is a trainable parameter.
Decoder After INLINEFORM0 iterations of sub-graph convolution, the hidden states integrate both features of INLINEFORM1 and its neighbors. A fully connected decoder maps INLINEFORM2 to the number of candidates as follows: INLINEFORM3
where INLINEFORM0 .
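Putting the three modules together, a simplified PyTorch sketch is given below. The layer sizes, the per-node scoring decoder, and other details are assumptions made for illustration; in particular, the paper's decoder maps the current mention's hidden states to its candidate set, which this sketch approximates by scoring each candidate node directly.

```python
import torch
import torch.nn as nn

class NCEL(nn.Module):
    def __init__(self, feat_dim, hidden_dim=2000, num_gcn_layers=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.ReLU())   # MLP encoder
        self.gcn = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim, bias=False) for _ in range(num_gcn_layers)])
        self.decoder = nn.Linear(hidden_dim, 1)                                    # per-candidate score

    def forward(self, X, A_hat):
        # X: (num_nodes, feat_dim) stacked candidate features of the mention and its neighbors
        # A_hat: (num_nodes, num_nodes) row-normalized subgraph adjacency with self-connections
        H = self.encoder(X)
        for layer in self.gcn:
            H = torch.relu(A_hat @ layer(H))     # sub-graph convolution
        return self.decoder(H).squeeze(-1)       # candidate scores; trained with cross-entropy
```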
Training
The parameters of network are trained to minimize cross-entropy of the predicted and ground truth INLINEFORM0 : INLINEFORM1
Suppose there are INLINEFORM0 documents in training corpus, each document has a set of mentions INLINEFORM1 , leading to totally INLINEFORM2 mention sets. The overall objective function is as follows: INLINEFORM3
Experiments
To avoid overfitting to any particular dataset, we train NCEL using collected Wikipedia hyperlinks instead of specific annotated data. We then evaluate the trained model on five different benchmarks to verify the linking precision as well as the generalization ability. Furthermore, we investigate the effectiveness of key modules in NCEL and give qualitative results for comprehensive analysis.
Baselines and Datasets
We compare NCEL with the following state-of-the-art EL methods including three local models and three types of global models:
Local models: He BIBREF29 and Chisholm BIBREF6 beat many global models by using auto-encoders and web links, respectively, and NTEE BIBREF16 achieves the best performance based on joint embeddings of words and entities.
Iterative model: AIDA BIBREF22 links entities by iteratively finding a dense subgraph.
Loopy Belief Propagation: Globerson BIBREF18 and PBoH BIBREF30 introduce LBP BIBREF31 techniques for collective inference, and Ganea BIBREF24 solves the global training problem via truncated fitting LBP.
PageRank/Random Walk: Boosting BIBREF32 , AGDISTISG BIBREF33 , Babelfy BIBREF34 , WAT BIBREF35 , xLisa BIBREF36 and WNED BIBREF19 perform PageRank BIBREF37 or random walks BIBREF38 on the mention-entity graph and use the convergence score for disambiguation.
For a fair comparison, we report the original scores of the baselines from their papers. Following these methods, we evaluate NCEL on the following five datasets: (1) CoNLL-YAGO BIBREF22 : the CoNLL 2003 shared task, including testa with 4791 mentions in 216 documents and testb with 4485 mentions in 213 documents. (2) TAC2010 BIBREF39 : constructed for the Text Analysis Conference, comprising 676 mentions in 352 documents for testing. (3) ACE2004 BIBREF23 : a subset of ACE2004 co-reference documents including 248 mentions in 35 documents, annotated by Amazon Mechanical Turk. (4) AQUAINT BIBREF40 : 50 news articles including 699 mentions from three different news agencies. (5) WW BIBREF19 : a new benchmark with balanced prior distributions of mentions, leading to a hard case of disambiguation. It has 6374 mentions in 310 documents automatically extracted from Wikipedia.
Training Details and Running Time Analysis
Training We collect 50,000 Wikipedia articles according to the number of its hyperlinks as our training data. For efficiency, we trim the articles to the first three paragraphs leading to 1,035,665 mentions in total. Using CoNLL-Test A as the development set, we evaluate the trained NCEL on the above benchmarks. We set context window to 20, neighbor mention window to 6, and top INLINEFORM0 candidates for each mention. We use two layers with 2000 and 1 hidden units in MLP encoder, and 3 layers in sub-GCN. We use early stop and fine tune the embeddings. With a batch size of 16, nearly 3 epochs cost less than 15 minutes on the server with 20 core CPU and the GeForce GTX 1080Ti GPU with 12Gb memory. We use standard Precision, Recall and F1 at mention level (Micro) and at the document level (Macro) as measurements.
Complexity Analysis Compared with local methods, the main disadvantage of collective methods is their high complexity and expensive cost. Suppose there are INLINEFORM0 mentions in a document on average; among these global models, NCEL unsurprisingly has the lowest time complexity INLINEFORM1 since it only considers adjacent mentions, where INLINEFORM2 is the number of sub-GCN layers indicating the iterations until convergence. AIDA has the highest time complexity INLINEFORM3 in the worst case due to exhaustively and iteratively finding and sorting the graph. The LBP and PageRank/random-walk based methods have a similarly high time complexity of INLINEFORM4 , mainly because of the inference over the entire graph.
Results on GERBIL
GERBIL BIBREF41 is a benchmark entity annotation framework that aims to provide a unified comparison among different EL methods across datasets including ACE2004, AQUAINT and CoNLL. We compare NCEL with the global models that report the performance on GERBIL.
As shown in Table TABREF26 , NCEL achieves the best performance in most cases with an average gain of 2% on Micro F1 and 3% on Macro F1. The baseline methods also achieve competitive results on some datasets but fail to adapt to the others. For example, AIDA and xLisa perform quite well on ACE2004 but poorly on other datasets, while WAT, PBoH, and WNED perform favorably on CoNLL but obtain lower values on ACE2004 and AQUAINT. Our proposed method performs consistently well on all datasets, which demonstrates its good generalization ability.
Results on TAC2010 and WW
In this section, we investigate the effectiveness of NCEL on the “easy" and “hard" datasets, respectively. In particular, TAC2010, which has two mentions per document on average (Section SECREF19 ) and high prior probabilities of correct candidates (Figure FIGREF28 ), is regarded as the “easy" case for EL, while WW is the “hard" case since it has the most mentions with balanced prior probabilities BIBREF19 . Besides, we further compare the impact of key modules by removing the following parts from NCEL: global features (NCEL-local), attention (NCEL-noatt), and embedding features (NCEL-noemb), and by examining the impact of the prior probability (prior).
The results are shown in Table FIGREF28 and Table FIGREF28 . We can see that the average linking precision (Micro) of WW is lower than that of TAC2010, and NCEL outperforms all baseline methods in both the easy and hard cases. In the “easy" case, local models have similar performance to global models since only little global information is available (2 mentions per document). Besides, the NN-based models, NTEE and NCEL-local, perform significantly better than the others, including most global models, demonstrating the effectiveness of neural models in addressing the first limitation described in the introduction.
Impact of NCEL Modules
As shown in Figure FIGREF28 , the prior probability performs quite well on TAC2010 but poorly on WW. Compared with NCEL-local, the global module in NCEL brings more improvement in the “hard" case than in the “easy" dataset, because local features are discriminative enough in most cases of TAC2010, and global information becomes quite helpful when local features cannot handle the case. That is, our proposed collective model is robust and shows a good generalization ability on difficult EL. The improvements from each main module are relatively small on TAC2010, while the attention and embedding-feature modules show non-negligible impacts on WW (removing them performs even worse than the local model), mainly because WW contains much noise, and these two modules are effective in improving robustness to noise and the ability to generalize by selecting informative words and providing more accurate semantics, respectively.
Qualitative Analysis
The results of example in Figure FIGREF1 are shown in Table TABREF30 , which is from CoNLL testa dataset. For mention Essex, although both NCEL and NCEL-local correctly identify entity Essex County Cricket Club, NCEL outputs higher probability due to the enhancement of neighbor mentions. Moreover, for mention England, NCEL-local cannot find enough disambiguation clues from its context words, such as surplus and requirements, and thus assigns a higher probability of 0.42 to the country England according to the prior probability. Collectively, NCEL correctly identifies England cricket team with a probability of 0.72 as compared with 0.20 in NCEL-local with the help of its neighbor mention Essex.
Conclusion
In this paper, we propose a neural model for collective entity linking that is end-to-end trainable. It applies GCN on subgraphs instead of the entire entity graph to efficiently learn features from both local and global information. We design an attention mechanism that endows NCEL robust to noisy data. Trained on collected Wikipedia hyperlinks, NCEL outperforms the state-of-the-art collective methods across five different datasets. Besides, further analysis of the impacts of main modules as well as qualitative results demonstrates its effectiveness.
In the future, we will extend our method into cross-lingual settings to help link entities in low-resourced languages by exploiting rich knowledge from high-resourced languages, and deal with NIL entities to facilitate specific applications.
Acknowledgments
The work is supported by National Key Research and Development Program of China (2017YFB1002101), NSFC key project (U1736204, 61661146007), and THUNUS NExT Co-Lab.
|
How effective is their NCEL approach overall?
|
NCEL consistently outperforms various baselines with a favorable generalization ability
| 4,113
|
qasper
|
8k
|
Introduction
Physician burnout is a growing concern, estimated to be experienced by at least 35% of physicians in the developing world and 50% in the United States BIBREF0. BIBREF1 found that for every hour physicians provide direct clinical facetime to patients, nearly two additional hours are spent on EHR (Electronic Health Records) and administrative or desk work. As per the study conducted by the Massachusetts General Physicians Organization (MGPO) BIBREF2 and as reported by BIBREF3, the average time spent on administrative tasks increased from 23.7% in 2014 to 27.9% in 2017. Both surveys found that time spent on administrative tasks was positively associated with a higher likelihood of burnout. Top reasons under administrative burden include working on the ambulatory EHR, handling medication reconciliation (sometimes done by aides), medication renewals, and medical billing and coding. The majority of these reasons revolve around the documentation of information exchanged between doctors and patients during clinical encounters. Automatically extracting such clinical information BIBREF4, BIBREF5 can not only help alleviate the documentation burden on the physician, but also allow them to dedicate more time directly to patients.
Among all the clinical information extraction tasks, Medication Regimen (Medication, dosage, and frequency) extraction is particularly interesting due to its ability to help doctors with medication orders cum renewals, medication reconciliation, potentially verifying the reconciliations for errors, and, other medication-centered EHR documentation tasks. In addition, the same information when provided to patients can help them with better recall of doctor instructions which might aid in compliance with the care plan. This is particularly important given that patients forget or wrongly recollect 40-80% BIBREF6 of what is discussed in the clinic, and accessing EHR data has its own challenges.
Spontaneous clinical conversations happening between a doctor and a patient, have several distinguishing characteristics from a normal monologue or prepared speech: it involves multiple speakers with overlapping dialogues, covers a variety of speech patterns, and the vocabulary can range from colloquial to complex domain-specific language. With recent advancements in Conversational Speech Recognition BIBREF7 rendering the systems less prone to errors, the subsequent challenge of understanding and extracting relevant information from the conversations is receiving increasing research focus BIBREF4, BIBREF8.
In this paper, we focus on local information extraction in transcribed clinical conversations. Specifically, we extract dosage (e.g. 5mg) and frequency (e.g. once a day) for the medications (e.g. aspirin) from these transcripts, collectively referred to as Medication Regimen (MR) extraction. The extraction is local in that we extract the information from a segment of the transcript rather than the entire transcript, since the latter is difficult owing to the long, meandering nature of the conversations, which often discuss multiple medication regimens and care plans.
The challenges associated with the Medication Regimen (MR) extraction task include understanding the spontaneous dialog with clinical vocabulary and understanding the relationship between different entities as the discussion can contain multiple medications and dosages (e.g. doctor revising a dosage or reviewing all the current medications).
We frame this problem as a Question Answering (QA) task by generating questions using templates. We base the QA model on pointer-generator networks BIBREF9 augmented with Co-Attentions BIBREF10. In addition, we develop models combining QA and Information Extraction frameworks using multi-decoder (one each for dosage and frequency) architecture.
Lack of availability of a large volume of data is a typical challenge in healthcare. A conversation corpus by itself is a rare commodity in the healthcare data space because of the cost and difficulty in handling it (owing to data privacy concerns). Moreover, transcribing and labeling the conversations is a costly process as it requires domain-specific medical annotation expertise. To address the data shortage and improve model performance, we investigate different high-performance contextual embeddings (ELMo BIBREF11, BERT BIBREF12 and ClinicalBERT BIBREF13), and pretrain the models on a clinical summarization task. We further investigate the effects of training data size on our models.
On the MR extraction task, ELMo with the encoder multi-decoder architecture and BERT with the encoder-decoder architecture with encoders pretrained on the summarization task perform the best. The best-performing models improve our baseline's ROUGE-1 F1 scores for dosage and frequency extraction from 54.28 and 37.13 to 89.57 and 45.94, respectively.
Using our models, we present the first fully automated system to extract MR tags from spontaneous doctor-patient conversations. We evaluate the system (using our best-performing models) on transcripts generated from the Automatic Speech Recognition (ASR) APIs offered by Google and IBM. On Google ASR's transcripts, our best model obtained a ROUGE-1 F1 of 71.75 for the Dosage extraction task (which in this case equals the percentage of times the dosage is correct; refer to the Metrics section for details) and 40.13 for the Frequency extraction task. On qualitative evaluation, we find that the model can find the correct frequency for 73.58% of the medications. These results demonstrate that NLP research can be used effectively in a real clinical setting to benefit both doctors and patients.
Data
Our dataset consists of a total of 6,693 real doctor-patient conversations recorded in a clinical setting using distant microphones of varying quality. The recordings have an average duration of 9min 28s and have a verbatim transcript of 1,500 words on average (written by the experts). Both the audio and the transcript are de-identified (by removing the identifying information) with digital zeros and [de-identified] tags, respectively. The sentences in the transcript are grounded to the audio with the timestamps of its first and last word.
The transcripts of the conversations are annotated with summaries and Medication Regimen tags (MR tags), both grounded using the timestamps of the sentences from the transcript deemed relevant by the expert annotators; refer to Table TABREF1. The transcript of a typical conversation can be quite long, and not easy for many of the high-performing deep learning models to act on. Moreover, the medical information about a concept/condition/entity can change during the conversation after a significant time gap. For example, the dosage of a medication can differ between a discussion of the current medication the patient is on and a prescription of a different dosage. Hence, we have annotations that are grounded to a short segment of the transcript.
The summaries (#words - $\mu = 9.7; \sigma = 10.1$) are medically relevant and local. The MR tags are also local and are of the form {Medication Name, Dosage, Frequency}. If dosage ($\mu = 2.0; \sigma = 0$) or frequency ($\mu = 2.1; \sigma = 1.07$) information for a medication is not present in a grounded sentence, the corresponding field in the MR tag will be marked as `none'.
In the MR tags, Medication Name and Dosage (usually a quantity followed by its units) can be extracted from the transcript relatively easily, except for the units of the dosage, which are sometimes inferred. In contrast, due to the high degree of linguistic variation with which Frequency is often expressed, extracting Frequency requires an additional inference step. For example, `take one in the morning and at noon' from the transcript is tagged as `twice a day' in the frequency tag; likewise, `take it before sleeping' is tagged as `at night time'.
Out of the overall 6,693 files, we set aside a random sample of 423 files (denoted as $\mathcal {D}_{test}$) for final evaluation. The remaining 6,270 files are used for training with an 80% train (5,016), 10% validation (627), and 10% test (627) split. Overall, the 6,270 files contain 156,186 summaries and 32,000 MR tags, out of which 8,654 MR tags contain values for at least one of Dosage or Frequency; we use these for training to avoid overfitting (the remaining MR tags have both Dosage and Frequency as `none'). Note that we have two test datasets: `10% test' - used to evaluate all the models, and $\mathcal {D}_{test}$ - used to measure the performance of the best-performing models on ASR transcripts.
Approach
We frame the Medication Regimen extraction problem as a Question Answering (QA) task, which forms the basis for our first approach. It can also be considered a specific inference or relation extraction task, since we extract specific information about an entity (Medication Name); hence our second approach lies at the intersection of the Question Answering (QA) and Information Extraction (IE) domains. Both approaches take a contiguous segment of the transcript and the Medication Name as input, in order to find/infer the medication's Dosage and Frequency. When testing the approaches under real-world conditions, we extract the Medication Name from the transcript separately using an ontology, refer to SECREF19.
In the first approach, we frame the MR task as a QA task and generate questions using the template: “What is the $ <$dosage/frequency$>$ for $<$Medication Name$>$". Here, we use an abstractive QA model based on pointer-generator networks BIBREF9 augmented with coattention encoder BIBREF10 (QA-PGNet).
In the second approach, we frame the problem as a conditioned IE task, where the information extracted depends on an entity (Medication Name). Here, we use a multi-decoder pointer-generator network augmented with coattention encoder (Multi-decoder QA-PGNet). Instead of using templates to generate questions and using a single decoder to extract different types of information as in the QA approach (which might lead to performance degradation), here we consider separate decoders for extracting specific types of information about an entity $E$ (Medication Name).
Approach ::: Pointer-generator Network (PGNet)
The network is a sequence-to-sequence attention model that can both copy a word from the input $I$ containing $P$ word tokens or generate a word from its vocabulary $vocab$, to produce the output sequence.
First, the tokens of the $I$ are converted to embeddings and are fed one-by-one to the encoder, a single bi-LSTM layer, which encodes the tokens in $I$ into a sequence of hidden states - $H=encoder(I)$, where $ H=[h_1...h_P]$.
For each decoder time step $t$, in a loop, we compute 1) the attention $a_t$ (using the last decoder state $s_{t-1}$) over the input tokens $I$, and 2) the decoder state $s_t$ using $a_t$. Then, at each time step, using both $a_t$ and $s_t$ we can find the probability $P_t(w)$ of producing a word $w$ (from both $vocab$ and $I$). For convenience, we denote the attention and the decoder as $decoder_{pg}(H)=P(w)$, where $P(w)=[P_1(w)...P_T(w)]$. The output can then be decoded from $P(w)$; decoding continues until the model produces an `end of output' token or the number of steps reaches the maximum allowed limit.
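For readers less familiar with pointer-generator decoding, the sketch below illustrates how a single decoder step can mix a vocabulary distribution with the copy (attention) distribution to obtain $P_t(w)$. It follows the standard pointer-generator formulation and assumes a generation probability `p_gen`; the function name and toy numbers are illustrative, not the authors' code.

```python
# Illustrative sketch (not the authors' implementation) of one pointer-generator
# step: mix a vocabulary distribution with the copy (attention) distribution.
import numpy as np

def final_distribution(p_vocab, attention, src_token_ids, p_gen, extended_vocab_size):
    """p_vocab: (V,) softmax over the fixed vocab
       attention: (P,) attention weights over the P input tokens
       src_token_ids: ids of the input tokens in the extended vocab
       p_gen: assumed probability of generating vs. copying"""
    p_final = np.zeros(extended_vocab_size)
    p_final[: len(p_vocab)] = p_gen * p_vocab            # generate from vocab
    for pos, tok_id in enumerate(src_token_ids):          # copy from the input I
        p_final[tok_id] += (1.0 - p_gen) * attention[pos]
    return p_final

# toy example: vocab of 5 words, 4 input tokens, one of them out-of-vocab (id 5)
p_w = final_distribution(
    p_vocab=np.array([0.1, 0.4, 0.2, 0.2, 0.1]),
    attention=np.array([0.7, 0.1, 0.1, 0.1]),
    src_token_ids=[5, 1, 2, 3],
    p_gen=0.6,
    extended_vocab_size=6,
)
print(p_w, p_w.sum())   # the mixture still sums to 1.0
```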
Approach ::: QA PGNet
We first encode both the question - $H_Q = encoder(Q)$, and the input - $H_I = encoder(I)$, separately using encoders (with shared weights). Then, to condition $I$ on $Q$ (and vice versa), we use the coattention encoder BIBREF10 which attends to both the $I$ and $Q$ simultaneously to generates the coattention context - $C_D = coatt(H_I, H_Q)$. Finally, using the pointer-generator decoder we find the probability distribution of the output sequence - $P(w) = decoder_{pg}([H_I; C_D])$, which is then decoded to generate the answer sequence.
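The following is a rough sketch of a coattention computation in the spirit of BIBREF10, showing how an affinity matrix between the encoded input $H_I$ and the encoded question $H_Q$ can yield a context $C_D$ aligned with the input tokens; the exact layer composition and dimensions used by the authors may differ.

```python
# A minimal coattention sketch (assumed formulation, following the Dynamic
# Coattention Network style); shapes are illustrative.
import torch
import torch.nn.functional as F

def coattention(H_I, H_Q):
    """H_I: (P, d) encoded input segment; H_Q: (Q, d) encoded question.
       Returns a coattention context aligned with the P input tokens."""
    L = H_I @ H_Q.T                     # (P, Q) affinity matrix
    A_Q = F.softmax(L, dim=0)           # attention over input, per question word
    A_I = F.softmax(L.T, dim=0)         # attention over question, per input word
    C_Q = A_Q.T @ H_I                   # (Q, d) input summaries w.r.t. the question
    C_D = A_I.T @ torch.cat([H_Q, C_Q], dim=1)   # (P, 2d) coattention context
    return C_D

H_I = torch.randn(100, 128)   # e.g. 100 input tokens, hidden size 128
H_Q = torch.randn(8, 128)     # 8 question tokens
print(coattention(H_I, H_Q).shape)   # torch.Size([100, 256])
```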
Approach ::: Multi-decoder (MD) QA PGNet
After encoding the inputs into $H_I$ and $H_E$, for extracting $K$ types of information about an entity in an IE fashion, we use the following multi-decoder (MD) setup, which mirrors the single-decoder case: we compute the coattention context $C_D = coatt(H_I, H_E)$ and, for each of the $K$ information types, a separate pointer-generator decoder produces $P^k(w) = decoder^k_{pg}([H_I; C_D])$, $k = 1...K$.
Predictions for each of the $K$ decoders are then decoded using $P^k(w)$.
All the networks discussed above are trained using a negative log-likelihood loss for the target word at each time step and summed over all the decoder time steps.
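As a concrete illustration of this objective, the sketch below sums the per-step negative log-likelihood over one decoder's output sequence and then over the $K$ decoders; the list-of-distributions interface and the epsilon term are illustrative assumptions.

```python
import torch

def sequence_nll(P_w, target_ids):
    """P_w: list of T distributions over the (extended) vocab, one per decoder step;
       target_ids: the T gold token ids. Returns the summed negative log-likelihood."""
    loss = torch.zeros(())
    for P_t, y_t in zip(P_w, target_ids):
        loss = loss - torch.log(P_t[y_t] + 1e-12)   # small epsilon for stability
    return loss

def multi_decoder_loss(per_decoder_P_w, per_decoder_targets):
    """Sum the sequence NLL over the K decoders (here K=2: dosage and frequency)."""
    return sum(sequence_nll(P, y) for P, y in zip(per_decoder_P_w, per_decoder_targets))

# toy check: one decoder step over a 4-word vocab, gold token id 2
print(sequence_nll([torch.tensor([0.1, 0.2, 0.6, 0.1])], [2]))   # ~0.51 = -log(0.6)
```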
Experiments
We initialized MR extraction models' vocabulary from the training dataset after removing words with a frequency lower than 30 in the dataset, resulting in 456 words. Our vocabulary is small because of the size of the dataset, hence we rely on the model's ability to copy words to produce the output effectively. In all our model variations, the embedding and the network's hidden dimension are set to be equal. The networks were trained with a learning rate of 0.0015, dropout of 0.5 on the embedding layer, normal gradient clipping set at 2, batch size of 8, and optimized with Adagrad BIBREF15 and the training was stopped using the $10\%$ validation dataset.
Experiments ::: Data Processing
We did the following basic preprocessing to our data, 1) added `none' to the beginning of the input utterance, so the network could point to it when there was no relevant information in the input, 2) filtered outliers with a large number of grounded transcript sentences ($>$150 words), and 3) converted all text to lower case.
To improve performance, we 1) standardized all numbers (both digits and words) to words concatenated with a hyphen (e.g. 110 -$>$ one-hundred-ten), in both input and output, 2) removed units from Dosage, as sometimes the units were not explicitly mentioned in the transcript segment but were written by the annotators using domain knowledge, 3) prepended all medication mentions with an `rx-' tag, as this helps the model's performance when multiple medications are discussed in a segment (in both input and output), and 4) when a transcript segment has multiple medications or dosages being discussed, randomly shuffled them (in both input and output) to create new data points and increase the number of training examples. Randomly shuffling the entities increases the number of training MR tags from 8,654 to 11,521. Based on the data statistics after data processing, we fixed the maximum encoder steps to 100, dosage decoder steps to 1, and frequency decoder steps to 3 (for both the QA and Multi-decoder QA models).
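A minimal sketch of steps 1) and 3) is given below. The num2words package and the tiny medication set are illustrative assumptions; the authors' exact number normalization and medication lexicon are not specified here.

```python
# Illustrative preprocessing sketch: hyphenated number words and rx- tagging.
import re
from num2words import num2words   # assumed helper library for number-to-word conversion

MEDICATIONS = {"aspirin", "lisinopril", "metformin"}   # hypothetical lexicon

def normalize_numbers(text):
    def repl(match):
        words = num2words(int(match.group()))           # e.g. 110 -> "one hundred and ten"
        return re.sub(r"[ ,]+", "-", words)             # -> "one-hundred-and-ten"
    return re.sub(r"\d+", repl, text)

def tag_medications(text):
    return " ".join("rx-" + tok if tok in MEDICATIONS else tok
                    for tok in text.lower().split())

print(tag_medications(normalize_numbers("Take 110 mg of Aspirin twice a day")))
# take one-hundred-and-ten mg of rx-aspirin twice a day
```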
Experiments ::: Metrics
For the MR extraction task, we measure ROUGE-1 scores BIBREF14 for both the Dosage and Frequency extraction tasks. It should be noted that since Dosage is a single word token (after processing), both the reference and the hypothesis are single tokens, making its ROUGE-1 F1, Precision, and Recall scores equal to one another and to the percentage of times we find the correct dosage for the medications.
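To make this equivalence concrete, the toy function below computes a simplified unigram-overlap ROUGE-1 F1; with single-token references and hypotheses it returns 1 for a match and 0 otherwise, so averaging it over the test set reduces to plain accuracy. The simplification (set overlap rather than clipped counts) is an assumption made for brevity.

```python
def rouge1_f1(reference_tokens, hypothesis_tokens):
    # simplified unigram-overlap version; ignores repeated tokens
    overlap = len(set(reference_tokens) & set(hypothesis_tokens))
    if overlap == 0:
        return 0.0
    p = overlap / len(hypothesis_tokens)
    r = overlap / len(reference_tokens)
    return 2 * p * r / (p + r)

print(rouge1_f1(["five-hundred"], ["five-hundred"]))  # 1.0 -> counted as correct
print(rouge1_f1(["five-hundred"], ["fifty"]))         # 0.0 -> counted as incorrect
```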
In our annotations, Frequency has conflicting tags (e.g. {`Once a day', `twice a day'} and `daily'), hence metrics like Exact Match will be erroneous. To address this issue, we use the ROUGE scores to compare different models on the 10% test dataset and we use qualitative evaluation to measure the top-performing models on $\mathcal {D}_{test}$.
Experiments ::: Model variations
We consider QA PGNet and Multi-decoder QA PGNet with lookup table embedding as baseline models and improve on the baselines with other variations described below.
Apart from learning-based baselines, we also create two naive baselines, one each for the Dosage and Frequency extraction tasks. For Dosage extraction, the baseline we consider is `Nearest Number', where we take the number nearest to the Medication Name as the prediction, and `none' if no number is mentioned or if the Medication Name is not detected in the input. For Frequency extraction, the baseline we consider is `Random Top-3' where we predict a random Frequency tag, from top-3 most frequent ones from our dataset - {`none', `daily', `twice a day'}.
Embedding: We developed different variations of our models with a simple lookup table embeddings learned from scratch and using high-performance contextual embeddings, which are ELMo BIBREF11, BERT BIBREF16 and ClinicalBERT BIBREF13 (trained and provided by the authors). Refer to Table TABREF5 for the performance comparisons.
We derive embeddings from ELMo by learning a linear combination of its last three layers' hidden states (task-specific fine-tuning BIBREF11). Similarly, for BERT-based embeddings, we take a linear combination of the hidden states from its last four layers, as this combination performs best without increasing the size of the embeddings BIBREF16. Since BERT and ClinicalBERT use a word-piece vocabulary and compute sub-word embeddings, we compute word-level embeddings by averaging the corresponding sub-word tokens. ELMo and BERT embeddings both have 1024 dimensions, ClinicalBERT has 768 as it is based on the BERT base model, and the lookup table has 128; higher-dimensional models lead to overfitting.
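The sketch below illustrates the two operations described above: a learned mixture over the last layers of a contextual encoder, and averaging word-piece vectors back to word level. The module name, the softmax normalization of the mixture weights, and the `word_ids` mapping are illustrative assumptions rather than the exact setup used in the paper.

```python
import torch
import torch.nn as nn

class LayerMix(nn.Module):
    """Learned linear combination of the last n hidden layers of a contextual encoder."""
    def __init__(self, n_layers):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(n_layers))

    def forward(self, layer_states):               # (n_layers, seq_len, dim)
        w = torch.softmax(self.weights, dim=0)      # normalization is an assumed choice
        return (w[:, None, None] * layer_states).sum(dim=0)

def wordpiece_to_word(subword_vecs, word_ids):
    """Average sub-word vectors belonging to the same original word.
       word_ids[i] gives the word index of sub-token i (hypothetical mapping)."""
    n_words = max(word_ids) + 1
    out = torch.zeros(n_words, subword_vecs.size(1))
    counts = torch.zeros(n_words, 1)
    for i, w in enumerate(word_ids):
        out[w] += subword_vecs[i]
        counts[w] += 1
    return out / counts

mix = LayerMix(n_layers=4)
states = torch.randn(4, 6, 1024)                    # 4 layers, 6 sub-tokens, dim 1024
word_vecs = wordpiece_to_word(mix(states), [0, 0, 1, 2, 2, 2])
print(word_vecs.shape)                              # torch.Size([3, 1024])
```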
Pretraining the Encoder: We trained PGNet on a summarization task using the clinical summaries and used the trained model to initialize the encoders (and the embeddings) of the corresponding QA models. For this task we use a vocabulary of 4,073 words, derived from the training dataset with a frequency threshold of 30. We trained the models using the Adagrad optimizer with a learning rate of 0.015 and normal gradient clipping set at 2, for around 150,000 iterations (stopped using the validation dataset). On the summarization task, PGNet obtained ROUGE-1 F1 scores of 41.42 with ELMo and 39.15 with BERT embeddings. We compare the effects of pretraining in Table TABREF5; models marked `pretrained encoder' had their encoders and embeddings pretrained on the summarization task.
Results and Discussion ::: Difference in networks and approaches
Embeddings: On Dosage extraction, in general, ELMo obtains better performance than BERT, refer to Table TABREF5. This could be because we concatenated the numbers with a hyphen, and as ELMo uses character-level tokens it can learn the tagging better than BERT, a similar observation is also noted in BIBREF17. On the other hand, on Frequency extraction, without pretraining, ELMo's performance is lagging by a big margin of $\sim $8.5 ROUGE-1 F1 compared to BERT-based embeddings.
Although, in cases without encoder pretraining, ClinicalBERT performs the best on the Frequency extraction task (by a small margin), in general it does not perform as well as BERT. This could be a reflection of the fact that the language and style of writing used in clinical notes is very different from the way doctors converse with patients, as well as of the difference in embedding dimensions. The lookup table embedding performs decently on the Frequency extraction task but lags behind on the Dosage extraction task.
From the metrics and qualitative inspection, we find that Frequency extraction is an easier task than Dosage extraction. This is because, in the conversations, frequency information usually occurs in isolation and near the medications, but a medication's dosage can occur 1) near other medications' dosages, 2) alongside previous dosages (when the dosage of a medication is revised), and 3) a large number of words away from the medication.
Other Variations: Considering various models' performance (without pretraining) and the resource constraint, we choose ELMo and BERT embeddings to analyze the effects of pretraining the encoder. When the network's encoder (and embedding) is pretrained with the summarization task, we 1) see a small decrease in the average number of iterations required for training, 2) improvement in individual performances of all models for both the sub-tasks, and 3) get best performance metrics across all variations, refer to Table TABREF5. Both in terms of performance and the training speed, there is no clear winner between shared and multi-decoder approaches. Medication tagging and data augmentation increase the best-performing model's ROUGE-1 F1 score by $\sim $1.5 for the Dosage extraction task.
We also measure the performance of the Multitask Question Answering Network (MQAN) BIBREF18, a QA model trained by its authors on the Decathlon multitask challenge. Since MQAN was not trained to produce the output sequences in our MR tags, it would not be fair to compute ROUGE scores. Instead, we randomly sample MQAN's predictions on the 10% test dataset and evaluate them qualitatively. From the evaluations, we find that MQAN cannot distinguish between frequency and dosage, and mixes up the answers. MQAN correctly predicted the dosage for 29.73% and the frequency for 24.24% of the medications, compared to 84.12% and 76.34% for the encoder-pretrained BERT QA PGNet model trained on our dataset. This could be because of the differences in training data, domain, and tasks between the Decathlon challenge and our setting.
Almost all our models perform better than the naive baselines and the ones using lookup table embeddings, and our best performing models outperform them significantly. Among all the variations, the best performing models are ELMo with Multi-decoder (Dosage extraction) and BERT with shared-decoder QA PGNet architecture with pretrained encoder (Frequency extraction). We choose these two models for our subsequent analysis.
Results and Discussion ::: Breakdown of Performance
We categorize the 10% test dataset into different categories based on the complexity and type of the data and analyze the breakdown of the system's performance in Table TABREF11. We break down the Frequency extraction into two categories: 1) None: the ground truth Frequency tag is `none', and 2) NN (Not None): the ground truth Frequency tag is not `none'. Similarly, we break down the Dosage extraction into the following categories: 1) None: the ground truth dosage tag is `none', 2) MM (Multiple Medicine): the input segment mentions more than one Medication, 3) MN (Multiple Numbers): the input segment has more than one number present, and 4) NBM (Number Between correct Dosage and Medicine): there are other numbers present between the Medication Name and the correct Dosage in the input segment. Note that the categories of the Dosage extraction task are not exhaustive, and one tag can belong to multiple categories.
From the performance breakdown of Dosage extraction task, we see that 1) the models predict `none' better than other categories, i.e., the models are correctly able to identify when a medication's dosage is absent, 2) there is performance dip in hard cases (MM, MN, and NBM), 3) the models are able to figure out the correct dosage (decently) for a medication even when there are multiple numbers/dosage present, and 4) the model struggles the most in the NBM category. The models' low performance in NBM could be because we have a comparatively lower number of examples to train in this category. The Frequency extraction task performs equally well when the tag is `none` or not. In most categories, we see an increase in performance when using pretrained encoders.
Results and Discussion ::: Training Dataset Size
We vary the number of MR tags used to train the model and analyze the model's performance when training the networks using publicly available contextual embeddings, compared to using embeddings and encoders pretrained on the summarization task. Out of the 5,016 files in the 80% train dataset, only 2,476 have at least one MR tag. Therefore, out of these 2,476 files, we randomly choose 100, 500, and 1000 files and train the best-performing model variations to observe the performance differences; refer to Figure FIGREF12. For all these experiments we used the same vocabulary size (456), the same hyper/training parameters, and the same 10% test split of 627 files.
As expected, we see that the encoder pretrained models have higher performance on all the different training data sizes, i.e., they achieve higher performance on a lower number of data points, refer to Figure FIGREF12. The difference, as expected, shrinks as the training data size increases.
Results and Discussion ::: Evaluating on ASR transcripts
To test the performance of our models on real-world conditions, we use commercially available ASR services (Google and IBM) to transcribe the $\mathcal {D}_{test}$ files and measure the performance of our models without assuming any annotations (except when calculating the metrics). It should be noted that this is not the case in our previous evaluations using `10% test' dataset where we use the segmentation information. For ground truth annotations on ASR transcripts, we aligned the MR tags from human written transcripts to the ASR transcript using their grounded timing information. Additionally, since ASR is prone to errors, during the alignment, if a medication from an MR tag is not recognized correctly in the ASR transcript, we remove the corresponding MR tag.
In our evaluations, we use Google Cloud Speech-to-Text (G-STT) and IBM Watson Speech to Text (IBM-STT) as these were among the top-performing ASR APIs on medical speech BIBREF19 and were readily available to us. We used G-STT, with the `video model' with punctuation settings. Unlike our human written transcripts, the transcript provided by G-STT is not verbatim and does not have disfluencies. IBM-STT, on the other hand, does not give punctuation so we used the speaker changes to add end-of-sentence punctuation.
On an initial study of our $\mathcal {D}_{test}$ dataset, we see a Word Error Rate of $\sim $50% for the ASR APIs; this number is not fully accurate because of 1) the de-identification, 2) differences in disfluencies between the verbatim human-written transcripts and the ASR transcripts, and 3) minor alignment differences between the audio and the ground truth transcript.
During this evaluation, we followed the same preprocessing methods we used during training. We then automatically segment the transcript into small contiguous segments, similar to the grounded sentences in the annotations, for tag extraction. To segment the transcript, we follow a simple procedure. First, we detect all the medications in a transcript using RxNorm BIBREF20 via string matching. For each detected medication, we select $2 \le x \le 5$ nearby sentences as the input to our model: we increase $x$ iteratively until we encounter a quantity entity (detected using spaCy's entity recognizer), and set $x$ to 2 if no such entity is detected in the range.
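A rough sketch of this segmentation heuristic is given below. The RxNorm lookup is replaced by a toy medication set, the spaCy model name and the entity labels treated as "quantity" are assumptions, and symmetric sentence windows stand in for the "nearby sentences" used in the paper.

```python
# Illustrative segmentation heuristic; not the authors' exact implementation.
import spacy

nlp = spacy.load("en_core_web_sm")          # assumed model; must be installed separately
MEDICATIONS = {"aspirin", "metformin"}       # toy stand-in for RxNorm string matching

def segments_around_medications(sentences, min_x=2, max_x=5):
    segments = []
    for idx, sent in enumerate(sentences):
        if not any(med in sent.lower() for med in MEDICATIONS):
            continue
        chosen = min_x                       # default to x = 2 if no quantity is found
        for x in range(min_x, max_x + 1):
            window = " ".join(sentences[max(0, idx - x): idx + x + 1])
            # QUANTITY/CARDINAL as "quantity" labels is an assumed choice
            if any(e.label_ in ("QUANTITY", "CARDINAL") for e in nlp(window).ents):
                chosen = x
                break
        segments.append(" ".join(sentences[max(0, idx - chosen): idx + chosen + 1]))
    return segments
```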
We show the models' performance on ASR transcripts and on human-written transcripts with automatic segmentation, as well as on human-written transcripts with human-defined segmentation, in Table TABREF18. Since the number of recognized medications in IBM-STT's transcripts is only 95, compared to 725 in the human-written transcripts, we mainly consider the models' performance on G-STT's transcripts (343 recognized medications).
On the medications that were recognized correctly, the models perform decently on ASR transcripts in comparison to human transcripts (within 5 ROUGE-1 F1 points for both tasks; refer to Table TABREF18). This shows that the models are robust to the ASR variations discussed above. The lower performance compared to human transcripts is mainly due to incorrect recognition of the Dosage and of other medications in the same segments (changing the meaning of the text). Comparing the performance on human-written transcripts with human-defined segmentation against the same transcripts with automatic segmentation, we see a 10-point drop for Dosage and a 6-point drop for Frequency extraction. This points to the need for more sophisticated segmentation algorithms.
With G-STT, our best model obtained a ROUGE-1 F1 of 71.75 (which, in this case, equals the percentage of times the dosage is correct) for the Dosage extraction task and 40.13 for the Frequency extraction task. To measure the percentage of times the correct frequency was extracted, we qualitatively compared the predicted frequencies against the annotated ones. We find that the model can find the correct frequency for 73.58% of the medications in the transcripts.
Conclusion
In this paper, we explore the Medication Regimen (MR) extraction task of extracting dosage and frequency for the medications mentioned in a doctor-patient conversation transcript. We explore different variations of abstractive QA models and new architecture in the intersection of QA and IE frameworks and provide a comparative performance analysis of the methods along with other techniques like pretraining to improve the overall performance. Finally, we demonstrate the performance of our best-performing models by automatically extracting MR tags from spontaneous doctor-patient conversations (using commercially available ASR). Our best model can correctly extract the dosage for 71.75% (interpretation of ROUGE-1 score) and frequency for 73.58% (on qualitative evaluation) of the medications discussed in the transcripts generated using Google Speech-To-Text. In summary, we demonstrate that research on NLP can be translated into real-world clinical settings to realize its benefits for both doctors and patients.
Using ASR transcripts in our training process to improve our performance on both the tasks and extending the medication regimen extraction network to extract other important medical information can be interesting lines of future work.
Acknowledgements
We thank: University of Pittsburgh Medical Center (UPMC), and Abridge AI Inc. for providing access to the de-identified data corpus; Dr. Shivdev Rao, a faculty member and practicing cardiologist in UPMC's Heart and Vascular Institute and Prof. Florian Metze, Associate Research Professor, Carnegie Mellon University for helpful discussions; Ben Schloss, Steven Coleman, and Deborah Osakue for data business development and annotation management.
|
Is the data de-identified?
|
Yes
| 4,592
|
qasper
|
8k
|
Introduction
Over the past two decades, the emergence of social media has enabled the proliferation of traceable human behavior. The content posted by users can reflect who their friends are, what topics they are interested in, or which company they are working for. At the same time, users are listing a number of profile fields to define themselves to others. The utilization of such metadata has proven important in facilitating further developments of applications in advertising BIBREF0 , personalization BIBREF1 , and recommender systems BIBREF2 . However, profile information can be limited, depending on the platform, or it is often deliberately omitted BIBREF3 . To uncloak this information, a number of studies have utilized social media users' footprints to approximate their profiles.
This paper explores the potential of predicting a user's industry (the aggregate of enterprises in a particular field) by identifying industry-indicative text in social media. The accurate prediction of users' industry can have a big impact on targeted advertising, by minimizing wasted advertising BIBREF4, and on improving the personalized user experience. A number of studies in the social sciences have associated language use with social factors such as occupation, social class, education, and income BIBREF5, BIBREF6, BIBREF7, BIBREF8. An additional goal of this paper is to examine such findings, and in particular the link between language and occupational class, through a data-driven approach.
In addition, we explore how meaning changes depending on the occupational context. By leveraging word embeddings, we seek to quantify how, for example, cloud might mean a separate concept (e.g., condensed water vapor) in the text written by users that work in environmental jobs while it might be used differently by users in technology occupations (e.g., Internet-based computing).
Specifically, this paper makes four main contributions. First, we build a large, industry-annotated dataset that contains over 20,000 blog users. In addition to their posted text, we also link a number of user metadata including their gender, location, occupation, introduction and interests.
Second, we build content-based classifiers for the industry prediction task and study the effect of incorporating textual features from the users' profile metadata using various meta-classification techniques, significantly improving both the overall accuracy and the average per industry accuracy.
Next, after examining which words are indicative for each industry, we build vector-space representations of word meanings and calculate one deviation for each industry, illustrating how meaning is differentiated based on the users' industries. We qualitatively examine the resulting industry-informed semantic representations of words by listing the words per industry that are most similar to job related and general interest terms.
Finally, we rank the different industries based on the normalized relative frequencies of emotionally charged words (positive and negative) and, in addition, discover that, for both genders, these frequencies do not statistically significantly correlate with an industry's gender dominance ratio.
After discussing related work in Section SECREF2 , we present the dataset used in this study in Section SECREF3 . In Section SECREF4 we evaluate two feature selection methods and examine the industry inference problem using the text of the users' postings. We then augment our content-based classifier by building an ensemble that incorporates several metadata classifiers. We list the most industry indicative words and expose how each industrial semantic field varies with respect to a variety of terms in Section SECREF5 . We explore how the frequencies of emotionally charged words in each gender correlate with the industries and their respective gender dominance ratio and, finally, conclude in Section SECREF6 .
Related Work
Alongside the wide adoption of social media by the public, researchers have been leveraging the newly available data to create and refine models of user behavior and profiles. There exists a myriad of research analyzing language in order to profile social media users. Some studies sought to characterize users' personality BIBREF9, BIBREF10, while others sequenced the expressed emotions BIBREF11, studied mental disorders BIBREF12, and the progression of health conditions BIBREF13. At the same time, a number of researchers sought to predict social media users' age and/or gender BIBREF14, BIBREF15, BIBREF16, while others targeted and analyzed the ethnicity, nationality, and race of the users BIBREF17, BIBREF18, BIBREF19. One of the profile fields that has drawn a great deal of attention is the location of a user. Among others, Hecht et al. Hecht11 predicted Twitter users' locations using machine learning on nationwide and state levels. Later, Han et al. Han14 identified location indicative words to predict the location of Twitter users down to the city level.
As a separate line of research, a number of studies have focused on discovering the political orientation of users BIBREF15 , BIBREF20 , BIBREF21 . Finally, Li et al. Li14a proposed a way to model major life events such as getting married, moving to a new place, or graduating. In a subsequent study, BIBREF22 described a weakly supervised information extraction method that was used in conjunction with social network information to identify the name of a user's spouse, the college they attended, and the company where they are employed.
The line of work that is most closely related to our research is the one concerned with understanding the relation between people's language and their industry. Previous research from the fields of psychology and economics have explored the potential for predicting one's occupation from their ability to use math and verbal symbols BIBREF23 and the relationship between job-types and demographics BIBREF24 . More recently, Huang et al. Huang15 used machine learning to classify Sina Weibo users to twelve different platform-defined occupational classes highlighting the effect of homophily in user interactions. This work examined only users that have been verified by the Sina Weibo platform, introducing a potential bias in the resulting dataset. Finally, Preotiuc-Pietro et al. Preoctiuc15 predicted the occupational class of Twitter users using the Standard Occupational Classification (SOC) system, which groups the different jobs based on skill requirements. In that work, the data collection process was limited to only users that specifically mentioned their occupation in their self-description in a way that could be directly mapped to a SOC occupational class. The mapping between a substring of their self-description and a SOC occupational class was done manually. Because of the manual annotation step, their method was not scalable; moreover, because they identified the occupation class inside a user self-description, only a very small fraction of the Twitter users could be included (in their case, 5,191 users).
Both of these recent studies are based on micro-blogging platforms, which inherently restrict the number of characters that a post can have, and consequently the way that users can express themselves.
Moreover, both studies used off-the-shelf occupational taxonomies (rather than self-declared occupation categories), resulting in classes that are either too generic (e.g., media, welfare and electronic are three of the twelve Sina Weibo categories), or too intermixed (e.g., an assistant accountant is in a different class from an accountant in SOC). To address these limitations, we investigate the industry prediction task in a large blog corpus consisting of over 20K American users, 40K web-blogs, and 560K blog posts.
Dataset
We compile our industry-annotated dataset by identifying blogger profiles located in the U.S. on the profile finder on http://www.blogger.com, and scraping only those users that had the industry profile element completed.
For each of these bloggers, we retrieve all their blogs, and for each of these blogs we download the 21 most recent blog postings. We then clean these blog posts of HTML tags and tokenize them, and drop those bloggers whose cumulative textual content in their posts is less than 600 characters. Following these guidelines, we identified all the U.S. bloggers with completed industry information.
Traditionally, standardized industry taxonomies organize economic activities into groups based on similar production processes, products or services, delivery systems or behavior in financial markets. Following such assumptions and regardless of their many similarities, a tomato farmer would be categorized into a distinct industry from a tobacco farmer. As demonstrated in Preotiuc-Pietro et al. Preoctiuc15 such groupings can cause unwarranted misclassifications.
The Blogger platform provides a total of 39 different industry options. Even though a completed industry value is an implicit text annotation, we acknowledge the same problem noted in previous studies: some categories are too broad, while others are very similar. To remedy this and following Guibert et al. Guibert71, who argued that the denominations used in a classification must reflect the purpose of the study, we group the different Blogger industries based on similar educational background and similar technical terminology. To do that, we exclude very general categories and merge conceptually similar ones. Examples of broad categories are the Education and the Student options: a teacher could be teaching in any concentration, while a student could be enrolled in any discipline. Examples of conceptually similar categories are the Investment Banking and the Banking options.
The final set of categories is shown in Table TABREF1 , along with the number of users in each category. The resulting dataset consists of 22,880 users, 41,094 blogs, and 561,003 posts. Table TABREF2 presents additional statistics of our dataset.
Text-based Industry Modeling
After collecting our dataset, we split it into three sets: a train set, a development set, and a test set. The sizes of these sets are 17,880, 2,500, and 2,500 users, respectively, with users randomly assigned to these sets. In all the experiments that follow, we evaluate our classifiers by training them on the train set, configure the parameters and measure performance on the development set, and finally report the prediction accuracy and results on the test set. Note that all the experiments are performed at user level, i.e., all the data for one user is compiled into one instance in our data sets.
To measure the performance of our classifiers, we use the prediction accuracy. However, as shown in Table TABREF1, the available data is skewed across categories, which could lead to somewhat distorted accuracy numbers depending on how well a model learns to predict the most populous classes. Moreover, accuracy alone does not provide a great deal of insight into the individual performance per industry, which is one of the main objectives in this study. Therefore, in our results below, we report: (1) micro-accuracy, calculated as the percentage of correctly classified instances out of all the instances in the development (test) data; and (2) macro-accuracy, calculated as the average of the per-category accuracies, where the per-category accuracy is the percentage of correctly classified instances out of the instances belonging to one category in the development (test) data.
Leveraging Blog Content
In this section, we seek the effectiveness of using solely textual features obtained from the users' postings to predict their industry.
The industry prediction baseline Majority is set by discovering the most frequently featured class in our training set and picking that class in all predictions in the respective development or testing set.
After excluding all the words that are not used by at least three separate users in our training set, we build our AllWords model by counting the frequencies of all the remaining words and training a multinomial Naive Bayes classifier. As seen in Figure FIGREF3, we can far exceed the Majority baseline performance by incorporating basic language signals into machine learning algorithms (a 173% relative improvement in micro-accuracy).
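A minimal sketch of such a content classifier using scikit-learn is shown below; the `min_df` document-frequency cutoff only approximates the "used by at least three separate users" filter, and the toy texts and labels are purely illustrative.

```python
# Sketch of the content-based classifier: bag-of-words counts fed to a
# multinomial Naive Bayes model. Toy data; thresholds are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = ["closing escrow on the new listing", "sunday sermon on grace"]
train_labels = ["Real Estate", "Religion"]

all_words = make_pipeline(CountVectorizer(min_df=1), MultinomialNB())
all_words.fit(train_texts, train_labels)
print(all_words.predict(["the sermon this sunday"]))   # ['Religion']
```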
We additionally explore the potential of improving our text classification task by applying a number of feature ranking methods and selecting varying proportions of top ranked features in an attempt to exclude noisy features. We start by ranking the different features, w, according to their Information Gain Ratio score (IGR) with respect to every industry, i, and training our classifier using different proportions of the top features. INLINEFORM0 INLINEFORM1
Even though we find that using the top 95% of all the features already exceeds the performance of the All Words model on the development data, we further experiment with ranking our features with a more aggressive formula that heavily promotes the features that are tightly associated with any industry category. Therefore, for every word in our training set, we define our newly introduced ranking method, the Aggressive Feature Ranking (AFR), as: INLINEFORM0
In Figure FIGREF3 we illustrate the performance of all four methods on our industry prediction task on the development data. Note that for each method, we provide both the micro-accuracy and the macro-accuracy (average per-class accuracy). The Majority and All Words methods apply to all the features; therefore, they are represented as a straight line in the figure. The IGR and AFR methods are applied to varying subsets of the features using a 5% step.
Our experiments demonstrate that the word choices users make in their posts correlate with their industry. The first observation in Figure FIGREF3 is that micro- and macro-accuracy are proportional to each other; as one increases, so does the other. Secondly, the best result on the development set is achieved by using the top 90% of the features with the AFR method. Lastly, the improvements of the IGR and AFR feature selections are not substantially better in comparison to All Words (at most a 5% improvement between All Words and AFR), which suggests that only a few noisy features exist and most of the words play some role in shaping the “language" of an industry.
As a final evaluation, we apply to the test data the classifier found to work best on the development data (AFR feature selection, top 90% of features), obtaining a micro-accuracy of 0.534 and a macro-accuracy of 0.477.
Leveraging User Metadata
Together with the industry information and the most recent postings of each blogger, we also download a number of accompanying profile elements. Using these additional elements, we explore the potential of incorporating users' metadata in our classifiers.
Table TABREF7 shows the different user metadata we consider together with their coverage percentage (not all users provide a value for all of the profile elements). With the exception of the gender field, the remaining metadata elements shown in Table TABREF7 are completed by the users as a freely editable text field. This introduces a considerable amount of noise in the set of possible metadata values. Examples of noise in the occupation field include values such as “Retired”, “I work.”, or “momma” which are not necessarily informative for our industry prediction task.
To examine whether the metadata fields can help in the prediction of a user's industry, we build classifiers using the different metadata elements. For each metadata element that has a textual value, we use all the words in the training set for that field as features. The only two exceptions are the state field, which is encoded as one feature that can take one out of 50 different values representing the 50 U.S. states; and the gender field, which is encoded as a feature with a distinct value for each user gender option: undefined, male, or female.
As shown in Table TABREF9 , we build four different classifiers using the multinomial NB algorithm: Occu (which uses the words found in the occupation profile element), Intro (introduction), Inter (interests), and Gloc (combined gender, city, state).
In general, all the metadata classifiers perform better than our majority baseline (micro-accuracy of 18.88%). For the Gloc classifier, this result is in alignment with previous studies BIBREF24. However, the only metadata classifier that outperforms the content classifier is the Occu classifier, which, despite missing and noisy occupation values, exceeds the content classifier's performance by an absolute 3.2%.
To investigate the promise of combining the five different classifiers we have built so far, we calculate their inter-prediction agreement using Fleiss's Kappa BIBREF25 , as well as the lower prediction bounds using the double fault measure BIBREF26 . The Kappa values, presented in the lower left side of Table TABREF10 , express the classification agreement for categorical items, in this case the users' industry. Lower values, especially values below 30%, mean smaller agreement. Since all five classifiers have better-than-baseline accuracy, this low agreement suggests that their predictions could potentially be combined to achieve a better accumulated result.
Moreover, the double fault measure values, which are presented in the top-right hand side of Table TABREF10 , express the proportion of test cases for which both of the two respective classifiers make false predictions, essentially providing the lowest error bound for the pairwise ensemble classifier performance. The lower those numbers are, the greater the accuracy potential of any meta-classification scheme that combines those classifiers. Once again, the low double fault measure values suggest potential gain from a combination of the base classifiers into an ensemble of models.
After establishing the promise of creating an ensemble of classifiers, we implement two meta-classification approaches. First, we combine our classifiers using features concatenation (or early fusion). Starting with our content-based classifier (Text), we successively add the features derived from each metadata element. The results, both micro- and macro-accuracy, are presented in Table TABREF12 . Even though all these four feature concatenation ensembles outperform the content-based classifier in the development set, they fail to outperform the Occu classifier.
Second, we explore the potential of using stacked generalization (or late fusion) BIBREF27 . The base classifiers, referred to as L0 classifiers, are trained on different folds of the training set and used to predict the class of the remaining instances. Those predictions are then used together with the true label of the training instances to train a second classifier, referred to as the L1 classifier: this L1 is used to produce the final prediction on both the development data and the test data. Traditionally, stacking uses different machine learning algorithms on the same training data. However in our case, we use the same algorithm (multinomial NB) on heterogeneous data (i.e., different types of data such as content, occupation, introduction, interests, gender, city and state) in order to exploit all available sources of information.
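The sketch below illustrates one way to implement this stacking scheme with scikit-learn: each L0 classifier sees a different textual view of the user, and its out-of-fold predictions on the training set become the features of the L1 classifier. Using predicted class probabilities (rather than hard labels) and the specific fold count are assumptions made for illustration.

```python
# Illustrative stacking (late fusion) over heterogeneous views, all with
# multinomial Naive Bayes; not the authors' exact training procedure.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

def stack_predict(views_train, y_train, views_test, n_folds=5):
    """views_*: dict mapping a view name (e.g. 'text', 'occupation') to raw strings."""
    l0 = {name: make_pipeline(CountVectorizer(), MultinomialNB())
          for name in views_train}
    # out-of-fold class-probability predictions become L1 features
    train_meta = np.hstack([
        cross_val_predict(l0[name], views_train[name], y_train,
                          cv=n_folds, method="predict_proba")
        for name in views_train])
    for name, clf in l0.items():             # refit each L0 model on the full train set
        clf.fit(views_train[name], y_train)
    test_meta = np.hstack([l0[name].predict_proba(views_test[name])
                           for name in views_train])
    l1 = MultinomialNB().fit(train_meta, y_train)
    return l1.predict(test_meta)
```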
The ensemble learning results on the development set are shown in Table TABREF12. We notice a constant improvement for both metrics when adding more classifiers to our ensemble, except for the Gloc classifier, which slightly reduces the performance. The best result is achieved using an ensemble of the Text, Occu, Intro, and Inter L0 classifiers; the respective performance on the test set is a micro-accuracy of 0.643 and a macro-accuracy of 0.564. Finally, we present in Figure FIGREF11 the prediction accuracy of the final classifier for each of the different industries in our test dataset. Evidently, some industries are easier to predict than others. For example, while the Real Estate and Religion industries achieve accuracy figures above 80%, other industries, such as the Banking industry, are predicted correctly less than 17% of the time. Anecdotal evidence drawn from the examination of the confusion matrix does not encourage any strong association of the Banking class with any other. The misclassifications are roughly uniform across all other classes, suggesting that users in the Banking industry use language in a non-distinguishing way.
Qualitative Analysis
In this section, we provide a qualitative analysis of the language of the different industries.
Top-Ranked Words
To conduct a qualitative exploration of which words indicate the industry of a user, Table TABREF14 shows the three top-ranking content words for the different industries using the AFR method.
Not surprisingly, the top ranked words align well with what we would intuitively expect for each industry. Even though most of these words are potentially used by many users regardless of their industry in our dataset, they are still distinguished by the AFR method because of the different frequencies of these words in the text of each industry.
Industry-specific Word Similarities
Next, we examine how the meaning of a word is shaped by the context in which it is uttered. In particular, we qualitatively investigate how the speakers' industry affects meaning by learning vector-space representations of words that take into account such contextual information. To achieve this, we apply the contextualized word embeddings proposed by Bamman et al. Bamman14, which are based on an extension of the “skip-gram" language model BIBREF28 .
In addition to learning a global representation for each word, these contextualized embeddings compute one deviation from the common word embedding representation for each contextual variable, in this case, an industry option. These deviations capture the terms' meaning variations (shifts in the INLINEFORM0 -dimensional space of the representations, where INLINEFORM1 in our experiments) in the text of the different industries, however all the embeddings are in the same vector space to allow for comparisons to one another.
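Assuming, hypothetically, that a global embedding matrix and one deviation matrix per industry have already been learned, the industry-conditioned representation of a word and its nearest neighbors could be queried as sketched below; the additive composition follows Bamman et al., while the function and variable names are illustrative.

```python
# Illustrative query over industry-conditioned embeddings (base + per-industry deviation).
import numpy as np

def industry_vector(word, industry, base, deviations, vocab_index):
    """Base embedding plus the deviation learned for the given industry."""
    i = vocab_index[word]
    return base[i] + deviations[industry][i]

def most_similar(query_vec, industry, base, deviations, vocab, topn=3):
    """Cosine-similarity nearest neighbors in the industry-conditioned space."""
    mat = base + deviations[industry]
    sims = mat @ query_vec / (np.linalg.norm(mat, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    return [vocab[j] for j in np.argsort(-sims)[:topn]]
```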
Using the word representations learned for each industry, we present in Table TABREF16 the terms in the Technology and the Tourism industries that have the highest cosine similarity with a job-related word, customers. Similarly, Table TABREF17 shows the words in the Environment and the Tourism industries that are closest in meaning to a general interest word, food. More examples are given in the Appendix SECREF8 .
The terms that rank highest in each industry are noticeably different. For example, as seen in Table TABREF17 , while food in the Environment industry is similar to nutritionally and locally, in the Tourism industry the same word relates more to terms such as delicious and pastries. These results not only emphasize the existing differences in how people in different industries perceive certain terms, but they also demonstrate that those differences can effectively be captured in the resulting word embeddings.
Emotional Orientation per Industry and Gender
As a final analysis, we explore how words that are emotionally charged relate to different industries. To quantify the emotional orientation of a text, we use the Positive Emotion and Negative Emotion categories in the Linguistic Inquiry and Word Count (LIWC) dictionary BIBREF29 . The LIWC dictionary contains lists of words that have been shown to correlate with the psychological states of people that use them; for example, the Positive Emotion category contains words such as “happy,” “pretty,” and “good.”
For the text of all the users in each industry we measure the frequencies of Positive Emotion and Negative Emotion words normalized by the text's length. Table TABREF20 presents the industries' ranking for both categories of words based on their relative frequencies in the text of each industry.
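The ranking itself is a simple normalized-frequency computation, sketched below with tiny stand-in word lists, since the actual LIWC categories are distributed under license.

```python
# Illustrative per-industry ranking by normalized frequency of emotion words.
POSITIVE = {"happy", "pretty", "good"}     # toy stand-in for LIWC Positive Emotion
NEGATIVE = {"sad", "bad", "awful"}         # toy stand-in for LIWC Negative Emotion

def emotion_rate(tokens, lexicon):
    return sum(tok in lexicon for tok in tokens) / max(len(tokens), 1)

def rank_industries(industry_tokens, lexicon):
    rates = {ind: emotion_rate(toks, lexicon) for ind, toks in industry_tokens.items()}
    return sorted(rates, key=rates.get, reverse=True)

toy = {"Fashion": "pretty pretty good day".split(),
       "Automotive": "bad traffic good engine".split()}
print(rank_industries(toy, POSITIVE))   # ['Fashion', 'Automotive']
```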
We further perform a per-gender breakdown, where we once again calculate the proportion of emotionally charged words in each industry, but separately for each gender. We find that the industry rankings of the relative frequencies of emotionally charged words for the two genders are statistically significantly correlated, which suggests that, regardless of their gender, users use positive (or negative) words with a relative frequency that correlates with their industry. (In other words, even if, e.g., Fashion has a larger number of women users, both men and women working in Fashion will tend to use more positive words than the corresponding gender in another industry with a larger number of men users, such as Automotive.)
Finally, motivated by previous findings of correlations between job satisfaction and gender dominance in the workplace BIBREF30 , we explore the relationship between the usage of Positive Emotion and Negative Emotion words and the gender dominance in an industry. Although we find that there are substantial gender imbalances in each industry (Appendix SECREF9 ), we did not find any statistically significant correlation between the gender dominance ratio in the different industries and the usage of positive (or negative) emotional words in either gender in our dataset.
Conclusion
In this paper, we examined the task of predicting a social media user's industry. We introduced an annotated dataset of over 20,000 blog users and applied a content-based classifier in conjunction with two feature selection methods for an overall accuracy of up to 0.534, which represents a large improvement over the majority class baseline of 0.188.
We also demonstrated how the user metadata can be incorporated in our classifiers. Although concatenation of features drawn both from blog content and profile elements did not yield any clear improvements over the best individual classifiers, we found that stacking improves the prediction accuracy to an overall accuracy of 0.643, as measured on our test dataset. A more in-depth analysis showed that not all industries are equally easy to predict: while industries such as Real Estate and Religion are clearly distinguishable with accuracy figures over 0.80, others such as Banking are much harder to predict.
Finally, we presented a qualitative analysis to provide some insights into the language of different industries, which highlighted differences in the top-ranked words in each industry, word semantic similarities, and the relative frequency of emotionally charged words.
Acknowledgments
This material is based in part upon work supported by the National Science Foundation (#1344257) and by the John Templeton Foundation (#48503). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the John Templeton Foundation.
Additional Examples of Word Similarities
|
What model did they use for their system?
|
AllWords model by counting the frequencies of all the remaining words and training a multinomial Naive Bayes classifier
| 4,177
|
qasper
|
8k
|
Introduction and related work
In recent years there has been increasing interest in the issue of disinformation spreading on online social media. Global concern over false (or "fake") news as a threat to modern democracies has been raised frequently since the 2016 US Presidential elections, in correspondence with events of political relevance, where the proliferation of manipulated and low-credibility content attempts to drive and influence people's opinions BIBREF0BIBREF1BIBREF2BIBREF3.
Researchers have highlighted several drivers for the diffusion of such malicious phenomenon, which include human factors (confirmation bias BIBREF4, naive realism BIBREF5), algorithmic biases (filter bubble effect BIBREF0), the presence of deceptive agents on social platforms (bots and trolls BIBREF6) and, lastly, the formation of echo chambers BIBREF7 where people polarize their opinions as they are insulated from contrary perspectives.
The problem of automatically detecting online disinformation news has been typically formulated as a binary classification task (i.e. credible vs non-credible articles), and tackled with a variety of different techniques, based on traditional machine learning and/or deep learning, which mainly differ in the dataset and the features they employ to perform the classification. We may distinguish three approaches: those built on content-based features, those based on features extracted from the social context, and those which combine both aspects. A few main challenges hinder the task, namely the impossibility to manually verify all news items, the lack of gold-standard datasets and the adversarial setting in which malicious content is created BIBREF3BIBREF6.
In this work we follow the direction pointed out in a few recent contributions on the diffusion of disinformation compared to traditional and objective information. These have shown that false news spread faster and deeper than true news BIBREF8, and that social bots and echo chambers play an important role in the diffusion of malicious content BIBREF6, BIBREF7. Therefore we focus on the analysis of spreading patterns which naturally arise on social platforms as a consequence of multiple interactions between users, due to the increasing trend in online sharing of news BIBREF0.
A deep learning framework for detection of fake news cascades is provided in BIBREF9, where the authors refer to BIBREF8 in order to collect Twitter cascades pertaining to verified false and true rumors. They employ geometric deep learning, a novel paradigm for graph-based structures, to classify cascades based on four categories of features, such as user profile, user activity, network and spreading, and content. They also observe that a few hours of propagation are sufficient to distinguish false news from true news with high accuracy. Diffusion cascades on Weibo and Twitter are analyzed in BIBREF10, where authors focus on highlighting different topological properties, such as the number of hops from the source or the heterogeneity of the network, to show that fake news shape diffusion networks which are highly different from credible news, even at early stages of propagation.
In this work, we consider the results of BIBREF11 as our baseline. The authors use off-the-shelf machine learning classifiers to accurately classify news articles leveraging Twitter diffusion networks. To this aim, they consider a set of basic features which can be qualitatively interpreted w.r.t. the social behavior of users sharing credible vs non-credible information. Their methodology is overall in accordance with BIBREF12, where authors successfully detect Twitter astroturfing content, i.e. political campaigns disguised as spontaneous grassroots, with a machine learning framework based on network features.
In this paper, we propose a classification framework based on a multi-layer formulation of Twitter diffusion networks. For each article we disentangle different social interactions on Twitter, namely tweets, retweets, mentions, replies and quotes, to accordingly build a diffusion network composed of multiple layers (one for each type of interaction), and we compute structural features separately for each layer. We pick a set of global network properties from the network science toolbox which can be qualitatively explained in terms of social dimensions and allow us to encode different networks with a tuple of features. These include traditional indicators, e.g. network density, number of strong/weak connected components and diameter, and more elaborated ones such as main K-core number BIBREF13 and structural virality BIBREF14. Our main research question is whether the use of a multi-layer, disentangled network yields a significant advance in terms of classification accuracy over a conventional single-layer diffusion network. Additionally, we are interested in understanding which of the above features, and in which layer, are most effective in the classification task.
We perform classification experiments with an off-the-shelf Logistic Regression model on two different datasets of mainstream and disinformation news shared on Twitter respectively in the United States and in Italy during 2019. In the former case we also account for political biases inherent to different news sources, referring to the procedure proposed in BIBREF2 to label different outlets. Overall we show that we are able to classify credible vs non-credible diffusion networks (and consequently news articles) with high accuracy (AUROC up to 94%), even when accounting for the political bias of sources (and training only on left-biased or right-biased articles). We observe that the layer of mentions alone conveys useful information for the classification, denoting a different usage of this functionality when sharing news belonging to the two news domains. We also show that most discriminative features, which are relative to the breadth and depth of largest cascades in different layers, are the same across the two countries.
The outline of this paper is the following: we first formulate the problem and describe data collection, network representation and structural properties employed for the classification; then we provide experimental results–classification performances, layer and feature importance analyses and a temporal classification evaluation–and finally we draw conclusions and future directions.
Methodology ::: Disinformation and mainstream news
In this work we formulate our classification problem as follows: given two classes of news articles, respectively $D$ (disinformation) and $M$ (mainstream), a set of news articles $A_i$ and associated class labels $C_i \in \lbrace D,M\rbrace $, and a set of tweets $\Pi _i=\lbrace T_i^1, T_i^2, ...\rbrace $ each of which contains a Uniform Resource Locator (URL) pointing explicitly to article $A_i$, predict the class $C_i$ of each article $A_i$. There is considerable debate and controversy over a proper taxonomy of malicious and deceptive information BIBREF1, BIBREF2, BIBREF15, BIBREF16, BIBREF17, BIBREF3, BIBREF11. In this work we prefer the term disinformation to the more specific fake news to refer to a variety of misleading and harmful information. Therefore, we follow a source-based approach, a consolidated strategy also adopted by BIBREF6, BIBREF16, BIBREF2, BIBREF1, in order to obtain relevant data for our analysis. We collected:
Disinformation articles, published by websites which are well-known for producing low-credibility content, false and misleading news reports as well as extreme propaganda and hoaxes and flagged as such by reputable journalists and fact-checkers;
Mainstream news, referring to traditional news outlets which deliver factual and credible information.
We believe that this is currently the most reliable classification approach, but it entails obvious limitations, as disinformation outlets may also publish true stories and likewise misinformation is sometimes reported on mainstream media. Also, given the choice of news sources, we cannot test whether our methodology is able to distinguish disinformation from factual but non-mainstream news published on niche, non-disinformation outlets.
Methodology ::: US dataset
We collected tweets associated to a dozen US mainstream news websites, i.e. the most trusted sources described in BIBREF18, with the Streaming API, and we referred to Hoaxy API BIBREF16 for what concerns tweets containing links to 100+ US disinformation outlets. We filtered out articles associated to less than 50 tweets. The resulting dataset contains overall $\sim $1.7 million tweets for mainstream news, collected in a period of three weeks (February 25th, 2019-March 18th, 2019), which are associated to 6,978 news articles, and $\sim $1.6 million tweets for disinformation, collected in a period of three months (January 1st, 2019-March 18th, 2019) for the sake of balancing the two classes, which are associated to 5,775 distinct articles. Diffusion censoring effects BIBREF14 were correctly taken into account in both collection procedures. We provide in Figure FIGREF4 the distribution of articles by source and political bias for both news domains.
As it is reported that conservatives and liberals exhibit different behaviors on online social platforms BIBREF19BIBREF20BIBREF21, we further assigned a political bias label to different US outlets (and therefore news articles) following the procedure described in BIBREF2. In order to assess the robustness of our method, we performed classification experiments by training only on left-biased (or right-biased) outlets of both disinformation and mainstream domains and testing on the entire set of sources, as well as excluding particular sources that outweigh the others in terms of samples to avoid over-fitting.
Methodology ::: Italian dataset
For what concerns the Italian scenario we first collected tweets with the Streaming API in a 3-week period (April 19th, 2019-May 5th, 2019), filtering those containing URLs pointing to Italian official newspapers websites as described in BIBREF22; these correspond to the list provided by the association for the verification of newspaper circulation in Italy (Accertamenti Diffusione Stampa). We instead referred to the dataset provided by BIBREF23 to obtain a set of tweets, collected continuously since January 2019 using the same Twitter endpoint, which contain URLs to 60+ Italian disinformation websites. In order to obtain balanced classes, we retained disinformation data collected over a longer period than mainstream news (April 5th, 2019-May 5th, 2019). In both cases we filtered out articles with less than 50 tweets; overall this dataset contains $\sim $160k mainstream tweets, corresponding to 227 news articles, and $\sim $100k disinformation tweets, corresponding to 237 news articles. We provide in Figure FIGREF5 the distribution of articles according to distinct sources for both news domains. As in the US dataset, we took into account censoring effects BIBREF14 by excluding tweets published before (left-censoring) or after two weeks (right-censoring) from the beginning of the collection process.
The different volumes of news shared on Twitter in the two countries are due both to the different population sizes of the US and Italy (320 vs 60 million) and to the different usage of the Twitter platform (and social media in general) for news consumption BIBREF24. Both datasets analyzed in this work are available from the authors on request.
A crucial aspect in our approach is the capability of fully capturing sharing cascades on Twitter associated to news articles. It has been reported BIBREF25 that the Twitter streaming endpoint filters out tweets matching a given query if they exceed 1% of the global daily volume of shared tweets, which nowadays is approximately $5\cdot 10^8$; however, as we always collected less than $10^6$ tweets per day, we did not incur this issue and we thus gathered 100% of tweets matching our query.
Methodology ::: Building diffusion networks
We built Twitter diffusion networks following an approach widely adopted in the literature BIBREF6, BIBREF17, BIBREF2. We remark that there is an unavoidable limitation in the Twitter Streaming API, which does not allow us to retrieve true re-tweeting cascades because re-tweets always point to the original source and not to intermediate re-tweeting users BIBREF8, BIBREF14; thus we adopt the only viable approach based on Twitter's publicly available data. Besides, by disentangling different interactions with multiple layers we potentially reduce the impact of this limitation on the global network properties compared to the single-layer approach used in our baseline.
Using the notation described in BIBREF26, we employ a multi-layer representation for Twitter diffusion networks. Sociologists indeed recognized decades ago that it is crucial to study social systems by constructing multiple social networks where different types of ties among the same individuals are used BIBREF27. Therefore, for each news article we built a multi-layer diffusion network composed of four different layers, one for each type of social interaction on the Twitter platform, namely retweet (RT), reply (R), quote (Q) and mention (M), as shown in Figure FIGREF11. These networks are not necessarily node-aligned, i.e. users might be missing in some layers. We do not insert "dummy" nodes to represent all users as it would have a severe impact on the global network properties (e.g. number of weakly connected components). Alternatively one may look at each multi-layer diffusion network as an ensemble of individual graphs BIBREF26; since global network properties are computed separately for each layer, they are not affected by the presence of any inter-layer edges.
In our multi-layer representation, each layer is a directed graph where we add edges and nodes for each tweet of the layer type, e.g. for the RT layer: whenever user $a$ retweets account $b$ we first add nodes $a$ and $b$ if not already present in the RT layer, then we build an edge that goes from $b$ to $a$ if it does not exist, or we increment its weight by 1. Similarly for the other layers: for the R layer edges go from user $a$ (who replies) to user $b$, for the Q layer edges go from user $b$ (who is quoted by) to user $a$ and for the M layer edges go from user $a$ (who mentions) to user $b$.
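A minimal sketch of this construction with the networkx package is given below. This is our own illustration rather than the authors' code, and the tweet field names (user, retweeted_user, replied_user, quoted_user, mentions) are hypothetical placeholders for whatever the collection pipeline provides.

```python
import networkx as nx

def add_interaction(layer: nx.DiGraph, src: str, dst: str) -> None:
    """Add (or reinforce) a weighted edge src -> dst, creating nodes as needed."""
    if layer.has_edge(src, dst):
        layer[src][dst]["weight"] += 1
    else:
        layer.add_edge(src, dst, weight=1)

def build_layers(tweets):
    """Build the four-layer diffusion network of one news article."""
    layers = {name: nx.DiGraph() for name in ("RT", "R", "Q", "M")}
    for tweet in tweets:
        a = tweet["user"]                       # author of this tweet
        if tweet.get("retweeted_user"):         # a retweets b   => edge b -> a
            add_interaction(layers["RT"], tweet["retweeted_user"], a)
        if tweet.get("replied_user"):           # a replies to b => edge a -> b
            add_interaction(layers["R"], a, tweet["replied_user"])
        if tweet.get("quoted_user"):            # a quotes b     => edge b -> a
            add_interaction(layers["Q"], tweet["quoted_user"], a)
        for b in tweet.get("mentions", []):     # a mentions b   => edge a -> b
            add_interaction(layers["M"], a, b)
    return layers
```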
Note that, by construction, our layers do not include isolated nodes; these would correspond to "pure tweets", i.e. tweets which have not originated any interaction with other users. Such tweets are nevertheless present in our dataset, and their number is exploited for classification, as described below.
Methodology ::: Global network properties
We used a set of global network indicators which allow us to encode each network layer by a tuple of features. Then we simply concatenated the tuples so as to represent each multi-layer network with a single feature vector. We used the following global network properties:
Number of Strongly Connected Components (SCC): a Strongly Connected Component of a directed graph is a maximal (sub)graph where for each pair of vertices $u,v$ there is a path in each direction ($u\rightarrow v$, $v\rightarrow u$).
Size of the Largest Strongly Connected Component (LSCC): the number of nodes in the largest strongly connected component of a given graph.
Number of Weakly Connected Components (WCC): a Weakly Connected Component of a directed graph is a maximal (sub)graph where for each pair of vertices $(u, v)$ there is a path $u \leftrightarrow v$ ignoring edge directions.
Size of the Largest Weakly Connected Component (LWCC): the number of nodes in the largest weakly connected component of a given graph.
Diameter of the Largest Weakly Connected Component (DWCC): the largest distance (length of the shortest path) between two nodes in the (undirected version of) largest weakly connected component of a graph.
Average Clustering Coefficient (CC): the average of the local clustering coefficients of all nodes in a graph; the local clustering coefficient of a node quantifies how close its neighbours are to being a complete graph (or a clique). It is computed according to BIBREF28.
Main K-core Number (KC): a K-core BIBREF13 of a graph is a maximal sub-graph that contains nodes of internal degree $k$ or more; the main K-core number is the highest value of $k$ (in directed graphs the total degree is considered).
Density (d): the density for directed graphs is $d=\frac{|E|}{|V|(|V|-1)}$, where $|E|$ is the number of edges and $|V|$ is the number of vertices in the graph; the density equals 0 for a graph without edges and 1 for a complete graph.
Structural virality of the largest weakly connected component (SV): this measure is defined in BIBREF14 as the average distance between all pairs of nodes in a cascade tree or, equivalently, as the average depth of nodes, averaged over all nodes in turn acting as a root; for $|V| > 1$ vertices, $SV=\frac{1}{|V|(|V|-1)}\sum _i\sum _j d_{ij}$ where $d_{ij}$ denotes the length of the shortest path between nodes $i$ and $j$. This is equivalent to computing the Wiener index BIBREF29 of the graph and multiplying it by a factor $\frac{1}{|V|(|V|-1)}$. In our case we computed it for the undirected equivalent graph of the largest weakly connected component, setting it to 0 whenever $|V|=1$.
We used the networkx Python package BIBREF30 to compute all features. Whenever a layer is empty, we simply set all its features to 0. In addition to computing the above nine features for each layer, we added two indicators for encoding information about pure tweets, namely the number T of pure tweets (containing URLs to a given news article) and the number U of unique users authoring those tweets. Therefore, a single diffusion network is represented by a vector with $9\cdot 4+2=38$ entries.
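The following sketch shows how the per-layer indicators and the final 38-dimensional vector can be computed with networkx; it is a simplified illustration (not the authors' exact implementation), assuming the layers were built as directed graphs as above, and dropping self-loops before computing the K-core since networkx rejects them.

```python
import networkx as nx

def layer_features(g: nx.DiGraph):
    """Return the nine global properties of one layer (zeros if the layer is empty)."""
    if g.number_of_nodes() == 0:
        return [0.0] * 9
    scc = nx.number_strongly_connected_components(g)
    lscc = max(len(c) for c in nx.strongly_connected_components(g))
    wcc = nx.number_weakly_connected_components(g)
    largest = max(nx.weakly_connected_components(g), key=len)
    sub = g.subgraph(largest).to_undirected()
    lwcc = sub.number_of_nodes()
    dwcc = nx.diameter(sub)
    cc = nx.average_clustering(g)
    h = nx.DiGraph(g)
    h.remove_edges_from(nx.selfloop_edges(h))      # core_number forbids self-loops
    kc = max(nx.core_number(h).values())
    d = nx.density(g)
    # average shortest path length over the largest WCC matches the SV definition above
    sv = nx.average_shortest_path_length(sub) if lwcc > 1 else 0.0
    return [scc, lscc, wcc, lwcc, dwcc, cc, kc, d, sv]

def network_vector(layers, n_pure_tweets, n_unique_users):
    """Concatenate the four layer tuples plus the two pure-tweet indicators (38 values)."""
    feats = []
    for name in ("RT", "R", "Q", "M"):
        feats.extend(layer_features(layers[name]))
    return feats + [n_pure_tweets, n_unique_users]
```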
Methodology ::: Interpretation of network features and layers
The aforementioned network properties can be qualitatively explained in terms of social footprints as follows: SCC correlates with the size of the diffusion network, as the propagation of news occurs in a broadcast manner most of the time, i.e. re-tweets dominate over other interactions, while LSCC allows us to distinguish cases where such mono-directionality is somehow broken. WCC equals (approximately) the number of distinct diffusion cascades pertaining to each news article, with exceptions corresponding to those cases where some cascades merge together via Twitter interactions such as mentions, quotes and replies, and accordingly LWCC and DWCC equal the size and the depth of the largest cascade. CC corresponds to the level of connectedness of neighboring users in a given diffusion network whereas KC identifies the set of most influential users in a network and describes the efficiency of information spreading BIBREF17. Finally, d describes the proportion of potential connections between users which are actually activated and SV indicates whether a news item has gained popularity with a single and large broadcast or in a more viral fashion through multiple generations.
For what concerns different Twitter actions, users primarily interact with each other using retweets and mentions BIBREF20.
The former are the main engagement activity and act as a form of endorsement, allowing users to rebroadcast content generated by other users BIBREF31. Besides, when node B retweets node A we have an implicit confirmation that information from A appeared in B's Twitter feed BIBREF12. Quotes are simply a special case of retweets with comments.
Mentions usually include personal conversations as they allow someone to address a specific user or to refer to an individual in the third person; in the first case they are located at the beginning of a tweet and they are known as replies, otherwise they are put in the body of a tweet BIBREF20. The network of mentions is usually seen as a stronger version of interactions between Twitter users, compared to the traditional graph of follower/following relationships BIBREF32.
Experiments ::: Setup
We performed classification experiments using a basic off-the-shelf classifier, namely Logistic Regression (LR) with L2 penalty; this also allows us to compare results with our baseline. We applied a standardization of the features and we used the default configuration for parameters as described in scikit-learn package BIBREF33. We also tested other classifiers (such as K-Nearest Neighbors, Support Vector Machines and Random Forest) but we omit results as they give comparable performances. We remark that our goal is to show that a very simple machine learning framework, with no parameter tuning and optimization, allows for accurate results with our network-based approach.
We used the following evaluation metrics to assess the performances of different classifiers (TP=true positives, FP=false positives, FN=false negatives):
Precision = $\frac{TP}{TP+FP}$, the ability of a classifier not to label as positive a negative sample.
Recall = $\frac{TP}{TP+FN}$, the ability of a classifier to retrieve all positive samples.
F1-score = $2 \frac{\mbox{Precision} \cdot \mbox{Recall}}{\mbox{Precision} + \mbox{Recall}}$, the harmonic average of Precision and Recall.
Area Under the Receiver Operating Characteristic curve (AUROC); the Receiver Operating Characteristic (ROC) curve BIBREF34, which plots the TP rate versus the FP rate, shows the ability of a classifier to discriminate positive samples from negative ones as its threshold is varied; the AUROC value is in the range $[0, 1]$, with the random baseline classifier holding AUROC$=0.5$ and the ideal perfect classifier AUROC$=1$; thus larger AUROC values (and steeper ROCs) correspond to better classifiers.
In particular we computed the so-called macro average, i.e. the simple unweighted mean, of these metrics evaluated considering both labels (disinformation and mainstream). We employed stratified shuffle split cross validation (with 10 folds) to evaluate performances.
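A sketch of this evaluation protocol is shown below. It assumes binary labels (0 = mainstream, 1 = disinformation), a recent version of scikit-learn, and a 25% test split per fold; the split size is our assumption rather than a value stated above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support, roc_auc_score
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def evaluate(X: np.ndarray, y: np.ndarray, n_splits: int = 10):
    """Macro Precision, Recall, F1 and AUROC averaged over stratified shuffle splits."""
    model = make_pipeline(StandardScaler(), LogisticRegression(penalty="l2"))
    cv = StratifiedShuffleSplit(n_splits=n_splits, test_size=0.25, random_state=42)
    scores = []
    for train_idx, test_idx in cv.split(X, y):
        model.fit(X[train_idx], y[train_idx])
        proba = model.predict_proba(X[test_idx])[:, 1]
        pred = model.predict(X[test_idx])
        p, r, f1, _ = precision_recall_fscore_support(
            y[test_idx], pred, average="macro", zero_division=0)
        scores.append((p, r, f1, roc_auc_score(y[test_idx], proba)))
    return np.mean(scores, axis=0)
```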
Finally, we partitioned networks according to the total number of unique users involved in the sharing, i.e. the number of nodes in the aggregated network represented with a single-layer representation considering together all layers and also pure tweets. A breakdown of both datasets according to size class (and political biases for the US scenario) is provided in Table 1 and Table 2.
Experiments ::: Classification performances
In Table 3 we first provide classification performances on the US dataset for the LR classifier evaluated on the size classes described in Table 1. We can observe that in all instances our methodology performs better than a random classifier (50% AUROC), with AUROC values above 85% in all cases.
For what concerns political biases, as the classes of mainstream and disinformation networks are not balanced (e.g., 1,292 mainstream and 4,149 disinformation networks with right bias) we employ a Balanced Random Forest with default parameters (as provided in imblearn Python package BIBREF35). In order to test the robustness of our methodology, we trained only on left-biased networks or right-biased networks and tested on the entire set of sources (relative to the US dataset); we provide a comparison of AUROC values for both biases in Figure 4. We can notice that our multi-layer approach still entails significant results, thus showing that it can accurately distinguish mainstream news from disinformation regardless of the political bias. We further corroborated this result with additional classification experiments, that show similar performances, in which we excluded from the training/test set two specific sources (one at a time and both at the same time) that outweigh the others in terms of data samples–respectively "breitbart.com" for right-biased sources and "politicususa.com" for left-biased ones.
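The bias-robustness experiment can be sketched as follows; here `bias` is an array with a left/right label per network, the Balanced Random Forest comes from the imblearn package with default parameters as stated above, and evaluation on the entire set of sources follows the description in the text. This is an illustration, not the exact experimental script.

```python
import numpy as np
from imblearn.ensemble import BalancedRandomForestClassifier
from sklearn.metrics import roc_auc_score

def train_on_bias(X, y, bias, train_bias="left"):
    """Train only on networks with a given political bias, test on all sources."""
    train_mask = np.asarray(bias) == train_bias
    clf = BalancedRandomForestClassifier(random_state=42)
    clf.fit(X[train_mask], y[train_mask])
    proba = clf.predict_proba(X)[:, 1]          # evaluated on the entire set
    return roc_auc_score(y, proba)
```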
We performed classification experiments on the Italian dataset using the LR classifier and different size classes (we excluded $[1000, +\infty )$ which is empty); we show results for different evaluation metrics in Table 3. We can see that despite the limited number of samples (one order of magnitude smaller than the US dataset) the performances are overall in accordance with the US scenario. As shown in Table 4, we obtain results which are much better than our baseline in all size classes:
In the US dataset our multi-layer methodology performs much better in all size classes except for large networks ($[1000, +\infty )$ size class), reaching up to 13% improvement on smaller networks ($[0, 100)$ size class);
In the IT dataset our multi-layer methodology outperforms the baseline in all size classes, with the maximum performance gain (20%) on medium networks ($[100, 1000)$ size class); the baseline generally reaches bad performances compared to the US scenario.
Overall, our performances are comparable with those achieved by two state-of-the-art deep learning models for "fake news" detection BIBREF9, BIBREF36.
Experiments ::: Layer importance analysis
In order to understand the impact of each layer on the performances of classifiers, we performed additional experiments considering separately each layer (we ignored T and U features relative to pure tweets). In Table 5 we show metrics for each layer and all size classes, computed with a 10-fold stratified shuffle split cross validation, evaluated on the US dataset; in Figure 5 we show AUROC values for each layer compared with the general multi-layer approach. We can notice that both Q and M layers alone capture adequately the discrepancies of the two distinct news domains in the United States as they obtain good results with AUROC values in the range 75%-86%; these are comparable with those of the multi-layer approach which, nevertheless, outperforms them across all size classes.
We obtained similar performances for the Italian dataset, as the M layer obtains comparable performances w.r.t. the multi-layer approach with AUROC values in the range 72%-82%. We do not show these results for the sake of conciseness.
Experiments ::: Feature importance analysis
We further investigated the importance of each feature by performing a $\chi ^2$ test, with 10-fold stratified shuffle split cross validation, considering the entire range of network sizes $[0, +\infty )$. We show the Top-5 most discriminative features for each country in Table 6.
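A possible implementation of this analysis is sketched below; it assumes non-negative feature values (a requirement of the chi-squared test, satisfied by all indicators above) and a list feature_names with the 38 layer/property combinations (e.g. "M_LWCC", a hypothetical naming scheme).

```python
import numpy as np
from sklearn.feature_selection import chi2
from sklearn.model_selection import StratifiedShuffleSplit

def top_features(X, y, feature_names, k=5, n_splits=10):
    """Rank features by their average chi-squared score over the CV folds."""
    cv = StratifiedShuffleSplit(n_splits=n_splits, test_size=0.25, random_state=42)
    scores = np.zeros(X.shape[1])
    for train_idx, _ in cv.split(X, y):
        fold_scores, _ = chi2(X[train_idx], y[train_idx])
        scores += np.nan_to_num(fold_scores)
    ranking = np.argsort(scores)[::-1][:k]
    return [(feature_names[i], scores[i] / n_splits) for i in ranking]
```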
We can notice the exact same set of features (with different relative orderings in the Top-3) in both countries; these correspond to two global network properties, namely LWCC, which indicates the size of the largest cascade in the layer, and SCC, which correlates with the size of the network, associated to the same set of layers (Quotes, Retweets and Mentions).
We further performed a $\chi ^2$ test to highlight the most discriminative features in the M layer of both countries, which performed equally well in the classification task as previously highlighted; also in this case we focused on the entire range of network sizes $[0, +\infty )$. Interestingly, we discovered exactly the same set of Top-3 features in both countries, namely LWCC, SCC and DWCC (which indicates the depth of the largest cascade in the layer).
An inspection of the distributions of all aforementioned features revealed that disinformation news exhibit on average larger values than mainstream news.
We can qualitatively sum up these results as follows:
Sharing patterns in the two news domains exhibit discrepancies which might be country-independent and due to the content that is being shared.
Interactions in disinformation sharing cascades tend to be broader and deeper than in mainstream news, as widely reported in the literature BIBREF8, BIBREF2, BIBREF7.
Users likely make a different usage of mentions when sharing news belonging to the two domains, consequently shaping different sharing patterns.
Experiments ::: Temporal analysis
Similar to BIBREF9, we carried out additional experiments to answer the following question: how long do we need to observe a news spreading on Twitter in order to accurately classify it as disinformation or mainstream?
With this goal, we built several versions of our original dataset of multi-layer networks by considering in turn the following lifetimes: 1 hour, 6 hours, 12 hours, 1 day, 2 days, 3 days and 7 days; for each case, we computed the global network properties of the corresponding network and evaluated the LR classifier with 10-fold cross validation, separately for each lifetime (and considering always the entire set of networks). We show corresponding AUROC values for both US and IT datasets in Figure 6.
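The lifetime truncation step can be sketched as follows; the created_at field is a hypothetical placeholder for the tweet timestamp, and the truncated tweet list is then passed to the same network-building and feature-extraction functions used for the full networks.

```python
from datetime import timedelta

LIFETIMES = [timedelta(hours=h) for h in (1, 6, 12, 24, 48, 72, 168)]

def truncate_cascade(tweets, lifetime):
    """Keep only the tweets observed within `lifetime` of the first share of the article."""
    t0 = min(t["created_at"] for t in tweets)
    return [t for t in tweets if t["created_at"] - t0 <= lifetime]
```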
We can see that in both countries news diffusion networks can be accurately classified after just a few hours of spreading, with AUROC values which are larger than 80% after only 6 hours of diffusion. These results are very promising and suggest that articles pertaining to the two news domains exhibit discrepancies in their sharing patterns that can be timely exploited in order to rapidly detect misleading items from factual information.
Conclusions
In this work we tackled the problem of the automatic classification of news articles in two domains, namely mainstream and disinformation news, with a language-independent approach which is based solely on the diffusion of news items on Twitter social platform. We disentangled different types of interactions on Twitter to accordingly build a multi-layer representation of news diffusion networks, and we computed a set of global network properties–separately for each layer–in order to encode each network with a tuple of features. Our goal was to investigate whether a multi-layer representation performs better than one layer BIBREF11, and to understand which of the features, observed at given layers, are most effective in the classification task.
Experiments with an off-the-shelf classifier such as Logistic Regression on datasets pertaining to two different media landscapes (US and Italy) yield very accurate classification results (AUROC up to 94%), even when accounting for the different political bias of news sources, which are far better than our baseline BIBREF11 with improvements up to 20%. Classification performances using single layers show that the layer of mentions alone entails better performance w.r.t other layers in both countries.
We also highlighted the most discriminative features across different layers in both countries; the results suggest that differences between the two news domains might be country-independent but rather due only to the typology of content shared, and that disinformation news shape broader and deeper cascades.
Additional experiments involving the temporal evolution of Twitter diffusion networks show that our methodology can accurately classify mainstream and disinformation news after a few hours of propagation on the platform.
Overall, our results prove that the topological features of multi-layer diffusion networks might be effectively exploited to detect online disinformation. We do not deny the presence of deceptive efforts to orchestrate the regular spread of information on social media via content amplification and manipulation BIBREF37, BIBREF38. On the contrary, we postulate that such hidden forces might act to accentuate the discrepancies between the diffusion patterns of disinformation and mainstream news (and thus to make our methodology effective).
In the future we aim to further investigate three directions: (1) employ temporal networks to represent news diffusion and apply classification techniques that take into account the sequential aspect of data (e.g. recurrent neural networks); (2) carry out an extensive comparison of the diffusion of disinformation and mainstream news across countries to investigate more deeply the presence of differences and similarities in sharing patterns; (3) leverage our network-based features in addition to state-of-the-art text-based approaches for "fake news" detection in order to deliver a real-world system to detect misleading and harmful information spreading on social media.
|
How is the political bias of different sources included in the model?
|
By assigning a political bias label to each news article and training only on left-biased or right-biased outlets of both disinformation and mainstream domains
| 4,882
|
qasper
|
8k
|
Introduction
This work is licenced under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/
Deep neural networks have been widely used in text classification and have achieved promising results BIBREF0 , BIBREF1 , BIBREF2 . Most focus on content information and use models such as convolutional neural networks (CNN) BIBREF3 or recursive neural networks BIBREF4 . However, for user-generated posts on social media like Facebook or Twitter, there is more information that should not be ignored. On social media platforms, a user can act either as the author of a post or as a reader who expresses his or her comments about the post.
In this paper, we classify posts taking into account post authorship, likes, topics, and comments. In particular, users and their “likes” hold strong potential for text mining. For example, given a set of posts that are related to a specific topic, a user's likes and dislikes provide clues for stance labeling. From a user point of view, users with positive attitudes toward the issue leave positive comments on the posts with praise or even just the post's content; from a post point of view, positive posts attract users who hold positive stances. We also investigate the influence of topics: different topics are associated with different stance labeling tendencies and word usage. For example we discuss women's rights and unwanted babies on the topic of abortion, but we criticize medicine usage or crime when on the topic of marijuana BIBREF5 . Even for posts on a specific topic like nuclear power, a variety of arguments are raised: green energy, radiation, air pollution, and so on. As for comments, we treat them as additional text information. The arguments in the comments and the commenters (the users who leave the comments) provide hints on the post's content and further facilitate stance classification.
In this paper, we propose the user-topic-comment neural network (UTCNN), a deep learning model that utilizes user, topic, and comment information. We attempt to learn user and topic representations which encode user interactions and topic influences to further enhance text classification, and we also incorporate comment information. We evaluate this model on a post stance classification task on forum-style social media platforms. The contributions of this paper are as follows: 1. We propose UTCNN, a neural network for text in modern social media channels as well as legacy social media, forums, and message boards — anywhere that reveals users, their tastes, as well as their replies to posts. 2. When classifying social media post stances, we leverage users, including authors and likers. User embeddings can be generated even for users who have never posted anything. 3. We incorporate a topic model to automatically assign topics to each post in a single topic dataset. 4. We show that overall, the proposed method achieves the highest performance in all instances, and that all of the information extracted, whether users, topics, or comments, still has its contributions.
Extra-Linguistic Features for Stance Classification
In this paper we aim to use text as well as other features to see how they complement each other in a deep learning model. In the stance classification domain, previous work has shown that text features are limited, suggesting that adding extra-linguistic constraints could improve performance BIBREF6 , BIBREF7 , BIBREF8 . For example, Hasan and Ng as well as Thomas et al. require that posts written by the same author have the same stance BIBREF9 , BIBREF10 . The addition of this constraint yields accuracy improvements of 1–7% for some models and datasets. Hasan and Ng later added user-interaction constraints and ideology constraints BIBREF7 : the former models the relationship among posts in a sequence of replies and the latter models inter-topic relationships, e.g., users who oppose abortion could be conservative and thus are likely to oppose gay rights.
For work focusing on online forum text, since posts are linked through user replies, sequential labeling methods have been used to model relationships between posts. For example, Hasan and Ng use hidden Markov models (HMMs) to model dependent relationships to the preceding post BIBREF9 ; Burfoot et al. use iterative classification to repeatedly generate new estimates based on the current state of knowledge BIBREF11 ; Sridhar et al. use probabilistic soft logic (PSL) to model reply links via collaborative filtering BIBREF12 . In the Facebook dataset we study, we use comments instead of reply links. However, as the ultimate goal in this paper is predicting not comment stance but post stance, we treat comments as extra information for use in predicting post stance.
Deep Learning on Extra-Linguistic Features
In recent years neural network models have been applied to document sentiment classification BIBREF13 , BIBREF4 , BIBREF14 , BIBREF15 , BIBREF2 . Text features can be used in deep networks to capture text semantics or sentiment. For example, Dong et al. use an adaptive layer in a recursive neural network for target-dependent Twitter sentiment analysis, where targets are topics such as windows 7 or taylor swift BIBREF16 , BIBREF17 ; recursive neural tensor networks (RNTNs) utilize sentence parse trees to capture sentence-level sentiment for movie reviews BIBREF4 ; Le and Mikolov predict sentiment by using paragraph vectors to model each paragraph as a continuous representation BIBREF18 . They show that performance can thus be improved by more delicate text models.
Others have suggested using extra-linguistic features to improve the deep learning model. The user-word composition vector model (UWCVM) BIBREF19 is inspired by the possibility that the strength of sentiment words is user-specific; to capture this they add user embeddings in their model. In UPNN, a later extension, they further add a product-word composition as product embeddings, arguing that products can also show different tendencies of being rated or reviewed BIBREF20 . Their addition of user information yielded 2–10% improvements in accuracy as compared to the above-mentioned RNTN and paragraph vector methods. We also seek to inject user information into the neural network model. In comparison to the research of Tang et al. on sentiment classification for product reviews, the difference is two-fold. First, we take into account multiple users (one author and potentially many likers) for one post, whereas only one user (the reviewer) is involved in a review. Second, we add comment information to provide more features for post stance classification. None of these two factors have been considered previously in a deep learning model for text stance classification. Therefore, we propose UTCNN, which generates and utilizes user embeddings for all users — even for those who have not authored any posts — and incorporates comments to further improve performance.
Method
In this section, we first describe CNN-based document composition, which captures user- and topic-dependent document-level semantic representation from word representations. Then we show how to add comment information to construct the user-topic-comment neural network (UTCNN).
User- and Topic-dependent Document Composition
As shown in Figure FIGREF4 , we use a general CNN BIBREF3 and two semantic transformations for document composition . We are given a document with an engaged user INLINEFORM0 , a topic INLINEFORM1 , and its composite INLINEFORM2 words, each word INLINEFORM3 of which is associated with a word embedding INLINEFORM4 where INLINEFORM5 is the vector dimension. For each word embedding INLINEFORM6 , we apply two dot operations as shown in Equation EQREF6 : DISPLAYFORM0
where INLINEFORM0 models the user reading preference for certain semantics, and INLINEFORM1 models the topic semantics; INLINEFORM2 and INLINEFORM3 are the dimensions of transformed user and topic embeddings respectively. We use INLINEFORM4 to model semantically what each user prefers to read and/or write, and use INLINEFORM5 to model the semantics of each topic. The dot operation of INLINEFORM6 and INLINEFORM7 transforms the global representation INLINEFORM8 to a user-dependent representation. Likewise, the dot operation of INLINEFORM9 and INLINEFORM10 transforms INLINEFORM11 to a topic-dependent representation.
After the two dot operations on INLINEFORM0 , we have user-dependent and topic-dependent word vectors INLINEFORM1 and INLINEFORM2 , which are concatenated to form a user- and topic-dependent word vector INLINEFORM3 . Then the transformed word embeddings INLINEFORM4 are used as the CNN input. Here we apply three convolutional layers on the concatenated transformed word embeddings INLINEFORM5 : DISPLAYFORM0
where INLINEFORM0 is the index of words; INLINEFORM1 is a non-linear activation function (we use INLINEFORM2 ); INLINEFORM5 is the convolutional filter with input length INLINEFORM6 and output length INLINEFORM7 , where INLINEFORM8 is the window size of the convolutional operation; and INLINEFORM9 and INLINEFORM10 are the output and bias of the convolution layer INLINEFORM11 , respectively. In our experiments, the three window sizes INLINEFORM12 in the three convolution layers are one, two, and three, encoding unigram, bigram, and trigram semantics accordingly.
After the convolutional layer, we add a maximum pooling layer among convolutional outputs to obtain the unigram, bigram, and trigram n-gram representations. This is succeeded by an average pooling layer for an element-wise average of the three maximized convolution outputs.
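A hedged PyTorch sketch of this document composition is given below; the dimensions, the tanh activation and the number of filters are our assumptions rather than values from the text, and the user and topic matrices are assumed to be looked up per document before calling the module.

```python
import torch
import torch.nn as nn

class DocComposition(nn.Module):
    """User- and topic-dependent CNN document composition (illustrative sketch)."""
    def __init__(self, user_dim=250, topic_dim=250, n_filters=100):
        super().__init__()
        in_dim = user_dim + topic_dim
        self.convs = nn.ModuleList(
            [nn.Conv1d(in_dim, n_filters, kernel_size=k, padding=k - 1)
             for k in (1, 2, 3)])                # unigram, bigram, trigram windows
        self.act = nn.Tanh()

    def forward(self, words, user_matrix, topic_matrix):
        # words: (batch, seq, emb); user_matrix: (batch, user_dim, emb); topic_matrix: (batch, topic_dim, emb)
        xu = torch.einsum("bue,bse->bsu", user_matrix, words)   # user-dependent word vectors
        xt = torch.einsum("bte,bse->bst", topic_matrix, words)  # topic-dependent word vectors
        x = torch.cat([xu, xt], dim=-1).transpose(1, 2)         # (batch, in_dim, seq)
        pooled = [self.act(conv(x)).max(dim=-1).values for conv in self.convs]
        return torch.stack(pooled, dim=0).mean(dim=0)           # element-wise average
```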
UTCNN Model Description
Figure FIGREF10 illustrates the UTCNN model. As more than one user may interact with a given post, we first add a maximum pooling layer after the user matrix embedding layer and user vector embedding layer to form a moderator matrix embedding INLINEFORM0 and a moderator vector embedding INLINEFORM1 for moderator INLINEFORM2 respectively, where INLINEFORM3 is used for the semantic transformation in the document composition process, as mentioned in the previous section. The term moderator here is to denote the pseudo user who provides the overall semantic/sentiment of all the engaged users for one document. The embedding INLINEFORM4 models the moderator stance preference, that is, the pattern of the revealed user stance: whether a user is willing to show his preference, whether a user likes to show impartiality with neutral statements and reasonable arguments, or just wants to show strong support for one stance. Ideally, the latent user stance is modeled by INLINEFORM5 for each user. Likewise, for topic information, a maximum pooling layer is added after the topic matrix embedding layer and topic vector embedding layer to form a joint topic matrix embedding INLINEFORM6 and a joint topic vector embedding INLINEFORM7 for topic INLINEFORM8 respectively, where INLINEFORM9 models the semantic transformation of topic INLINEFORM10 as in users and INLINEFORM11 models the topic stance tendency. The latent topic stance is also modeled by INLINEFORM12 for each topic.
As for comments, we view them as short documents with authors only but without likers nor their own comments. Therefore we apply document composition on comments although here users are commenters (users who comment). It is noticed that the word embeddings INLINEFORM0 for the same word in the posts and comments are the same, but after being transformed to INLINEFORM1 in the document composition process shown in Figure FIGREF4 , they might become different because of their different engaged users. The output comment representation together with the commenter vector embedding INLINEFORM2 and topic vector embedding INLINEFORM3 are concatenated and a maximum pooling layer is added to select the most important feature for comments. Instead of requiring that the comment stance agree with the post, UTCNN simply extracts the most important features of the comment contents; they could be helpful, whether they show obvious agreement or disagreement. Therefore when combining comment information here, the maximum pooling layer is more appropriate than other pooling or merging layers. Indeed, we believe this is one reason for UTCNN's performance gains.
Finally, the pooled comment representation, together with user vector embedding INLINEFORM0 , topic vector embedding INLINEFORM1 , and document representation are fed to a fully connected network, and softmax is applied to yield the final stance label prediction for the post.
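The final prediction step can be sketched as follows, again as an illustration with assumed dimensions rather than the exact published architecture: the document representation, pooled comment representation, user vector and topic vector are concatenated and mapped to the stance labels through a fully connected layer with softmax, matching the cross-entropy training objective described later.

```python
import torch
import torch.nn as nn

class StancePredictor(nn.Module):
    """Fully connected output layer of the UTCNN sketch."""
    def __init__(self, doc_dim=100, comment_dim=100, user_dim=10, topic_dim=10, n_classes=3):
        super().__init__()
        self.fc = nn.Linear(doc_dim + comment_dim + user_dim + topic_dim, n_classes)

    def forward(self, doc_rep, comment_rep, user_vec, topic_vec):
        x = torch.cat([doc_rep, comment_rep, user_vec, topic_vec], dim=-1)
        return torch.log_softmax(self.fc(x), dim=-1)   # use with nn.NLLLoss (cross-entropy)
```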
Experiment
We start with the experimental dataset and then describe the training process as well as the implementation of the baselines. We also implement several variations to reveal the effects of features: authors, likers, comment, and commenters. In the results section we compare our model with related work.
Dataset
We tested the proposed UTCNN on two different datasets: FBFans and CreateDebate. FBFans is a privately-owned, single-topic, Chinese, unbalanced, social media dataset, and CreateDebate is a public, multiple-topic, English, balanced, forum dataset. Results using these two datasets show the applicability and superiority for different topics, languages, data distributions, and platforms.
The FBFans dataset contains data from anti-nuclear-power Chinese Facebook fan groups from September 2013 to August 2014, including posts and their author and liker IDs. There are a total of 2,496 authors, 505,137 likers, 33,686 commenters, and 505,412 unique users. Two annotators were asked to take into account only the post content to label the stance of the posts in the whole dataset as supportive, neutral, or unsupportive (hereafter denoted as Sup, Neu, and Uns). Sup/Uns posts were those in support of or against anti-reconstruction; Neu posts were those evincing a neutral standpoint on the topic, or were irrelevant. Raw agreement between annotators is 0.91, indicating high agreement. Specifically, Cohen’s Kappa for Neu and not Neu labeling is 0.58 (moderate), and for Sup or Uns labeling is 0.84 (almost perfect). Posts with inconsistent labels were filtered out, and the development and testing sets were randomly selected from what was left. Posts in the development and testing sets involved at least one user who appeared in the training set. The number of posts for each stance is shown on the left-hand side of Table TABREF12 . About twenty percent of the posts were labeled with a stance, and the number of supportive (Sup) posts was much larger than that of the unsupportive (Uns) ones: this is thus highly skewed data, which complicates stance classification. On average, 161.1 users were involved in one post. The maximum was 23,297 and the minimum was one (the author). For comments, on average there were 3 comments per post. The maximum was 1,092 and the minimum was zero.
To test the reliability of this paper's assumption that posts attract likes from users who hold the same stance, we examine the likes from authors of different stances. Posts in the FBFans dataset are used for this analysis. We calculate the like statistics of each distinct author from these 32,595 posts. As the numbers of authors in the Sup, Neu and Uns stances are largely imbalanced, these numbers are normalized by the number of users of each stance. Table TABREF13 shows the results. Posts with stances (i.e., not neutral) attract users of the same stance. Neutral posts also attract both supportive and neutral users, as supportive posts do, but only neutral posts attract even more neutral likers. These results do suggest that users prefer posts of the same stance, or at least posts with no obvious stance, which are less likely to cause annoyance when reading, and hence support the user modeling in our approach.
The CreateDebate dataset was collected from an English online debate forum discussing four topics: abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR). The posts are annotated as for (F) and against (A). Replies to posts in this dataset are also labeled with stance and hence use the same data format as posts. The labeling results are shown in the right-hand side of Table TABREF12 . We observe that the dataset is more balanced than the FBFans dataset. In addition, there are 977 unique users in the dataset. To compare with Hasan and Ng's work, we conducted five-fold cross-validation and present the annotation results as the average number of all folds BIBREF9 , BIBREF5 .
The FBFans dataset has more integrated functions than the CreateDebate dataset; thus our model can utilize all linguistic and extra-linguistic features. For the CreateDebate dataset, on the other hand, the like and comment features are not available (as there is a stance label for each reply, replies are evaluated as posts as other previous work) but we still implemented our model using the content, author, and topic information.
Settings
In the UTCNN training process, cross-entropy was used as the loss function and AdaGrad as the optimizer. For FBFans dataset, we learned the 50-dimensional word embeddings on the whole dataset using GloVe BIBREF21 to capture the word semantics; for CreateDebate dataset we used the publicly available English 50-dimensional word embeddings, pre-trained also using GloVe. These word embeddings were fixed in the training process. The learning rate was set to 0.03. All user and topic embeddings were randomly initialized in the range of [-0.1 0.1]. Matrix embeddings for users and topics were sized at 250 ( INLINEFORM0 ); vector embeddings for users and topics were set to length 10.
We applied the LDA topic model BIBREF22 on the FBFans dataset to determine the latent topics with which to build topic embeddings, as there is only one general known topic: nuclear power plants. We learned 100 latent topics and assigned the top three topics for each post. For the CreateDebate dataset, which itself constitutes four topics, the topic labels for posts were used directly without additionally applying LDA.
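The topic-assignment step can be sketched with scikit-learn as below; this is an illustration under the assumption that the Chinese posts have already been word-segmented so that a simple whitespace tokenizer applies, and it is not necessarily the toolkit used by the authors.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def assign_topics(posts, n_topics=100, top_k=3):
    """Learn latent topics and return the top-k topic ids for each post."""
    counts = CountVectorizer().fit_transform(posts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=42)
    doc_topic = lda.fit_transform(counts)                      # (n_posts, n_topics)
    return np.argsort(doc_topic, axis=1)[:, ::-1][:, :top_k]   # most probable first
```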
For the FBFans data we report class-based f-scores as well as the macro-average f-score ($F_{macro}$) shown in equation EQREF19: $F_{macro} = \frac{2 \cdot P_{avg} \cdot R_{avg}}{P_{avg} + R_{avg}}$
where $P_{avg}$ and $R_{avg}$ are the average precision and recall over the three classes. We adopted the macro-average f-score as the evaluation metric for the overall performance because (1) the experimental dataset is severely imbalanced, which is common for contentious issues; and (2) for stance classification, content in minor-class posts is usually more important for further applications. For the CreateDebate dataset, accuracy was adopted as the evaluation metric to compare the results with related work BIBREF7 , BIBREF9 , BIBREF12 .
Baselines
We pit our model against the following baselines: 1) SVM with unigram, bigram, and trigram features, which is a standard yet rather strong classifier for text features; 2) SVM with average word embedding, where a document is represented as a continuous representation by averaging the embeddings of the composite words; 3) SVM with average transformed word embeddings (the INLINEFORM0 in equation EQREF6 ), where a document is represented as a continuous representation by averaging the transformed embeddings of the composite words; 4) two mature deep learning models on text classification, CNN BIBREF3 and Recurrent Convolutional Neural Networks (RCNN) BIBREF0 , where the hyperparameters are based on their work; 5) the above SVM and deep learning models with comment information; 6) UTCNN without user information, representing a pure-text CNN model where we use the same user matrix and user embeddings INLINEFORM1 and INLINEFORM2 for each user; 7) UTCNN without the LDA model, representing how UTCNN works with a single-topic dataset; 8) UTCNN without comments, in which the model predicts the stance label given only user and topic information. All these models were trained on the training set, and parameters as well as the SVM kernel selections (linear or RBF) were fine-tuned on the development set. Also, we adopt oversampling on SVMs, CNN and RCNN because the FBFans dataset is highly imbalanced.
Results on FBFans Dataset
In Table TABREF22 we show the results of UTCNN and the baselines on the FBFans dataset. Here Majority yields good performance on Neu since FBFans is highly biased to the neutral class. The SVM models perform well on Sup and Neu but perform poorly for Uns, showing that content information in itself is insufficient to predict stance labels, especially for the minor class. With the transformed word embedding feature, SVM can achieve performance comparable to SVM with the n-gram feature. However, the much lower feature dimensionality of the transformed word embedding makes SVM with word embeddings a more efficient choice for modeling the large-scale social media dataset. The CNN and RCNN models perform slightly better than most of the SVM models, but still the content information is insufficient to achieve good performance on the Uns posts. As for adding comment information to these models, since the commenters do not always hold the same stance as the author, simply adding comments and post contents together merely adds noise to the model.
Among all UTCNN variations, we find that user information is most important, followed by topic and comment information. UTCNN without user information shows results similar to SVMs: it does well for Sup and Neu but detects no Uns. Its best f-scores on both Sup and Neu among all methods show that with enough training data, content-based models can perform well; at the same time, the lack of user information results in too few clues for minor-class posts to either predict their stance directly or link them to other users and posts for improved performance. The 17.5% improvement when adding user information suggests that user information is especially useful when the dataset is highly imbalanced. All models that consider user information predict the minority class successfully. UTCNN without topic information works well but achieves lower performance than the full UTCNN model. The 4.9% performance gain brought by LDA shows that although the model is satisfactory for single-topic datasets, adding latent topics still benefits performance: even when we are discussing the same topic, we use different arguments and supporting evidence. Lastly, we get a 4.8% improvement when adding comment information, which achieves comparable performance to UTCNN without topic information; this shows that comments also benefit performance. For platforms where user IDs are pixelated or otherwise hidden, adding comments to a text model still improves performance. In its integration of user, content, and comment information, the full UTCNN produces the highest f-scores on all Sup, Neu, and Uns stances among models that predict the Uns class, and the highest macro-average f-score overall. This shows its ability to balance a biased dataset and supports our claim that UTCNN successfully bridges content and user, topic, and comment information for stance classification on social media text. Another merit of UTCNN is that it does not require balanced training data. This is supported by the fact that it outperforms the other models even though no oversampling technique is applied in the UTCNN-related experiments reported in this paper. Thus we can conclude that the user information provides strong clues and is still rich even in the minority class.
We also investigate the semantic difference when a user acts as an author/liker or a commenter. We evaluated a variation in which all embeddings from the same user were forced to be identical (this is the UTCNN shared user embedding setting in Table TABREF22 ). This setting yielded only a 2.5% improvement over the model without comments, which is not statistically significant. However, when separating authors/likers and commenters embeddings (i.e., the UTCNN full model), we achieved much greater improvements (4.8%). We attribute this result to the tendency of users to use different wording for different roles (for instance author vs commenter). This is observed when the user, acting as an author, attempts to support her argument against nuclear power by using improvements in solar power; when acting as a commenter, though, she interacts with post contents by criticizing past politicians who supported nuclear power or by arguing that the proposed evacuation plan in case of a nuclear accident is ridiculous. Based on this finding, in the final UTCNN setting we train two user matrix embeddings for one user: one for the author/liker role and the other for the commenter role.
Results on CreateDebate Dataset
Table TABREF24 shows the results of UTCNN, the baselines we implemented on the FBFans dataset, and related work on the CreateDebate dataset. We do not adopt oversampling on these models because the CreateDebate dataset is almost balanced. In previous work, integer linear programming (ILP) or linear-chain conditional random fields (CRFs) were proposed to integrate text features, author, ideology, and user-interaction constraints, where text features are unigram, bigram, and POS-dependencies; the author constraint tends to require that posts from the same author for the same topic hold the same stance; the ideology constraint aims to capture inferences between topics for the same author; the user-interaction constraint models relationships among posts via user interactions such as replies BIBREF7 , BIBREF9 .
The SVM with the n-gram or average word embedding feature performs similarly to the majority baseline. However, with the transformed word embedding, it achieves superior results. This shows that the learned user and topic embeddings really capture the user and topic semantics. This finding is not so obvious in the FBFans dataset, which might be due to the unfavorable data skewness for SVM. As for CNN and RCNN, they perform slightly better than most SVMs, as we found in Table TABREF22 for FBFans.
Compared to the ILP BIBREF7 and CRF BIBREF9 methods, the UTCNN user embeddings encode author and user-interaction constraints, where the ideology constraint is modeled by the topic embeddings and text features are modeled by the CNN. The significant improvement achieved by UTCNN suggests the latent representations are more effective than overt model constraints.
The PSL model BIBREF12 jointly labels both author and post stance using probabilistic soft logic (PSL) BIBREF23 by considering text features and reply links between authors and posts as in Hasan and Ng's work. Table TABREF24 reports the result of their best AD setting, which represents the full joint stance/disagreement collective model on posts and is hence more relevant to UTCNN. In contrast to their model, the UTCNN user embeddings represent relationships between authors, but UTCNN models do not utilize link information between posts. Though the PSL model has the advantage of being able to jointly label the stances of authors and posts, its performance on posts is lower than that of the ILP or CRF models. UTCNN significantly outperforms these models on posts and has the potential to predict user stances through the generated user embeddings.
For the CreateDebate dataset, we also evaluated performance when not using topic embeddings or user embeddings; as replies in this dataset are viewed as posts, the setting without comment embeddings is not available. Table TABREF24 shows the same findings as Table TABREF22 : the 21% improvement in accuracy demonstrates that user information is the most vital. This finding also supports the results in the related work: user constraints are useful and can yield 11.2% improvement in accuracy BIBREF7 . Further considering topic information yields 3.4% improvement, suggesting that knowing the subject of debates provides useful information. In sum, Table TABREF22 together with Table TABREF24 show that UTCNN achieves promising performance regardless of topic, language, data distribution, and platform.
Conclusion
We have proposed UTCNN, a neural network model that incorporates user, topic, content, and comment information for stance classification of social media texts. UTCNN learns user embeddings for all users with at least a minimal degree of activity, i.e., one post or one like. Topic information obtained from the topic model or from pre-defined labels further improves the UTCNN model. In addition, comment information provides additional clues for stance classification. We have shown that UTCNN achieves promising and balanced results. In the future we plan to explore the effectiveness of the UTCNN user embeddings for author stance classification.
Acknowledgements
Research of this paper was partially supported by Ministry of Science and Technology, Taiwan, under the contract MOST 104-2221-E-001-024-MY2.
|
How many layers does the UTCNN model have?
|
eight layers
| 4,487
|
qasper
|
8k
|
Introduction
Users of photo-sharing websites such as Flickr often provide short textual descriptions in the form of tags to help others find the images. With the availability of GPS systems in current electronic devices such as smartphones, latitude and longitude coordinates are nowadays commonly made available as well. The tags associated with such georeferenced photos often describe the location where these photos were taken, and Flickr can thus be regarded as a source of environmental information. The use of Flickr for modelling urban environments has already received considerable attention. For instance, various approaches have been proposed for modelling urban regions BIBREF0 , and for identifying points-of-interest BIBREF1 and itineraries BIBREF2 , BIBREF3 . However, the usefulness of Flickr for characterizing the natural environment, which is the focus of this paper, is less well-understood.
Many recent studies have highlighted that Flickr tags capture valuable ecological information, which can be used as a complementary source to more traditional sources. To date, however, ecologists have mostly used social media to conduct manual evaluations of image content with little automated exploitation of the associated tags BIBREF4 , BIBREF5 , BIBREF6 . One recent exception is BIBREF7 , where bag-of-words representations derived from Flickr tags were found to give promising results for predicting a range of different environmental phenomena.
Our main hypothesis in this paper is that by using vector space embeddings instead of bag-of-words representations, the ecological information which is implicitly captured by Flickr tags can be utilized in a more effective way. Vector space embeddings are representations in which the objects from a given domain are encoded using relatively low-dimensional vectors. They have proven useful in natural language processing, especially for encoding word meaning BIBREF8 , BIBREF9 , and in machine learning more generally. In this paper, we are interested in the use of such representations for modelling geographic locations. Our main motivation for using vector space embeddings is that they allow us to integrate the textual information we get from Flickr with available structured information in a very natural way. To this end, we rely on an adaptation of the GloVe word embedding model BIBREF9 , but rather than learning word vectors, we learn vectors representing locations. Similar to how the representation of a word in GloVe is determined by the context words surrounding it, the representation of a location in our model is determined by the tags of the photos that have been taken near that location. To incorporate numerical features from structured environmental datasets (e.g. average temperature), we associate with each such feature a linear mapping that can be used to predict that feature from a given location vector. This is inspired by the fact that salient properties of a given domain can often be modelled as directions in vector space embeddings BIBREF10 , BIBREF11 , BIBREF12 . Finally, evidence from categorical datasets (e.g. land cover types) is taken into account by requiring that locations belonging to the same category are represented using similar vectors, similar to how semantic types are sometimes modelled in the context of knowledge graph embedding BIBREF13 .
While our point-of-departure is a standard word embedding model, we found that the off-the-shelf GloVe model performed surprisingly poorly, meaning that a number of modifications are needed to achieve good results. Our main findings are as follows. First, given that the number of tags associated with a given location can be quite small, it is important to apply some kind of spatial smoothing, i.e. the importance of a given tag for a given location should not only depend on the occurrences of the tag at that location, but also on its occurrences at nearby locations. To this end, we use a formulation which is based on a spatially smoothed version of pointwise mutual information. Second, given the wide diversity in the kind of information that is covered by Flickr tags, we find that term selection is in some cases critical to obtain vector spaces that capture the relevant aspects of geographic locations. For instance, many tags on Flickr refer to photography-related terms, which we would normally not want to affect the vector representation of a given location. Finally, even with these modifications, vector space embeddings learned from Flickr tags alone are sometimes outperformed by bag-of-words representations. However, our vector space embeddings lead to substantially better predictions in cases where structured (scientific) information is also taken into account. In this sense, the main value of using vector space embeddings in this context is not so much about abstracting away from specific tag usages, but rather about the fact that such representations allow us to integrate numerical and categorical features in a much more natural way than is possible with bag-of-words representations.
The remainder of this paper is organized as follows. In the next section, we provide a discussion of existing work. Section SECREF3 then presents our model for embedding geographic locations from Flickr tags and structured data. Next, in Section SECREF4 we provide a detailed discussion about the experimental results. Finally, Section SECREF5 summarizes our conclusions.
Vector space embeddings
The use of low-dimensional vector space embeddings for representing objects has already proven effective in a large number of applications, including natural language processing (NLP), image processing, and pattern recognition. In the context of NLP, the most prominent example is that of word embeddings, which represent word meaning using vectors of typically around 300 dimensions. A large number of different methods for learning such word embeddings have already been proposed, including Skip-gram and the Continuous Bag-of-Words (CBOW) model BIBREF8 , GloVe BIBREF9 , and fastText BIBREF14 . They have been applied effectively in many downstream NLP tasks such as sentiment analysis BIBREF15 , part of speech tagging BIBREF16 , BIBREF17 , and text classification BIBREF18 , BIBREF19 . The model we consider in this paper builds on GloVe, which was designed to capture linear regularities of word-word co-occurrence. In GloVe, there are two word vectors INLINEFORM0 and INLINEFORM1 for each word in the vocabulary, which are learned by minimizing the following objective: DISPLAYFORM0
where INLINEFORM0 is the number of times that word INLINEFORM1 appears in the context of word INLINEFORM2 , INLINEFORM3 is the vocabulary size, INLINEFORM4 is the target word bias, INLINEFORM5 is the context word bias. The weighting function INLINEFORM6 is used to limit the impact of rare terms. It is defined as 1 if INLINEFORM7 and as INLINEFORM8 otherwise, where INLINEFORM9 is usually fixed to 100 and INLINEFORM10 to 0.75. Intuitively, the target word vectors INLINEFORM11 correspond to the actual word representations which we would like to find, while the context word vectors INLINEFORM12 model how occurrences of INLINEFORM13 in the context of a given word INLINEFORM14 affect the representation of this latter word. In this paper we will use a similar model, which will however be aimed at learning location vectors instead of the target word vectors.
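The equation itself is lost to the DISPLAYFORM placeholder above. For reference, the standard GloVe objective consistent with this description (with $w_i$ a target word vector, $\tilde{w}_j$ a context word vector, $b_i$ and $\tilde{b}_j$ the corresponding biases, and $V$ the vocabulary size) is:

```latex
J = \sum_{i,j=1}^{V} f(X_{ij})\,\bigl(w_i^{\top}\tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij}\bigr)^{2},
\qquad
f(x) = \begin{cases} (x / x_{\max})^{\alpha} & \text{if } x < x_{\max},\\ 1 & \text{otherwise,} \end{cases}
```

with $x_{\max}$ usually fixed to 100 and $\alpha$ to 0.75, as stated above.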
Beyond word embeddings, various methods have been proposed for learning vector space representations from structured data such as knowledge graphs BIBREF20 , BIBREF21 , BIBREF22 , social networks BIBREF23 , BIBREF24 and taxonomies BIBREF25 , BIBREF26 . The idea of combining a word embedding model with structured information has also been explored by several authors, for example to improve the word embeddings based on information coming from knowledge graphs BIBREF27 , BIBREF28 . Along similar lines, various lexicons have been used to obtain word embeddings that are better suited at modelling sentiment BIBREF15 and antonymy BIBREF29 , among others. The method proposed by BIBREF30 imposes the condition that words that belong to the same semantic category are closer together than words from different categories, which is somewhat similar in spirit to how we will model categorical datasets in our model.
Embeddings for geographic information
The problem of representing geographic locations using embeddings has also attracted some attention. An early example is BIBREF31 , which used principal component analysis and stacked autoencoders to learn low-dimensional vector representations of city neighbourhoods based on census data. They use these representations to predict attributes such as crime, which is not included in the given census data, and find that in most of the considered evaluation tasks, the low-dimensional vector representations lead to more faithful predictions than the original high-dimensional census data.
Some existing works combine word embedding models with geographic coordinates. For example, in BIBREF32 an approach is proposed to learn word embeddings based on the assumption that words which tend to be used in the same geographic locations are likely to be similar. Note that their aim is dual to our aim in this paper: while they use geographic location to learn word vectors, we use textual descriptions to learn vectors representing geographic locations.
Several methods also use word embedding models to learn representations of Points-of-Interest (POIs) that can be used for predicting user visits BIBREF33 , BIBREF34 , BIBREF35 . These works use the machinery of existing word embedding models to learn POI representations, intuitively by letting sequences of POI visits by a user play the role of sequences of words in a sentence. In other words, despite the use of word embedding models, many of these approaches do not actually consider any textual information. For example, in BIBREF34 the Skip-gram model is utilized to create a global pattern of users' POIs. Each location was treated as a word and the other locations visited before or after were treated as context words. They then use a pair-wise ranking loss BIBREF36 which takes into account the user's location visit frequency to personalize the location recommendations. The methods of BIBREF34 were extended in BIBREF35 to use a temporal embedding and to take more account of geographic context, in particular the distances between preferred and non-preferred neighboring POIs, to create a “geographically hierarchical pairwise preference ranking model”. Similarly, in BIBREF37 the CBOW model was trained with POI data. They ordered POIs spatially within the traffic-based zones of urban areas. The ordering was used to generate characteristic vectors of POI types. Zone vectors, represented by averaging the vectors of the POIs contained in them, were then used as features to predict land use types. In the CrossMap method BIBREF38 they learned embeddings for spatio-temporal hotspots obtained from social media data of locations, times and text. In one form of embedding, intended to enable reconstruction of records, neighbourhood relations in space and time were encoded by averaging hotspots in a target location's spatial and temporal neighborhoods. They also proposed a graph-based embedding method with nodes of location, time and text. The concatenation of the location, time and text vectors was then used as features to predict people's activities in urban environments. Finally, in BIBREF39 , a method is proposed that uses the Skip-gram model to represent POI types, based on the intuition that the vector representing a given POI type should be predictive of the POI types that are found near places of that type.
Our work is different from these studies, as our focus is on representing locations based on a given text description of that location (in the form of Flickr tags), along with numerical and categorical features from scientific datasets.
Analyzing Flickr tags
Many studies have focused on analyzing Flickr tags to extract useful information in domains such as linguistics BIBREF40 , geography BIBREF0 , BIBREF41 , and ecology BIBREF42 , BIBREF7 , BIBREF43 . Most closely related to our work, BIBREF7 found that the tags of georeferenced Flickr photos can effectively supplement traditional scientific environmental data in tasks such as predicting climate features, land cover, species occurrence, and human assessments of scenicness. To encode locations, they simply combine a bag-of-words representation of geographically nearby tags with a feature vector that encodes associated structured scientific data. They found that the predictive value of Flickr tags is roughly on a par with that of the scientific datasets, and that combining both types of information leads to significantly better results than using either of them alone. As we show in this paper, however, their straightforward way of combining both information sources, by concatenating the two types of feature vectors, is far from optimal.
Despite the proven importance of Flickr tags, the problem of embedding Flickr tags has so far received very limited attention. To the best of our knowledge, BIBREF44 is the only work that generated embeddings for Flickr tags. However, their focus was on learning embeddings that capture word meaning (being evaluated on word similarity tasks), whereas we use such embeddings as part of our method for representing locations.
Model Description
In this section, we introduce our embedding model, which combines Flickr tags and structured scientific information to represent a set of locations INLINEFORM0 . The proposed model has the following form: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are parameters to control the importance of each component in the model. Component INLINEFORM2 will be used to constrain the representation of the locations based on their textual description (i.e. Flickr tags), INLINEFORM3 will be used to constrain the representation of the locations based on their numerical features, and INLINEFORM4 will impose the constraint that locations belonging to the same category should be close together in the space. We will discuss each of these components in more detail in the following sections.
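The combined objective is hidden behind the DISPLAYFORM placeholder above; a plausible reconstruction, consistent with the description of the three components and the two trade-off parameters (written here as $\alpha$ and $\beta$; which component each weight multiplies is our assumption), is:

```latex
J = J_{\text{tags}} + \alpha\, J_{\text{num}} + \beta\, J_{\text{cat}}
```

where $J_{\text{tags}}$, $J_{\text{num}}$ and $J_{\text{cat}}$ are the tag-based, numerical-feature and categorical-feature components defined in the following sections.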
Tag Based Location Embedding
Many of the tags associated with Flickr photos describe characteristics of the places where these photos were taken BIBREF45 , BIBREF46 , BIBREF47 . For example, tags may correspond to place names (e.g. Brussels, England, Scandinavia), landmarks (e.g. Eiffel Tower, Empire State Building) or land cover types (e.g. mountain, forest, beach). To allow us to build location models using such tags, we collected the tags and meta-data of 70 million Flickr photos with coordinates in Europe (which is the region our experiments will focus on), all of which were uploaded to Flickr before the end of September 2015. In this section we first explain how tags can be weighted to obtain bag-of-words representations of locations from Flickr. Subsequently we describe a tag selection method, which will allow us to specialize the embedding depending on which aspects of the considered locations are of interest, after which we discuss the actual embedding model.
Tag weighting. Let INLINEFORM0 be a set of geographic locations, each characterized by latitude and longitude coordinates. To generate a bag-of-words representation of a given location, we have to weight the relevance of each tag to that location. To this end, we have followed the weighting scheme from BIBREF7 , which combines a Gaussian kernel (to model spatial proximity) with Positive Pointwise Mutual Information (PPMI) BIBREF48 , BIBREF49 .
Let us write INLINEFORM0 for the set of users who have assigned tag INLINEFORM1 to a photo with coordinates near INLINEFORM2 . To assess how relevant INLINEFORM3 is to the location INLINEFORM4 , the number of times INLINEFORM5 occurs in photos near INLINEFORM6 is clearly an important criterion. However, rather than simply counting the number of occurrences within some fixed radius, we use a Gaussian kernel to weight the tag occurrences according to their distance from that location: INLINEFORM7
where the threshold INLINEFORM0 is assumed to be fixed, INLINEFORM1 is the location of a Flickr photo, INLINEFORM2 is the Haversine distance, and we will assume that the bandwidth parameter INLINEFORM3 is set to INLINEFORM4 . A tag occurrence is counted only once for all photos by the same user at the same location, which is important to reduce the impact of bulk uploading. The value INLINEFORM5 reflects how frequent tag INLINEFORM6 is near location INLINEFORM7 , but it does not yet take into account the total number of tag occurrences near INLINEFORM8 , nor how popular the tag INLINEFORM9 is overall. To measure how strongly tag INLINEFORM10 is associated with location INLINEFORM11 , we use PPMI, which is a commonly used measure of association in natural language processing. However, rather than estimating PPMI scores from term frequencies, we will use the INLINEFORM12 values instead: INLINEFORM13
where: INLINEFORM0
with INLINEFORM0 the set of all tags, and INLINEFORM1 the set of locations.
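A minimal sketch of this weighting scheme is given below. The data layout (`photos` as tuples of user, coordinates and tags), the default radius and bandwidth values, and the exact deduplication key are illustrative assumptions rather than the authors' implementation.

```python
import math
from collections import defaultdict

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points, in kilometres.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def gaussian_weighted_counts(photos, locations, radius_km=1.0, bandwidth_km=0.5):
    """photos: iterable of (user, lat, lon, tags); locations: dict id -> (lat, lon).
    Tag occurrences are weighted by a Gaussian kernel on distance, and each
    (user, tag, photo position) combination is counted once to limit bulk uploads."""
    counts, seen = defaultdict(float), set()
    for user, lat, lon, tags in photos:           # naive double loop, kept for clarity
        for loc_id, (llat, llon) in locations.items():
            d = haversine_km(lat, lon, llat, llon)
            if d > radius_km:
                continue
            for tag in tags:
                key = (user, tag, loc_id, round(lat, 5), round(lon, 5))
                if key not in seen:
                    seen.add(key)
                    counts[(tag, loc_id)] += math.exp(-d ** 2 / (2 * bandwidth_km ** 2))
    return counts

def ppmi_scores(counts):
    """PPMI association scores computed from the kernel-weighted counts."""
    total = sum(counts.values())
    tag_tot, loc_tot = defaultdict(float), defaultdict(float)
    for (tag, loc), c in counts.items():
        tag_tot[tag] += c
        loc_tot[loc] += c
    return {(tag, loc): max(0.0, math.log((c / total) /
                                          ((tag_tot[tag] / total) * (loc_tot[loc] / total))))
            for (tag, loc), c in counts.items()}
```

At the scale of 70 million photos, the double loop above would of course be replaced by a spatial index; the sketch only illustrates the kernel weighting, user deduplication and PPMI steps.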
Tag selection. Inspired by BIBREF50 , we use a term selection method in order to focus on the tags that are most important for the tasks that we want to consider and reduce the impact of tags that might relate only to a given individual or a group of users. In particular, we obtained good results with a method based on Kullback-Leibler (KL) divergence, which is based on BIBREF51 . Let INLINEFORM0 be a set of (mutually exclusive) properties of locations in which we are interested (e.g. land cover categories). For the ease of presentation, we will identify INLINEFORM1 with the set of locations that have the corresponding property. Then, we select tags from INLINEFORM2 that maximize the following score: INLINEFORM3
where INLINEFORM0 is the probability that a photo with tag INLINEFORM1 has a location near INLINEFORM2 and INLINEFORM3 is the probability that an arbitrary tag occurrence is assigned to a photo near a location in INLINEFORM4 . Since INLINEFORM5 often has to be estimated from a small number of tag occurrences, it is estimated using Bayesian smoothing: INLINEFORM6
where INLINEFORM0 is a parameter controlling the amount of smoothing, which will be tuned in the experiments. On the other hand, for INLINEFORM1 we can simply use a maximum likelihood estimation: INLINEFORM2
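A short sketch of this selection criterion follows; the interpolation form of the Bayesian smoothing (towards $Q(C)$, with strength given by the smoothing parameter) is our assumption, as the corresponding formula is not legible above.

```python
import math

def kl_tag_score(tag_counts_per_class, class_totals, smoothing=100.0):
    """KL-divergence-based score of a tag with respect to a set of location classes.
    tag_counts_per_class: class -> (weighted) occurrences of the tag near that class.
    class_totals: class -> total (weighted) tag occurrences near that class."""
    grand_total = sum(class_totals.values())
    n_tag = sum(tag_counts_per_class.values())
    score = 0.0
    for c, n_c in class_totals.items():
        q_c = n_c / grand_total                                  # maximum-likelihood Q(C)
        n_tc = tag_counts_per_class.get(c, 0.0)
        p_ct = (n_tc + smoothing * q_c) / (n_tag + smoothing)    # smoothed P(C | t), assumed form
        if p_ct > 0.0:
            score += p_ct * math.log(p_ct / q_c)
    return score

# Tags are ranked by this score and the top ones kept (the top 100,000 in the experiments).
```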
Location embedding. We now want to find a vector INLINEFORM0 for each location INLINEFORM1 such that similar locations are represented using similar vectors. To achieve this, we use a close variant of the GloVe model, where tag occurrences are treated as context words of geographic locations. In particular, with each location INLINEFORM2 we associate a vector INLINEFORM3 and with each tag INLINEFORM4 we associate a vector INLINEFORM5 and a bias term INLINEFORM6 , and consider the following objective (which in our full model ( EQREF7 ) will be combined with components that are derived from the structured information): INLINEFORM7
Note how tags play the role of the context words in the GloVe model, while instead of learning target word vectors we now learn location vectors. In contrast to GloVe, our objective does not directly refer to co-occurrence statistics, but instead uses the INLINEFORM0 scores. One important consequence of this is that we can also consider pairs INLINEFORM1 for which INLINEFORM2 does not occur in INLINEFORM3 at all; such pairs are usually called negative examples. While they cannot be used in the standard GloVe model, some authors have already reported that introducing negative examples in variants of GloVe can lead to an improvement BIBREF52 . In practice, evaluating the full objective above would not be computationally feasible, as we may need to consider millions of locations and millions of tags. Therefore, rather than considering all tags in INLINEFORM4 for the inner summation, we only consider those tags that appear at least once near location INLINEFORM5 together with a sample of negative examples.
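The objective referred to in this paragraph is obscured by the INLINEFORM placeholders; a plausible reconstruction, writing $v_l$ for the vector of location $l$, $w_t$ and $b_t$ for the vector and bias of tag $t$, and $T_l$ for the tags occurring near $l$ together with the sampled negative examples, is:

```latex
J_{\text{tags}} = \sum_{l \in L} \sum_{t \in T_l} \bigl(v_l^{\top} w_t + b_t - \mathrm{PPMI}(t, l)\bigr)^{2}
```

The squared error against the spatially smoothed PPMI score follows the prose description, and for negative examples the target PPMI value is zero; the exact notation is our assumption.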
Structured Environmental Data
There is a wide variety of structured data that can be used to describe locations. In this work, we have restricted ourselves to the same datasets as BIBREF7 . These include nine (real-valued) numerical features, which are latitude, longitude, elevation, population, and five climate related features (avg. temperature, avg. precipitation, avg. solar radiation, avg. wind speed, and avg. water vapor pressure). In addition, 180 categorical features were used, which are CORINE land cover classes at level 1 (5 classes), level 2 (15 classes) and level 3 (44 classes) and 116 soil types (SoilGrids). Note that each location should belong to exactly 4 categories: one CORINE class at each of the three levels and a soil type.
Numerical features. Numerical features can be treated similarly to the tag occurrences, i.e. we will assume that the value of a given numerical feature can be predicted from the location vectors using a linear mapping. In particular, for each numerical feature INLINEFORM0 we consider a vector INLINEFORM1 and a bias term INLINEFORM2 , and the following objective: INLINEFORM3
where we write INLINEFORM0 for set of all numerical features and INLINEFORM1 is the value of feature INLINEFORM2 for location INLINEFORM3 , after z-score normalization.
Categorical features. To take into account the categorical features, we impose the constraint that locations belonging to the same category should be close together in the space. To formalize this, we represent each category type INLINEFORM0 as a vector INLINEFORM1 , and consider the following objective: INLINEFORM2
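Both objectives referred to in the two preceding paragraphs are again hidden behind placeholders. Plausible reconstructions, with $a_f$ and $b_f$ the mapping vector and bias of numerical feature $f$, $x_{f,l}$ its z-score-normalised value at location $l$, and $c_C$ the vector representing category $C$, are:

```latex
J_{\text{num}} = \sum_{f \in F} \sum_{l \in L} \bigl(v_l^{\top} a_f + b_f - x_{f,l}\bigr)^{2},
\qquad
J_{\text{cat}} = \sum_{C} \sum_{l \in C} \lVert v_l - c_C \rVert^{2}
```

The linear-regression form of $J_{\text{num}}$ follows directly from the description above; the squared-distance form of $J_{\text{cat}}$ is an assumption consistent with the requirement that locations of the same category lie close together.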
Evaluation Tasks
We will use the method from BIBREF7 as our main baseline. This will allow us to directly evaluate the effectiveness of embeddings for the considered problem, since we have used the same structured datasets and same tag weighting scheme. For this reason, we will also follow their evaluation methodology. In particular, we will consider three evaluation tasks:
Predicting the distribution of 100 species across Europe, using the European network of nature protected sites Natura 2000 dataset as ground truth. For each of these species, a binary classification problem is considered. The set of locations INLINEFORM0 is defined as the 26,425 distinct sites occurring in the dataset.
Predicting soil type, again each time treating the task as a binary classification problem, using the same set of locations INLINEFORM0 as in the species distribution experiments. For these experiments, none of the soil type features are used for generating the embeddings.
Predicting CORINE land cover classes at levels 1, 2 and level 3, each time treating the task as a binary classification problem, using the same set of locations INLINEFORM0 as in the species distribution experiments. For these experiments, none of the CORINE features are used for generating the embeddings.
In addition, we will also consider the following regression tasks:
Predicting 5 climate related features: the average precipitation, temperature, solar radiation, water vapor pressure, and wind speed. We again use the same set of locations INLINEFORM0 as for species distribution in this experiment. None of the climate features is used for constructing the embeddings for this experiment.
Predicting people's subjective opinions of landscape beauty in Britain, using the crowdsourced dataset from the ScenicOrNot website as ground truth. The set INLINEFORM0 is chosen as the set of locations of 191 605 rated locations from the ScenicOrNot dataset for which at least one georeferenced Flickr photo exists within a 1 km radius.
Experimental Setup
In all experiments, we use Support Vector Machines (SVMs) for classification problems and Support Vector Regression (SVR) for regression problems to make predictions from our representations of geographic locations. In both cases, we used the SVM INLINEFORM0 implementation BIBREF53 . For each experiment, the set of locations INLINEFORM1 was split into two-thirds for training, one-sixth for testing, and one-sixth for tuning the parameters. All embedding models are learned with Adagrad using 30 iterations. The number of dimensions is chosen for each experiment from INLINEFORM2 based on the tuning data. For the parameters of our model in Equation EQREF7 , we considered values of INLINEFORM3 from {0.1, 0.01, 0.001, 0.0001} and values of INLINEFORM4 from {1, 10, 100, 1000, 10 000, 100 000}. To compute KL divergence, we need to determine a set of classes INLINEFORM5 for each experiment. For classification problems, we can simply consider the given categories, but for the regression problems we need to define such classes by discretizing the numerical values. For the scenicness experiments, we considered scores 3 and 7 as cut-off points, leading to three classes (i.e. less than 3, between 3 and 7, and above 7). Similarly, for each climate related features, we consider two cut-off values for discretization: 5 and 15 for average temperature, 50 and 100 for average precipitation, 10 000 and 17 000 for average solar radiation, 0.7 and 1 for average water vapor pressure, and 3 and 5 for wind speed. The smoothing parameter INLINEFORM6 was selected among INLINEFORM7 based on the tuning data. In all experiments where term selection is used, we select the top 100 000 tags. We fixed the radius INLINEFORM8 at 1km when counting the number of tag occurrences. Finally, we set the number of negative examples as 10 times the number of positive examples for each location, but with a cap at 1000 negative examples in each region for computational reasons. We tune all parameters with respect to the F1 score for the classification tasks, and Spearman INLINEFORM9 for the regression tasks.
Variants and Baseline Methods
We will refer to our model as EGEL (Embedding GEographic Locations), and will consider the following variants. EGEL-Tags only uses the information from the Flickr tags (i.e. component INLINEFORM0 ), without using any negative examples and without feature selection. EGEL-Tags+NS is similar to EGEL-Tags but with the addition of negative examples. EGEL-KL(Tags+NS) additionally considers term selection. EGEL-All is our full method, i.e. it additionally uses the structured information. We also consider the following baselines. BOW-Tags represents locations using a bag-of-words representation, using the same tag weighting as the embedding model. BOW-KL(Tags) uses the same representation but after term selection, using the same KL-based method as the embedding model. BOW-All combines the bag-of-words representation with the structured information, encoded as proposed in BIBREF7 . GloVe uses the objective from the original GloVe model for learning location vectors, i.e. this variant differs from EGEL-Tags in that instead of INLINEFORM1 we use the number of co-occurrences of tag INLINEFORM2 near location INLINEFORM3 , measured as INLINEFORM4 .
Results and Discussion
We present our results for the binary classification tasks in Tables TABREF23 – TABREF24 in terms of average precision, average recall and macro average F1 score. The results of the regression tasks are reported in Tables TABREF25 and TABREF29 in terms of the mean absolute error between the predicted and actual scores, as well as the Spearman INLINEFORM0 correlation between the rankings induced by both sets of scores. It can be clearly seen from the results that our proposed method (EGEL-All) can effectively integrate Flickr tags with the available structured information. It outperforms the baselines for all the considered tasks. Furthermore, note that the PPMI-based weighting in EGEL-Tags consistently outperforms GloVe and that both the addition of negative examples and term selection lead to further improvements. The use of term selection leads to particularly substantial improvements for the regression problems.
While our experimental results confirm the usefulness of embeddings for predicting environmental features, this is only consistently the case for the variants that use both the tags and the structured datasets. In particular, comparing BOW-Tags with EGEL-Tags, we sometimes see that the former achieves the best results. While this might seem surprising, it is in accordance with the findings in BIBREF54 , BIBREF38 , among others, where it was also found that bag-of-words representations can sometimes lead to surprisingly effective baselines. Interestingly, we note that in all cases where EGEL-KL(Tags+NS) performs worse than BOW-Tags, we also find that BOW-KL(Tags) performs worse than BOW-Tags. This suggests that for these tasks there is a very large variation in the kind of tags that can inform the prediction model, possibly including e.g. user-specific tags. Some of the information captured by such highly specific but rare tags is likely to be lost in the embedding.
To further analyze the difference in performance between BoW representations and embeddings, Figure TABREF29 compares the performance of the GloVe model with the bag-of-words model for predicting place scenicness, as a function of the number of tag occurrences at the considered locations. What is clearly noticeable in Figure TABREF29 is that GloVe performs better than the bag-of-words model for large corpora and worse for smaller corpora. This issue has been alleviated in our embedding method by the addition of negative examples.
Conclusions
In this paper, we have proposed a model to learn geographic location embeddings using Flickr tags, numerical environmental features, and categorical information. The experimental results show that our model can integrate Flickr tags with structured information in a more effective way than existing methods, leading to substantial improvements over baseline methods on various prediction tasks about the natural environment.
Acknowledgments
Shelan Jeawak has been sponsored by HCED Iraq. Steven Schockaert has been supported by ERC Starting Grant 637277.
|
what dataset is used in this paper?
|
the same datasets as BIBREF7
| 4,661
|
qasper
|
8k
|
Introduction
During the first two decades of the 21st century, the sharing and processing of vast amounts of data has become pervasive. This expansion of data sharing and processing capabilities is both a blessing and a curse. Data helps build better information systems for the digital era and enables further research for advanced data management that benefits the society in general. But the use of this very data containing sensitive information conflicts with private data protection, both from an ethical and a legal perspective.
There are several application domains on which this situation is particularly acute. This is the case of the medical domain BIBREF0. There are plenty of potential applications for advanced medical data management that can only be researched and developed using real data; yet, the use of medical data is severely limited –when not entirely prohibited– due to data privacy protection policies.
One way of circumventing this problem is to anonymise the data by removing, replacing or obfuscating the personal information mentioned, as exemplified in Table TABREF1. This task can be done by hand, having people read and anonymise the documents one by one. Despite being a reliable and simple solution, this approach is tedious, expensive, time consuming and difficult to scale to the potentially thousands or millions of documents that need to be anonymised.
For this reason, numerous systems and approaches have been developed during the last decades to attempt to automate the anonymisation of sensitive content, starting with the automatic detection and classification of sensitive information. Some of these systems rely on rules, patterns and dictionaries, while others use more advanced techniques related to machine learning and, more recently, deep learning.
Given that this paper is concerned with text documents (e.g. medical records), the involved techniques are related to Natural Language Processing (NLP). When using NLP approaches, it is common to pose the problem of document anonymisation as a sequence labelling problem, i.e. classifying each token within a sequence as being sensitive information or not. Further, depending on the objective of the anonymisation task, it is also important to determine the type of sensitive information (names of individuals, addresses, age, sex, etc.).
The anonymisation systems based on NLP techniques perform reasonably well, but are far from perfect. Depending on the difficulty posed by each dataset or the amount of available data for training machine learning models, the performance achieved by these methods is not enough to fully rely on them in certain situations BIBREF0. However, in the last two years, the NLP community has reached an important milestone thanks to the appearance of the so-called Transformers neural network architectures BIBREF1. In this paper, we conduct several experiments in sensitive information detection and classification on Spanish clinical text using BERT (from `Bidirectional Encoder Representations from Transformers') BIBREF2 as the base for a sequence labelling approach. The experiments are carried out on two datasets: the MEDDOCAN: Medical Document Anonymization shared task dataset BIBREF3, and NUBes BIBREF4, a corpus of real medical reports in Spanish. In these experiments, we compare the performance of BERT with other machine-learning-based systems, some of which use language-specific features. Our aim is to evaluate how good a BERT-based model performs without language nor domain specialisation apart from the training data labelled for the task at hand.
The rest of the paper is structured as follows: the next section describes related work about data anonymisation in general and clinical data anonymisation in particular; it also provides a more detailed explanation and background about the Transformers architecture and BERT. Section SECREF3 describes the data involved in the experiments and the systems evaluated in this paper, including the BERT-based system; finally, it details the experimental design. Section SECREF4 introduces the results for each set of experiments. Finally, Section SECREF5 contains the conclusions and future lines of work.
Related Work
The state of the art in the field of Natural Language Processing (NLP) has reached an important milestone in the last couple of years thanks to deep-learning architectures, which have improved the performance of new models by several points on almost any text processing task.
The major change started with the Transformers model proposed by vaswani2017attention. It replaced the widely used recurrent and convolutional neural network architectures with an approach based solely on self-attention, obtaining an impressive performance gain. The original proposal was focused on an encoder-decoder architecture for machine translation, but soon the use of Transformers was made more general BIBREF1. There are several other popular models that use Transformers, such as Open AI's GPT and GPT2 BIBREF5, RoBERTa BIBREF6 and the most recent XLNet BIBREF7; still, BERT BIBREF2 is one of the most widespread Transformer-based models.
BERT trains its unsupervised language model using a Masked Language Model and Next Sentence Prediction. A common problem in NLP is the lack of enough training data. BERT can be pre-trained to learn general or specific language models using very large amounts of unlabelled text (e.g. web content, Wikipedia, etc.), and this knowledge can be transferred to a different downstream task in a process that receives the name fine-tuning.
devlin2018bert have used fine-tuning to achieve state-of-the-art results on a wide variety of challenging natural language tasks, such as text classification, Question Answering (QA) and Named Entity Recognition and Classification (NERC). BERT has also been used successfully by other community practitioners for a wide range of NLP-related tasks BIBREF8, BIBREF9.
Regarding the task of data anonymisation in particular, anonymisation systems may follow different approaches and pursue different objectives (Cormode and Srivastava, 2009). The first objective of these systems is to detect and classify the sensitive information contained in the documents to be anonymised. In order to achieve that, they use rule-based approaches, Machine Learning (ML) approaches, or a combination of both.
Although most of these efforts are for English texts –see, among others, the i2b2 de-identification challenges BIBREF10, BIBREF11, dernon2016deep, or khin2018deep–, other languages are also attracting growing interest. Some examples are mamede2016automated for Portuguese and tveit2004anonymization for Norwegian. With respect to the anonymisation of text written in Spanish, recent studies include medina2018building, hassan2018anonimizacion and garcia2018automating. Most notably, in 2019 the first community challenge about anonymisation of medical documents in Spanish, MEDDOCAN BIBREF3, was held as part of the IberLEF initiative. The winners of the challenge –the Neither-Language-nor-Domain-Experts (NLNDE) BIBREF12– achieved F1-scores as high as 0.975 in the task of sensitive information detection and categorisation by using recurrent neural networks with Conditional Random Field (CRF) output layers.
At the same challenge, mao2019hadoken occupied the 8th position among 18 participants using BERT. According to the description of the system, the authors used BERT-Base Multilingual Cased and an output CRF layer. However, their system is $\sim $3 F1-score points below our implementation without the CRF layer.
Materials and Methods
The aim of this paper is to evaluate BERT's multilingual model and compare it to other established machine-learning algorithms in a specific task: sensitive data detection and classification in Spanish clinical free text. This section describes the data involved in the experiments and the systems evaluated. Finally, we introduce the experimental setup.
Materials and Methods ::: Data
Two datasets are exploited in this article. Both datasets consist of plain text containing clinical narrative written in Spanish, and their respective manual annotations of sensitive information in BRAT BIBREF13 standoff format. In order to feed the data to the different algorithms presented in Section SECREF7, these datasets were transformed to comply with the commonly used BIO sequence representation scheme BIBREF14.
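As a purely hypothetical illustration of this encoding (the sentence and its labels are invented for this example; the category names loosely follow the NUBes-PHI tag set described below):

```python
# One sentence and its token-level BIO labels (hypothetical example).
tokens = ["Paciente", "atendida", "en", "el", "Hospital", "San", "Ejemplo", "el", "12/03/2014", "."]
labels = ["O", "O", "O", "O", "B-Hospital", "I-Hospital", "I-Hospital", "O", "B-Date", "O"]
```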
Materials and Methods ::: Data ::: NUBes-PHI
NUBes BIBREF4 is a corpus of around 7,000 real medical reports written in Spanish and annotated with negation and uncertainty information. Before being published, sensitive information had to be manually annotated and replaced for the corpus to be safely shared. In this article, we work with the NUBes version prior to its anonymisation, that is, with the manual annotations of sensitive information. It follows that the version we work with is not publicly available and, due to contractual restrictions, we cannot reveal the provenance of the data. In order to avoid confusion between the two corpus versions, we henceforth refer to the version relevant in this paper as NUBes-PHI (from `NUBes with Personal Health Information').
NUBes-PHI consists of 32,055 sentences annotated for 11 different sensitive information categories. Overall, it contains 7,818 annotations. The corpus has been randomly split into train (72%), development (8%) and test (20%) sets to conduct the experiments described in this paper. The size of each split and the distribution of the annotations can be consulted in Tables and , respectively.
The majority of sensitive information in NUBes-PHI are temporal expressions (`Date' and `Time'), followed by healthcare facility mentions (`Hospital'), and the age of the patient. Mentions of people are not that frequent, with physician names (`Doctor') occurring much more often than patient names (`Patient'). The least frequent sensitive information types, which account for $\sim $10% of the remaining annotations, consist of the patient's sex, job, and kinship, and locations other than healthcare facilities (`Location'). Finally, the tag `Other' includes, for instance, mentions to institutions unrelated to healthcare and whether the patient is right- or left-handed. It occurs just 36 times.
Materials and Methods ::: Data ::: The MEDDOCAN corpus
The organisers of the MEDDOCAN shared task BIBREF3 curated a synthetic corpus of clinical cases enriched with sensitive information by health documentalists. In this regard, the MEDDOCAN evaluation scenario is somewhat removed from the real use case in which the technology developed for the shared task is meant to be applied. However, at the moment it provides the only public means for a rigorous comparison between systems for sensitive health information detection in Spanish texts.
The size of the MEDDOCAN corpus is shown in Table . Compared to NUBes-PHI (Table ), this corpus contains more sensitive information annotations, both in absolute and relative terms.
The sensitive annotation categories considered in MEDDOCAN differ in part from those in NUBes-PHI. Most notably, it contains finer-grained labels for location-related mentions –namely, `Address', `Territory', and `Country'–, and other sensitive information categories that we did not encounter in NUBes-PHI (e.g., identifiers, phone numbers, e-mail addresses, etc.). In total, the MEDDOCAN corpus has 21 sensitive information categories. We refer the reader to the organisers' article BIBREF3 for more detailed information about this corpus.
Materials and Methods ::: Systems
Apart from experimenting with a pre-trained BERT model, we have run experiments with other systems and baselines, to compare them and obtain a better perspective about BERT's performance in these datasets.
Materials and Methods ::: Systems ::: Baseline
As the simplest baseline, a sensitive data recogniser and classifier has been developed that consists of regular expressions and dictionary look-ups. For each category to detect, a specific method has been implemented. For instance, the Date, Age, Time and Doctor detectors are based on regular expressions; Hospital, Sex, Kinship, Location, Patient and Job are looked up in dictionaries. The dictionaries are hand-crafted from the available training data, except for the Patient category, for which the candidates considered are the 100 most common female and male names in Spain according to the Instituto Nacional de Estadística (INE; Spanish Statistical Office).
Materials and Methods ::: Systems ::: CRF
Conditional Random Fields (CRF) BIBREF15 have been extensively used for tasks of sequential nature. In this paper, we propose as one of the competitive baselines a CRF classifier trained with sklearn-crfsuite for Python 3.5 and the following configuration: algorithm = lbfgs; maximum iterations = 100; c1 = c2 = 0.1; all transitions = true; optimise = false. The features extracted from each token are as follows:
prefixes and suffixes of 2 and 3 characters;
the length of the token in characters and the length of the sentence in tokens;
whether the token is all-letters, a number, or a sequence of punctuation marks;
whether the token contains the character `@';
whether the token is the start or end of the sentence;
the token's casing and the ratio of uppercase characters, digits, and punctuation marks to its length;
and, the lemma, part-of-speech tag, and named-entity tag given by ixa-pipes BIBREF16 upon analysing the sentence the token belongs to.
Notably, none of the features used to train the CRF classifier is domain-dependent. However, the latter group of features is language-dependent.
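A minimal sketch of this configuration and part of the feature template, using sklearn-crfsuite, is shown below; the linguistic features from ixa-pipes (lemma, part-of-speech and named-entity tags) and a few of the surface features are omitted for brevity, and all names are illustrative.

```python
import sklearn_crfsuite

def token_features(sent, i):
    """Partial sketch of the per-token feature template described above."""
    tok = sent[i]
    n = max(len(tok), 1)
    return {
        "prefix2": tok[:2], "prefix3": tok[:3],
        "suffix2": tok[-2:], "suffix3": tok[-3:],
        "tok_len": len(tok), "sent_len": len(sent),
        "is_alpha": tok.isalpha(), "is_digit": tok.isdigit(),
        "has_at": "@" in tok,
        "is_bos": i == 0, "is_eos": i == len(sent) - 1,
        "is_upper": tok.isupper(), "is_title": tok.istitle(),
        "upper_ratio": sum(c.isupper() for c in tok) / n,
        "digit_ratio": sum(c.isdigit() for c in tok) / n,
    }

# X: list of sentences, each a list of per-token feature dicts; y: the BIO label sequences.
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=100, all_possible_transitions=True)
# crf.fit(X_train, y_train); y_pred = crf.predict(X_test)
```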
Materials and Methods ::: Systems ::: spaCy
spaCy is a widely used NLP library that implements state-of-the-art text processing pipelines, including a sequence-labelling pipeline similar to the one described by strubell2017fast. spaCy offers several pre-trained models in Spanish, which perform basic NLP tasks such as Named Entity Recognition (NER). In this paper, we have trained a new NER model to detect NUBes-PHI labels. For this purpose, the new model is trained on all the labels in the training corpus, encoded together with their sentence-level context. The network optimisation parameters and dropout values are the ones recommended in the documentation for small datasets. Finally, the model is trained using batches of size 64. No more features are included, so the classifier is language-dependent but not domain-dependent.
Materials and Methods ::: Systems ::: BERT
As introduced earlier, BERT has shown an outstanding performance in NERC-like tasks, improving the start-of-the-art results for almost every dataset and language. We take the same approach here, by using the model BERT-Base Multilingual Cased with a Fully Connected (FC) layer on top to perform a fine-tuning of the whole model for an anonymisation task in Spanish clinical data. Our implementation is built on PyTorch and the PyTorch-Transformers library BIBREF1. The training phase consists in the following steps (roughly depicted in Figure ):
Pre-processing: since we are relying on a pre-trained BERT model, we must match its configuration by using the same tokenisation and vocabulary. BERT also requires the inputs to contain special tokens that signal the beginning and end of each sequence.
Fine-tuning: the pre-processed sequence is fed into the model. BERT outputs contextual embeddings that encode each of the input tokens. The embedding of each token is passed through a dropout layer (with a dropout probability of 0.1) and then fed into the FC linear layer, which outputs the logits for each possible class. The cross-entropy loss is calculated by comparing the logits with the gold labels, and the error is back-propagated to adjust the model parameters.
We have trained the model using an AdamW optimiser BIBREF17 with the learning rate set to 3e-5, as recommended by devlin2018bert, and with a gradient clipping of 1.0. We also applied a learning-rate scheduler that warms up the learning rate from zero to its maximum value as the training progresses, which is also a common practice. For each experiment set proposed below, the training was run with an early-stopping patience of 15 epochs. Then, the model that performed best against the development set was used to produce the reported results.
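A condensed sketch of this architecture and of a single training step is given below. It is written against the current Hugging Face transformers API rather than the pytorch-transformers version used here, so class names may differ; the number of labels shown, the use of -100 to mask padding and special-token positions, and the omission of the warm-up scheduler and early stopping are simplifications of our own.

```python
import torch
from torch import nn
from transformers import BertModel

class BertTokenTagger(nn.Module):
    """BERT-Base Multilingual Cased with a dropout + fully connected layer on top."""
    def __init__(self, num_labels, dropout_prob=0.1):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-multilingual-cased")
        self.dropout = nn.Dropout(dropout_prob)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        return self.classifier(self.dropout(hidden))      # (batch, seq_len, num_labels)

model = BertTokenTagger(num_labels=23)                    # e.g. 11 categories x {B, I} + 'O' (assumed)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
loss_fn = nn.CrossEntropyLoss(ignore_index=-100)          # -100 masks padding / special tokens

def train_step(batch):
    model.train()
    logits = model(batch["input_ids"], batch["attention_mask"])
    loss = loss_fn(logits.view(-1, logits.size(-1)), batch["labels"].view(-1))
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)   # gradient clipping at 1.0
    optimizer.step()
    # a warm-up learning-rate scheduler would be stepped here as well
    return loss.item()
```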
The experiments were run on a 64-core server with operating system Ubuntu 16.04, 250GB of RAM memory, and 4 GeForce RTX 2080 GPUs with 11GB of memory. The maximum sequence length was set at 500 and the batch size at 12. In this setting, each epoch –a full pass through all the training data– required about 10 minutes to complete.
Materials and Methods ::: Experimental design
We have conducted experiments with BERT in the two datasets of Spanish clinical narrative presented in Section SECREF3 The first experiment set uses NUBes-PHI, a corpus of real medical reports manually annotated with sensitive information. Because this corpus is not publicly available, and in order to compare the BERT-based model to other related published systems, the second set of experiments uses the MEDDOCAN 2019 shared task competition dataset. The following sections provide greater detail about the two experimental setups.
Materials and Methods ::: Experimental design ::: Experiment A: NUBes-PHI
In this experiment set, we evaluate all the systems presented in Section SECREF7, namely, the rule-based baseline, the CRF classifier, the spaCy entity tagger, and BERT. The evaluation comprises three scenarios of increasing difficulty:
- Detection: evaluates the performance of the systems at predicting whether each token is sensitive or non-sensitive; that is, the measurements only take into account whether a sensitive token has been recognised or not, regardless of the BIO label and the category assigned. This scenario shows how good a system would be at obfuscating sensitive data (e.g., by replacing sensitive tokens with asterisks).
- Relaxed classification: we measure the performance of the systems at predicting the sensitive information type of each token –i.e., the 11 categories presented in Section SECREF5 or `out'. Detecting entity types correctly is important if a system is going to be used to replace sensitive data with fake data of the same type (e.g., random people names).
- Strict classification: this is the strictest evaluation, as it takes into account both the BIO label and the category assigned to each individual token. Being able to discern between two contiguous sensitive entities of the same type is relevant not only because it is helpful when producing fake replacements, but also because it yields more accurate statistics of the sensitive information present in a given document collection.
The systems are evaluated in terms of micro-average precision, recall and F1-score in all the scenarios.
In addition to the proposed scenarios, a subject worth studying is the need for labelled data. Manually labelled data is a scarce and expensive resource, which for some application domains or languages is difficult to come by. In order to estimate the dependency of each system on the available amount of training data, we have retrained all the compared models using decreasing amounts of data –from 100% of the available training instances down to just 1%. The same data subsets have been used to train all the systems. Due to the knowledge transferred from the pre-trained BERT model, the BERT-based model is expected to be more robust to data scarcity than those that start their training from scratch.
Materials and Methods ::: Experimental design ::: Experiment B: MEDDOCAN
In this experiment set, our BERT implementation is compared to several systems that participated in the MEDDOCAN challenge: a CRF classifier BIBREF18, a spaCy entity recogniser BIBREF18, and NLNDE BIBREF12, the winner of the shared task and current state of the art for sensitive information detection and classification in Spanish clinical text. Specifically, we include the results of a domain-independent NLNDE model (S2), and the results of a model enriched with domain-specific embeddings (S3). Finally, we include the results obtained by mao2019hadoken with a CRF output layer on top of BERT embeddings. MEDDOCAN consists of two scenarios:
- Detection: this evaluation measures how good a system is at detecting sensitive text spans, regardless of the category assigned to them.
- Classification: in this scenario, systems are required to match exactly not only the boundaries of each sensitive span, but also the category assigned.
The systems are evaluated in terms of micro-averaged precision, recall and F-1 score. Note that, in contrast to the evaluation in Experiment A, MEDDOCAN measurements are entity-based instead of tokenwise. An exhaustive explanation of the MEDDOCAN evaluation procedure is available online, as well as the official evaluation script, which we used to obtain the reported results.
Results
This section describes the results obtained in the two sets of experiments: NUBes-PHI and MEDDOCAN.
Results ::: Experiment A: NUBes-PHI
Table shows the results of the conducted experiments in NUBes-PHI for all the compared systems. The included baseline serves to give a quick insight about how challenging the data is. With simple regular expressions and gazetteers a precision of 0.853 is obtained. On the other hand, the recall, which directly depends on the coverage provided by the rules and resources, drops to 0.469. Hence, this task is unlikely to be solved without the generalisation capabilities provided by machine-learning and deep-learning models.
Regarding the detection scenario –that is, the scenario concerned with a binary classification to determine whether each individual token conveys sensitive information or not–, it can be observed that BERT outperforms its competitors. A fact worth highlighting is that, according to these results, BERT achieves a precision lower than the rest of the systems (i.e., it makes more false positive predictions); in exchange, it obtains a remarkably higher recall. Noticeably, it reaches a recall of 0.979, improving by more than 4 points the second-best system, spaCy.
The table also shows the results for the relaxed metric that only takes into account the entity type detected, regardless of the BIO label (i.e., ignoring whether the token is at the beginning or in the middle of a sensitive sequence of tokens). The conclusions are very similar to those extracted previously, with BERT gaining 2.1 points of F1-score over the CRF-based approach. The confusion matrices of the predictions made by CRF, spaCy, and BERT in this scenario are shown in Table . As can be seen, BERT has less difficulty correctly predicting less frequent categories, such as `Location', `Job', and `Patient'. One of the most common mistakes according to the confusion matrices is classifying hospital names as `Location' instead of the more accurate `Hospital'; this is hardly a harmful error, given that a hospital is actually a location. Last, the category `Other' is missed completely by all the compared systems, most likely due to its almost total lack of support in both the training and evaluation datasets.
To finish with this experiment set, Table also shows the strict classification precision, recall and F1-score for the compared systems. Despite the fact that, in general, the systems obtain high values, BERT outperforms them again. BERT's F1-score is 1.9 points higher than the next most competitive result in the comparison. More remarkably, the recall obtained by BERT is about 5 points above.
Upon manual inspection of the errors committed by the BERT-based model, we discovered that it has a slight tendency towards producing ill-formed BIO sequences (e.g., starting a sensitive span with `Inside' instead of `Begin'; see Table ). Complementing the BERT-based model with a CRF layer on top could be expected to help enforce the emission of valid sequences, alleviating this kind of error and further improving its results.
Finally, Figure shows the impact of decreasing the amount of training data in the detection scenario. It shows the difference in precision, recall, and F1-score with respect to the values obtained using 100% of the training data. A general downward trend can be observed, as one would expect: less training data leads to less accurate predictions. However, the BERT-based model is the most robust to training-data reduction, showing a consistently small performance loss. With 1% of the dataset (230 training instances), the BERT-based model suffers only a 7-point F1-score loss, in contrast to the 32 and 39 points lost by the CRF and spaCy models, respectively. This steep performance drop for the latter models stems largely from a decline in recall, which is much less marked in the case of BERT. Overall, these results indicate that the transfer learning achieved through the multilingual pre-trained BERT model not only helps obtain better results, but also lowers the need for manually labelled data in this application domain.
Results ::: Experiment B: MEDDOCAN
The results of the two MEDDOCAN scenarios –detection and classification– are shown in Table . These results follow the same pattern as in the previous experiments, with the CRF classifier being the most precise of all, and BERT outperforming both the CRF and spaCy classifiers thanks to its greater recall. We also show the results of mao2019hadoken who, despite having used a BERT-based system, achieve lower scores than our models. The reasons for this remain unclear.
With regard to the winner of the MEDDOCAN shared task, the BERT-based model does not improve on the scores obtained by either the domain-dependent (S3) or the domain-independent (S2) NLNDE model. However, judging from the obtained results, BERT remains only 0.3 F1-score points behind, and would have achieved the second position among all the MEDDOCAN shared task competitors. Taking into account that only 3% of the gold labels remain incorrectly annotated, the task can be considered almost solved, and it is not clear whether the differences among the systems are actually significant, or whether they stem from minor variations in initialisation or a long tail of minor labelling inconsistencies.
Conclusions and Future Work
In this work we have briefly introduced the problems related to data privacy protection in the clinical domain. We have also described some of the groundbreaking advances in the Natural Language Processing field due to the appearance of Transformer-based deep-learning architectures and transfer learning from very large general-domain multilingual corpora, focusing our attention on one of their most representative examples, Google's BERT model.
In order to assess the performance of BERT for Spanish clinical data anonymisation, we have conducted several experiments with a BERT-based sequence labelling approach using the pre-trained multilingual BERT model shared by Google as the starting point for the model training. We have compared this BERT-based sequence labelling against other methods and systems. One of the experiments uses the MEDDOCAN 2019 shared task dataset, while the other uses a novel Spanish clinical reports dataset called NUBes-PHI.
The results of the experiments show that, in NUBes-PHI, the BERT-based model outperforms the other systems without requiring any adaptation or domain-specific feature engineering, just by being trained on the provided labelled data. Interestingly, the BERT-based model obtains a remarkably higher recall than the other systems. High recall is a desirable outcome because, when anonymising sensitive documents, the accidental leak of sensitive data is likely to be more dangerous than the unintended over-obfuscation of non-sensitive text.
Further, we have conducted an additional experiment on this dataset by progressively reducing the training data for all the compared systems. The BERT-based model shows the highest robustness to training-data scarcity, losing only 7 points of F1-score when trained on 230 instances instead of 21,371. These observations are in line with the results obtained by the NLP community using BERT for other tasks.
The experiments with the MEDDOCAN 2019 shared task dataset follow the same pattern. In this case, the BERT-based model falls 0.3 F1-score points behind the shared task winning system, but it would have achieved the second position in the competition with no further refinement.
Since we have used a pre-trained multilingual BERT model, the same approach is likely to work for other languages just by providing some labelled training data. Further, this is the simplest fine-tuning that can be performed on top of BERT. More sophisticated fine-tuning layers could help improve the results. For example, a CRF layer could be expected to help enforce better BIO tagging sequence predictions. Precisely, mao2019hadoken participated in the MEDDOCAN competition using a BERT+CRF architecture, but their reported scores are about 3 points lower than those of our implementation. From the description of their work, it is unclear what the source of this score difference could be.
Further, at the time of writing this paper, new multilingual pre-trained models and Transformer architectures have become available. It would not come as a surprise if these new resources and systems –e.g., XLM-RoBERTa BIBREF19 or BETO BIBREF20, a BERT model fully pre-trained on Spanish texts– further advanced the state of the art in this task.
Acknowledgements
This work has been supported by Vicomtech and partially funded by the project DeepReading (RTI2018-096846-B-C21, MCIU/AEI/FEDER,UE).
Question: What are the clinical datasets used in the paper?
Answer: MEDDOCAN, NUBes-PHI
Introduction
Chatbots such as dialog and question-answering systems have a long history in AI and natural language processing. Early such systems were mostly built using markup languages such as AIML, handcrafted conversation generation rules, and/or information retrieval techniques BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Recent neural conversation models BIBREF4 , BIBREF5 , BIBREF6 are even able to perform open-ended conversations. However, since they do not use explicit knowledge bases and do not perform inference, they often suffer from generic and dull responses BIBREF5 , BIBREF7 . More recently, BIBREF8 and BIBREF9 proposed to use knowledge bases (KBs) to help generate responses for knowledge-grounded conversation. However, one major weakness of all existing chat systems is that they do not explicitly or implicitly learn new knowledge in the conversation process. This seriously limits the scope of their applications. In contrast, we humans constantly learn new knowledge in our conversations. Even if some existing systems can use very large knowledge bases either harvested from a large data source such as the Web or built manually, these KBs still miss a large number of facts (knowledge) BIBREF10 . It is thus important for a chatbot to continuously learn new knowledge in the conversation process to expand its KB and to improve its conversation ability.
In recent years, researchers have studied the problem of KB completion, i.e., inferring new facts (knowledge) automatically from existing facts in a KB. KB completion (KBC) is defined as a binary classification problem: Given a query triple, ( INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ), we want to predict whether the source entity INLINEFORM3 and target entity INLINEFORM4 can be linked by the relation INLINEFORM5 . However, existing approaches BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 solve this problem under the closed-world assumption, i.e., INLINEFORM6 , INLINEFORM7 and INLINEFORM8 are all known to exist in the KB. This is a major weakness because it means that no new fact may contain unknown entities or relations. Due to this limitation, KBC is clearly not sufficient for knowledge learning in conversations because in a conversation, the user can say anything, which may contain entities and relations that are not already in the KB.
In this paper, we remove this assumption of KBC, and allow all INLINEFORM0 , INLINEFORM1 and INLINEFORM2 to be unknown. We call the new problem open-world knowledge base completion (OKBC). OKBC generalizes KBC. Below, we show that solving OKBC naturally provides the ground for knowledge learning and inference in conversations. In essence, we formulate an abstract problem of knowledge learning and inference in conversations as a well-defined OKBC problem in the interactive setting.
From the perspective of knowledge learning in conversations, essentially we can extract two key types of information, true facts and queries, from the user utterances. Queries are facts whose truth values need to be determined. Note that we do not study fact or relation extraction in this paper as there is an extensive work on the topic. (1) For a true fact, we will incorporate it into the KB. Here we need to make sure that it is not already in the KB, which involves relation resolution and entity linking. After a fact is added to the KB, we may predict that some related facts involving some existing relations in the KB may also be true (not logical implications as they can be automatically inferred). For example, if the user says “Obama was born in USA,” the system may guess that (Obama, CitizenOf, USA) (meaning that Obama is a citizen of USA) could also be true based on the current KB. To verify this fact, it needs to solve a KBC problem by treating (Obama, CitizenOf, USA) as a query. This is a KBC problem because the fact (Obama, BornIn, USA) extracted from the original sentence has been added to the KB. Then Obama and USA are in the KB. If the KBC problem is solved, it learns a new fact (Obama, CitizenOf, USA) in addition to the extracted fact (Obama, BornIn, USA). (2) For a query fact, e.g., (Obama, BornIn, USA) extracted from the user question “Was Obama born in USA?” we need to solve the OKBC problem if any of “Obama, “BornIn”, or “USA" is not already in the KB.
We can see that OKBC is the core of a knowledge learning engine for conversation. Thus, in this paper, we focus on solving it. We assume that other tasks such as fact/relation extraction and resolution and guessing of related facts of an extracted fact are solved by other sub-systems.
We solve the OKBC problem by mimicking how humans acquire knowledge and perform reasoning in an interactive conversation. Whenever we encounter an unknown concept or relation while answering a query, we perform inference using our existing knowledge. If our knowledge does not allow us to draw a conclusion, we typically ask questions to others to acquire related knowledge and use it in inference. The process typically involves an inference strategy (a sequence of actions), which interleaves a sequence of processing and interactive actions. A processing action can be the selection of related facts, deriving an inference chain, etc., that advances the inference process. An interactive action can be deciding what to ask, formulating a suitable question, etc., that enables us to interact. The process helps grow the knowledge over time and the gained knowledge enables us to communicate better in the future. We call this lifelong interactive learning and inference (LiLi). Lifelong learning is reflected in two ways: the newly acquired facts are retained in the KB and used in inference for future queries, and the accumulated knowledge, in addition to the updated KB and past inference performance, is leveraged to guide future interaction and learning. LiLi should have the following capabilities:
This setting is ideal for many NLP applications like dialog and question-answering systems that naturally provide the scope for human interaction and demand real-time inference.
LiLi starts with the closed-world KBC approach path-ranking (PR) BIBREF11 , BIBREF17 and extends KBC in a major way to open-world knowledge base completion (OKBC). For a relation INLINEFORM0 , PR works by enumerating paths (except single-link path INLINEFORM1 ) between entity-pairs linked by INLINEFORM2 in the KB and use them as features to train a binary classifier to predict whether a query INLINEFORM3 should be in the KB. Here, a path between two entities is a sequence of relations linking them. In our work, we adopt the latest PR method, C-PR BIBREF16 and extend it to make it work in the open-world setting. C-PR enumerates paths by performing bidirectional random walks over the KB graph while leveraging the context of the source-target entity-pair. We also adopt and extend the compositional vector space model BIBREF20 , BIBREF21 with continual learning capability for prediction.
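To make the path-ranking idea concrete, the toy Python sketch below (our illustration; the KB triples are invented, and the search is a plain breadth-first enumeration rather than the bidirectional context-aware random walks of C-PR) collects relation paths between an entity pair and treats each distinct path as a binary feature:

from collections import defaultdict

kb = [("Obama", "BornIn", "Hawaii"), ("Hawaii", "StateOf", "USA"),
      ("Obama", "PresidentOf", "USA")]
graph = defaultdict(list)
for s, r, t in kb:
    graph[s].append((r, t))
    graph[t].append(("inv_" + r, s))  # inverse edges, since the KB is augmented with inverse triples

def relation_paths(src, dst, max_len=3):
    paths, frontier = set(), [(src, [])]
    for _ in range(max_len):
        nxt = []
        for node, path in frontier:
            for rel, nbr in graph[node]:
                if nbr == dst and len(path) + 1 > 1:  # skip the single-link path
                    paths.add(tuple(path + [rel]))
                nxt.append((nbr, path + [rel]))
        frontier = nxt
    return paths

print(relation_paths("Obama", "USA"))  # contains ('BornIn', 'StateOf'), among longer paths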
Given an OKBC query ( INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ) (e.g., “(Obama, CitizenOf, USA), which means whether Obama a citizen of USA), LiLi interacts with the user (if needed) by dynamically formulating questions (see the interaction example in Figure 1, which will be further explained in §3) and leverages the interactively acquired knowledge (supporting facts (SFs) in the figure) for continued inference. To do so, LiLi formulates a query-specific inference strategy and executes it. We design LiLi in a Reinforcement Learning (RL) setting that performs sub-tasks like formulating and executing strategy, training a prediction model for inference, and knowledge retention for future use. To the best of our knowledge, our work is the first to address the OKBC problem and to propose an interactive learning mechanism to solve it in a continuous or lifelong manner. We empirically verify the effectiveness of LiLi on two standard real-world KBs: Freebase and WordNet. Experimental results show that LiLi is highly effective in terms of its predictive performance and strategy formulation ability.
Related Work
To the best of our knowledge, we are not aware of any knowledge learning system that can learn new knowledge in the conversation process. This section thus discusses other related work.
Among existing KB completion approaches, BIBREF20 extended the vector space model for zero-shot KB inference. However, the model cannot handle unknown entities and can only work on fixed set of unknown relations with known embeddings. Recently, BIBREF22 proposed a method using external text corpus to perform inference on unknown entities. However, the method cannot handle unknown relations. Thus, these methods are not suitable for our open-world setting. None of the existing KB inference methods perform interactive knowledge learning like LiLi. NELL BIBREF23 continuously updates its KB using facts extracted from the Web. Our task is very different as we do not do Web fact extraction (which is also useful). We focus on user interactions in this paper. Our work is related to interactive language learning (ILL) BIBREF24 , BIBREF25 , but these are not about KB completion. The work in BIBREF26 allows a learner to ask questions in dialogue. However, this work used RL to learn about whether to ask the user or not. The “what to ask aspect" was manually designed by modeling synthetic tasks. LiLi formulates query-specific inference strategies which embed interaction behaviors. Also, no existing dialogue systems BIBREF4 , BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 employ lifelong learning to train prediction models by using information/knowledge retained in the past.
Our work is related to general lifelong learning in BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 . However, they learn only one type of tasks, e.g., supervised, topic modeling or reinforcement learning (RL) tasks. None of them is suitable for our setting, which involves interleaving of RL, supervised and interactive learning. More details about lifelong learning can be found in the book BIBREF31 .
Interactive Knowledge Learning (LiLi)
We design LiLi as a combination of two interconnected models: (1) a RL model that learns to formulate a query-specific inference strategy for performing the OKBC task, and (2) a lifelong prediction model to predict whether a triple should be in the KB, which is invoked by an action while executing the inference strategy and is learned for each relation as in C-PR. The framework improves its performance over time through user interaction and knowledge retention. Compared to the existing KB inference methods, LiLi overcomes the following three challenges for OKBC:
1. Mapping open-world to close-world. Being a closed-world method, C-PR cannot extract path features and learn a prediction model when any of INLINEFORM0 , INLINEFORM1 or INLINEFORM2 is unknown. LiLi solves this problem through interactive knowledge acquisition. If INLINEFORM3 is unknown, LiLi asks the user to provide a clue (an example of INLINEFORM4 ). And if INLINEFORM5 or INLINEFORM6 is unknown, LiLi asks the user to provide a link (relation) to connect the unknown entity with an existing entity (automatically selected) in the KB. We refer to such a query as a connecting link query (CLQ). The acquired knowledge reduces OKBC to KBC and makes the inference task feasible.
2. Sparseness of KB. A main issue of all PR methods like C-PR is the connectivity of the KB graph. If there is no path connecting INLINEFORM0 and INLINEFORM1 in the graph, path enumeration of C-PR gets stuck and inference becomes infeasible. In such cases, LiLi uses a template relation (“@-?-@") as the missing link marker to connect entity-pairs and continues feature extraction. A path containing “@-?-@" is called an incomplete path. Thus, the extracted feature set contains both complete (no missing link) and incomplete paths. Next, LiLi selects an incomplete path from the feature set and asks the user to provide a link for path completion. We refer to such a query as a missing link query (MLQ).
3. Limitation in user knowledge. If the user is unable to respond to MLQs or CLQs, LiLi uses a guessing mechanism (discussed later) to fill the gap. This enables LiLi to continue its inference even if the user cannot answer a system question.
Components of LiLi
As lifelong learning needs to retain knowledge learned from past tasks and use it to help future learning BIBREF31 , LiLi uses a Knowledge Store (KS) for knowledge retention. KS has four components: (i) Knowledge Graph ( INLINEFORM0 ): INLINEFORM1 (the KB) is initialized with base KB triples (see §4) and gets updated over time with the acquired knowledge. (ii) Relation-Entity Matrix ( INLINEFORM2 ): INLINEFORM3 is a sparse matrix, with rows as relations and columns as entity-pairs and is used by the prediction model. Given a triple ( INLINEFORM4 , INLINEFORM5 , INLINEFORM6 ) INLINEFORM7 , we set INLINEFORM8 [ INLINEFORM9 , ( INLINEFORM10 , INLINEFORM11 )] = 1 indicating INLINEFORM12 occurs for pair ( INLINEFORM13 , INLINEFORM14 ). (iii) Task Experience Store ( INLINEFORM15 ): INLINEFORM16 stores the predictive performance of LiLi on past learned tasks in terms of Matthews correlation coefficient (MCC) that measures the quality of binary classification. So, for two tasks INLINEFORM17 and INLINEFORM18 (each relation is a task), if INLINEFORM19 [ INLINEFORM20 ] INLINEFORM21 INLINEFORM22 [ INLINEFORM23 ] [where INLINEFORM24 [ INLINEFORM25 ]=MCC( INLINEFORM26 )], we say C-PR has learned INLINEFORM27 well compared to INLINEFORM28 . (iv) Incomplete Feature DB ( INLINEFORM29 ): INLINEFORM30 stores the frequency of an incomplete path INLINEFORM31 in the form of a tuple ( INLINEFORM32 , INLINEFORM33 , INLINEFORM34 ) and is used in formulating MLQs. INLINEFORM35 [( INLINEFORM36 , INLINEFORM37 , INLINEFORM38 )] = INLINEFORM39 implies LiLi has extracted incomplete path INLINEFORM40 INLINEFORM41 times involving entity-pair INLINEFORM42 [( INLINEFORM43 , INLINEFORM44 )] for query relation INLINEFORM45 .
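A minimal sketch of the four Knowledge Store components using plain Python containers is given below; the class and field names are ours and purely illustrative:

from collections import defaultdict

class KnowledgeStore:
    def __init__(self, base_triples):
        self.G = set(base_triples)             # (i) knowledge graph triples
        self.M = defaultdict(int)              # (ii) sparse relation-entity matrix
        for s, r, t in base_triples:
            self.M[(r, (s, t))] = 1
        self.T = {}                            # (iii) task experience: relation -> MCC
        self.F = defaultdict(int)              # (iv) incomplete-path frequencies

    def add_triple(self, s, r, t):
        self.G.add((s, r, t))
        self.M[(r, (s, t))] = 1

    def record_incomplete_path(self, rel, path, pair):
        self.F[(rel, path, pair)] += 1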
The RL model learns even after training whenever it encounters an unseen state (in testing) and thus, gets updated over time. KS is updated continuously over time as a result of the execution of LiLi and takes part in future learning. The prediction model uses lifelong learning (LL), where we transfer knowledge (parameter values) from the model for a past most similar task to help learn for the current task. Similar tasks are identified by factorizing INLINEFORM0 and computing a task similarity matrix INLINEFORM1 . Besides LL, LiLi uses INLINEFORM2 to identify poorly learned past tasks and acquire more clues for them to improve its skillset over time.
LiLi also uses a stack, called Inference Stack ( INLINEFORM0 ) to hold query and its state information for RL. LiLi always processes stack top ( INLINEFORM1 [top]). The clues from the user get stored in INLINEFORM2 on top of the query during strategy execution and processed first. Thus, the prediction model for INLINEFORM3 is learned before performing inference on query, transforming OKBC to a KBC problem. Table 1 shows the parameters of LiLi used in the following sections.
Working of LiLi
Given an OKBC query ( INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ), we represent it as a data instance INLINEFORM3 . INLINEFORM4 consists of INLINEFORM5 (the query triple), INLINEFORM6 (interaction limit set for INLINEFORM7 ), INLINEFORM8 (experience list storing the transition history of MDP for INLINEFORM9 in RL) and INLINEFORM10 (mode of INLINEFORM11 ) denoting if INLINEFORM12 is ` INLINEFORM13 ' (training), ` INLINEFORM14 ' (validation), ` INLINEFORM15 ' (evaluation) or ` INLINEFORM16 ' (clue) instance and INLINEFORM17 (feature set). We denote INLINEFORM18 ( INLINEFORM19 ) as the set of all complete (incomplete) path features in INLINEFORM20 . Given a data instance INLINEFORM21 , LiLi starts its initialization as follows: it sets the state as INLINEFORM22 (based on INLINEFORM23 , explained later), pushes the query tuple ( INLINEFORM24 , INLINEFORM25 ) into INLINEFORM26 and feeds INLINEFORM27 [top] to the RL-model for strategy formulation from INLINEFORM28 .
Inference Strategy Formulation. We view solving the strategy formulation problem as learning to play an inference game, where the goal is to formulate a strategy that "makes the inference task possible". Considering PR methods, inference is possible iff (1) INLINEFORM0 becomes known to its KB (by acquiring clues when INLINEFORM1 is unknown) and (2) path features are extracted between INLINEFORM2 and INLINEFORM3 (which in turn requires INLINEFORM4 and INLINEFORM5 to be known to the KB). If these conditions are met at the end of an episode (when strategy formulation finishes for a given query) of the game, LiLi wins and thus, it trains the prediction model for INLINEFORM6 and uses it for inference.
LiLi's strategy formulation is modeled as a Markov Decision Process (MDP) with finite state ( INLINEFORM0 ) and action ( INLINEFORM1 ) spaces. A state INLINEFORM2 consists of 10 binary state variables (Table 2), each of which keeps track of the result of an action INLINEFORM3 taken by LiLi and thus records the progress made in the inference process so far. INLINEFORM4 is the initial state with all state bits set as 0. If the data instance (query) is a clue [ INLINEFORM5 ], INLINEFORM6 [CLUE] is set as 1. INLINEFORM7 consists of 6 actions (Table 3). INLINEFORM8 , INLINEFORM9 , INLINEFORM10 are processing actions and INLINEFORM11 , INLINEFORM12 , INLINEFORM13 are interactive actions. Whenever INLINEFORM14 is executed, the MDP reaches the terminal state. Given an action INLINEFORM15 in state INLINEFORM16 , if INLINEFORM17 is invalid in INLINEFORM21 or the objective of INLINEFORM22 is unsatisfied (the condition marked with * in INLINEFORM23 ), RL receives a negative reward (empirically set); otherwise, it receives a positive reward. We use Q-learning BIBREF38 with an INLINEFORM24 -greedy strategy to learn the optimal policy for training the RL model. Note that the inference strategy is independent of the KB type and of the correctness of prediction. Thus, the RL-model is trained only once from scratch (reused thereafter for other KBs) and also independently of the prediction model.
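As a rough illustration of the learning procedure (not the authors' implementation; reward values, hyper-parameters and the environment step are placeholders), a tabular Q-learning update with an epsilon-greedy policy over the discrete state/action space could look as follows:

import random
from collections import defaultdict

ACTIONS = list(range(6))        # the six actions of Table 3
Q = defaultdict(float)          # Q[(state, action)]; states are tuples of the binary state bits

def choose_action(state, epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, alpha=0.1, gamma=0.9):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])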
Sometimes the training dataset may not be enough to learn the optimal policy for all INLINEFORM0 . Thus, encountering an unseen state during testing can leave the RL-model clueless about the action. Given a state INLINEFORM1 , whenever an invalid INLINEFORM2 is chosen, LiLi remains in INLINEFORM3 . For INLINEFORM4 , LiLi remains in INLINEFORM5 until INLINEFORM6 (see Table 1 for INLINEFORM7 ). So, if the state remains the same for ( INLINEFORM8 +1) times, it implies LiLi has encountered a fault (an unseen state). The RL-model instantly switches to the training mode and randomly explores INLINEFORM9 to learn the optimal action (fault-tolerant learning). While exploring INLINEFORM10 , the model chooses INLINEFORM11 only when it has tried all other INLINEFORM12 , to avoid an abrupt end of the episode.
Execution of Actions. At any given point in time, let ( INLINEFORM0 , INLINEFORM1 ) be the current INLINEFORM2 [top], INLINEFORM3 is the chosen action and the current version of KS components are INLINEFORM4 , INLINEFORM5 , INLINEFORM6 and INLINEFORM7 . Then, if INLINEFORM8 is invalid in INLINEFORM9 , LiLi only updates INLINEFORM10 [top] with ( INLINEFORM11 , INLINEFORM12 ) and returns INLINEFORM13 [top] to RL-model. In this process, LiLi adds experience ( INLINEFORM14 , INLINEFORM15 , INLINEFORM16 , INLINEFORM17 ) in INLINEFORM18 and then, replaces INLINEFORM19 [top] with ( INLINEFORM20 , INLINEFORM21 ). If INLINEFORM22 is valid in INLINEFORM23 , LiLi first sets the next state INLINEFORM24 and performs a sequence of operations INLINEFORM25 based on INLINEFORM26 (discussed below). Unless specified, in INLINEFORM27 , LiLi always monitors INLINEFORM28 and if INLINEFORM29 becomes 0, LiLi sets INLINEFORM30 . Also, whenever LiLi asks the user a query, INLINEFORM31 is decremented by 1. Once INLINEFORM32 ends, LiLi updates INLINEFORM33 [top] with ( INLINEFORM34 , INLINEFORM35 ) and returns INLINEFORM36 [top] to RL-model for choosing the next action.
In INLINEFORM0 , LiLi searches INLINEFORM1 , INLINEFORM2 , INLINEFORM3 in INLINEFORM4 and sets appropriate bits in INLINEFORM5 (see Table 2). If INLINEFORM6 was unknown before and is just added to INLINEFORM7 or is in the bottom INLINEFORM8 % (see Table 1 for INLINEFORM9 ) of INLINEFORM10 , LiLi randomly sets INLINEFORM14 with probability INLINEFORM15 . If INLINEFORM16 is a clue and INLINEFORM17 , LiLi updates KS with triple INLINEFORM18 , where ( INLINEFORM19 , INLINEFORM20 , INLINEFORM21 ) and ( INLINEFORM22 , INLINEFORM23 , INLINEFORM24 ) gets added to INLINEFORM25 and INLINEFORM26 , INLINEFORM27 are set as 1.
In INLINEFORM0 , LiLi asks the user to provide a clue (+ve instance) for INLINEFORM1 and corrupts INLINEFORM2 and INLINEFORM3 of the clue once at a time, to generate -ve instances by sampling nodes from INLINEFORM4 . These instances help in training prediction model for INLINEFORM5 while executing INLINEFORM6 .
In INLINEFORM0 , LiLi selects an incomplete path INLINEFORM1 from INLINEFORM2 to formulate MLQ, such that INLINEFORM3 is most frequently observed for INLINEFORM4 and INLINEFORM5 is high, given by INLINEFORM6 . Here, INLINEFORM7 denotes the contextual similarity BIBREF16 of entity-pair INLINEFORM8 . If INLINEFORM9 is high, INLINEFORM10 is more likely to possess a relation between them and so, is a good candidate for formulating MLQ. When the user does not respond to MLQ (or CLQ in INLINEFORM11 ), the guessing mechanism is used, which works as follows: Since contextual similarity of entity-pairs is highly correlated with their class labels BIBREF16 , LiLi divides the similarity range [-1, 1] into three segments, using a low ( INLINEFORM12 ) and high ( INLINEFORM13 ) similarity threshold and replaces the missing link with INLINEFORM14 in INLINEFORM15 to make it complete as follows: If INLINEFORM16 , INLINEFORM17 = “@-LooselyRelatedTo-@"; else if INLINEFORM18 , INLINEFORM19 =“@-NotRelatedTo-@"; Otherwise, INLINEFORM20 =“@-RelatedTo-@".
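A possible reading of the guessing mechanism is sketched below; the threshold values and the exact assignment of the three template relations to the similarity segments are assumptions on our part, since the original conditions are hidden by the placeholder tokens above:

def guess_missing_link(similarity, low=-0.3, high=0.3):
    # similarity is the contextual similarity of the entity pair, in [-1, 1]
    if similarity >= high:
        return "@-RelatedTo-@"
    if similarity <= low:
        return "@-NotRelatedTo-@"
    return "@-LooselyRelatedTo-@"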
In INLINEFORM0 , LiLi asks CLQs for connecting unknown entities INLINEFORM1 and/or INLINEFORM2 with INLINEFORM3 by selecting the most contextually relevant node (wrt INLINEFORM4 , INLINEFORM5 ) from INLINEFORM6 , given by link INLINEFORM7 . We adopt the contextual relevance idea in BIBREF16 which is computed using word embedding BIBREF39
In INLINEFORM0 , LiLi extracts path features INLINEFORM1 between ( INLINEFORM2 , INLINEFORM3 ) and updates INLINEFORM4 with incomplete features from INLINEFORM5 . LiLi always trains the prediction model with complete features INLINEFORM6 and once INLINEFORM7 or INLINEFORM8 , LiLi stops asking MLQs. Thus, in both INLINEFORM9 and INLINEFORM10 , LiLi always monitors INLINEFORM11 to check for the said requirements and sets INLINEFORM12 to control interactions.
In INLINEFORM0 , if LiLi wins the episode, it adds INLINEFORM1 in one of data buffers INLINEFORM2 based on its mode INLINEFORM3 . E.g., if INLINEFORM4 or INLINEFORM5 , INLINEFORM6 is used for training and added to INLINEFORM7 . Similarly validation buffer INLINEFORM8 and evaluation buffer INLINEFORM9 are populated. If INLINEFORM10 , LiLi invokes the prediction model for INLINEFORM11 .
Lifelong Relation Prediction. Given a relation INLINEFORM0 , LiLi uses INLINEFORM1 and INLINEFORM2 (see INLINEFORM3 ) to train a prediction model (say, INLINEFORM4 ) with parameters INLINEFORM5 . For an unknown INLINEFORM6 , the clue instances get stored in INLINEFORM7 and INLINEFORM8 . Thus, LiLi populates INLINEFORM9 by taking 10% (see §4) of the instances from INLINEFORM10 and starts the training. For INLINEFORM11 , LiLi uses an LSTM BIBREF40 to compose the vector representation of each feature INLINEFORM12 as INLINEFORM13 and the vector representation of INLINEFORM14 as INLINEFORM15 . Next, LiLi computes the prediction value INLINEFORM16 as the sigmoid of the mean cosine similarity of all features and INLINEFORM17 , given by INLINEFORM18 ), and maximizes the log-likelihood of INLINEFORM19 for training. Once INLINEFORM20 is trained, LiLi updates INLINEFORM21 [ INLINEFORM22 ] using INLINEFORM23 . We also train an inverse model for INLINEFORM24 , INLINEFORM25 by reversing the path features in INLINEFORM26 and INLINEFORM27 , which helps in lifelong learning (discussed below). Unlike BIBREF20 , BIBREF21 , while predicting the label for INLINEFORM28 , we compute a relation-specific prediction threshold INLINEFORM29 corresponding to INLINEFORM30 using INLINEFORM31 as: INLINEFORM32 and infer INLINEFORM33 as +ve if INLINEFORM34 and -ve otherwise. Here, INLINEFORM35 ( INLINEFORM36 ) is the mean prediction value for all +ve (-ve) examples in INLINEFORM37 .
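The prediction step can be sketched with NumPy as follows; the path-feature vectors are assumed to have already been composed by the LSTM described above, and taking the relation-specific threshold as the midpoint between the mean prediction values of positive and negative validation examples is our simplification of the formula in the text:

import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def prediction_value(feature_vecs, relation_vec):
    sims = [cosine(f, relation_vec) for f in feature_vecs]
    return 1.0 / (1.0 + np.exp(-np.mean(sims)))   # sigmoid of the mean cosine similarity

def relation_threshold(pos_scores, neg_scores):
    return 0.5 * (np.mean(pos_scores) + np.mean(neg_scores))

def predict(feature_vecs, relation_vec, threshold):
    return prediction_value(feature_vecs, relation_vec) >= threshold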
Models trained on a few examples (e.g., clues acquired for unknown INLINEFORM0 ) with randomly initialized weights often perform poorly due to underfitting. Thus, we transfer knowledge (weights) from the past most similar (wrt INLINEFORM1 ) task in a lifelong learning manner BIBREF31 . LiLi uses INLINEFORM2 to find the past most similar task for INLINEFORM3 as follows: LiLi computes the truncated SVD of INLINEFORM4 as INLINEFORM5 and then the similarity matrix INLINEFORM6 . INLINEFORM7 provides the similarity between relations INLINEFORM8 and INLINEFORM9 in INLINEFORM10 . Thus, LiLi chooses a source relation INLINEFORM11 to transfer weights. Here, INLINEFORM12 is the set of all INLINEFORM13 and INLINEFORM14 for which LiLi has already learned a prediction model. Now, if INLINEFORM15 or INLINEFORM16 , LiLi randomly initializes the weights INLINEFORM17 for INLINEFORM18 and proceeds with the training. Otherwise, LiLi uses INLINEFORM19 as initial weights and fine-tunes INLINEFORM20 with a low learning rate.
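The past-task selection could be implemented along the following lines; this is a sketch under the assumption that the relation-entity matrix is available as a 0-1 matrix, and the embedding dimensionality is arbitrary:

import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def most_similar_learned_task(M, relations, target_rel, learned, k=50):
    # M: |relations| x |entity pairs| matrix; relations: ordered list of relation names
    k = min(k, min(M.shape) - 1)
    emb = TruncatedSVD(n_components=k).fit_transform(M)   # truncated SVD factorisation
    sims = cosine_similarity(emb)[relations.index(target_rel)]
    candidates = [r for r in learned if r != target_rel]
    if not candidates:
        return None
    return max(candidates, key=lambda r: sims[relations.index(r)])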
A Running Example. Considering the example shown in Figure 1, LiLi works as follows: first, LiLi executes INLINEFORM0 and detects that the source entity “Obama" and query relation “CitizenOf" are unknown. Thus, LiLi executes INLINEFORM1 to acquire clue (SF1) for “CitizenOf" and pushes the clue (+ve example) and two generated -ve examples into INLINEFORM2 . Once the clues are processed and a prediction model is trained for “CitizenOf" by formulating separate strategies for them, LiLi becomes aware of “CitizenOf". Now, as the clues have already been popped from INLINEFORM3 , the query becomes INLINEFORM4 and the strategy formulation process for the query resumes. Next, LiLi asks user to provide a connecting link for “Obama" by performing INLINEFORM5 . Now, the query entities and relation being known, LiLi enumerates paths between “Obama" and “USA" by performing INLINEFORM6 . Let an extracted path be “ INLINEFORM7 " with missing link between ( INLINEFORM8 , INLINEFORM9 ). LiLi asks the user to fill the link by performing INLINEFORM10 and then, extracts the complete feature “ INLINEFORM11 ". The feature set is then fed to the prediction model and inference is made as a result of INLINEFORM12 . Thus, the formulated inference strategy is: “ INLINEFORM13 ".
Experiments
We now evaluate LiLi in terms of its predictive performance and strategy formulation abilities.
Data: We use two standard datasets (see Table 4): (1) Freebase FB15k, and (2) WordNet INLINEFORM0 . Using each dataset, we build a fairly large graph and use it as the original KB ( INLINEFORM1 ) for evaluation. We also augment INLINEFORM2 with inverse triples ( INLINEFORM3 , INLINEFORM4 , INLINEFORM5 ) for each ( INLINEFORM6 , INLINEFORM7 , INLINEFORM8 ) following existing KBC methods.
Parameter Settings. Unless specified, the empirically set parameters (see Table 1) of LiLi are: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 , INLINEFORM7 , INLINEFORM8 , INLINEFORM9 , INLINEFORM10 . For training RL-model with INLINEFORM11 -greedy strategy, we use INLINEFORM12 , INLINEFORM13 , pre-training steps=50000. We used Keras deep learning library to implement and train the prediction model. We set batch-size as 128, max. training epoch as 150, dropout as 0.2, hidden units and embedding size as 300 and learning rate as 5e-3 which is reduced gradually on plateau with factor 0.5 and patience 5. Adam optimizer and early stopping were used in training. We also shuffle INLINEFORM14 in each epoch and adjust class weights inversely proportional to class frequencies in INLINEFORM15 .
Labeled Dataset Generation and Simulated User Creation. We create a simulated user for each KB to evaluate LiLi. We create the labeled datasets, the simulated user’s knowledge base ( INLINEFORM0 ), and the base KB ( INLINEFORM1 ) from INLINEFORM2 . INLINEFORM3 is used as the initial KB graph ( INLINEFORM4 ) of LiLi.
We followed BIBREF16 for labeled dataset generation. For Freebase, we found 86 relations with INLINEFORM0 triples and randomly selected 50 from various domains. We randomly shuffle the list of 50 relations, select 25% of them as unknown relations and consider the rest (75%) as known relations. For each known relation INLINEFORM1 , we randomly shuffle the list of distinct triples for INLINEFORM2 , choose 1000 triples and split them into 60% training, 10% validation and 20% test. The remaining 10%, along with the leftover triples (not included in the list of 1000), are added to INLINEFORM3 . For each unknown relation INLINEFORM4 , we remove all triples of INLINEFORM5 from INLINEFORM6 and add them to INLINEFORM7 . In this process, we also randomly choose 20% of the triples as test instances for unknown INLINEFORM8 , which are excluded from INLINEFORM9 . Note that INLINEFORM10 now has at least 10% of the chosen triples for each INLINEFORM11 (known and unknown), so the user is always able to provide clues for both cases. For each labeled dataset, we randomly choose 10% of the entities present in dataset triples, remove triples involving those entities from INLINEFORM12 and add them to INLINEFORM13 . At this point, INLINEFORM14 gets reduced to INLINEFORM15 and is used as INLINEFORM16 for LiLi. The dataset stats in Table 4 show that the base KB (60% triples of INLINEFORM17 ) is highly sparse (compared to the original KB), which makes the inference task much harder. The WordNet dataset being small, we select all 18 relations for evaluation and create the labeled dataset, INLINEFORM18 and INLINEFORM19 following Freebase. Although the user may provide clues 100% of the time, it often cannot respond to MLQs and CLQs (due to lack of required triples/facts). Thus, we further enrich INLINEFORM20 with external KB triples.
Given a relation INLINEFORM0 and an observed triple ( INLINEFORM1 , INLINEFORM2 , INLINEFORM3 ) in training or testing, the pair ( INLINEFORM4 , INLINEFORM5 ) is regarded as a +ve instance for INLINEFORM6 . Following BIBREF18 , for each +ve instance ( INLINEFORM7 , INLINEFORM8 ), we generate two negative ones, one by randomly corrupting the source INLINEFORM9 , and the other by corrupting the target INLINEFORM10 . Note that, the test triples are not in INLINEFORM11 or INLINEFORM12 and none of the -ve instances overlap with the +ve ones.
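The negative-instance generation can be sketched as follows (our illustration; entity sampling is uniform and collision checks are simplified):

import random

def generate_negatives(pos_pairs, entities, seed=7):
    random.seed(seed)
    positives, negatives = set(pos_pairs), []
    for s, t in pos_pairs:
        for corrupt_source in (True, False):
            for _ in range(100):               # retry a few times to avoid known positives
                e = random.choice(entities)
                cand = (e, t) if corrupt_source else (s, e)
                if cand not in positives:
                    negatives.append(cand)
                    break
    return negatives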
Baselines. As none of the existing KBC methods can solve the OKBC problem, we choose various versions of LiLi as baselines.
Single: Version of LiLi where we train a single prediction model INLINEFORM0 for all test relations.
Sep: We do not transfer (past learned) weights for initializing INLINEFORM0 , i.e., we disable LL.
F-th: Here, we use a fixed prediction threshold of 0.5 instead of the relation-specific threshold INLINEFORM0 .
BG: The missing or connecting links (when the user does not respond) are filled with “@-RelatedTo-@" blindly, with no guessing mechanism.
w/o PTS: LiLi does not ask for additional clues via past task selection for skillset improvement.
Evaluation Metrics. To evaluate the strategy formulation ability, we introduce a measure called Coverage( INLINEFORM0 ), defined as the fraction of total query data instances, for which LiLi has successfully formulated strategies that lead to winning. If LiLi wins on all episodes for a given dataset, INLINEFORM1 is 1.0. To evaluate the predictive performance, we use Avg. MCC and avg. +ve F1 score.
Results and Analysis
Evaluation-I: Strategy Formulation Ability. Table 5 shows the list of inference strategies formulated by LiLi for various INLINEFORM0 and INLINEFORM1 , which control the strategy formulation of LiLi. When INLINEFORM2 , LiLi cannot interact with the user and works like a closed-world method. Thus, INLINEFORM3 drops significantly (0.47). When INLINEFORM4 , i.e., with only one interaction per query, LiLi acquires knowledge well for instances where either of the entities or the relation is unknown. However, as one unknown entity may appear in multiple test triples, once the entity becomes known, LiLi does not need to ask for it again and can perform inference on future triples, causing a significant increase in INLINEFORM5 (0.97). When INLINEFORM6 , LiLi is able to perform inference on all instances and INLINEFORM7 becomes 1. For INLINEFORM8 , LiLi uses INLINEFORM9 only once (as only one MLQ satisfies INLINEFORM10 ) compared to INLINEFORM11 . In summary, LiLi’s RL-model can effectively formulate query-specific inference strategies (based on specified parameter values).

Evaluation-II: Predictive Performance. Table 6 shows the comparative performance of LiLi with baselines. To judge the overall improvements, we performed a paired t-test considering +ve F1 scores on each relation as paired data. Considering both KBs and all relation types, LiLi outperforms Sep with INLINEFORM12 . If we set INLINEFORM13 (training with very few clues), LiLi outperforms Sep with INLINEFORM14 on Freebase considering MCC. Thus, the lifelong learning mechanism is effective in transferring helpful knowledge. The Single model performs better than Sep for unknown relations due to the sharing of knowledge (weights) across tasks. However, for known relations, performance drops because, as a new relation arrives in the system, old weights get corrupted and catastrophic forgetting occurs. For unknown relations, as the relations are evaluated just after training, there is no chance for catastrophic forgetting. The performance improvement ( INLINEFORM15 ) of LiLi over F-th on Freebase signifies that the relation-specific threshold INLINEFORM16 works better than the fixed threshold 0.5 because, if all prediction values for test instances lie above (or below) 0.5, F-th predicts all instances as +ve (-ve), which degrades its performance. Due to the utilization of the contextual similarity (highly correlated with class labels) of entity-pairs, LiLi’s guessing mechanism works better ( INLINEFORM17 ) than blind guessing (BG). The past task selection mechanism of LiLi also improves its performance over w/o PTS, as it acquires more clues during testing for poorly performed tasks (evaluated on the validation set). For Freebase, due to a large number of past tasks [9 (25% of 38)], the performance difference is more significant ( INLINEFORM18 ). For WordNet, the number is relatively small [3 (25% of 14)] and hence, the difference is not significant.
Evaluation-III: User Interaction vs. Performance. Table 7 shows the results of LiLi by varying the clue acquisition rate ( INLINEFORM0 ). We use Freebase for tuning INLINEFORM1 due to its higher number of unknown test relations compared to WordNet. LiLi’s performance improves significantly as it acquires more clues from the user. The results with INLINEFORM2 outperform ( INLINEFORM3 ) those with INLINEFORM4 . Table 8 shows the results of LiLi on user responses to MLQ’s and CLQ’s. Answering MLQ’s and CLQ’s is very hard for simulated users (unlike crowd-sourcing) as often INLINEFORM5 lacks the required triple. Thus, we attempt to analyze how the performance is affected if the user does not respond at all. The results show a clear trend of overall performance improvement when the user responds. However, the improvement is not significant as the simulated user’s query satisfaction rate (1% MLQs and 10% CLQs) is very small. Still, the analysis shows the effectiveness of LiLi’s guessing mechanism and continual learning ability, which help in achieving avg. +ve F1 of 0.57 and 0.62 on FB and WN respectively with minimal participation of the user.
Conclusion
In this paper, we are interested in building a generic engine for continuous knowledge learning in human-machine conversations. We first showed that the problem underlying the engine can be formulated as an open-world knowledge base completion (OKBC) problem. We then proposed a lifelong interactive learning and inference (LiLi) approach to solving the OKBC problem. OKBC is a generalization of KBC. LiLi solves the OKBC problem by first formulating a query-specific inference strategy using RL and then executing it to solve the problem by interacting with the user in a lifelong learning manner. Experimental results showed the effectiveness of LiLi in terms of both predictive quality and strategy formulation ability. We believe that a system with the LiLi approach can serve as a knowledge learning engine for conversations. Our future work will improve LiLi to make it more accurate.
Acknowledgments
This work was supported in part by National Science Foundation (NSF) under grant no. IIS-1407927 and IIS-1650900, and a gift from Huawei Technologies Co Ltd.
Question: What metrics are used to establish that this makes chatbots more knowledgeable and better at learning and conversation?
Answer: Coverage, Avg. MCC and avg. +ve F1 score
Introduction
The irony is a kind of figurative language, which is widely used on social media BIBREF0 . The irony is defined as a clash between the intended meaning of a sentence and its literal meaning BIBREF1 . As an important aspect of language, irony plays an essential role in sentiment analysis BIBREF2 , BIBREF0 and opinion mining BIBREF3 , BIBREF4 .
Although some previous studies focus on irony detection, little attention is paid to irony generation. As ironies can strengthen sentiments and express stronger emotions, we mainly focus on generating ironic sentences. Given a non-ironic sentence, we implement a neural network to transfer it to an ironic sentence and constrain the sentiment polarity of the two sentences to be the same. For example, the input is “I hate it when my plans get ruined", which is negative in sentiment polarity, and the output should be ironic and negative in sentiment as well, such as “I like it when my plans get ruined". The speaker uses “like" to be ironic and express his or her negative sentiment. At the same time, our model can preserve content that is irrelevant to sentiment polarity and irony. According to the categories mentioned in BIBREF5 , irony can be classified into 3 classes: verbal irony by means of a polarity contrast, i.e., sentences containing an expression whose polarity is inverted between the intended and the literal evaluation; other types of verbal irony, i.e., sentences that show no polarity contrast between the literal and intended meaning but are still ironic; and situational irony, i.e., sentences that describe situations that fail to meet some expectations. As ironies in the latter two categories are obscure and hard to understand, we decide to focus only on ironies in the first category in this work. For example, our work can be specifically described as: given a sentence “I hate to be ignored", we train our model to generate an ironic sentence such as “I love to be ignored". Although there is “love" in the generated sentence, the speaker still expresses his or her negative sentiment by irony. We also make some explorations in the transformation from ironic sentences to non-ironic sentences at the end of our work. Because of the lack of previous work and baselines on irony generation, we implement our model based on style transfer. Our work will not only provide the first large-scale irony dataset but also establish our model as a benchmark for irony generation.
Recently, unsupervised style transfer has become a very popular topic. Many state-of-the-art studies try to solve the task with the sequence-to-sequence (seq2seq) framework. There are three main ways to build up models. The first is to learn a latent style-independent content representation and generate sentences with the content representation and another style BIBREF6 , BIBREF7 . The second is to directly transfer sentences from one style to another under the control of classifiers and reinforcement learning BIBREF8 . The third is to remove style attribute words from the input sentence and combine the remaining content with new style attribute words BIBREF9 , BIBREF10 . The first method usually obtains better performances via adversarial training with discriminators. The style-independent content representation, nevertheless, is not easily obtained BIBREF11 , which results in poor performances. The second method is suitable for complex styles which are difficult to model and describe. The model can learn the deep semantic features by itself, but sometimes the model is sensitive to parameters and hard to train. The third method succeeds in preserving content but cannot work for some complex styles such as democratic and republican. Sentences with those styles usually do not have specific style attribute words. Unfortunately, due to the lack of a large irony dataset and the difficulty of modeling ironies, there has been little work trying to generate ironies based on the seq2seq framework, as far as we know. Inspired by methods for style transfer, we decide to implement a specifically designed model based on unsupervised style transfer to explore irony generation.
In this paper, in order to address the lack of irony data, we first crawl over 2M tweets from twitter to build a dataset with 262,755 ironic and 112,330 non-ironic tweets. Then, due to the lack of parallel data, we propose a novel model to transfer non-ironic sentences to ironic sentences in an unsupervised way. As ironic style is hard to model and describe, we implement our model with the control of classifiers and reinforcement learning. Different from other studies in style transfer, the transformation from non-ironic to ironic sentences has to preserve sentiment polarity as mentioned above. Therefore, we not only design an irony reward to control the irony accuracy and implement denoising auto-encoder and back-translation to control content preservation but also design a sentiment reward to control sentiment preservation.
Experimental results demonstrate that our model achieves a high irony accuracy with well-preserved sentiment and content. The contributions of our work are as follows:
Related Work
Style Transfer: As irony is a complicated style and hard to model with some specific style attribute words, we mainly focus on studies without editing style attribute words.
Some studies are trying to disentangle style representation from content representation. In BIBREF12 , authors leverage adversarial networks to learn separate content representations and style representations. In BIBREF13 and BIBREF6 , researchers combine variational auto-encoders (VAEs) with style discriminators.
However, some recent studies BIBREF11 reveal that the disentanglement of content and style representations may not be achieved in practice. Therefore, some other research studies BIBREF9 , BIBREF10 strive to separate content and style by removing stylistic words. Nonetheless, many non-ironic sentences do not have specific stylistic words and as a result, we find it difficult to transfer non-ironic sentences to ironic sentences through this way in practice.
Besides, some other research studies do not disentangle style from content but directly learn representations of sentences. In BIBREF8 , authors propose a dual reinforcement learning framework without separating content and style representations. In BIBREF7 , researchers utilize a machine translation model to learn a sentence representation preserving the meaning of the sentence but reducing stylistic properties. In this method, the quality of generated sentences relies on the performance of classifiers to a large extent. Meanwhile, such models are usually sensitive to parameters and difficult to train. In contrast, we combine a pre-training process with reinforcement learning to build up a stable language model and design special rewards for our task.
Irony Detection: With the development of social media, irony detection becomes a more important task. Methods for irony detection can be mainly divided into two categories: methods based on feature engineering and methods based on neural networks.
As for methods based on feature engineering, In BIBREF1 , authors investigate pragmatic phenomena and various irony markers. In BIBREF14 , researchers leverage a combination of sentiment, distributional semantic and text surface features. Those models rely on hand-crafted features and are hard to implement.
When it comes to methods based on neural networks, long short-term memory (LSTM) BIBREF15 network is widely used and is very efficient for irony detection. In BIBREF16 , a tweet is divided into two segments and a subtract layer is implemented to calculate the difference between two segments in order to determine whether the tweet is ironic. In BIBREF17 , authors utilize a recurrent neural network with Bi-LSTM and self-attention without hand-crafted features. In BIBREF18 , researchers propose a system based on a densely connected LSTM network.
Our Dataset
In this section, we describe how we build our dataset with tweets. First, we crawl over 2M tweets from twitter using GetOldTweets-python. We crawl English tweets from 04/09/2012 to 12/18/2018. We first remove all re-tweets and use langdetect to remove all non-English sentences. Then, we remove hashtags attached at the end of the tweets because they are usually not parts of sentences and will confuse our language model. After that, we utilize Ekphrasis to process tweets. We remove URLs and restore remaining hashtags, elongated words, repeated words, and all-capitalized words. To simplify our dataset, we replace all “ INLINEFORM0 money INLINEFORM1 " and “ INLINEFORM2 time INLINEFORM3 " tokens with the “ INLINEFORM4 number INLINEFORM5 " token when using Ekphrasis. We also delete sentences whose lengths are less than 10 or greater than 40. In order to restore abbreviations, we download an abbreviation dictionary from webopedia and restore abbreviations to normal words or phrases according to the dictionary. Finally, we remove sentences which have more than two rare words (appearing less than three times) in order to constrain the size of the vocabulary, which leaves us with 662,530 sentences after pre-processing.
As neural networks are proved effective in irony detection, we decide to implement a neural classifier in order to classify the sentences into ironic and non-ironic sentences. However, the only high-quality irony dataset we can obtain is the dataset of Semeval-2018 Task 3 and the dataset is pretty small, which will cause overfitting to complex models. Therefore, we just implement a simple one-layer RNN with LSTM cell to classify pre-processed sentences into ironic sentences and non-ironic sentences because LSTM networks are widely used in irony detection. We train the model with the dataset of Semeval-2018 Task 3. After classification, we get 262,755 ironic sentences and 399,775 non-ironic sentences. According to our observation, not all non-ironic sentences are suitable to be transferred into ironic sentences. For example, “just hanging out . watching . is it monday yet" is hard to transfer because it does not have an explicit sentiment polarity. So we remove all interrogative sentences from the non-ironic sentences and only obtain the sentences which have words expressing strong sentiments. We evaluate the sentiment polarity of each word with TextBlob and we view those words with sentiment scores greater than 0.5 or less than -0.5 as words expressing strong sentiments. Finally, we build our irony dataset with 262,755 ironic sentences and 102,330 non-ironic sentences.
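The final filtering of candidate non-ironic sentences could look roughly like the following sketch; detecting interrogative sentences via a trailing question mark is a simplification of ours, and the 0.5 polarity cut-off follows the text:

from textblob import TextBlob

def keep_non_ironic_candidate(sentence):
    if sentence.strip().endswith("?"):
        return False                          # discard interrogative sentences
    for word in sentence.split():
        polarity = TextBlob(word).sentiment.polarity
        if polarity > 0.5 or polarity < -0.5:
            return True                       # contains a word expressing strong sentiment
    return False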
[t] Irony Generation Algorithm
INLINEFORM0 pre-train with auto-encoder Pre-train INLINEFORM1 , INLINEFORM2 with INLINEFORM3 using MLE based on Eq. EQREF16 Pre-train INLINEFORM4 , INLINEFORM5 with INLINEFORM6 using MLE based on Eq. EQREF17 INLINEFORM7 pre-train with back-translation Pre-train INLINEFORM8 , INLINEFORM9 , INLINEFORM10 , INLINEFORM11 with INLINEFORM12 using MLE based on Eq. EQREF19 Pre-train INLINEFORM13 , INLINEFORM14 , INLINEFORM15 , INLINEFORM16 with INLINEFORM17 using MLE based on Eq. EQREF20
INLINEFORM0 train with RL each epoch e = 1, 2, ..., INLINEFORM1 INLINEFORM2 train non-irony2irony with RL INLINEFORM3 in N INLINEFORM4 update INLINEFORM5 , INLINEFORM6 , using INLINEFORM7 based on Eq. EQREF29 INLINEFORM8 back-translation INLINEFORM9 INLINEFORM10 INLINEFORM11 update INLINEFORM12 , INLINEFORM13 , INLINEFORM14 , INLINEFORM15 using MLE based on Eq. EQREF19 INLINEFORM16 train irony2non-irony with RL INLINEFORM17 in I INLINEFORM18 update INLINEFORM19 , INLINEFORM20 , using INLINEFORM21 similar to Eq. EQREF29 INLINEFORM22 back-translation INLINEFORM23 INLINEFORM24 INLINEFORM25 update INLINEFORM26 , INLINEFORM27 , INLINEFORM28 , INLINEFORM29 using MLE based on Eq. EQREF20
Our Method
Given two non-parallel corpora: non-ironic corpus N={ INLINEFORM0 , INLINEFORM1 , ..., INLINEFORM2 } and ironic corpus I={ INLINEFORM3 , INLINEFORM4 , ..., INLINEFORM5 }, the goal of our irony generation model is to generate an ironic sentence from a non-ironic sentence while preserving the content and sentiment polarity of the source input sentence. We implement an encoder-decoder framework where two encoders are utilized to encode ironic sentences and non-ironic sentences respectively and two decoders are utilized to decode ironic sentences and non-ironic sentences from latent representations respectively. In order to enforce a shared latent space, we share two layers on both the encoder side and the decoder side. Our model architecture is illustrated in Figure FIGREF13 . We denote irony encoder as INLINEFORM6 , irony decoder as INLINEFORM7 and non-irony encoder as INLINEFORM8 , non-irony decoder as INLINEFORM9 . Their parameters are INLINEFORM10 , INLINEFORM11 , INLINEFORM12 and INLINEFORM13 .
Our irony generation algorithm is shown in Algorithm SECREF3 . We first pre-train our model using denoising auto-encoder and back-translation to build up language models for both styles (section SECREF14 ). Then we implement reinforcement learning to train the model to transfer sentences from one style to another (section SECREF21 ). Meanwhile, to achieve content preservation, we utilize back-translation for one time in every INLINEFORM0 time steps.
Pretraining
In order to build up our language model and preserve the content, we apply the auto-encoder model. To prevent the model from simply copying the input sentence, we randomly add some noise to the input sentence. Specifically, for every word in the input sentence, there is a 10% chance that we delete it, a 10% chance that we duplicate it, a 10% chance that we swap it with the next word; otherwise it remains unchanged. We first encode the input sentence INLINEFORM0 or INLINEFORM1 with the respective encoder INLINEFORM2 or INLINEFORM3 to obtain its latent representation INLINEFORM4 or INLINEFORM5 and reconstruct the input sentence with the latent representation and the respective decoder. So we can get the reconstruction loss for the auto-encoder INLINEFORM6 : DISPLAYFORM0 DISPLAYFORM1
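The noising scheme can be reproduced almost literally; the sketch below applies, to each word, a 10% chance of deletion, duplication or swapping with the next word:

import random

def add_noise(tokens, p=0.1, seed=None):
    rng = random.Random(seed)
    out, i = [], 0
    while i < len(tokens):
        r = rng.random()
        if r < p:                                   # delete the word
            i += 1
        elif r < 2 * p:                             # duplicate the word
            out.extend([tokens[i], tokens[i]])
            i += 1
        elif r < 3 * p and i + 1 < len(tokens):     # swap with the next word
            out.extend([tokens[i + 1], tokens[i]])
            i += 2
        else:                                       # keep the word unchanged
            out.append(tokens[i])
            i += 1
    return out

print(add_noise("i hate it when my plans get ruined".split(), seed=3))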
In addition to denoising auto-encoder, we implement back-translation BIBREF19 to generate a pseudo-parallel corpus. Suppose our model takes non-ironic sentence INLINEFORM0 as input. We first encode INLINEFORM1 with INLINEFORM2 to obtain its latent representation INLINEFORM3 and decode the latent representation with INLINEFORM4 to get a transferred sentence INLINEFORM5 . Then we encode INLINEFORM6 with INLINEFORM7 and decode its latent representation with INLINEFORM8 to reconstruct the original input sentence INLINEFORM9 . Therefore, our reconstruction loss for back-translation INLINEFORM10 : DISPLAYFORM0
And if our model takes ironic sentence INLINEFORM0 as input, we can get the reconstruction loss for back-translation as: DISPLAYFORM0
Reinforcement Learning
Since the gold transferred result of input is unavailable, we cannot evaluate the quality of the generated sentence directly. Therefore, we implement reinforcement learning and elaborately design two rewards to describe the irony accuracy and sentiment preservation, respectively.
A pre-trained binary irony classifier based on CNN BIBREF20 is used to evaluate how ironic a sentence is. We denote the parameter of the classifier as INLINEFORM0 and it is fixed during the training process.
In order to facilitate the transformation, we design the irony reward as the difference between the irony score of the input sentence and that of the output sentence. Formally, when we input a non-ironic sentence INLINEFORM0 and transfer it to an ironic sentence INLINEFORM1 , our irony reward is defined as: DISPLAYFORM0
where INLINEFORM0 denotes ironic style and INLINEFORM1 is the probability of that a sentence INLINEFORM2 is ironic.
To preserve the sentiment polarity of the input sentence, we also need to use classifiers to evaluate the sentiment polarity of the sentences. However, sentiment analysis differs for ironic and non-ironic sentences. In the case of figurative languages such as irony, sarcasm or metaphor, the sentiment polarity of the literal meaning may differ significantly from that of the intended figurative meaning BIBREF0 . As we aim to train our model to transfer sentences from non-ironic to ironic, using only one classifier is not enough. As a result, we implement two pre-trained sentiment classifiers for non-ironic sentences and ironic sentences respectively. We denote the parameter of the sentiment classifier for ironic sentences as INLINEFORM0 and that of the sentiment classifier for non-ironic sentences as INLINEFORM1 .
A challenge in using two classifiers to evaluate sentiment polarity is that the two classifiers, trained on different datasets, may produce scores with different distributions, so we cannot directly compute the sentiment reward from their raw scores. To alleviate this problem and standardize the predictions of the two classifiers, we set a threshold for each classifier and subtract the respective threshold from the scores it produces to obtain a comparative sentiment polarity score. We obtain the optimal threshold by maximizing the classifier's performance on the distribution of our training data.
We denote the threshold of the ironic sentiment classifier as INLINEFORM0 and the threshold of the non-ironic sentiment classifier as INLINEFORM1 . The standardized sentiment score is defined as INLINEFORM2 and INLINEFORM3 , where INLINEFORM4 denotes positive sentiment polarity and INLINEFORM5 is the probability that a sentence is positive in sentiment polarity.
As mentioned above, the input sentence and the generated sentence should express the same sentiment. For example, if we input the non-ironic sentence “I hate to be ignored", which is negative in sentiment polarity, the generated ironic sentence should also be negative, such as “I love to be ignored". To achieve sentiment preservation, we define the sentiment reward as one minus the absolute difference between the standardized sentiment score of the input sentence and that of the generated sentence. Formally, when we input a non-ironic sentence INLINEFORM0 and transfer it to an ironic sentence INLINEFORM1 , our sentiment reward is defined as: DISPLAYFORM0
To encourage our model to focus on both the irony accuracy and the sentiment preservation, we apply the harmonic mean of irony reward and sentiment reward: DISPLAYFORM0
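A sketch of the standardized sentiment score, the sentiment reward and the combined reward; the classifier functions and thresholds are placeholders, and the unweighted harmonic mean is shown as a simplification of how the harmonic weight enters Eq. 9:

def standardized_sentiment(sentence, p_positive, threshold):
    # subtract the classifier-specific threshold so that scores from the two
    # sentiment classifiers become comparable
    return p_positive(sentence) - threshold

def sentiment_reward(x_input, y_output, p_pos_non, t_non, p_pos_irony, t_irony):
    s_in = standardized_sentiment(x_input, p_pos_non, t_non)        # non-ironic input
    s_out = standardized_sentiment(y_output, p_pos_irony, t_irony)  # ironic output
    return 1.0 - abs(s_in - s_out)

def combined_reward(r_irony, r_sentiment, eps=1e-8):
    # harmonic mean of the two rewards
    return 2.0 * r_irony * r_sentiment / (r_irony + r_sentiment + eps)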
Policy Gradient
The policy gradient algorithm BIBREF21 is a simple but widely-used algorithm in reinforcement learning. It is used to maximize the expected reward INLINEFORM0 . The objective function to minimize is defined as: DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 is the reward of INLINEFORM2 and INLINEFORM3 is the input size.
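A PyTorch-style sketch of this loss; variable names are illustrative and the reward is treated as a constant with respect to the generator parameters:

import torch

def policy_gradient_loss(log_probs, rewards):
    # log_probs: tensor of shape (batch,), log-probability of each sampled output sentence
    # rewards:   tensor of shape (batch,), combined irony/sentiment reward per sample
    # minimizing this quantity maximizes the expected reward
    return -(rewards.detach() * log_probs).mean()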
Training Details
INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 in our model are Transformers BIBREF22 with 4 layers and 2 shared layers. Word embeddings of 128 dimensions are learned during training. The maximum sentence length is set to 40. The optimizer is Adam BIBREF23 with a learning rate of INLINEFORM4 . The batch size is 32 and the harmonic weight INLINEFORM5 in Eq. 9 is 0.5. We set the interval INLINEFORM6 to 200. The model is pre-trained for 6 epochs and then trained with reinforcement learning for 15 epochs.
Irony Classifier: We implement a CNN classifier trained with our irony dataset. All the CNN classifiers we utilize in this paper use the same parameters as BIBREF20 .
Sentiment Classifier for Irony: We first implement a one-layer LSTM network to classify the ironic sentences in our dataset into positive and negative ironies. The LSTM network is trained on the dataset of SemEval 2015 Task 11 BIBREF0 , which targets sentiment analysis of figurative language on Twitter. Then, we use the positive and negative ironies to train the CNN sentiment classifier for irony.
Sentiment Classifier for Non-irony: Similar to the training process of the sentiment classifier for irony, we first implement a one-layer LSTM network, trained on a dataset for sentiment analysis of ordinary (non-ironic) tweets, to classify the non-ironies into positive and negative non-ironies. Then we use the positive and negative non-ironies to train the CNN sentiment classifier for non-irony.
Baselines
We compare our model with the following state-of-the-art generative models:
BackTrans BIBREF7 : In BIBREF7 , authors propose a model using machine translation in order to preserve the meaning of the sentence while reducing stylistic properties.
Unpaired BIBREF10 : In BIBREF10 , researchers implement a method to remove emotional words and add desired sentiment controlled by reinforcement learning.
CrossAlign BIBREF6 : In BIBREF6 , authors leverage refined alignment of latent representations to perform style transfer and a cross-aligned auto-encoder is implemented.
CPTG BIBREF24 : An interpolated reconstruction loss is introduced in BIBREF24 and a discriminator is implemented to control attributes in this work.
DualRL BIBREF8 : In BIBREF8 , researchers use two reinforcement rewards simultaneously to control style accuracy and content preservation.
Evaluation Metrics
In order to evaluate sentiment preservation, we use the absolute difference between the standardized sentiment score of the input sentence and that of the generated sentence, which we call the sentiment delta (senti delta). Besides, we report the sentiment accuracy (Senti ACC), which measures whether the output sentence has the same sentiment polarity as the input sentence based on our standardized sentiment classifiers. The BLEU score BIBREF25 between the input sentences and the output sentences is calculated to evaluate content preservation. To evaluate the overall performance of different models, we also report the geometric mean (G2) and harmonic mean (H2) of the sentiment accuracy and the BLEU score. As for irony accuracy, we only report it in the human evaluation, because irony is complex and human judgment is more reliable for assessing it.
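The overall scores can be computed as below; this is only a sketch, since the exact scaling of the sentiment accuracy and BLEU values is not specified here:

import math

def overall_scores(senti_acc, bleu):
    # senti_acc and bleu are assumed to be on the same scale (e.g. 0-100)
    g2 = math.sqrt(senti_acc * bleu)                  # geometric mean
    h2 = 2.0 * senti_acc * bleu / (senti_acc + bleu)  # harmonic mean
    return g2, h2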
We first sample 50 non-ironic input sentences and the corresponding output sentences of the different models. Then, we ask four annotators who are proficient in English to evaluate the quality of the generated sentences. They are required to rank the output sentences of our model and the baselines from best to worst in terms of irony accuracy (Irony), sentiment preservation (Senti) and content preservation (Content). The best output is ranked 1 and the worst 6, so a smaller human evaluation value indicates a better model.
Results and Discussions
Table TABREF35 shows the automatic evaluation results of the models in the transformation from non-ironic sentences to ironic sentences. Our model obtains the best result in sentiment delta. The DualRL model achieves the highest results on the other metrics, but most of its outputs are almost identical to the input sentences. So although the DualRL system outperforms ours on these metrics, it does not actually transfer the non-ironic sentences into ironic sentences at all. From this perspective, we cannot regard DualRL as an effective model for irony generation. In contrast, our model obtains results close to those of DualRL and, when the irony accuracy discussed below is also considered, achieves a balance among irony accuracy, sentiment preservation, and content preservation.
From the human evaluation results shown in Table TABREF36 , our model obtains the best average rank in irony accuracy. As mentioned above, the DualRL model usually does not change the input sentence and outputs the same sentence, so it is unsurprising that it obtains the best rank in sentiment and content preservation, with ours second. These results demonstrate that our model, instead of changing nothing, transfers the style of the input sentence while preserving content and sentiment.
Case Study
In this section, we present some example outputs of the different models. Table TABREF37 shows the results of the transformation from non-ironic sentences to ironic sentences. We can observe that: (1) The BackTrans, Unpaired, CrossAlign and CPTG systems tend to generate sentences that move towards irony but do not preserve content. (2) The DualRL system preserves content and sentiment very well but often does not change the input sentence at all. (3) Our model considers both aspects and achieves a better balance among irony accuracy, sentiment preservation and content preservation.
Error Analysis
Although our model outperforms other style transfer baselines according to automatic and human evaluation results, there are still some failure cases because irony generation is still a very challenging task. We would like to share the issues we meet during our experiments and our solutions to some of them in this section.
No Change: As mentioned above, many style transfer models, such as DualRL, tend to make few changes to the input sentence and output the same sentence. This is a common issue for unsupervised style transfer systems, and we encountered it in our experiments as well. The main reason is that the rewards for content preservation are too prominent and the rewards for style accuracy cannot work well. On the other hand, in order to guarantee the readability and fluency of the output sentence, we cannot place too much emphasis on the rewards for style accuracy either, because doing so may cause other issues such as the word repetition described below. One way to mitigate the problem is to tune the hyperparameters, which is the approach we take in this work. As for content preservation, MLE-based methods such as back-translation may not be enough because they tend to force models to generate specific words. In the future, we should design more suitable methods to control content preservation for models that do not disentangle style and content representations, such as DualRL and ours.
Word Repetition: During our experiments, we observe that some outputs tend to repeat the same word, as shown in Table TABREF38 . This happens because the reinforcement learning rewards encourage the model to generate words that receive high scores from the classifiers, and even back-translation cannot prevent it. Our solution is to lower the probability of decoding a word at test time if it has already been generated at previous time steps. We also tried applying this method during training but obtained worse performance, because it limits the effect of training. Some previous studies use language models to control the fluency of the output sentence, and we tried this method as well. Nonetheless, pre-training a language model on tweets and using it to generate rewards is difficult because tweets are casual and noisy; rewards from such a language model are usually inaccurate and may confuse the model. In the future, we should develop better methods that model language fluency together with irony accuracy, sentiment and content preservation, especially for tweets.
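The test-time repetition heuristic can be sketched as a simple penalty over the decoder's next-word distribution; the penalty factor is an illustrative assumption:

def penalize_repeats(word_probs, generated_ids, penalty=0.5):
    # word_probs: list of next-word probabilities over the vocabulary at this step
    # generated_ids: vocabulary ids already produced at previous time steps
    for idx in set(generated_ids):
        word_probs[idx] *= penalty          # lower the chance of repeating the word
    total = sum(word_probs)
    return [p / total for p in word_probs]  # renormalize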
Improper Words: Because ironic style is hard for our model to learn, it sometimes generates improper words that make the sentence strange. In the example shown in Table TABREF38 , the sentiment word in the input sentence is “wonderful" and the model should change it into a negative word such as “sad" to make the output sentence ironic. However, the model instead changes “friday" and “fifa", which are not related to ironic style. We have not found an effective method to address this issue; stronger models may be needed to learn ironic style better.
Additional Experiments
In this section, we describe some additional experiments on the transformation from ironic sentences to non-ironic sentences. Because ironic sentences can be hard to understand and may cause misunderstanding, our task also covers this direction of transformation.
As shown in Table TABREF46 , we also conduct automatic evaluations, and the conclusions are similar to those for the transformation from non-ironic to ironic sentences. As for the human evaluation results in Table TABREF47 , our model still achieves the second-best results in sentiment and content preservation. Nevertheless, both the DualRL system and ours perform poorly in irony accuracy. The reason may be that the other four baselines tend to generate common and even disfluent sentences that are irrelevant to the input and hard to identify as ironic. Annotators therefore usually mark these outputs as non-ironic, which gives those models better irony-accuracy rankings than DualRL and ours but much poorer results in sentiment and content preservation. Some examples are shown in Table TABREF52 .
Conclusion and Future Work
In this paper, we first systematically define irony generation based on style transfer. Because of the lack of irony data, we make use of Twitter to build a large-scale dataset. In order to control irony accuracy, sentiment preservation and content preservation at the same time, we design a combination of rewards for reinforcement learning and incorporate reinforcement learning with a pre-training process. Experimental results demonstrate that our model outperforms other generative models and that our rewards are effective. Although our model design is effective, there are still many errors, which we analyze systematically. In the future, we are interested in exploring these directions, and our work may extend to other kinds of irony that are more difficult to model.
|
What experiments are conducted?
|
Irony Classifier, Sentiment Classifier for Irony, Sentiment Classifier for Non-irony, transformation from ironic sentences to non-ironic sentences
| 4,600
|
qasper
|
8k
|
Introduction
Sarcasm is defined as “a sharp, bitter, or cutting expression or remark; a bitter gibe or taunt”. As the fields of affective computing and sentiment analysis have gained increasing popularity BIBREF0 , it is a major concern to detect sarcastic, ironic, and metaphoric expressions. Sarcasm, especially, is key for sentiment analysis as it can completely flip the polarity of opinions. Understanding the ground truth, or the facts about a given event, allows for the detection of contradiction between the objective polarity of the event (usually negative) and its sarcastic characteristic by the author (usually positive), as in “I love the pain of breakup”. Obtaining such knowledge is, however, very difficult.
In our experiments, we exposed the classifier to such knowledge extracted indirectly from Twitter. Namely, we used Twitter data crawled in a time period, which likely contain both the sarcastic and non-sarcastic accounts of an event or similar events. We believe that unambiguous non-sarcastic sentences provided the classifier with the ground-truth polarity of those events, which the classifier could then contrast with the opposite estimations in sarcastic sentences. Twitter is a more suitable resource for this purpose than blog posts, because the polarity of short tweets is easier to detect (as all the information necessary to detect polarity is likely to be contained in the same sentence) and because the Twitter API makes it easy to collect a large corpus of tweets containing both sarcastic and non-sarcastic examples of the same event.
Sometimes, however, just knowing the ground truth or simple facts on the topic is not enough, as the text may refer to other events in order to express sarcasm. For example, the sentence “If Hillary wins, she will surely be pleased to recall Monica each time she enters the Oval Office :P :D”, which refers to the 2016 US presidential election campaign and to the events of early 1990's related to the US president Clinton, is sarcastic because Hillary, a candidate and Clinton's wife, would in fact not be pleased to recall her husband's alleged past affair with Monica Lewinsky. The system, however, would need a considerable amount of facts, commonsense knowledge, anaphora resolution, and logical reasoning to draw such a conclusion. In this paper, we will not deal with such complex cases.
Existing works on sarcasm detection have mainly focused on unigrams and the use of emoticons BIBREF1 , BIBREF2 , BIBREF3 , unsupervised pattern mining approach BIBREF4 , semi-supervised approach BIBREF5 and n-grams based approach BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 with sentiment features. Instead, we propose a framework that learns sarcasm features automatically from a sarcasm corpus using a convolutional neural network (CNN). We also investigate whether features extracted using the pre-trained sentiment, emotion and personality models can improve sarcasm detection performance. Our approach uses relatively lower dimensional feature vectors and outperforms the state of the art on different datasets. In summary, the main contributions of this paper are the following:
The rest of the paper is organized as follows: Section SECREF2 proposes a brief literature review on sarcasm detection; Section SECREF4 presents the proposed approach; experimental results and thorough discussion on the experiments are given in Section SECREF5 ; finally, Section SECREF6 concludes the paper.
Related Works
NLP research is gradually evolving from lexical to compositional semantics BIBREF10 through the adoption of novel meaning-preserving and context-aware paradigms such as convolutional networks BIBREF11 , recurrent belief networks BIBREF12 , statistical learning theory BIBREF13 , convolutional multiple kernel learning BIBREF14 , and commonsense reasoning BIBREF15 . But while other NLP tasks have been extensively investigated, sarcasm detection is a relatively new research topic which has gained increasing interest only recently, partly thanks to the rise of social media analytics and sentiment analysis. Sentiment analysis BIBREF16 , together with the growing use of multimodal information BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF14 , is a popular branch of NLP research that aims to understand the sentiment of documents automatically using a combination of machine learning approaches BIBREF21 , BIBREF22 , BIBREF20 , BIBREF23 .
An early work in this field was done by BIBREF6 on a dataset of 6,600 manually annotated Amazon reviews using a kNN-classifier over punctuation-based and pattern-based features, i.e., ordered sequence of high frequency words. BIBREF1 used support vector machine (SVM) and logistic regression over a feature set of unigrams, dictionary-based lexical features and pragmatic features (e.g., emoticons) and compared the performance of the classifier with that of humans. BIBREF24 described a set of textual features for recognizing irony at a linguistic level, especially in short texts created via Twitter, and constructed a new model that was assessed along two dimensions: representativeness and relevance. BIBREF5 used the presence of a positive sentiment in close proximity of a negative situation phrase as a feature for sarcasm detection. BIBREF25 used the Balanced Window algorithm for classifying Dutch tweets as sarcastic vs. non-sarcastic; n-grams (uni, bi and tri) and intensifiers were used as features for classification.
BIBREF26 compared the performance of different classifiers on the Amazon review dataset using the imbalance between the sentiment expressed by the review and the user-given star rating. Features based on frequency (the gap between rare and common words), the written-spoken gap (differences in usage), synonyms (differences in the frequency of synonyms) and ambiguity (the number of words with many synonyms) were used by BIBREF3 for sarcasm detection in tweets. BIBREF9 proposed the use of implicit incongruity and explicit incongruity based features along with lexical and pragmatic features, such as emoticons and punctuation marks. Their method is similar to that of BIBREF5 , except that BIBREF9 also used explicit incongruity features. Their method outperforms the approach of BIBREF5 on two datasets.
BIBREF8 compared the performance of different language-independent features and pre-processing techniques for classifying text as sarcastic or non-sarcastic. The comparison was done over three Twitter datasets in two different languages: two in English, with a balanced and an imbalanced distribution, and the third in Czech. The feature set included n-grams, word-shape patterns, pointedness and punctuation-based features.
In this work, we use features extracted from a deep CNN for sarcasm detection. Some of the key differences between the proposed approach and existing methods include the use of a relatively smaller feature set, automatic feature extraction, the use of deep networks, and the adoption of pre-trained NLP models.
Sentiment Analysis and Sarcasm Detection
Sarcasm detection is an important subtask of sentiment analysis BIBREF27 . Since sarcastic sentences are subjective, they carry sentiment and emotion-bearing information. Most of the studies in the literature BIBREF28 , BIBREF29 , BIBREF9 , BIBREF30 include sentiment features in sarcasm detection with the use of a state-of-the-art sentiment lexicon. Below, we explain how sentiment information is key to express sarcastic opinions and the approach we undertake to exploit such information for sarcasm detection.
In general, most sarcastic sentences contradict the fact. In the sentence “I love the pain present in the breakups" (Figure FIGREF4 ), for example, the word “love" contradicts “pain present in the breakups”, because in general no-one loves to be in pain. In this case, the fact (i.e., “pain in the breakups") and the contradictory statement to that fact (i.e., “I love") express sentiment explicitly. Sentiment shifts from positive to negative but, according to sentic patterns BIBREF31 , the literal sentiment remains positive. Sentic patterns, in fact, aim to detect the polarity expressed by the speaker; thus, whenever the construction “I love” is encountered, the sentence is positive no matter what comes after it (e.g., “I love the movie that you hate”). In this case, however, the sentence carries sarcasm and, hence, reflects the negative sentiment of the speaker.
In another example (Figure FIGREF4 ), the fact, i.e., “I left the theater during the interval", has implicit negative sentiment. The statement “I love the movie" contradicts the fact “I left the theater during the interval"; thus, the sentence is sarcastic. Also in this case the sentiment shifts from positive to negative and hints at the sarcastic nature of the opinion.
The above discussion has made clear that sentiment (and, in particular, sentiment shifts) can largely help to detect sarcasm. In order to include sentiment shifting into the proposed framework, we train a sentiment model for sentiment-specific feature extraction. Training with a CNN helps to combine the local features in the lower layers into global features in the higher layers. We do not make use of sentic patterns BIBREF31 in this paper but we do plan to explore that research direction as a part of our future work. In the literature, it is found that sarcasm is user-specific too, i.e., some users have a particular tendency to post more sarcastic tweets than others. This acts as a primary intuition for us to extract personality-based features for sarcasm detection.
The Proposed Framework
As discussed in the literature BIBREF5 , sarcasm detection may depend on sentiment and other cognitive aspects. For this reason, we incorporate both sentiment and emotion clues in our framework. Along with these, we also argue that personality of the opinion holder is an important factor for sarcasm detection. In order to address all of these variables, we create different models for each of them, namely: sentiment, emotion and personality. The idea is to train each model on its corresponding benchmark dataset and, hence, use such pre-trained models together to extract sarcasm-related features from the sarcasm datasets.
Now, the key research question is: “Do these models help to improve sarcasm detection performance?” The literature shows that they improve performance, but not significantly. Thus, do we need to consider those factors in spotting sarcastic sentences? Aren't n-grams enough for sarcasm detection? Throughout the rest of this paper, we address these questions in detail. The training of each model is done using a CNN. Below, we explain the framework in detail and then discuss the pre-trained models. Figure FIGREF6 presents a visualization of the proposed framework.
General CNN Framework
CNN can automatically extract key features from the training data. It grasps contextual local features from a sentence and, after several convolution operations, it forms a global feature vector out of those local features. CNN does not need the hand-crafted features used in traditional supervised classifiers. Such hand-crafted features are difficult to compute and a good guess for encoding the features is always necessary in order to get satisfactory results. CNN, instead, uses a hierarchy of local features which are important to learn context. The hand-crafted features often ignore such a hierarchy of local features.
Features extracted by CNN can therefore be used instead of hand-crafted features, as they carry more useful information. The idea behind convolution is to take the dot product of a vector of INLINEFORM0 weights INLINEFORM1 , also known as the kernel vector, with each INLINEFORM2 -gram in the sentence INLINEFORM3 to obtain another sequence of features INLINEFORM4 . DISPLAYFORM0
Thus, a max pooling operation is applied over the feature map and the maximum value INLINEFORM0 is taken as the feature corresponding to this particular kernel vector. Similarly, varying kernel vectors and window sizes are used to obtain multiple features BIBREF32 . For each word INLINEFORM1 in the vocabulary, a INLINEFORM2 -dimensional vector representation is given in a look up table that is learned from the data BIBREF33 . The vector representation of a sentence, hence, is a concatenation of vectors for individual words. Similarly, we can have look up tables for other features. One might want to provide features other than words if these features are suspected to be helpful. The convolution kernels are then applied to word vectors instead of individual words.
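A numpy sketch of the convolution and max-over-time pooling described above, for a single kernel of width m applied to a sentence of word embeddings; dimensions and names are illustrative:

import numpy as np

def conv_max_pool(embeddings, kernel, bias=0.0):
    # embeddings: (sentence_length, dim) matrix of word vectors
    # kernel:     (m, dim) weight matrix spanning an m-gram window
    m = kernel.shape[0]
    feature_map = []
    for i in range(embeddings.shape[0] - m + 1):
        window = embeddings[i:i + m]                      # one m-gram
        feature_map.append(np.tanh(np.sum(window * kernel) + bias))
    return max(feature_map)                               # max-over-time pooling

# a full layer uses many kernels of varying widths; each kernel contributes one feature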
We use these features to train higher layers of the CNN, in order to represent bigger groups of words in sentences. We denote the feature learned at hidden neuron INLINEFORM0 in layer INLINEFORM1 as INLINEFORM2 . Multiple features may be learned in parallel in the same CNN layer. The features learned in each layer are used to train the next layer: DISPLAYFORM0
where * indicates convolution and INLINEFORM0 is a weight kernel for hidden neuron INLINEFORM1 and INLINEFORM2 is the total number of hidden neurons. The CNN sentence model preserves the order of words by adopting convolution kernels of gradually increasing sizes that span an increasing number of words and ultimately the entire sentence. As mentioned above, each word in a sentence is represented using word embeddings.
We employ the publicly available word2vec vectors, which were trained on 100 billion words from Google News. The vectors have dimensionality 300 and were trained using the continuous bag-of-words architecture BIBREF33 . Words not present in the set of pre-trained words are initialized randomly. However, while training the neural network, we use non-static representations: the input word vectors are included in the list of parameters to be learned during training.
Two primary reasons motivated us to use non-static channels as opposed to static ones. Firstly, the common presence of informal language and words in tweets results in a relatively large number of randomly initialized word vectors, as these words are unavailable in the word2vec dictionary. Secondly, sarcastic sentences are known to include polarity shifts in sentimental and emotional degrees. For example, “I love the pain present in breakups" is a sarcastic sentence with a significant change in sentiment polarity. As word2vec was not trained to incorporate these nuances, we allow our models to update the embeddings during training in order to capture them. Each sentence is wrapped to a window of INLINEFORM0 , where INLINEFORM1 is the maximum number of words amongst all sentences in the dataset. We use the output of the fully-connected layer of the network as our feature vector.
We have carried out two kinds of experiments: firstly, we used the CNN directly for classification; secondly, we extracted features from the fully-connected layer of the CNN and fed them to an SVM for the final classification. The latter CNN-SVM scheme is quite useful for text classification, as shown by Poria et al. BIBREF18 . We carry out n-fold cross-validation on the dataset using the CNN. In every fold iteration, the output of the fully-connected layer is treated as the training and test features for the final classification using the SVM. ReLU is used as the non-linear activation function of the network. The training settings and network configurations of all CNN models developed in this work are given in Table TABREF12 .
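A sketch of the CNN-SVM scheme, assuming a helper that returns the activations of the CNN's fully-connected layer for a list of sentences; scikit-learn's LinearSVC stands in for the linear SVM:

from sklearn.svm import LinearSVC

def train_cnn_svm(extract_fc_features, train_sentences, train_labels, test_sentences):
    # extract_fc_features(sentences) -> 2-D array of fully-connected-layer
    # activations from the already trained CNN (hypothetical helper)
    X_train = extract_fc_features(train_sentences)
    X_test = extract_fc_features(test_sentences)
    svm = LinearSVC()
    svm.fit(X_train, train_labels)
    return svm.predict(X_test)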
Sentiment Feature Extraction Model
As discussed above, sentiment clues play an important role for sarcastic sentence detection. In our work, we train a CNN (see Section SECREF5 for details) on a sentiment benchmark dataset. This pre-trained model is then used to extract features from the sarcastic datasets. In particular, we use Semeval 2014 BIBREF34 Twitter Sentiment Analysis Dataset for the training. This dataset contains 9,497 tweets out of which 5,895 are positive, 3,131 are negative and 471 are neutral. The fully-connected layer of the CNN used for sentiment feature extraction has 100 neurons, so 100 features are extracted from this pre-trained model. The final softmax determines whether a sentence is positive, negative or neutral. Thus, we have three neurons in the softmax layer.
Emotion Feature Extraction Model
We use the CNN structure as described in Section SECREF5 for emotional feature extraction. As a dataset for extracting emotion-related features, we use the corpus developed by BIBREF35 . This dataset consists of blog posts labeled by their corresponding emotion categories. As emotion taxonomy, the authors used six basic emotions, i.e., Anger, Disgust, Surprise, Sadness, Joy and Fear. In particular, the blog posts were split into sentences and each sentence was labeled. The dataset contains 5,205 sentences labeled by one of the emotion labels. After employing this model on the sarcasm dataset, we obtained a 150-dimensional feature vector from the fully-connected layer. As the aim of training is to classify each sentence into one of the six emotion classes, we used six neurons in the softmax layer.
Personality Feature Extraction Model
Detecting personality from text is a well-known challenging problem. In our work, we use five personality traits described by BIBREF36 , i.e., Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism, sometimes abbreviated as OCEAN (by their first letters). As a training dataset, we use the corpus developed by BIBREF36 , which contains 2,400 essays labeled by one of the five personality traits each.
The fully-connected layer has 150 neurons, which are treated as the features. We concatenate the feature vector of each personality dimension in order to create the final feature vector. Thus, the personality model ultimately extracts a 750-dimensional feature vector (150-dimensional feature vector for each of the five personality traits). This network is replicated five times, one for each personality trait. In particular, we create a CNN for each personality trait and the aim of each CNN is to classify a sentence into binary classes, i.e., whether it expresses a personality trait or not.
Baseline Method and Features
CNN can also be employed on the sarcasm datasets in order to identify sarcastic and non-sarcastic tweets. We term the features extracted from this network baseline features, the method as baseline method and the CNN architecture used in this baseline method as baseline CNN. Since the fully-connected layer has 100 neurons, we have 100 baseline features in our experiment. This method is termed baseline method as it directly aims to classify a sentence as sarcastic vs non-sarcastic. The baseline CNN extracts the inherent semantics from the sarcastic corpus by employing deep domain understanding. The process of using baseline features with other features extracted from the pre-trained model is described in Section SECREF24 .
Experimental Results and Discussion
In this section, we present the experimental results using different feature combinations and compare them with the state of the art. For each feature we show the results using only CNN and using CNN-SVM (i.e., when the features extracted by CNN are fed to the SVM). Macro-F1 measure is used as an evaluation scheme in the experiments.
Sarcasm Datasets Used in the Experiment
This dataset was created by BIBREF8 . The tweets were downloaded from Twitter using #sarcasm as a marker for sarcastic tweets. It is a monolingual English dataset which consists of a balanced distribution of 50,000 sarcastic tweets and 50,000 non-sarcastic tweets.
Since sarcastic tweets are less frequently used BIBREF8 , we also need to investigate the robustness of the selected features and the model trained on these features on an imbalanced dataset. To this end, we used another English dataset from BIBREF8 . It consists of 25,000 sarcastic tweets and 75,000 non-sarcastic tweets.
We have obtained this dataset from The Sarcasm Detector. It contains 120,000 tweets, out of which 20,000 are sarcastic and 100,000 are non-sarcastic. We randomly sampled 10,000 sarcastic and 20,000 non-sarcastic tweets from the dataset. Visualization of both the original and subset data show similar characteristics.
A two-step methodology has been employed in filtering the datasets used in our experiments. Firstly, we identified and removed all the “user", “URL" and “hashtag" references present in the tweets using efficient regular expressions. Special emphasis was given to this step to avoid traces of hashtags, which might trigger the models to provide biased results. Secondly, we used NLTK Twitter Tokenizer to ensure proper tokenization of words along with special symbols and emoticons. Since our deep CNNs extract contextual information present in tweets, we include emoticons as part of the vocabulary. This enables the emoticons to hold a place in the word embedding space and aid in providing information about the emotions present in the sentence.
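A sketch of the two-step filtering; the exact regular expressions used in the paper are not given, so these patterns are illustrative:

import re
from nltk.tokenize import TweetTokenizer

tokenizer = TweetTokenizer()

def preprocess_tweet(text):
    # step 1: strip user, URL and hashtag references
    text = re.sub(r'@\w+', '', text)          # user mentions
    text = re.sub(r'https?://\S+', '', text)  # URLs
    text = re.sub(r'#\w+', '', text)          # hashtags (e.g. to avoid #sarcasm leaking the label)
    # step 2: tokenize words, special symbols and emoticons
    return tokenizer.tokenize(text)

# example: preprocess_tweet("I love Mondays!! :) #sarcasm @someone")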
Merging the Features
Throughout this research, we have carried out several experiments with various feature combinations. For the sake of clarity, we explain below how the features extracted using different models are merged.
In the standard feature merging process, we first extract the features from all deep CNN based feature extraction models and then concatenate them. Afterwards, an SVM is employed on the resulting feature vector.
In another setting, we use the features extracted from the pre-trained models as the static channels of features in the CNN of the baseline method. These features are appended to the hidden layer of the baseline CNN, preceding the final output softmax layer.
For comparison, we have re-implemented the state-of-the-art methods. Since BIBREF9 did not specify the sentiment lexicon they used in their experiments, we used SenticNet BIBREF37 in the re-implementation of their method.
Results on Dataset 1
As shown in Table TABREF29 , for every feature set CNN-SVM outperforms CNN. Following BIBREF6 , we have carried out 5-fold cross-validation on this dataset. The baseline features ( SECREF16 ) perform best among the individual feature sets. Among the pre-trained models, the sentiment model (F1-score: 87.00%) achieves better performance than the other two. Interestingly, when we merge the baseline features with the features extracted by the pre-trained deep NLP models, we only get a 0.11% improvement in F-score. This means the baseline features alone are quite capable of detecting sarcasm. On the other hand, when we combine sentiment, emotion and personality features, we obtain a 90.70% F1-score. This indicates that the pre-trained features are indeed useful for sarcasm detection. We also compare our approach with the best research study conducted on this dataset (Table TABREF30 ). Both the proposed baseline model and the baseline + sentiment + emotion + personality model outperform the state of the art BIBREF9 , BIBREF8 . One important difference with the state of the art is that BIBREF8 used a considerably larger feature vector size ( INLINEFORM0 500,000) than we used in our experiment (1,100). This not only prevents our model from overfitting the data but also speeds up the computation. Thus, we obtain an improvement in overall performance with automatic feature extraction using a relatively lower-dimensional feature space.
In the literature, word n-grams, skipgrams and character n-grams are used as baseline features. According to Ptacek et al. BIBREF8 , these baseline features, along with the other features (sentiment features and part-of-speech based features), produced the best performance. However, Ptacek et al. did not analyze the performance of those other features when they were not used with the baseline features. Pre-trained word embeddings play an important role in the performance of the classifier: when we use randomly generated embeddings, performance drops to 86.23% using all features.
Results on Dataset 2
5-fold cross-validation has been carried out on Dataset 2. Also for this dataset, we get the best accuracy when we use all features. Baseline features have performed significantly better (F1-score: 92.32%) than all other features. Supporting the observations we have made from the experiments on Dataset 1, we see CNN-SVM outperforming CNN on Dataset 2. However, when we use all the features, CNN alone (F1-score: 89.73%) does not outperform the state of the art BIBREF8 (F1-score: 92.37%). As shown in Table TABREF30 , CNN-SVM on the baseline + sentiment + emotion + personality feature set outperforms the state of the art (F1-score: 94.80%). Among the pre-trained models, the sentiment model performs best (F1-score: 87.00%).
Table TABREF29 shows the performance of different feature combinations. The gap between the F1-scores of only baseline features and all features is larger on the imbalanced dataset than the balanced dataset. This supports our claim that sentiment, emotion and personality features are very useful for sarcasm detection, thanks to the pre-trained models. The F1-score using sentiment features when combined with baseline features is 94.60%. On both of the datasets, emotion and sentiment features perform better than the personality features. Interestingly, using only sentiment, emotion and personality features, we achieve 90.90% F1-score.
Results on Dataset 3
Experimental results on Dataset 3 show trends (Table TABREF30 ) similar to those on Dataset 1 and Dataset 2. The highest performance (F1-score 93.30%) is obtained when we combine baseline features with sentiment, emotion and personality features. Here too, CNN-SVM consistently performs better than CNN for every feature combination. The sentiment model is again found to be the best pre-trained model. An F1-score of 84.43% is obtained when we merge sentiment, emotion and personality features.
Dataset 3 is more complex and non-linear in nature compared to the other two datasets. As shown in Table TABREF30 , the methods of BIBREF9 and BIBREF8 perform poorly on this dataset. The TP rate achieved by BIBREF9 is only 10.07%, which means their method suffers badly on complex data. The approach of BIBREF8 also fails to perform well on Dataset 3, achieving 62.37% with a TP rate of 22.15%, which is better than that of BIBREF9 . On the other hand, our proposed model performs consistently well on this dataset, achieving 93.30%.
Testing Generalizability of the Models and Discussions
To test the generalization capability of the proposed approach, we perform training on Dataset 1 and testing on Dataset 3. The F1-score drops dramatically to 33.05%. In order to understand this finding, we visualize each dataset using PCA (Figure FIGREF17 ). The visualization shows that, although Dataset 1 is mostly linearly separable, Dataset 3 is not. A linear kernel that performs well on Dataset 1 fails to provide good performance on Dataset 3. If we use an RBF kernel, it overfits the data and produces worse results than the linear kernel. Similar trends are seen in the performance of the other two state-of-the-art approaches BIBREF9 , BIBREF8 . We therefore also perform training on Dataset 3 and testing on Dataset 1. As expected, better performance is obtained, with an F1-score of 76.78%. However, the other two state-of-the-art approaches fail to perform well in this setting: the method by BIBREF9 obtains an F1-score of 47.32%, and the approach by BIBREF8 achieves a 53.02% F1-score when trained on Dataset 3 and tested on Dataset 1. Below, we discuss this generalizability issue for the models developed or referred to in this paper.
As discussed in the introduction, sarcasm is very much topic-dependent and highly contextual. For example, let us consider the tweet “I am so glad to see Tanzania played very well, I can now sleep well :P". Unless one knows that Tanzania actually did not play well in that game, it is not possible to spot the sarcastic nature of this sentence. Thus, an n-gram based sarcasm detector trained at time INLINEFORM0 may perform poorly on tweets crawled at time INLINEFORM1 (given that there is a considerable gap between these time stamps) because of the diversity of topics (new events occur, new topics are discussed) in the tweets. Sentiment and other contextual clues can help to spot the sarcastic nature of such tweets. A highly positive statement which ends with an emoticon expressing a joke can be sarcastic.
State-of-the-art methods lack this contextual information, which, in our case, we extract using pre-trained sentiment, emotion and personality models. Not only the pre-trained models but also the baseline method (the baseline CNN architecture) performs better than the state-of-the-art models in this generalizability test setting. In our generalizability test, when the pre-trained features are used with the baseline features, we get a 4.19% F1-score improvement over the baseline features alone. On the other hand, when the pre-trained features are used without the baseline features, together they produce a 64.25% F1-score.
Another important fact is that an n-grams model cannot perform well on unseen data unless it is trained on a very large corpus. If most of the n-grams extracted from the unseen data are not in the vocabulary of the already trained n-grams model, in fact, the model will produce a very sparse feature vector representation of the dataset. Instead, we use the word2vec embeddings as the source of the features, as word2vec allows for the computation of similarities between unseen data and training data.
Baseline Features vs Pre-trained Features
Our experimental results show that the baseline features outperform the pre-trained features for sarcasm detection. However, the combination of pre-trained features and baseline features beats either of them alone. This is somewhat counterintuitive, since the experimental results suggest that both feature sets learn almost the same global and contextual features. In particular, the baseline network dominates over the pre-trained networks, as the former learns most of the features learned by the latter. Nonetheless, the combination of baseline and pre-trained classifiers improves the overall performance and generalizability, demonstrating their effectiveness in sarcasm detection. Experimental results show that sentiment and emotion features are the most useful features besides the baseline features (Figure FIGREF36 ). Therefore, in order to reach a better understanding of the relations of the personality features among themselves and with the other pre-trained features, we carried out Spearman correlation testing. The results, displayed in Table TABREF39 , show that those features are highly correlated with each other.
Conclusion
In this work, we developed pre-trained sentiment, emotion and personality models for identifying sarcastic text using CNN, which are found to be very effective for sarcasm detection. In the future, we plan to evaluate the performance of the proposed method on a large corpus and other domain-dependent corpora. Future work will also focus on analyzing past tweets and activities of users in order to better understand their personality and profile and, hence, further improve the disambiguation between sarcastic and non-sarcastic text.
|
What are the network's baseline features?
|
The features extracted from CNN.
| 4,855
|
qasper
|
8k
|
Introduction
Understanding what a question is asking is one of the first steps that humans use to work towards an answer. In the context of question answering, question classification allows automated systems to intelligently target their inference systems to domain-specific solvers capable of addressing specific kinds of questions and problem solving methods with high confidence and answer accuracy BIBREF0 , BIBREF1 .
To date, question classification has primarily been studied in the context of open-domain TREC questions BIBREF2 , with smaller recent datasets available in the biomedical BIBREF3 , BIBREF4 and education BIBREF5 domains. The open-domain TREC question corpus is a set of 5,952 short factoid questions paired with a taxonomy developed by Li and Roth BIBREF6 that includes 6 coarse answer types (such as entities, locations, and numbers), and 50 fine-grained types (e.g. specific kinds of entities, such as animals or vehicles). While a wide variety of syntactic, semantic, and other features and classification methods have been applied to this task, culminating in near-perfect classification performance BIBREF7 , recent work has demonstrated that QC methods developed on TREC questions generally fail to transfer to datasets with more complex questions such as those in the biomedical domain BIBREF3 , likely due in part to the simplicity and syntactic regularity of the questions, and the ability for simpler term-frequency models to achieve near-ceiling performance BIBREF8 . In this work we explore question classification in the context of multiple choice science exams. Standardized science exams have been proposed as a challenge task for question answering BIBREF9 , as most questions contain a variety of challenging inference problems BIBREF10 , BIBREF11 , require detailed scientific and common-sense knowledge to answer and explain the reasoning behind those answers BIBREF12 , and questions are often embedded in complex examples or other distractors. Question classification taxonomies and annotation are difficult and expensive to generate, and because of the unavailability of this data, to date most models for science questions use one or a small number of generic solvers that perform little or no question decomposition BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 . Our long-term interest is in developing methods that intelligently target their inferences to generate both correct answers and compelling human-readable explanations for the reasoning behind those answers. The lack of targeted solving – using the same methods for inferring answers to spatial questions about planetary motion, chemical questions about photosynthesis, and electrical questions about circuit continuity – is a substantial barrier to increasing performance (see Figure FIGREF1 ).
To address this need for developing methods of targeted inference, this work makes the following contributions:
Related work
Question classification typically makes use of a combination of syntactic, semantic, surface, and embedding methods. Syntactic patterns BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 and syntactic dependencies BIBREF3 have been shown to improve performance, while syntactically or semantically important words are often expanding using Wordnet hypernyms or Unified Medical Language System categories (for the medical domain) to help mitigate sparsity BIBREF22 , BIBREF23 , BIBREF24 . Keyword identification helps identify specific terms useful for classification BIBREF25 , BIBREF3 , BIBREF26 . Similarly, named entity recognizers BIBREF6 , BIBREF27 or lists of semantically related words BIBREF6 , BIBREF24 can also be used to establish broad topics or entity categories and mitigate sparsity, as can word embeddings BIBREF28 , BIBREF29 . Here, we empirically demonstrate many of these existing methods do not transfer to the science domain.
The highest performing question classification systems tend to make use of customized rule-based pattern matching BIBREF30 , BIBREF7 , or a combination of rule-based and machine learning approaches BIBREF19 , at the expense of increased model construction time. A recent emphasis on learned methods has shown a large set of CNN BIBREF29 and LSTM BIBREF8 variants achieve similar accuracy on TREC question classification, with these models exhibiting at best small gains over simple term frequency models. These recent developments echo the observations of Roberts et al. BIBREF3 , who showed that existing methods beyond term frequency models failed to generalize to medical domain questions. Here we show that strong performance across multiple datasets is possible using a single learned model.
Due to the cost involved in their construction, question classification datasets and classification taxonomies tend to be small, which can create methodological challenges. Roberts et al. BIBREF3 generated the next-largest dataset from TREC, containing 2,936 consumer health questions classified into 13 question categories. More recently, Wasim et al. BIBREF4 generated a small corpus of 780 biomedical domain questions organized into 88 categories. In the education domain, Godea et al. BIBREF5 collected a set of 1,155 classroom questions and organized these into 16 categories. To enable a detailed study of science domain question classification, here we construct a large-scale challenge dataset that exceeds the size and classification specificity of other datasets, in many cases by nearly an order of magnitude.
Questions and Classification Taxonomy
Questions: We make use of the 7,787 science exam questions of the Aristo Reasoning Challenge (ARC) corpus BIBREF31 , which contains standardized 3rd to 9th grade science questions from 12 US states from the past decade. Each question is a 4-choice multiple choice question. Summary statistics comparing the complexity of ARC and TREC questions are shown in Table TABREF5 .
Taxonomy: Starting with the syllabus for the NY Regents exam, we identified 9 coarse question categories (Astronomy, Earth Science, Energy, Forces, Life Science, Matter, Safety, Scientific Method, Other), then through a data-driven analysis of 3 exam study guides and the 3,370 training questions, expanded the taxonomy to include 462 fine-grained categories across 6 hierarchical levels of granularity. The taxonomy is designed to allow categorizing questions into broad curriculum topics at its coarsest level, while labels at full specificity separate questions into narrow problem domains suitable for targeted inference methods. Because of its size, a subset of the classification taxonomy is shown in Table TABREF6 , with the full taxonomy and class definitions included in the supplementary material.
Annotation: Because of the complexity of the questions, it is possible for one question to bridge multiple categories – for example, a wind power generation question may span both renewable energy and energy conversion. We allow up to 2 labels per question, and found that 16% of questions required multiple labels. Each question was independently annotated by two annotators, with the lead annotator a domain expert in standardized exams. Annotators first independently annotated the entire question set, then questions without complete agreement were discussed until resolution. Before resolution, interannotator agreement (Cohen's Kappa) was INLINEFORM0 = 0.58 at the finest level of granularity, and INLINEFORM1 = 0.85 when considering only the coarsest 9 categories. This is considered moderate to strong agreement BIBREF32 . Based on the results of our error analysis (see Section SECREF21 ), we estimate the overall accuracy of the question classification labels after resolution to be approximately 96%. While the full taxonomy contains 462 fine-grained categories derived from both standardized questions, study guides, and exam syllabi, we observed only 406 of these categories are tested in the ARC question set.
Question Classification on Science Exams
We identified 5 common models in previous work primarily intended for learned classifiers rather than hand-crafted rules. We adapt these models to a multi-label hierarchical classification task by training a series of one-vs-all binary classifiers BIBREF34 , one for each label in the taxonomy. With the exception of the CNN and BERT models, following previous work BIBREF19 , BIBREF3 , BIBREF8 we make use of an SVM classifier using the LIBSvM framework BIBREF35 with a linear kernel. Models are trained and evaluated from coarse to fine levels of taxonomic specificity. At each level of taxonomic evaluation, a set of non-overlapping confidence scores for each binary classifier are generated and sorted to produce a list of ranked label predictions. We evaluate these ranks using Mean Average Precision BIBREF36 . ARC questions are evaluated using the standard 3,370 questions for training, 869 for development, and 3,548 for testing.
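A sketch of the one-vs-all setup and the ranked label predictions; feature extraction is abstracted away, and scikit-learn's LinearSVC is used as a stand-in for the linear-kernel LIBSVM classifiers:

import numpy as np
from sklearn.svm import LinearSVC

def train_one_vs_all(X_train, label_sets, taxonomy_labels):
    # one binary classifier per taxonomy label
    classifiers = {}
    for label in taxonomy_labels:
        y = np.array([1 if label in labels else 0 for labels in label_sets])
        clf = LinearSVC()
        clf.fit(X_train, y)
        classifiers[label] = clf
    return classifiers

def rank_labels(classifiers, x):
    # sort labels by each binary classifier's confidence score
    scores = {label: clf.decision_function(x.reshape(1, -1))[0]
              for label, clf in classifiers.items()}
    return sorted(scores, key=scores.get, reverse=True)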
N-grams, POS, Hierarchical features: A baseline bag-of-words model incorporating both tagged and untagged unigrams and bigrams. We also implement the hierarchical classification feature of Li and Roth BIBREF6 , where for a given question, the output of the classifier at coarser levels of granularity serves as input to the classifier at the current level of granularity.
Dependencies: Bigrams of Stanford dependencies BIBREF37 . For each word, we create one unlabeled bigram for each outgoing link from that word to its dependent BIBREF20 , BIBREF3 .
Question Expansion with Hypernyms: We perform hypernym expansion BIBREF22 , BIBREF19 , BIBREF3 by including WordNet hypernyms BIBREF38 for the root dependency word and the words on its direct outgoing links. WordNet senses are identified using Lesk word-sense disambiguation BIBREF39 , with the question text as context. We implement the heuristic of Van-tu et al. BIBREF24 , where more distant hypernyms receive less weight, as sketched below.
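The hypernym expansion can be sketched with NLTK; the depth-based down-weighting of more distant hypernyms is shown in a simplified, assumed form:

from nltk.wsd import lesk

def hypernym_features(question_tokens, target_word, max_depth=3):
    # disambiguate the target word in the context of the question (Lesk)
    sense = lesk(question_tokens, target_word)
    features = {}
    current, depth = ([sense] if sense is not None else []), 0
    while current and depth < max_depth:
        next_level = []
        for synset in current:
            for hyper in synset.hypernyms():
                features[hyper.name()] = 1.0 / (depth + 1)  # distant hypernyms weigh less
                next_level.append(hyper)
        current, depth = next_level, depth + 1
    return features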
Essential Terms: Though not previously reported for QC, we make use of unigrams of keywords extracted using the Science Exam Essential Term Extractor of Khashabi et al. BIBREF26 . For each keyword, we create one binary unigram feature.
CNN: Kim BIBREF28 demonstrated near state-of-the-art performance on a number of sentence classification tasks (including TREC question classification) by using pre-trained word embeddings BIBREF40 as feature extractors in a CNN model. Lei et al. BIBREF29 showed that 10 CNN variants perform within +/-2% of Kim's BIBREF28 model on TREC QC. We report performance of our best CNN model based on the MP-CNN architecture of Rao et al. BIBREF41 , which works to establish the similarity between question text and the definition text of the question classes. We adapt the MP-CNN model, which uses a “Siamese” structure BIBREF33 , to create separate representations for both the question and the question class. The model then makes use of a triple ranking loss function to minimize the distance between the representations of questions and the correct class while simultaneously maximising the distance between questions and incorrect classes. We optimize the network using the method of Tu BIBREF42 .
BERT-QC (This work): We make use of BERT BIBREF43 , a language model using bidirectional encoder representations from transformers, in a sentence-classification configuration. As the original settings of BERT do not support multi-label classification scenarios, and training a series of 406 binary classifiers would be computationally expensive, we use the duplication method of Tsoumakas et al. BIBREF34 where we enumerate multi-label questions as multiple single-label instances during training by duplicating question text, and assigning each instance one of the multiple labels. Evaluation follows the standard procedure where we generate a list of ranked class predictions based on class probabilities, and use this to calculate Mean Average Precision (MAP) and Precision@1 (P@1). As shown in Table TABREF7 , this BERT-QC model achieves our best question classification performance, significantly exceeding baseline performance on ARC by 0.12 MAP and 13.5% P@1.
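The duplication trick and the Precision@1 computation can be sketched as below; the data structures are illustrative:

def duplicate_multilabel(questions, label_sets):
    # questions:  list of question strings
    # label_sets: list of lists holding the (one or two) gold labels per question
    single_label_instances = []
    for text, labels in zip(questions, label_sets):
        for label in labels:
            single_label_instances.append((text, label))  # one copy per gold label
    return single_label_instances

def precision_at_1(ranked_predictions, label_sets):
    # ranked_predictions: per question, class labels sorted by predicted probability
    hits = sum(1 for ranked, gold in zip(ranked_predictions, label_sets) if ranked[0] in gold)
    return hits / len(label_sets)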
Comparison with Benchmark Datasets
Apart from term frequency methods, question classification methods developed on one dataset generally do not exhibit strong transfer performance to other datasets BIBREF3 . While BERT-QC achieves large gains over existing methods on the ARC dataset, here we demonstrate that BERT-QC also matches state-of-the-art performance on TREC BIBREF6 , while surpassing state-of-the-art performance on the GARD corpus of consumer health questions BIBREF3 and MLBioMedLAT corpus of biomedical questions BIBREF4 . As such, BERT-QC is the first model to achieve strong performance across more than one question classification dataset.
TREC question classification is divided into separate coarse and fine-grained tasks centered around inferring the expected answer types of short open-domain factoid questions. TREC-6 includes 6 coarse question classes (abbreviation, entity, description, human, location, numeric), while TREC-50 expands these into 50 more fine-grained types. TREC question classification methods can be divided into those that learn the question classification task, and those that make use of either hand-crafted or semi-automated syntactic or semantic extraction rules to infer question classes. To date, the best reported accuracy for learned methods is 98.0% by Xia et al. BIBREF8 for TREC-6, and 91.6% by Van-tu et al. BIBREF24 for TREC-50. Madabushi et al. BIBREF7 achieve the highest to-date performance on TREC-50 at 97.2%, using rules that leverage the strong syntactic regularities in the short TREC factoid questions.
We compare the performance of BERT-QC with recently reported performance on this dataset in Table TABREF11 . BERT-QC achieves state-of-the-art performance on fine-grained classification (TREC-50) for a learned model at 92.0% accuracy, and near state-of-the-art performance on coarse classification (TREC-6) at 96.2% accuracy.
Because of the challenges with collecting biomedical questions, the datasets and classification taxonomies tend to be small, and rule-based methods often achieve strong results BIBREF45 . Roberts et al. BIBREF3 created the largest biomedical question classification dataset to date, annotating 2,937 consumer health questions drawn from the Genetic and Rare Diseases (GARD) question database with 13 question types, such as anatomy, disease cause, diagnosis, disease management, and prognoses. Roberts et al. BIBREF3 found these questions largely resistant to learning-based methods developed for TREC questions. Their best model (CPT2), shown in Table TABREF17 , makes use of stemming and lists of semantically related words and cue phrases to achieve 80.4% accuracy. BERT-QC reaches 84.9% accuracy on this dataset, an increase of +4.5% over the best previous model. We also compare performance on the recently released MLBioMedLAT dataset BIBREF4 , a multi-label biomedical question classification dataset with 780 questions labeled using 88 classification types drawn from 133 Unified Medical Language System (UMLS) categories. Table TABREF18 shows BERT-QC exceeds their best model, focus-driven semantic features (FDSF), by +0.05 Micro-F1 and +3% accuracy.
Error Analysis
We performed an error analysis on 50 ARC questions where the BERT-QC system did not predict the correct label, with a summary of major error categories listed in Table TABREF20 .
Associative Errors: In 35% of cases, predicted labels were nearly correct, differing from the correct label only by the finest-grained (leaf) element of the hierarchical label (for example, predicting Matter INLINEFORM0 Changes of State INLINEFORM1 Boiling instead of Matter INLINEFORM2 Changes of State INLINEFORM3 Freezing). The bulk of the remaining errors were due to questions containing highly correlated words with a different class, or classes themselves being highly correlated. For example, a specific question about Weather Models discusses “environments” changing over “millions of years”, where discussions of environments and long time periods tend to be associated with questions about Locations of Fossils. Similarly, a question containing the word “evaporation” could be primarily focused on either Changes of State or the Water Cycle (cloud generation), and must rely on knowledge from the entire question text to determine the correct problem domain. We believe these associative errors are addressable technical challenges that could ultimately lead to increased performance in subsequent models.
Errors specific to the multiple-choice domain: We observed that using both question and all multiple choice answer text produced large gains in question classification performance – for example, BERT-QC performance increases from 0.516 (question only) to 0.654 (question and all four answer candidates), an increase of 0.138 MAP. Our error analysis observed that while this substantially increases QC performance, it changes the distribution of errors made by the system. Specifically, 25% of errors become highly correlated with an incorrect answer candidate, which (we show in Section SECREF5 ) can reduce the performance of QA solvers.
Question Answering with QC Labels
Because erroneous label predictions can correlate with incorrect answers, it is difficult to determine the ultimate benefit a QA model might receive from QC performance reported in isolation. Coupling QA and QC systems can often be laborious – either a large number of independent solvers targeted to specific question types must be constructed BIBREF46 , or an existing single model must be able to productively incorporate question classification information. Here we demonstrate the latter – that a BERT QA model is able to incorporate question classification information through query expansion. BERT BIBREF43 recently demonstrated state-of-the-art performance on benchmark question answering datasets such as SQuAD BIBREF47 , and near human-level performance on SWAG BIBREF48 . Similarly, Pan et al. BIBREF49 demonstrated that BERT achieves the highest accuracy on the most challenging subset of ARC science questions. We make use of a BERT QA model using the same QA paradigm described by Pan et al. BIBREF49 , where QA is modeled as a next-sentence prediction task that predicts the likelihood of a given multiple choice answer candidate following the question text. We evaluate the question text and the text of each multiple choice answer candidate separately, where the answer candidate with the highest probability is selected as the predicted answer for a given question. Performance is evaluated using Precision@1 BIBREF36 . Additional model details and hyperparameters are included in the Appendix.
We incorporate QC information into the QA process by implementing a variant of a query expansion model BIBREF50 . Specifically, for a given {question, QC_label} pair, we expand the question text by concatenating the definition text of the question classification label to the start of the question. We use the top predicted question classification label for each question. Because QC labels are hierarchical, we append the label definition text for each level of the label INLINEFORM0 . An example of this process is shown in Table TABREF23 .
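A minimal sketch of this query expansion step; the hierarchical label separator and the `definitions` lookup table are illustrative assumptions of this sketch.

```python
def expand_question(question_text, qc_label, definitions):
    """Prepend the definition text of each level of the predicted hierarchical
    QC label (e.g. 'MAT -> CHANGES -> BOILING') to the question text.
    `definitions` maps a label prefix to its definition text."""
    levels = qc_label.split(" -> ")
    prefix_parts, expansion = [], []
    for level in levels:
        prefix_parts.append(level)
        key = " -> ".join(prefix_parts)
        if key in definitions:
            expansion.append(definitions[key])
    return " ".join(expansion) + " " + question_text
```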
Figure FIGREF24 shows QA performance using predicted labels from the BERT-QC model, compared to a baseline model that does not contain question classification information. As predicted by the error analysis, while a model trained with question and answer candidate text performs better at QC than a model using question text alone, a large proportion of the incorrect predictions become associated with a negative answer candidate, reducing overall QA performance, and highlighting the importance of evaluating QC and QA models together. When using BERT-QC trained on question text alone, at the finest level of specificity (L6) where overall question classification accuracy is 57.8% P@1, question classification significantly improves QA performance by +1.7% P@1 INLINEFORM0 . Using gold labels shows ceiling QA performance can reach +10.0% P@1 over baseline, demonstrating that as question classification model performance improves, substantial future gains are possible. An analysis of expected gains for a given level of QC performance is included in the Appendix, showing approximately linear gains in QA performance above baseline for QC systems able to achieve over 40% classification accuracy. Below this level, the decreased performance from noise induced by incorrect labels surpasses gains from correct labels.
Hyperparameters: Pilot experiments on both pre-trained BERT-Base and BERT-Large checkpoints showed similar performance benefits at the finest levels of question classification granularity (L6), but the BERT-Large model demonstrated higher overall baseline performance, and larger incremental benefits at lower levels of QC granularity, so we evaluated using that model. We lightly tuned hyperparameters on the development set surrounding those reported by Devlin et al. BIBREF43 , and ultimately settled on parameters similar to their original work, tempered by technical limitations in running the BERT-Large model on available hardware: maximum sequence length = 128, batch size = 16, learning rate: 1e-5. We report performance as the average of 10 runs for each datapoint. The number of epochs was tuned on each run on the development set (to a maximum of 8 epochs), where most models converged to maximum performance within 5 epochs.
Preference for uncorrelated errors in multiple choice question classification: We primarily report QA performance using BERT-QC trained using text from only the multiple choice questions and not their answer candidates. While this model achieved lower overall QC performance compared to the model trained with both question and multiple choice answer candidate text, it achieved slightly higher performance in the QA+QC setting. Our error analysis in Section SECREF21 shows that though models trained on both question and answer text can achieve higher QC performance, when they make QC errors, the errors tend to be highly correlated with an incorrect answer candidate, which can substantially reduce QA performance. This is an important result for question classification in the context of multiple choice exams: correlated noise can substantially reduce QA performance, meaning the kinds of errors a model makes are important, and it is critical to evaluate QC performance together with the QA models that make use of those QC systems.
Related to this result, we provide an analysis of the noise sensitivity of the QA+QC model for different levels of question classification prediction accuracy. Here, we perturb gold question labels by randomly selecting a proportion of questions (between 5% and 40%) and randomly assigning each selected question a different label. Figure FIGREF36 shows that this uncorrelated noise produces roughly linear decreases in performance, and still shows moderate gains at 60% accuracy (40% noise). This suggests that when making errors, making random errors (that are not correlated to incorrect multiple choice answers) is preferable.
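The perturbation procedure can be sketched as follows (a simple illustrative implementation, with a fixed seed for repeatability).

```python
import random

def perturb_labels(gold_labels, label_set, noise=0.2, seed=0):
    """Randomly reassign a proportion `noise` of gold labels to a different
    label, simulating uncorrelated QC errors."""
    rng = random.Random(seed)
    labels = list(gold_labels)
    n_perturb = int(noise * len(labels))
    for i in rng.sample(range(len(labels)), n_perturb):
        alternatives = [l for l in label_set if l != labels[i]]
        labels[i] = rng.choice(alternatives)
    return labels
```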
Training with predicted labels: We observed small gains when training the BERT-QA model with predicted QC labels. We generate predicted labels for the training set using 5-fold cross-validation over only the training questions.
Statistics: We use non-parametric bootstrap resampling to compare baseline (no label) and experimental (QC labeled) runs of the QA+QC experiment. Because the BERT-QA model produces different performance values across successive runs, we perform 10 runs of each condition. We then compute pairwise p-values for each of the 10 no label and QC labeled runs (generating 100 comparisons), then use Fisher's method to combine these into a final statistic.
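A sketch of this significance-testing procedure, assuming per-question 0/1 correctness scores for each run and using SciPy's Fisher combination; clamping zero p-values is an added safeguard of this sketch, not part of the original description.

```python
import numpy as np
from scipy.stats import combine_pvalues

def bootstrap_pvalue(scores_a, scores_b, n_resamples=10000, seed=0):
    """One-sided bootstrap test that condition B outperforms condition A,
    given paired per-question scores (e.g. 0/1 correctness)."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(scores_b) - np.asarray(scores_a)
    count = 0
    for _ in range(n_resamples):
        sample = rng.choice(diffs, size=len(diffs), replace=True)
        if sample.mean() <= 0:
            count += 1
    return max(count, 1) / n_resamples   # clamp to avoid a zero p-value

def combined_significance(baseline_runs, experimental_runs):
    """Pairwise p-values over all run pairs, combined with Fisher's method."""
    pvals = [bootstrap_pvalue(a, b) for a in baseline_runs for b in experimental_runs]
    stat, p = combine_pvalues(pvals, method="fisher")
    return p
```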
Question classification paired with question answering shows statistically significant gains of +1.7% P@1 at L6 using predicted labels, and a ceiling gain of up to +10% P@1 using gold labels. The QA performance graph in Figure FIGREF24 contains two deviations from the expectation of linear gains with increasing specificity, at L1 and L3. Region at INLINEFORM0 : On gold labels, L3 provides small gains over L2, whereas L4 provides large gains over L3. We hypothesize that this is because approximately 57% of question labels belong to the Earth Science or Life Science categories which have much more depth than breadth in the standardized science curriculum, and as such these categories are primarily differentiated from broad topics into detailed problem types at levels L4 through L6. Most other curriculum categories have more breadth than depth, and show strong (but not necessarily full) differentiation at L2. Region at INLINEFORM1 : Predicted performance at L1 is higher than gold performance at L1. We hypothesize this is because we train using predicted rather than gold labels, which provides a boost in performance. Training on gold labels and testing on predicted labels substantially reduces the difference between gold and predicted performance.
Though initial raw interannotator agreement was measured at INLINEFORM0 , to maximize the quality of the annotation the annotators performed a second pass where all disagreements were manually resolved. Table TABREF30 shows question classification performance of the BERT-QC model at 57.8% P@1, meaning 42.2% of the predicted labels were different than the gold labels. The question classification error analysis in Table TABREF20 found that of these 42.2% of erroneous predictions, 10% of errors (4.2% of total labels) were caused by the gold labels being incorrect. This allows us to estimate that the overall quality of the annotation (the proportion of questions that have a correct human authored label) is approximately 96%.
Automating Error Analyses with QC
Detailed error analyses for question answering systems are typically labor intensive, often requiring hours or days to perform manually. As a result, error analyses are typically completed infrequently, in spite of their utility to key decisions in the algorithm or knowledge construction process. Here we show that having access to detailed question classification labels specifying fine-grained problem domains provides a mechanism to automatically generate error analyses in seconds instead of days.
To illustrate the utility of this approach, Table TABREF26 shows the performance of the BERT QA+QC model broken down by specific question classes. This allows automatically identifying a given model's strengths – for example, here questions about Human Health, Material Properties, and Earth's Inner Core are well addressed by the BERT-QA model, and achieve well above the average QA performance of 49%. Similarly, areas of deficit include Changes of State, Reproduction, and Food Chain Processes questions, which see below-average QA performance. The lowest performing class, Safety Procedures, demonstrates that while this model has strong performance in many areas of scientific reasoning, it is worse than chance at answering questions about safety, and would be inappropriate to deploy for safety-critical tasks.
While this analysis is shown at an intermediate (L2) level of specificity for space, more detailed analyses are possible. For example, overall QA performance on Scientific Inference questions is near average (47%), but when increasing granularity to L3 we observe that questions addressing Experiment Design or Making Inferences – challenging questions even for humans – perform poorly (33% and 20%) when answered by the QA system. This allows a system designer to intelligently target problem-specific knowledge resources and inference methods to address deficits in specific areas.
Conclusion
Question classification can enable targeting question answering models, but is challenging to implement with high performance without using rule-based methods. In this work we generate the most fine-grained challenge dataset for question classification, using complex and syntactically diverse questions, and show gains of up to 12% are possible with our question classification model across datasets in open, science, and medical domains. This model is the first demonstration of a question classification model achieving state-of-the-art results across benchmark datasets in open, science, and medical domains. We further demonstrate attending to question type can significantly improve question answering performance, with large gains possible as question classification performance improves. Our error analysis suggests that developing high-precision methods of question classification independent of their recall can offer the opportunity to incrementally make use of the benefits of question classification without suffering the consequences of classification errors on QA performance.
Resources
Our Appendix and supplementary material (available at http://www.cognitiveai.org/explanationbank/) includes data, code, experiment details, and negative results.
Acknowledgements
The authors wish to thank Elizabeth Wainwright and Stephen Marmorstein for piloting an earlier version of the question classification annotation. We thank the Allen Institute for Artificial Intelligence and National Science Foundation (NSF 1815948 to PJ) for funding this work.
Annotation
Classification Taxonomy: The full classification taxonomy is included in separate files, both coupled with definitions, and as a graphical visualization.
Annotation Procedure: Primary annotation took place over approximately 8 weeks. Annotators were instructed to provide up to 2 labels from the full classification taxonomy (462 labels) that were appropriate for each question, and to provide the most specific label available in the taxonomy for a given question. Of the 462 labels in the classification taxonomy, the ARC questions had non-zero counts in 406 question types. Rarely, questions were encountered by annotators that did not clearly fit into a label at the end of the taxonomy, and in these cases the annotators were instructed to choose a more generic label higher up the taxonomy that was appropriate. This occurred when the production taxonomy failed to have specific categories for infrequent questions testing knowledge that is not a standard part of the science curriculum. For example, the question:
Which material is the best natural resource to use for making water-resistant shoes? (A) cotton (B) leather (C) plastic (D) wool
tests a student's knowledge of the water resistance of different materials. Because this is not a standard part of the curriculum, and wasn't identified as a common topic in the training questions, the annotators tag this question as belonging to Matter INLINEFORM0 Properties of Materials, rather than a more specific category.
Questions from the training, development, and test sets were randomly shuffled to counterbalance any learning effects during the annotation procedure, but were presented in grade order (3rd to 9th grade) to reduce context switching (a given grade level tends to use a similar subset of the taxonomy – for example, 3rd grade questions generally do not address Chemical Equations or Newton's 1st Law of Motion).
Interannotator Agreement: To increase quality and consistency, each annotator annotated the entire dataset of 7,787 questions. Two annotators were used, with the lead annotator possessing previous professional domain expertise. Annotation proceeded in a two-stage process, where in stage 1 annotators completed their annotation independently, and in stage 2 each of the questions where the annotators did not have complete agreement were manually resolved by the annotators, resulting in high-quality classification annotation.
Because each question can have up to two labels, we treat each label for a given question as a separate evaluation of interannotator agreement. That is, for questions where both annotators labeled each question as having 1 or 2 labels, we treat this as 1 or 2 separate evaluations of interannotator agreement. For cases where one annotator labeled a question as having 1 label, and the other annotator labeled that same question as having 2 labels, we conservatively treat this as two separate interannotator agreements where one annotator failed to specify the second label and had zero agreement on that unspecified label.
Though the classification procedure was fine-grained compared to other question classification taxonomies, containing an unusually large number of classes (406), overall raw interannotator agreement before resolution was high (Cohen's INLINEFORM0 = 0.58). When labels are truncated to a maximum taxonomy depth of N, raw interannotator agreement increases to INLINEFORM1 = 0.85 at the coarsest (9 class) level (see Table TABREF28 ). This is considered moderate to strong agreement (see McHugh BIBREF32 for a discussion of the interpretation of the Kappa statistic). Based on the results of an error analysis on the question classification system (see Section UID38 ), we estimate that the overall accuracy of the question classification labels after resolution is approximately 96%.
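A simplified sketch of computing agreement at a truncated taxonomy depth, assuming scikit-learn and an illustrative " -> " separator between hierarchy levels; this sketch ignores the multi-label bookkeeping described above.

```python
from sklearn.metrics import cohen_kappa_score

def truncate(label, depth, sep=" -> "):
    """Truncate a hierarchical label to at most `depth` levels."""
    return sep.join(label.split(sep)[:depth])

def kappa_at_depth(labels_a, labels_b, depth):
    """Cohen's kappa after truncating both annotators' labels to `depth`."""
    a = [truncate(l, depth) for l in labels_a]
    b = [truncate(l, depth) for l in labels_b]
    return cohen_kappa_score(a, b)
```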
Annotators disagreed on 3,441 (44.2%) of the questions. Primary sources of disagreement before resolution included each annotator choosing a single category for questions requiring multiple labels (e.g. annotator 1 assigning a label of X, and annotator 2 assigning a label of Y, when the gold label was multilabel X, Y), which was observed in 18% of disagreements. Similarly, we observed annotators choosing similar labels but at different levels of specificity in the taxonomy (e.g. annotator 1 assigning a label of Matter INLINEFORM0 Changes of State INLINEFORM1 Boiling, where annotator 2 assigned Matter INLINEFORM2 Changes of State), which occurred in 12% of disagreements before resolution.
Question Classification
Because of space limitations the question classification results are reported in Table TABREF7 only using Mean Average Precision (MAP). We also include Precision@1 (P@1), the overall accuracy of the highest-ranked prediction for each question classification model, in Table TABREF30 .
CNN: We implemented the CNN sentence classifier of Kim BIBREF28 , which demonstrated near state-of-the-art performance on a number of sentence classification tasks (including TREC question classification) by using pre-trained word embeddings BIBREF40 as feature extractors in a CNN model. We adapted the original CNN non-static model to multi-label classification by changing the fully connected softmax layer to a sigmoid layer, producing a sigmoid output for each label simultaneously. We followed the same parameter settings reported by Kim BIBREF28 except the learning rate, which was tuned based on the development set. Pilot experiments did not show a performance improvement over the baseline model.
Label Definitions: Question terms can be mapped to categories using manual heuristics BIBREF19 . To mitigate sparsity and limit heuristic use, here we generated a feature comparing the cosine similarity of composite embedding vectors BIBREF51 representing question text and category definition text, using pretrained GloVe embeddings BIBREF52 . Pilot experiments showed that performance did not significantly improve.
Question Expansion with Hypernyms (Probase Version): One of the challenges of hypernym expansion BIBREF22 , BIBREF19 , BIBREF3 is determining a heuristic for the termination depth of hypernym expansion, as in Van-tu et al. BIBREF24 . Because science exam questions are often grounded in specific examples (e.g. a car rolling down a hill coming to a stop due to friction), we hypothesized that knowing certain categories of entities can be important for identifying specific question types – for example, observing that a question contains a kind of animal may be suggestive of a Life Science question, where similarly vehicles or materials present in questions may suggest questions about Forces or Matter, respectively. The challenge with WordNet is that key hypernyms can be at very different depths from query terms – for example, “cat” is distance 10 away from living thing, “car” is distance 4 away from vehicle, and “copper” is distance 2 away from material. Choosing a static threshold (or decaying threshold, as in Van-tu et al. BIBREF24 ) will inherently reduce recall and limit the utility of this method of query expansion.
To address this, we piloted a hypernym expansion experiment using the Probase taxonomy BIBREF53 , a collection of 20.7M is-a pairs mined from the web, in place of WordNet. Because the taxonomic pairs in Probase come from use in naturalistic settings, links tend to jump levels in the WordNet taxonomy and be expressed in common forms. For example, INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , are each distance 1 in the Probase taxonomy, and high-frequency (i.e. high-confidence) taxonomic pairs.
Similar to query expansion using WordNet Hypernyms, our pilot experiments did not observe a benefit to using Probase hypernyms over the baseline model. An error analysis suggested that the large number of noisy and out-of-context links present in Probase may have reduced performance, and in response we constructed a list of 710 key hypernym categories, manually filtered from a list of hypernyms seeded using high-frequency words from an in-house corpus of 250 in-domain science textbooks. We also did not observe a benefit to question classification over the baseline model when expanding only to this manually curated list of key hypernyms.
Topic words: We made use of the 77 TREC word lists of Li and Roth BIBREF6 , containing a total of 3,257 terms, as well as an in-house set of 144 word lists on general and elementary science topics mined from the web, such as ANIMALS, VEGETABLES, and VEHICLES, containing a total of 29,059 words. To mitigate sparsity, features take the form of counts for a specific topic – detecting the words turtle and giraffe in a question would provide a count of 2 for the ANIMAL feature. This provides a light form of domain-specific entity and action (e.g. types of changes) recognition. Pilot experiments showed that this wordlist feature did add a modest performance benefit of approximately 2% to question classification accuracy. Taken together with our results on hypernym expansion, this suggests that manually curated wordlists can show modest benefits for question classification performance, but at the expense of substantial effort in authoring or collecting these extensive wordlists.
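The word-list count feature can be sketched in a few lines; the topic names and word sets below are placeholders.

```python
def wordlist_counts(question_tokens, wordlists):
    """Count features over topic word lists, e.g. {'ANIMAL': {'turtle', 'giraffe', ...}}.
    Detecting 'turtle' and 'giraffe' yields a count of 2 for the ANIMAL feature."""
    tokens = [t.lower() for t in question_tokens]
    return {topic: sum(t in words for t in tokens)
            for topic, words in wordlists.items()}
```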
Hyperparameters: For each layer of the class label hierarchy, we tune the hyperparameters based on the development set. We use the pretrained BERT-Base (uncased) checkpoint. We use the following hyperparameters: maximum sequence length = 256, batch size = 16, learning rates: 2e-5 (L1), 5e-5 (L2-L6), epochs: 5 (L1), 25 (L2-L6).
Statistics: We use non-parametric bootstrap resampling to compare the baseline (Li and Roth BIBREF6 model) to all experimental models to determine significance, using 10,000 bootstrap resamples.
Question: What previous methods is their model compared to?
Answer: bag-of-words model, CNN
Introduction
Data annotation is a major bottleneck for the application of supervised learning approaches to many problems. As a result, unsupervised methods that learn directly from unlabeled data are increasingly important. For tasks related to unsupervised syntactic analysis, discrete generative models have dominated in recent years – for example, for both part-of-speech (POS) induction BIBREF0 , BIBREF1 and unsupervised dependency parsing BIBREF2 , BIBREF3 , BIBREF4 . While similar models have had success on a range of unsupervised tasks, they have mostly ignored the apparent utility of continuous word representations evident from supervised NLP applications BIBREF5 , BIBREF6 . In this work, we focus on leveraging and explicitly representing continuous word embeddings within unsupervised models of syntactic structure.
Pre-trained word embeddings from massive unlabeled corpora offer a compact way of injecting a prior notion of word similarity into models that would otherwise treat words as discrete, isolated categories. However, the specific properties of language captured by any particular embedding scheme can be difficult to control, and, further, may not be ideally suited to the task at hand. For example, pre-trained skip-gram embeddings BIBREF7 with small context window size are found to capture the syntactic properties of language well BIBREF8 , BIBREF9 . However, if our goal is to separate syntactic categories, this embedding space is not ideal – POS categories correspond to overlapping interspersed regions in the embedding space, evident in Figure SECREF4 .
In our approach, we propose to learn a new latent embedding space as a projection of pre-trained embeddings (depicted in Figure SECREF5 ), while jointly learning latent syntactic structure – for example, POS categories or syntactic dependencies. To this end, we introduce a new generative model (shown in Figure FIGREF6 ) that first generates a latent syntactic representation (e.g. a dependency parse) from a discrete structured prior (which we also call the “syntax model”), then, conditioned on this representation, generates a sequence of latent embedding random variables corresponding to each word, and finally produces the observed (pre-trained) word embeddings by projecting these latent vectors through a parameterized non-linear function. The latent embeddings can be jointly learned with the structured syntax model in a completely unsupervised fashion.
By choosing an invertible neural network as our non-linear projector, and then parameterizing our model in terms of the projection's inverse, we are able to derive tractable exact inference and marginal likelihood computation procedures so long as inference is tractable in the underlying syntax model. In sec:learn-with-inv we show that this derivation corresponds to an alternate view of our approach whereby we jointly learn a mapping of observed word embeddings to a new embedding space that is more suitable for the syntax model, but include an additional Jacobian regularization term to prevent information loss.
Recent work has sought to take advantage of word embeddings in unsupervised generative models with alternate approaches BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . BIBREF9 build an HMM with Gaussian emissions on observed word embeddings, but they do not attempt to learn new embeddings. BIBREF10 , BIBREF11 , and BIBREF12 extend HMM or dependency model with valence (DMV) BIBREF2 with multinomials that use word (or tag) embeddings in their parameterization. However, they do not represent the embeddings as latent variables.
In experiments, we instantiate our approach using both a Markov-structured syntax model and a tree-structured syntax model – specifically, the DMV. We evaluate on two tasks: part-of-speech (POS) induction and unsupervised dependency parsing without gold POS tags. Experimental results on the Penn Treebank BIBREF13 demonstrate that our approach improves the basic HMM and DMV by a large margin, leading to state-of-the-art results on POS induction, and state-of-the-art results on unsupervised dependency parsing in the difficult training scenario where neither gold POS annotation nor punctuation-based constraints are available.
Model
As an illustrative example, we first present a baseline model for Markov syntactic structure (POS induction) that treats a sequence of pre-trained word embeddings as observations. Then, we propose our novel approach, again using Markov structure, that introduces latent word embedding variables and a neural projector. Lastly, we extend our approach to more general syntactic structures.
Example: Gaussian HMM
We start by describing the Gaussian hidden Markov model introduced by BIBREF9 , which is a locally normalized model with multinomial transitions and Gaussian emissions. Given a sentence of length INLINEFORM0 , we denote the latent POS tags as INLINEFORM1 , observed (pre-trained) word embeddings as INLINEFORM2 , transition parameters as INLINEFORM3 , and Gaussian emission parameters as INLINEFORM4 . The joint distribution of data and latent variables factors as:
DISPLAYFORM0
where INLINEFORM0 is the multinomial transition probability and INLINEFORM1 is the multivariate Gaussian emission probability.
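For concreteness, a small sketch of the Gaussian HMM joint log-probability for a fixed tag sequence, using SciPy; the explicit initial-state term and full covariance matrices are simplifying assumptions of this sketch.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_hmm_log_joint(tags, embeddings, log_pi, log_trans, means, covs):
    """log p(x, z) for a Gaussian HMM: multinomial transitions over latent
    tags z, Gaussian emissions over the observed word embeddings x."""
    logp = log_pi[tags[0]] + multivariate_normal.logpdf(
        embeddings[0], means[tags[0]], covs[tags[0]])
    for i in range(1, len(tags)):
        logp += log_trans[tags[i - 1], tags[i]]
        logp += multivariate_normal.logpdf(embeddings[i], means[tags[i]], covs[tags[i]])
    return logp
```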
While the observed word embeddings do inform this model with a notion of word similarity – lacking in the basic multinomial HMM – the Gaussian emissions may not be sufficiently flexible to separate some syntactic categories in the complex pre-trained embedding space – for example the skip-gram embedding space as visualized in Figure SECREF4 where different POS categories overlap. Next we introduce a new approach that adds flexibility to the emission distribution by incorporating new latent embedding variables.
Markov Structure with Neural Projector
To flexibly model observed embeddings and yield a new representation space that is more suitable for the syntax model, we propose to cascade a neural network as a projection function, deterministically transforming the simple space defined by the Gaussian HMM to the observed embedding space. We denote the latent embedding of the INLINEFORM0 word in a sentence as INLINEFORM1 , and the neural projection function as INLINEFORM2 , parameterized by INLINEFORM3 . In the case of sequential Markov structure, our new model corresponds to the following generative process:
For each time step INLINEFORM0 ,
[noitemsep, leftmargin=*]
Draw the latent state INLINEFORM0
Draw the latent embedding INLINEFORM0
Deterministically produce embedding
INLINEFORM0
The graphical model is depicted in Figure FIGREF6 . The deterministic projection can also be viewed as sampling each observation from a point mass at INLINEFORM0 . The joint distribution of our model is: DISPLAYFORM0
where INLINEFORM0 is a conditional Gaussian distribution, and INLINEFORM1 is the Dirac delta function centered at INLINEFORM2 : DISPLAYFORM0
General Structure with Neural Projector
Our approach can be applied to a broad family of structured syntax models. We denote latent embedding variables as INLINEFORM0 , discrete latent variables in the syntax model as INLINEFORM1 ( INLINEFORM2 ), where INLINEFORM3 are conditioned to generate INLINEFORM4 . The joint probability of our model factors as:
DISPLAYFORM0
where INLINEFORM0 represents the probability of the syntax model, and can encode any syntactic structure – though, its factorization structure will determine whether inference is tractable in our full model. As shown in Figure FIGREF6 , we focus on two syntax models for syntactic analysis in this paper. The first is Markov-structured, which we use for POS induction, and the second is DMV-structured, which we use to learn dependency parses without supervision.
The marginal data likelihood of our model is: DISPLAYFORM0
While the discrete variables INLINEFORM0 can be marginalized out with a dynamic program in many cases, it is generally intractable to marginalize out the latent continuous variables, INLINEFORM1 , for an arbitrary projection INLINEFORM2 in Eq. ( EQREF17 ), which means inference and learning may be difficult. In sec:opt, we address this issue by constraining INLINEFORM3 to be invertible, and show that this constraint enables tractable exact inference and marginal likelihood computation.
Learning & Inference
In this section, we introduce an invertibility condition for our neural projector to tackle the optimization challenge. Specifically, we constrain our neural projector with two requirements: (1) INLINEFORM0 and (2) INLINEFORM1 exists. Invertible transformations have been explored before in independent components analysis BIBREF14 , gaussianization BIBREF15 , and deep density models BIBREF16 , BIBREF17 , BIBREF18 , for unstructured data. Here, we generalize this style of approach to structured learning, and augment it with discrete latent variables ( INLINEFORM2 ). Under the invertibility condition, we derive a learning algorithm and give another view of our approach revealed by the objective function. Then, we present the architecture of a neural projector we use in experiments: a volume-preserving invertible neural network proposed by BIBREF16 for independent components estimation.
Learning with Invertibility
For ease of exposition, we explain the learning algorithm in terms of Markov structure without loss of generality. As shown in Eq. ( EQREF17 ), the optimization challenge in our approach comes from the intractability of the marginalized emission factor INLINEFORM0 . If we can marginalize out INLINEFORM1 and compute INLINEFORM2 , then the posterior and marginal likelihood of our Markov-structured model can be computed with the forward-backward algorithm. We can apply Eq. ( EQREF14 ) and obtain : INLINEFORM3
By applying the change of variables rule to the integral, which allows the integration variable INLINEFORM0 to be replaced by INLINEFORM1 , the marginal emission factor can be computed in closed form when the invertibility condition is satisfied: DISPLAYFORM0
where INLINEFORM0 is a conditional Gaussian distribution, INLINEFORM1 is the Jacobian matrix of function INLINEFORM2 at INLINEFORM3 , and INLINEFORM4 represents the absolute value of its determinant. This Jacobian term is nonzero and differentiable if and only if INLINEFORM5 exists.
Eq. ( EQREF19 ) shows that we can directly calculate the marginal emission distribution INLINEFORM0 . Denoting the marginal data likelihood of the Gaussian HMM as INLINEFORM1 , the log marginal data likelihood of our model can be directly written as: DISPLAYFORM0
where INLINEFORM0 represents the new sequence of embeddings after applying INLINEFORM1 to each INLINEFORM2 . Eq. ( EQREF20 ) shows that the training objective of our model is simply the Gaussian HMM log likelihood with an additional Jacobian regularization term. From this view, our approach can be seen as equivalent to reversely projecting the data through INLINEFORM3 to another manifold INLINEFORM4 that is directly modeled by the Gaussian HMM, with a regularization term. Intuitively, we optimize the reverse projection INLINEFORM5 to modify the INLINEFORM6 space, making it more appropriate for the syntax model. The Jacobian regularization term accounts for the volume expansion or contraction behavior of the projection. Maximizing it can be thought of as preventing information loss. In the extreme case, the Jacobian determinant is equal to zero, which means the projection is non-invertible and thus information is being lost through the projection. Such “information preserving” regularization is crucial during optimization; otherwise, the trivial solution of always projecting data to the same single point to maximize likelihood becomes viable.
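A schematic sketch of this training objective, with the HMM likelihood and the inverse projection left as assumed callables (the volume-preserving projector used later makes the Jacobian term zero, but it is kept here for the general case).

```python
def projected_log_likelihood(hmm_log_likelihood, inverse_projection, sentences):
    """Objective sketch: Gaussian HMM log likelihood of the reversely projected
    embeddings plus the log |det Jacobian| regularizer.
    `inverse_projection(x)` is assumed to return (e, log_det_jacobian)."""
    total = 0.0
    for x in sentences:
        e, log_det = inverse_projection(x)   # e = f^{-1}(x)
        total += hmm_log_likelihood(e) + log_det
    return total
```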
More generally, for an arbitrary syntax model the data likelihood of our approach is: DISPLAYFORM0
If the syntax model itself allows for tractable inference and marginal likelihood computation, the same dynamic program can be used to marginalize out INLINEFORM0 . Therefore, our joint model inherits the tractability of the underlying syntax model.
Invertible Volume-Preserving Neural Net
For the projection we can use an arbitrary invertible function, and given the representational power of neural networks they seem a natural choice. However, calculating the inverse and Jacobian of an arbitrary neural network can be difficult, as it requires that all component functions be invertible and also requires storage of large Jacobian matrices, which is memory intensive. To address this issue, several recent papers propose specially designed invertible networks that are easily trainable yet still powerful BIBREF16 , BIBREF17 , BIBREF19 . Inspired by these works, we use the invertible transformation proposed by BIBREF16 , which consists of a series of “coupling layers”. This architecture is specially designed to guarantee a unit Jacobian determinant (and thus the invertibility property).
From Eq. ( EQREF22 ) we know that only INLINEFORM0 is required for accomplishing learning and inference; we never need to explicitly construct INLINEFORM1 . Thus, we directly define the architecture of INLINEFORM2 . As shown in Figure FIGREF24 , the nonlinear transformation from the observed embedding INLINEFORM3 to INLINEFORM4 represents the first coupling layer. The input in this layer is partitioned into left and right halves of dimensions, INLINEFORM5 and INLINEFORM6 , respectively. A single coupling layer is defined as: DISPLAYFORM0
where INLINEFORM0 is the coupling function and can be any nonlinear form. This transformation satisfies INLINEFORM1 , and BIBREF16 show that its Jacobian matrix is triangular with all ones on the main diagonal. Thus the Jacobian determinant is always equal to one (i.e. volume-preserving) and the invertibility condition is naturally satisfied.
To be sufficiently expressive, we compose multiple coupling layers as suggested in BIBREF16 . Specifically, we exchange the role of left and right half vectors at each layer as shown in Figure FIGREF24 . For instance, from INLINEFORM0 to INLINEFORM1 the left subset INLINEFORM2 is unchanged, while from INLINEFORM3 to INLINEFORM4 the right subset INLINEFORM5 remains the same. Also note that composing multiple coupling layers does not change the volume-preserving and invertibility properties. Such a sequence of invertible transformations from the data space INLINEFORM6 to INLINEFORM7 is also called normalizing flow BIBREF20 .
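An illustrative PyTorch sketch of such additive coupling layers, with the roles of the two halves alternating across layers; the layer sizes and even-dimension assumption are simplifications of this sketch rather than the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """One NICE-style coupling layer: one half of the vector is left unchanged,
    the other half is shifted by a rectified network applied to the unchanged
    half. The Jacobian is triangular with unit diagonal, so the layer is
    volume-preserving and trivially invertible."""
    def __init__(self, dim, update_left):
        super().__init__()
        assert dim % 2 == 0
        half = dim // 2
        self.update_left = update_left
        self.coupling = nn.Sequential(
            nn.Linear(half, half), nn.ReLU(), nn.Linear(half, half))

    def forward(self, x):
        left, right = x.chunk(2, dim=-1)
        if self.update_left:
            left = left + self.coupling(right)
        else:
            right = right + self.coupling(left)
        return torch.cat((left, right), dim=-1)

    def inverse(self, y):
        left, right = y.chunk(2, dim=-1)
        if self.update_left:
            left = left - self.coupling(right)
        else:
            right = right - self.coupling(left)
        return torch.cat((left, right), dim=-1)

def build_projector(dim, n_layers=4):
    """Compose coupling layers, alternating which half is updated."""
    return nn.Sequential(*[AdditiveCoupling(dim, update_left=(i % 2 == 0))
                           for i in range(n_layers)])
```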
Experiments
In this section, we first describe our datasets and experimental setup. We then instantiate our approach with Markov and DMV-structured syntax models, and report results on POS tagging and dependency grammar induction respectively. Lastly, we analyze the learned latent embeddings.
Data
For both POS tagging and dependency parsing, we run experiments on the Wall Street Journal (WSJ) portion of the Penn Treebank. To create the observed data embeddings, we train skip-gram word embeddings BIBREF7 that are found to capture syntactic properties well when trained with small context window BIBREF8 , BIBREF9 . Following BIBREF9 , the dimensionality INLINEFORM0 is set to 100, and the training context window size is set to 1 to encode more syntactic information. The skip-gram embeddings are trained on the one billion word language modeling benchmark dataset BIBREF21 in addition to the WSJ corpus.
General Experimental Setup
For the neural projector, we employ rectified networks as the coupling function INLINEFORM0 following BIBREF16 . We use a rectified network with an input layer, one hidden layer, and linear output units; the number of hidden units is set equal to the number of input units. The number of coupling layers is varied over 4, 8, and 16 for both tasks. We optimize marginal data likelihood directly using Adam BIBREF22 . For both tasks in the fully unsupervised setting, we do not tune the hyper-parameters using supervised data.
Unsupervised POS tagging
For unsupervised POS tagging, we use a Markov-structured syntax model in our approach, which is a popular structure for unsupervised tagging tasks BIBREF9 , BIBREF10 .
Following existing literature, we train and test on the entire WSJ corpus (49208 sentences, 1M tokens). We use 45 tag clusters, the number of POS tags that appear in the WSJ corpus. We train the discrete HMM and the Gaussian HMM BIBREF9 as baselines. For the Gaussian HMM, mean vectors of Gaussian emissions are initialized with the empirical mean of all word vectors plus additive noise. We assume a diagonal covariance matrix for INLINEFORM0 and initialize it with the empirical variance of the word vectors. Following BIBREF9 , the covariance matrix is fixed during training. The multinomial probabilities are initialized as INLINEFORM1 , where INLINEFORM2 . For our approach, we initialize the syntax model and Gaussian parameters with the pre-trained Gaussian HMM. The weights of layers in the rectified network are initialized from a uniform distribution with mean zero and a standard deviation of INLINEFORM3 , where INLINEFORM4 is the input dimension. We evaluate the performance of POS tagging with both Many-to-One (M-1) accuracy BIBREF23 and V-Measure (VM) BIBREF24 . We found that tagging performance is well correlated with training data likelihood, so we use training data likelihood as an unsupervised criterion to select the trained model over 10 random restarts after training for 50 epochs. We repeat this process 5 times and report the mean and standard deviation of performance.
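Many-to-One accuracy can be sketched as follows (V-Measure is available, e.g., as `v_measure_score` in scikit-learn); this is an illustrative implementation, not the evaluation code used here.

```python
from collections import Counter

def many_to_one_accuracy(pred_clusters, gold_tags):
    """M-1 accuracy: map every induced cluster to its most frequent gold tag,
    then score the resulting tagging against the gold tags."""
    mapping = {}
    for cluster in set(pred_clusters):
        golds = [g for c, g in zip(pred_clusters, gold_tags) if c == cluster]
        mapping[cluster] = Counter(golds).most_common(1)[0][0]
    correct = sum(mapping[c] == g for c, g in zip(pred_clusters, gold_tags))
    return correct / len(gold_tags)
```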
We compare our approach with the basic HMM, the Gaussian HMM, and several state-of-the-art systems, including sophisticated HMM variants and clustering techniques with hand-engineered features. The results are presented in Table TABREF32 . Through the introduced latent embeddings and additional neural projection, our approach improves over the Gaussian HMM by 5.4 points in M-1 and 5.6 points in VM. Neural HMM (NHMM) BIBREF10 is a baseline that also learns word representations jointly. Neither their basic model nor their extended Conv version outperforms the Gaussian HMM. Their best model incorporates another LSTM to model long distance dependencies and breaks the Markov assumption, yet our approach still achieves substantial improvement over it without considering more context information. Moreover, our method outperforms the best published result that benefits from hand-engineered features BIBREF27 by 2.0 points on VM.
We found that most tagging errors happen in noun subcategories. Therefore, we perform a one-to-one mapping between gold POS tags and induced clusters and plot the normalized confusion matrix of noun subcategories in Figure FIGREF35 . The Gaussian HMM fails to identify “NN” and “NNS” correctly in most cases, and it often recognizes “NNPS” as “NNP”. In contrast, our approach corrects these errors well.
Unsupervised Dependency Parsing without gold POS tags
For the task of unsupervised dependency parse induction, we employ the Dependency Model with Valence (DMV) BIBREF2 as the syntax model in our approach. DMV is a generative model that defines a probability distribution over dependency parse trees and syntactic categories, generating tokens and dependencies in a head-outward fashion. While, traditionally, DMV is trained using gold POS tags as observed syntactic categories, in our approach, we treat each tag as a latent variable, as described in sec:general-neural.
Most existing approaches to this task are not fully unsupervised since they rely on gold POS tags following the original experimental setup for DMV. This is partially because automatically parsing from words is difficult even when using unsupervised syntactic categories BIBREF29 . However, inducing dependencies from words alone represents a more realistic experimental condition since gold POS tags are often unavailable in practice. Previous work that has trained from words alone often requires additional linguistic constraints (like sentence internal boundaries) BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , acoustic cues BIBREF33 , additional training data BIBREF4 , or annotated data from related languages BIBREF34 . Our approach is naturally designed to train on word embeddings directly, thus we attempt to induce dependencies without using gold POS tags or other extra linguistic information.
Like previous work, we use sections 02-21 of the WSJ corpus as training data and evaluate on section 23. We remove punctuation and train the models on sentences of length INLINEFORM0 ; “head-percolation” rules BIBREF39 are applied to obtain gold dependencies for evaluation. We train basic DMV, extended DMV (E-DMV) BIBREF35 and Gaussian DMV (which treats POS tags as unknown latent variables and generates observed word embeddings directly conditioned on them following a Gaussian distribution) as baselines. Basic DMV and E-DMV are trained with Viterbi EM BIBREF40 on unsupervised POS tags induced from our Markov-structured model described in sec:pos. Multinomial parameters of the syntax model in both Gaussian DMV and our model are initialized with the pre-trained DMV baseline. Other parameters are initialized in the same way as in the POS tagging experiment. The directed dependency accuracy (DDA) is used for evaluation and we report accuracy on sentences of length INLINEFORM1 and all lengths. We train the parser until training data likelihood converges, and report the mean and standard deviation over 20 random restarts.
Our model directly observes word embeddings and does not require gold POS tags during training. Thus, results from related work trained on gold tags are not directly comparable. However, to measure how these systems might perform without gold tags, we run three recent state-of-the-art systems in our experimental setting: UR-A E-DMV BIBREF36 , Neural E-DMV BIBREF11 , and CRF Autoencoder (CRFAE) BIBREF37 . We use unsupervised POS tags (induced from our Markov-structured model) in place of gold tags. We also train basic DMV on gold tags and include several state-of-the-art results on gold tags as reference points.
As shown in Table TABREF39 , our approach is able to improve over the Gaussian DMV by 4.8 points on length INLINEFORM0 and 4.8 points on all lengths, which suggests the additional latent embedding layer and neural projector are helpful. The proposed approach yields, to the best of our knowledge, state-of-the-art performance without gold POS annotation and without sentence-internal boundary information. DMV, UR-A E-DMV, Neural E-DMV, and CRFAE suffer a large decrease in performance when trained on unsupervised tags – an effect also seen in previous work BIBREF29 , BIBREF34 . Since our approach induces latent POS tags jointly with dependency trees, it may be able to learn POS clusters that are more amenable to grammar induction than the unsupervised tags. We observe that CRFAE underperforms its gold-tag counterpart substantially. This may largely be a result of the model's reliance on prior linguistic rules that become unavailable when gold POS tag types are unknown. Many extensions to DMV can be considered orthogonal to our approach – they essentially focus on improving the syntax model. It is possible that incorporating these more sophisticated syntax models into our approach may lead to further improvements.
Sensitivity Analysis
In the above experiments we initialize the structured syntax components with the pre-trained Gaussian or discrete baseline, which proves to be a useful technique for training our deep models. We further study the results with fully random initialization. In the POS tagging experiment, we report the results in Table TABREF48 . While the performance with 4 layers is comparable to the pre-trained Gaussian initialization, deeper projections (8 or 16 layers) result in a dramatic drop in performance. This suggests that the structured syntax model with very deep projections is difficult to train from scratch, and a simpler projection might be a good compromise in the random initialization setting.
In contrast to the Markov-structured model in the POS tagging experiments, our parsing model is quite sensitive to initialization. For example, directed accuracy of our approach on sentences of length INLINEFORM0 is below 40.0 with random initialization. This is consistent with previous work that has noted the importance of careful initialization for DMV-based models, such as the commonly used harmonic initializer BIBREF2 . However, it is not straightforward to apply the harmonic initializer for DMV directly in our model without using some kind of pre-training, since we do not observe gold POS tags.
We investigate the effect of the choice of pre-trained embedding on performance while using our approach. To this end, we additionally include results using fastText embeddings BIBREF41 – which, in contrast with skip-gram embeddings, include character-level information. We set the context window size to 1 and the dimension size to 100 as in the skip-gram training, while keeping other parameters set to their defaults. These results are summarized in Table TABREF50 and Table TABREF51 . While fastText embeddings lead to reduced performance with our model, our approach still yields an improvement over the Gaussian baseline with the new observed embedding space.
Qualitative Analysis of Embeddings
We perform qualitative analysis to understand how the latent embeddings help induce syntactic structures. First we filter out low-frequency words and punctuation in WSJ, and visualize the remaining words (10k) with t-SNE BIBREF42 under different embeddings. We assign each word its most likely gold POS tag in WSJ and color the points according to these gold POS tags.
For our Markov-structured model, we have displayed the embedding space in Figure SECREF5 , where the gold POS clusters are well-formed. Further, we present five example target words and their five nearest neighbors in terms of cosine similarity. As shown in Table TABREF53 , the skip-gram embedding captures both semantic and syntactic aspects to some degree, yet our embeddings are able to focus especially on the syntactic aspects of words, in an unsupervised fashion without using any extra morphological information.
In Figure FIGREF54 we depict the learned latent embeddings with the DMV-structured syntax model. Unlike the Markov structure, the DMV structure maps a large subset of singular and plural nouns to the same overlapping region. However, two clusters of singular and plural nouns are actually separated. Inspecting the two clusters and the overlapping region in Figure FIGREF54 , we find that the nouns in the separated clusters are words that can appear as subjects and, therefore, for which verb agreement is important to model. In contrast, the nouns in the overlapping region are typically objects. This demonstrates that the latent embeddings are focusing on aspects of language that are specifically important for modeling dependency without ever having seen examples of dependency parses. Some previous work has deliberately created embeddings to capture different notions of similarity BIBREF43 , BIBREF44 , but relies on extra morphology or dependency annotations to guide the embedding learning; our approach provides a potential alternative for creating new embeddings guided by a structured syntax model, using only unlabeled text corpora.
Related Work
Our approach is related to flow-based generative models, which were first described in NICE BIBREF16 and have recently received more attention BIBREF17 , BIBREF19 , BIBREF18 . This related work mostly adopts simple (e.g. Gaussian) fixed priors and does not attempt to learn interpretable latent structures. Another related generative model class is variational auto-encoders (VAEs) BIBREF45 , which optimize a lower bound on the marginal data likelihood and can be extended to learn latent structures BIBREF46 , BIBREF47 . Compared with flow-based models, VAEs remove the invertibility constraint but sacrifice the merits of exact inference and exact log likelihood computation, which potentially results in optimization challenges BIBREF48 . Our approach can also be viewed in connection with generative adversarial networks (GANs) BIBREF49 , a likelihood-free framework for learning implicit generative models. However, it is non-trivial for a gradient-based method like GANs to propagate gradients through discrete structures.
Conclusion
In this work, we define a novel generative approach to leverage continuous word representations for unsupervised learning of syntactic structure. Experiments on both POS induction and unsupervised dependency parsing tasks demonstrate the effectiveness of our proposed approach. Future work might explore more sophisticated invertible projections, or recurrent projections that jointly transform the entire input sequence.
Question: Do they evaluate only on English datasets?
Answer: Yes
Introduction
BioASQ is a biomedical document classification, document retrieval, and question answering competition, currently in its seventh year. We provide an overview of our submissions to the semantic question answering task (7b, Phase B) of BioASQ 7 (except for the 'ideal answer' test, in which we did not participate this year). In this task systems are provided with biomedical questions and are required to submit ideal and exact answers to those questions. We have used a BioBERT BIBREF0 based system (see also Bidirectional Encoder Representations from Transformers (BERT) BIBREF1) and fine tuned it for the biomedical question answering task. Our system scored near the top for factoid questions for all the batches of the challenge. More specifically, in the third test batch set, our system achieved the highest ‘MRR’ score for the Factoid Question Answering task. Also, for the List-type question answering task our system achieved the highest recall score in the fourth test batch set. Along with our detailed approach, we present the results for our submissions and also highlight identified downsides for our current approach and ways to improve them in our future experiments. In the last test batch results we placed 4th for List-type questions and 3rd for Factoid-type questions.
The QA task is organized in two phases. Phase A deals with the retrieval of relevant documents, snippets, concepts, and RDF triples, and Phase B deals with exact and ideal answer generation (an ideal answer is a paragraph-sized summary of snippets). Exact answer generation is required for factoid, list, and yes/no type questions.
BioASQ organizers provide the training and testing data. The training data consists of questions, gold standard documents, snippets, concepts, and ideal answers (which we did not use in this paper, but we used last year BIBREF2). The test data is split between phases A and B. The Phase A dataset consists of the questions, unique ids, question types. The Phase B dataset consists of the questions, golden standard documents, snippets, unique ids and question types. Exact answers for factoid type questions are evaluated using strict accuracy (the top answer), lenient accuracy (the top 5 answers), and MRR (Mean Reciprocal Rank) which takes into account the ranks of returned answers. Answers for the list type question are evaluated using precision, recall, and F-measure.
Related Work ::: BioAsq
Sharma et al. BIBREF3 describe a system with a two-stage process for factoid and list type question answering. Their system extracts relevant entities and then runs a supervised classifier to rank the entities. Wiese et al. BIBREF4 propose a neural network based model for the Factoid and List-type question answering tasks. The model is based on FastQA and predicts the answer span in the passage for a given question; it is trained on the SQuAD data set and fine-tuned on the BioASQ data. Dimitriadis et al. BIBREF5 proposed a two-stage process for the Factoid question answering task. Their system uses general purpose tools such as MetaMap and BeCas to identify candidate sentences. These candidate sentences are represented in the form of features and are then ranked by a binary classifier, trained on candidate sentences extracted from relevant questions, snippets and correct answers from the BioASQ challenge. For the factoid question answering task, the highest ‘MRR’ achieved in the 6th edition of the BioASQ competition was ‘0.4325’. Our system is a neural network model based on contextual word embeddings BIBREF1 and achieved an ‘MRR’ score of ‘0.6103’ in one of the test batches for the Factoid Question Answering task.
Related Work ::: A minimum background on BERT
BERT, which stands for "Bidirectional Encoder Representations from Transformers" BIBREF1, is a contextual word embedding model. Given a sentence as input, contextual embeddings for its words are returned. The BERT model was designed so that it can be fine-tuned for 11 different tasks BIBREF1, including question answering. For a question answering task, the question and paragraph (context) are given as input. The BERT convention is that the question text and the paragraph text are separated by a separator token [SEP]. BERT question-answering fine-tuning involves adding a softmax layer that takes contextual word embeddings from BERT as input and learns to identify the answer span present in the paragraph (context). This process is represented in Figure FIGREF4.
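As an illustration of this setup, the following is a minimal sketch (not our training code; the generic bert-base-uncased checkpoint and the Hugging Face transformers API are used only for exposition, and the span-prediction head would still need fine-tuning before its output is meaningful):

import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")

question = "Which drug should be used as an antidote in benzodiazepine overdose?"
context = "Flumazenil is an effective antidote but there is a risk of seizures."

# The tokenizer builds the [CLS] question [SEP] context [SEP] pair described above.
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The span-prediction layer produces start/end logits over every token position;
# the highest-scoring span is decoded back to text.
start = int(torch.argmax(outputs.start_logits))
end = int(torch.argmax(outputs.end_logits))
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1]))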
BERT was originally trained to perform tasks such as masked language modeling and next-sentence prediction. In other words, BERT weights are learned such that context is used in building the representation of the word, not just as a loss function to help learn a context-independent representation. For a detailed understanding of the BERT architecture, please refer to the original BERT paper BIBREF1.
Related Work ::: A minimum background on BERT ::: Comparison of Word Embeddings and Contextual Word Embeddings
A ‘word embedding’ is a learned representation of a word in the form of a vector, where words with similar meanings have similar vector representations. Consider a word embedding model 'word2vec' BIBREF6 trained on a corpus. Word embeddings generated by the model are context independent: the same embedding is returned for a word regardless of where it appears in a sentence and regardless of, e.g., the sentiment of the sentence. In contrast, contextual word embedding models like BERT take the context of the word into consideration.
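To illustrate the difference, here is a small sketch of ours (not part of the original experiments); the sentences and the bert-base-uncased checkpoint are only illustrative:

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def contextual_vector(word, sentence):
    # Last-layer BERT vector for the first occurrence of `word` in `sentence`.
    enc = tokenizer(sentence, return_tensors="pt")
    idx = tokenizer.convert_ids_to_tokens(enc["input_ids"][0]).index(word)
    with torch.no_grad():
        return model(**enc).last_hidden_state[0, idx]

v1 = contextual_vector("bank", "he sat on the bank of the river")
v2 = contextual_vector("bank", "she deposited money at the bank")
# A static word2vec-style embedding would be identical in both sentences;
# the contextual vectors differ, so the similarity is below 1.
print(torch.cosine_similarity(v1, v2, dim=0))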
Related Work ::: Comparison of BERT and Bio-BERT
‘BERT’ and BioBERT are very similar in terms of architecture. The difference is that ‘BERT’ is pretrained on Wikipedia articles, whereas the BioBERT version used in our experiments is pretrained on Wikipedia, PMC and PubMed articles. Therefore the BioBERT model is expected to perform well with biomedical text in terms of generating contextual word embeddings.
The BioBERT model used in our experiments is based on the BERT-Base architecture; BERT-Base has 12 transformer layers whereas BERT-Large has 24, and the contextual word embedding size is 768 for BERT-Base and 1024 for BERT-Large. According to BIBREF1, BERT-Large fine-tuned on the SQuAD 1.1 question answering data BIBREF7 can achieve an F1 score of 90.9 for the question answering task, whereas BERT-Base fine-tuned on the same SQuAD question answering data BIBREF7 achieves an F1 score of 88.5. One downside of the current version of BioBERT is that its word-piece vocabulary is the same as that of the original BERT model: Lee et al. BIBREF0 created BioBERT using the same pre-trained BERT released by Google, and hence the same word-piece vocabulary (vocab.txt), so biomedical jargon is not included. Modifying the word-piece vocabulary (vocab.txt) at this stage would lose compatibility with the original ‘BERT’, hence it is left unmodified.
In future work we would like to build a pre-trained ‘BERT’ model from scratch, pretraining it on a biomedical corpus (PubMed, ‘PMC’) and Wikipedia. Doing so would give us scope to create a word-piece vocabulary that includes biomedical jargon, and there is a chance the model would perform better with biomedical jargon included in the word-piece vocabulary. We will consider this scenario in the future, or wait for the next version of BioBERT.
Experiments: Factoid Question Answering Task
For the Factoid Question Answering task, we fine-tuned BioBERT BIBREF0 with question answering data and added new features. Fig. FIGREF4 shows the architecture of BioBERT fine-tuned for question answering tasks: the input to BioBERT is the word-tokenized question and paragraph (context). Following the ‘BERT’ BIBREF1 convention, the tokens ‘[CLS]’ and ‘[SEP]’ are appended to the tokenized input as illustrated in the figure. The resulting model has a softmax layer for predicting answer span indices in the given paragraph (context). On test data, the fine-tuned model generates $n$-best predictions for each question; that is, $n$ answers are returned as possible answers in decreasing order of confidence, where $n$ is configurable. In this paper, any further mention of the ‘answer returned by the model’ refers to the top answer returned by the model.
Experiments: Factoid Question Answering Task ::: Setup
BioASQ provides the training data, based on previous BioASQ competitions. The training data we considered is the aggregate of all training sets up to the 5th edition of the BioASQ competition. We cleaned the data; that is, question-answer pairs without answers were removed, leaving a total of ‘530’ question-answer pairs. The data was split into train and test sets in the ratio 94 to 6, i.e. ‘495’ pairs for training and ‘35’ for testing.
The original data format is converted to the BERT/BioBERT format, where BioBERT expects the ‘start_index’ of the actual answer. The ‘start_index’ corresponds to the index at which the answer text appears in the paragraph/context. To find the ‘start_index’ we used the built-in python function find(), which returns the lowest index of the actual answer present in the context (paragraph); if the answer is not found, ‘-1’ is returned. A more reliable way of finding the start_index would be, when the paragraph (context) has multiple instances of the answer text, to pick the instance whose surrounding context actually matches what is asked in the question.
Example (Question, Answer and Paragraph from BIBREF8):
Question: Which drug should be used as an antidote in benzodiazepine overdose?
Answer: 'Flumazenil'
Paragraph(context):
"Flumazenil use in benzodiazepine overdose in the UK: a retrospective survey of NPIS data. OBJECTIVE: Benzodiazepine (BZD) overdose (OD) continues to cause significant morbidity and mortality in the UK. Flumazenil is an effective antidote but there is a risk of seizures, particularly in those who have co-ingested tricyclic antidepressants. A study was undertaken to examine the frequency of use, safety and efficacy of flumazenil in the management of BZD OD in the UK. METHODS: A 2-year retrospective cohort study was performed of all enquiries to the UK National Poisons Information Service involving BZD OD. RESULTS: Flumazenil was administered to 80 patients in 4504 BZD-related enquiries, 68 of whom did not have ventilatory failure or had recognised contraindications to flumazenil. Factors associated with flumazenil use were increased age, severe poisoning and ventilatory failure. Co-ingestion of tricyclic antidepressants and chronic obstructive pulmonary disease did not influence flumazenil administration. Seizure frequency in patients not treated with flumazenil was 0.3%".
The actual answer is 'Flumazenil', but the word 'Flumazenil' appears multiple times. The right way to identify the start_index for 'Flumazenil' (the answer) is to find the particular instance of the word 'Flumazenil' that matches the context of the question; in the above example, the 'Flumazenil' highlighted in bold is the instance that matches the question's context. Unfortunately, we could not identify readily available tools that achieve this goal, and we look forward to handling these scenarios effectively in future work (a small sketch of one possible heuristic follows the note below).
Note: The creators of 'SQuAD' BIBREF7 have handled the task of identifying the answer's start_index effectively. But the 'SQuAD' data set is much more general and does not include biomedical question answering data.
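A small sketch of one possible heuristic (ours; the overlap-based tie-breaking goes beyond the plain find() we actually used, and the variable names are illustrative):

def find_start_index(context, answer, question, window=60):
    # Plain find() returns the first occurrence (or -1 if absent), as in our setup.
    first = context.find(answer)
    if first == -1:
        return -1
    # Heuristic tie-breaking: prefer the occurrence whose surrounding words
    # overlap most with the question.
    q_words = set(question.lower().split())
    best, best_score, pos = first, -1, first
    while pos != -1:
        nearby = context[max(0, pos - window): pos + len(answer) + window].lower()
        score = len(q_words & set(nearby.split()))
        if score > best_score:
            best, best_score = pos, score
        pos = context.find(answer, pos + 1)
    return best

question = "Which drug should be used as an antidote in benzodiazepine overdose?"
paragraph = ("Flumazenil use in benzodiazepine overdose in the UK. "
             "Flumazenil is an effective antidote but there is a risk of seizures.")
print(find_start_index(paragraph, "Flumazenil", question))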
Experiments: Factoid Question Answering Task ::: Training and error analysis
During our training with the BioASQ data, the learning rate was set to 3e-5, as mentioned in the BioBERT paper BIBREF0. We started training the model with the 495 available training examples and 35 test examples, setting the number of epochs to 50. With these hyper-parameters, training accuracy (exact match) was 99.3% (overfitting) and test accuracy was only 4%. In the next iteration we reduced the number of epochs to 25; training accuracy fell to 98.5% and test accuracy moved to 5%. We further reduced the number of epochs to 15, and the resulting training accuracy was 70% and test accuracy 15%. In the next iteration we set the number of epochs to 12 and achieved a training accuracy of 57.7% and a test accuracy of 23.3%. We repeated the experiment with 11 epochs and found a training accuracy of 57.7% and a test accuracy of 22%. In the next iteration we set the number of epochs to ‘9’ and found a training accuracy of 48% and a test accuracy of 15%. Hence the optimum number of epochs was taken to be 12.
During our error analysis we found that, on test data, the model tends to return text from the beginning of the context (paragraph) as the answer. On analysing the training data, we found that there are ‘120’ (out of ‘495’) question-answer instances with start_index 0, meaning that roughly 25% of the training instances have the first word(s) of the context (paragraph) as the answer. We removed 70% of those instances in order to make the training data more balanced, leaving ‘411’ question-answer instances in the new training set. This time we obtained the highest test accuracy of 26% at 11 epochs. We submitted our results for BioASQ test batch-2, got a strict accuracy of 32%, and our system stood in 2nd place. Initially the ‘batch size’ hyper-parameter was set to ‘400’; later it was tuned to ‘32’. Although accuracy (exact answer match) remained at 26%, the model generated more concise and better answers at batch size ‘32’; that is, wrong answers were close to the expected answer in a good number of cases.
Example.(from BIBREF8)
Question: Which mutated gene causes Chediak Higashi Syndrome?
Exact Answer: ‘lysosomal trafficking regulator gene’.
The answer returned by a model trained at ‘400’ batch size is ‘Autosomal-recessive complicated spastic paraplegia with a novel lysosomal trafficking regulator’, and from the one trained at ‘32’ batch size is ‘lysosomal trafficking regulator’.
In further experiments, we fine-tuned the BioBERT model with both the ‘SQuAD’ dataset (version 2.0) and the BioASQ training data. For training on ‘SQuAD’, the learning rate and number of epochs were set to ‘3e-3’ and ‘3’ respectively, as mentioned in the paper BIBREF1. The test accuracy of the model rose to 44%. In one more experiment we trained the model only on the ‘SQuAD’ dataset; this time the test accuracy moved to 47%. The reason the model did not perform up to the mark when trained with ‘SQuAD’ alongside BioASQ data could be that, in the formatted BioASQ data, the start_index for the answer is not accurate, which affected the overall accuracy.
Our Systems and Their Performance on Factoid Questions
We have experimented with several systems and their variations, e.g. created by training with specific additional features (see next subsection). Here is their list and short descriptions. Unfortunately we did not pay attention to naming, and the systems evolved between test batches, so the overall picture can only be understood by looking at the details.
When we started the experiments, our objective was to see whether BioBERT and entailment-based techniques can provide value in the context of biomedical question answering. The answer to both questions was yes, qualified by many examples clearly showing the limitations of both methods. Therefore we tried to address some of these limitations using feature engineering, with mixed results: some clear errors got corrected and new errors got introduced, without overall improvement, but convincing us that in future experiments it might be worth trying feature engineering again, especially if more training data were available.
Overall we experimented with several approaches with the following aspects of the systems changing between batches, that is being absent or present:
training on BioAsq data vs. training on SQuAD
using the BioAsq snippets for context vs. using the documents from the provided URLs for context
adding (or not) the LAT, i.e. lexical answer type, feature (see BIBREF9, BIBREF10 and an explanation in the subsection just below).
For Yes/No questions (only) we experimented with the entailment methods.
We will discuss the performance of these models below and in Section 6. But before we do that, let us discuss a feature engineering experiment which eventually produced mixed results, but which we feel is potentially useful in future experiments.
Our Systems and Their Performance on Factoid Questions ::: LAT Feature considered and its impact (slightly negative)
During error analysis we found that, in some cases, the answer returned by the model is far from what is being asked in the question.
Example: (from BIBREF8)
Question: Hy's law measures failure of which organ?
Actual Answer: ‘Liver’.
The answer returned by one of our models was ‘alanine aminotransferase’, which is an enzyme. The model returns an enzyme, when the question asked for the organ name. To address this type of errors, we decided to try the concepts of ‘Lexical Answer Type’ (LAT) and Focus Word, which was used in IBM Watson, see BIBREF11 for overview; BIBREF10 for technical details, and BIBREF9 for details on question analysis. In an example given in the last source we read:
POETS & POETRY: He was a bank clerk in the Yukon before he published "Songs of a Sourdough" in 1907.
The focus is the part of the question that is a reference to the answer. In the example above, the focus is "he".
LATs are terms in the question that indicate what type of entity is being asked for.
The headword of the focus is generally a LAT, but questions often contain additional LATs, and in the Jeopardy! domain, categories are an additional source of LATs.
(...) In the example, LATs are "he", "clerk", and "poet".
Consider, for example, the question "Which plant does oleuropein originate from?" (BIBREF8). The LAT here is ‘plant’. For the BioASQ task we did not need to explicitly distinguish between the focus and the LAT concepts. In this example, the expectation is that the answer returned by the model is a plant. Thus it is conceivable that the cosine distance between the contextual embedding of the word 'plant' in the question and the contextual embedding of the answer present in the paragraph (context) is comparatively low. As a result, the model learns to adjust its weights during the training phase and returns answers with a low cosine distance to the LAT.
We used the Stanford CoreNLP BIBREF12 library to write rules for extracting the lexical answer type present in the question; both part-of-speech (POS) tagging and dependency parsing functionality were used. We incorporated the lexical answer type into one of our systems, UNCC_QA1, in Batch 4. This system underperformed our system FACTOIDS by about 3% in the MRR measure, but corrected errors such as in the example above.
Our Systems and Their Performance on Factoid Questions ::: LAT Feature considered and its impact (slightly negative) ::: Assumptions and rules for deriving lexical answer type.
There are different question types: ‘Which’, ‘What’, ‘When’, ‘How’ etc. Each type of question is handled differently, though there are commonalities among the rules written for the different question types. Question words are identified through the parts-of-speech tags 'WDT', 'WRB' and 'WP'. We assumed that the LAT is a ‘Noun’ and follows the question word. Often it was also a subject (nsubj). This process is illustrated in Fig.FIGREF15.
LAT computation was governed by a few simple rules, e.g. when a question has multiple words that are 'Subjects’ (and ‘Noun’), a word that is in proximity to the question word is considered as ‘LAT’. These rules are different for each "Wh" word.
Namely, when the word immediately following the question word is a Noun, the window size is set to ‘3’. A window size of ‘3’ means we iterate through the next ‘3’ words to check whether any of them is both a Noun and a Subject; if so, that word is considered the ‘LAT’, otherwise the word immediately following the question word is considered the ‘LAT’.
For questions with words ‘Which’ , ‘What’, ‘When’; a Noun immediately following the question word is very often the LAT, e.g. 'enzyme' in Which enzyme is targeted by Evolocumab?. When the word immediately following the question word is not a Noun, e.g. in What is the function of the protein Magt1? the window size is set to ‘5’, and we iterate through the next ‘5’ words (if present) and search for the word that is both Noun and Subject. If present, the word is considered as the ‘LAT’; else, the Noun in close proximity to the question word and following it is returned as the ‘LAT’.
For questions with question words: ‘When’, ‘Who’, ‘Why’, the ’LAT’ is a question word itself. For the word ‘How', e.g. in How many selenoproteins are encoded in the human genome?, we look at the adjective and if we find one, we take it to be the LAT, otherwise the word 'How' is considered as the ‘LAT’.
Perhaps because of using only very simple rules, the accuracy of the ‘LAT’ derivation is 75%; that is, in the remaining 25% of cases the LAT word is identified incorrectly. Worth noting is that the overall performance of the system that used LATs was slightly inferior to the system without LATs, but the types of errors changed. The training used BioBERT with the LAT feature as part of the input string. The errors the LAT feature introduces usually involve finding the wrong element of the correct type, e.g. the wrong enzyme when two similar enzymes are described in the text, or 'neuron' when the question asks about a type of cell with a certain function but the answer calls for a different cell category (adipocytes) and both are mentioned in the text. We feel that with more data and additional tuning, or perhaps using an ensemble model, we might be able to keep the correct answers and improve the results on confusing examples like the one mentioned above. Therefore, if we improve our ‘LAT’ derivation logic, or have larger datasets, then perhaps the neural network techniques will yield better results.
Our Systems and Their Performance on Factoid Questions ::: Impact of Training using BioAsq data (slightly negative)
Training on BioASQ data in our entries in Batch 1 and Batch 2, under the name QA1, showed that it might lead to overfitting. This happened both with (Batch 2) and without (Batch 1) hyperparameter tuning: an abysmal 18% MRR in Batch 1, and a slightly better 40% in Batch 2 (although in Batch 2 this was overall the second best result in MRR, it was 16% lower than the highest score).
In Batch 3 (only), our UNCC_QA3 system was fine-tuned on BioASQ and SQuAD 2.0 BIBREF7; for data preprocessing, the context paragraph was generated from the relevant snippets provided in the test data. This system underperformed, by about 2% in MRR, our other entry UNCC_QA1, which was also the overall category winner for this batch. The latter was also trained on SQuAD, but not on BioASQ. We suspect that the reason could be the simplistic nature of the find() function described in Section 3.1. So, this could be an area where a better algorithm for finding the best occurrence of an entity could improve performance.
Our Systems and Their Performance on Factoid Questions ::: Impact of Using Context from URLs (negative)
In some experiments, for context in testing, we used documents for which URL pointers are provided in BioAsq. However, our system UNCC_QA3 underperformed our other system tested only on the provided snippets.
In Batch 5 the underperformance was about 6% of MRR, compared to our best system UNCC_QA1, and by 9% to the top performer.
Performance on Yes/No and List questions
Our work focused on Factoid questions. But we also have done experiments on List-type and Yes/No questions.
Performance on Yes/No and List questions ::: Entailment improves Yes/No accuracy
We started by always answering YES (in batches 2 and 3) to get the baseline performance. For batch 4 we used entailment. Our algorithm was very simple: given a question, we iterate through the candidate sentences and try to find any candidate sentence contradicting the question (with confidence over 50%); if one is found, 'No' is returned as the answer, else 'Yes' is returned. In batch 4 this strategy produced better than the BioASQ baseline performance, and compared to our other systems, the use of entailment increased performance by about 13% (macro F1 score). We used the 'AllenNLP' BIBREF13 entailment library to test entailment of the candidate sentences against the question.
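A sketch of this procedure (ours; the model archive name and the [entailment, contradiction, neutral] label ordering are assumptions to be checked against the installed AllenNLP version, while the decision rule itself follows the description above):

from allennlp.predictors.predictor import Predictor

# Illustrative path to a pretrained textual entailment archive (e.g. decomposable attention).
predictor = Predictor.from_path("decomposable-attention-elmo.tar.gz")

def answer_yes_no(question, candidate_sentences, threshold=0.5):
    for sentence in candidate_sentences:
        probs = predictor.predict(premise=sentence, hypothesis=question)["label_probs"]
        contradiction = probs[1]  # assumed position of the "contradiction" label
        if contradiction > threshold:
            return "No"           # a confident contradiction flips the answer
    return "Yes"                  # default, matching the always-YES baseline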
Performance on Yes/No and List questions ::: For List-type the URLs have negative impact
Overall, we followed a strategy similar to the one used for the Factoid Question Answering task. We started our experiments with batch 2, where we submitted the 20 best answers (with context from snippets). Starting with batch 3, once the models generated answer predictions ($n$-best predictions), we performed post-processing on the predicted answers. In test batch 4, our system (called FACTOIDS) achieved the highest recall score of ‘0.7033’ but a low precision of 0.1119, leaving open the question of how we could have better balanced the two measures.
In the post-processing phase, we take the top ‘20’ (batch 3) or top 5 (batches 4 and 5) predicted answers and tokenize them using common separators: 'comma', 'and', 'also', 'as well as'. Tokens with a character count of more than ‘100’ are eliminated and the rest of the tokens are added to the list of possible answers; the BioASQ evaluation mechanism does not consider snippets with more than ‘100’ characters as valid answers, and including lengthy snippets in the list of answers would reduce the mean precision score. As a final step, duplicate snippets in the answer pool are removed. For example, consider these top 3 answers predicted by the system before post-processing (a code sketch of these heuristics follows the example output below):
{
"text": "dendritic cells",
"probability": 0.7554540733426441,
"start_logit": 8.466046333312988,
"end_logit": 9.536355018615723
},
{
"text": "neutrophils, macrophages and
distinct subtypes of dendritic cells",
"probability": 0.13806867348304214,
"start_logit": 6.766478538513184,
"end_logit": 9.536355018615723
},
{
"text": "macrophages and distinct subtypes of dendritic",
"probability": 0.013973475271178242,
"start_logit": 6.766478538513184,
"end_logit": 7.24576473236084
},
After execution of post-processing heuristics, the list of answers returned is as follows:
["dendritic cells"],
["neutrophils"],
["macrophages"],
["distinct subtypes of dendritic cells"]
Summary of our results
The tables below summarize all our results. They show that the performance of our systems was mixed. The simple architectures and algorithms we used worked very well only in Batch 3. However, we feel we can build a better system based on this experience. In particular, we observed the value of both contextual embeddings and feature engineering (LAT); however, we failed to combine them properly.
Summary of our results ::: Factoid questions ::: Systems used in Batch 5 experiments
System description for ‘UNCC_QA1’: The system was fine-tuned on SQuAD 2.0. For data preprocessing, the context/paragraph was generated from relevant snippets provided in the test data.

System description for ‘QA1’: The ‘LAT’ feature was added and the system fine-tuned with SQuAD 2.0. For data preprocessing, the context/paragraph was generated from relevant snippets provided in the test data.

System description for ‘UNCC_QA3’: The fine-tuning process is the same as for the system ‘UNCC_QA1’ in test batch-5. The difference is that during data preprocessing the context/paragraph was generated from the relevant documents for which URLs are included in the test data.
Summary of our results ::: List Questions
For List-type questions, although post-processing helped in the later batches, we never managed to obtain competitive precision, even though our recall was good.
Summary of our results ::: Yes/No questions
The only thing worth remembering from our performance is that using entailment can have a measurable impact (at least with respect to a weak baseline). The results (weak) are in Table 3.
Discussion, Future Experiments, and Conclusions ::: Summary:
In contrast to 2018, when we submitted to BioASQ a system based on extractive summarization BIBREF2 (and scored very high in the ideal answer category), this year we mainly targeted the factoid question answering task and focused on experimenting with BioBERT. After these experiments we see the promise of BioBERT in QA tasks, but we also see its limitations; the latter we tried to address, with mixed results, using feature engineering. Overall these experiments allowed us to secure a best and a second best score in different test batches. Along with Factoid-type questions, we also tried ‘Yes/No’ and ‘List’-type questions, and did reasonably well with our very simple approach.
For Yes/No questions, the moral worth remembering is that reasoning has the potential to influence results, as evidenced by the fact that adding the AllenNLP entailment BIBREF13 system increased performance.
All our data and software are available on GitHub, at the previously referenced URL (end of Section 2).
Discussion, Future Experiments, and Conclusions ::: Future experiments
In the current model, we have a shallow neural network with a softmax layer for predicting the answer span. Shallow networks, however, are not good at generalization. In future experiments we would like to create a dense question answering neural network with a softmax layer for predicting the answer span. The main idea is to get contextual word embeddings for the words present in the question and paragraph (context), and feed the contextual word embeddings retrieved from the last layer of BioBERT to this dense question answering network. The dense question answering network would need to be tuned to find the right hyper-parameters. An example of such an architecture is shown in Fig.FIGREF30.
In one more experiment, we would like to add a better version of the ‘LAT’ contextual word embedding as a feature, along with the actual contextual word embeddings for the question text and the context, and feed them as input to the dense question answering neural network. With this experiment, we would like to find out whether the ‘LAT’ feature improves overall answer prediction accuracy. Adding the ‘LAT’ feature this way, instead of feeding this word-piece embedding directly to BioBERT (as we did in our above experiments), would not degrade the quality of the contextual word embeddings generated by ‘BioBERT'. Quality contextual word embeddings would lead to efficient transfer learning, and chances are that this would improve the model's answer prediction accuracy.
We also see potential for incorporating domain specific inference into the task e.g. using the MedNLI dataset BIBREF14. For all types of experiments it might be worth exploring clinical BERT embeddings BIBREF15, explicitly incorporating domain knowledge (e.g. BIBREF16) and possibly deeper discourse representations (e.g. BIBREF17).
APPENDIX
In this appendix we provide additional details about the implementations.
APPENDIX ::: Systems and their descriptions:
We used several variants of our systems when experimenting with the BioASQ problems. In retrospect, it would be much easier to understand the changes if we adopted some mnemonic conventions in naming the systems. So, we apologize for the names that do not reflect the modifications, and necessitate this list.
APPENDIX ::: Systems and their descriptions: ::: Factoid Type Question Answering:
We preprocessed the test data to convert it to the BioBERT format. We generated the context/paragraph either by aggregating the relevant snippets provided or by aggregating the documents for which URLs are provided in the BioASQ test data.
APPENDIX ::: Systems and their descriptions: ::: System description for QA1:
We generated the context/paragraph by aggregating the relevant snippets available in the test data and mapped it against the question text and question id. We ignored the content of the documents (document URLs were provided in the original test data). The model was fine-tuned with BioASQ data.

Data preprocessing is done in the same way as for test batch-1. The model was fine-tuned on BioASQ data.

The ‘LAT’/focus word feature was added and the model fine-tuned with SQuAD 2.0 [reference]. For data preprocessing, the context/paragraph was generated from relevant snippets provided in the test data.
APPENDIX ::: Systems and their descriptions: ::: System description for UNCC_QA_1:
The system was fine-tuned on SQuAD 2.0 [reference]. For data preprocessing, the context/paragraph was generated from relevant snippets provided in the test data.

The ‘LAT’/focus word feature was added and the model fine-tuned with SQuAD 2.0 [reference]. For data preprocessing, the context/paragraph was generated from relevant snippets provided in the test data.

The system was fine-tuned on SQuAD 2.0. For data preprocessing, the context/paragraph was generated from relevant snippets provided in the test data.
APPENDIX ::: Systems and their descriptions: ::: System description for UNCC_QA3:
The system was fine-tuned on SQuAD 2.0 [reference] and the BioASQ dataset. For data preprocessing, the context/paragraph was generated from relevant snippets provided in the test data.

The fine-tuning process is the same as for the system ‘UNCC_QA_1’ in test batch-5. The difference is that during data preprocessing the context/paragraph was generated from the relevant documents for which URLs are included in the test data.
APPENDIX ::: Systems and their descriptions: ::: System description for UNCC_QA2:
The fine-tuning process is the same as for ‘UNCC_QA_1’. The difference is that the context/paragraph was generated from the relevant documents for which URLs are included in the test data. System ‘UNCC_QA_1’ got the highest ‘MRR’ score in the 3rd test batch set.
APPENDIX ::: Systems and their descriptions: ::: System description for FACTOIDS:
The system was fine-tuned on SQuAD 2.0. For data preprocessing, the context/paragraph was generated from relevant snippets provided in the test data.
APPENDIX ::: Systems and their descriptions: ::: List Type Questions:
We attempted List-type questions starting from test batch ‘2’, using an approach similar to the one followed for the Factoid Question Answering task. For all the test batch sets, in the data preprocessing phase the context/paragraph is generated either by aggregating relevant snippets or by aggregating documents (URLs) provided in the BioASQ test data.
For test batch-2, the model (system QA1) was fine-tuned on BioASQ data and we submitted the top ‘20’ answers predicted by the model as the list of answers. System ‘QA1’ achieved a low F-measure score of ‘0.0786’ in the second test batch. In the later test batches for List-type questions, we fine-tuned the model on the SQuAD data set [reference], implemented post-processing techniques (see Section 5.2), and achieved a better F-measure score of ‘0.2862’ in the final test batch set.
In test batch-3 (systems ‘QA1’/‘UNCC_QA_1’/‘UNCC_QA3’/‘UNCC_QA2’) the top 20 answers returned by the model are sent for post-processing, and in test batches 4 and 5 only the top 5 answers are sent for post-processing. For system UNCC_QA2 (in batch 3) for List-type question answering, context is generated from documents for which URLs are provided in the BioASQ test data; for the rest of the systems in test batch-3 for the List-type question answering task, the snippets present in the BioASQ test data are used to generate context.

In test batch-4 (systems ‘FACTOIDS’/‘UNCC_QA_1’/‘UNCC_QA3’) the top 5 answers returned by the model are sent for post-processing. In the case of system ‘FACTOIDS’, snippets in the test data were used to generate context; for systems ‘UNCC_QA_1’ and ‘UNCC_QA3’, context is generated from the documents for which URLs are provided in the BioASQ test data.

In test batch-5 (systems ‘QA1’/‘UNCC_QA_1’/‘UNCC_QA3’/‘UNCC_QA2’) our approach is the same as in test batch-4, where the top 5 answers returned by the model are sent for post-processing. For all the systems in test batch-5, context is generated from the snippets provided in the BioASQ test data.
APPENDIX ::: Systems and their descriptions: ::: Yes/No Type Questions:
For the first 3 test batches, we submitted the answer ‘Yes’ to all the questions. Later, we employed ‘sentence entailment’ techniques (see Section 6.0) for the fourth and fifth test batch sets. Our systems with the ‘sentence entailment’ approach (for ‘Yes’/‘No’ question answering) were ‘UNCC_QA_1’ (test batch-4) and UNCC_QA3 (test batch-5).
APPENDIX ::: Additional details for Yes/No Type Questions
We used textual entailment in batches 4 and 5 for the ‘Yes’/‘No’ question type. The algorithm was very simple: given a question, we iterate through the candidate sentences and look for any candidate sentence contradicting the question. If we find one, 'No' is returned as the answer, else 'Yes' is returned (the confidence threshold for contradiction was set at 50%). We used the AllenNLP BIBREF13 entailment library to test entailment of the candidate sentences against the question.
Flow Chart for Yes/No Question answer processing is shown in Fig.FIGREF51
APPENDIX ::: Assumptions, rules and logic flow for deriving Lexical Answer Types from questions
There are different question types, and we distinguished them based on the question words: ‘Which’, ‘What’, ‘When’, ‘How’ etc. Each type of question is handled differently, though there are commonalities among the rules written for the different question types. How are question words identified? Question words have parts of speech (POS) 'WDT', 'WRB', or 'WP'.
Assumptions:
1) Lexical answer type (‘LAT’) or focus word is of type Noun and follows the question word.
2) The LAT word is a Subject. (This is clearly not always true, but we used a very simple method.) Note: the ‘StanfordNLP’ dependency parsing tag for identifying a subject is 'nsubj' or 'nsubjpass'.
3) When a question has multiple words that are of type Subject (and Noun), a word that is in proximity to the question word is considered as ‘LAT’.
4) For questions with question words: ‘When’, ‘Who’, ‘Why’, the ’LAT’ is a question word itself that is, ‘When’, ‘Who’, ‘Why’ respectively.
Rules and logic flow to traverse a question: The three cases below describe the logic flow of finding LATs. The figures show the grammatical structures used for this purpose.
APPENDIX ::: Assumptions, rules and logic flow for deriving Lexical Answer Types from questions ::: Case-1:
Question with question word ‘How’.
For questions with the question word 'How', the adjective that follows the question word is considered the ‘LAT’ (it need not follow immediately). If an adjective is absent, the word 'How' is considered the ‘LAT’. When there are multiple adjectives, the word in close proximity to the question word and following it is returned as the ‘LAT’. Note: the part-of-speech tag used to identify adjectives is 'JJ'. For other possible question words like ‘whose’, the ‘LAT’/focus word is the question word itself.
Example Question: How many selenoproteins are encoded in the human genome?
APPENDIX ::: Assumptions, rules and logic flow for deriving Lexical Answer Types from questions ::: Case-2:
Questions with question words ‘Which’ , ‘What’ and all other possible question words; a 'Noun' immediately following the question word.
Example Question: Which enzyme is targeted by Evolocumab?
Here, Focus word/LAT is ‘enzyme’ which is both Noun and Subject and immediately follows the question word.
When the word immediately following the question word is a noun, the window size is set to ‘3’. This size ‘3’ means that we iterate through the next ‘3’ words (if present) to check whether any of them is both a 'Noun' and a 'Subject'; if so, that word is considered the ‘LAT’/focus word, otherwise the word immediately following the question word is considered the ‘LAT’.
APPENDIX ::: Assumptions, rules and logic flow for deriving Lexical Answer Types from questions ::: Case-3:
Questions with question words ‘Which’ , ‘What’ and all other possible question words; word immediately following the question word is not a 'Noun'.
Example Question: What is the function of the protein Magt1?
Here, Focus word/LAT is ‘function ’ which is both Noun and Subject and does not immediately follow the question word.
When the very next word following the question word is not a Noun, the window size is set to ‘5’. A window size of ‘5’ means that we iterate through the next ‘5’ words (if present) and search for a word that is both a Noun and a Subject. If present, that word is considered the ‘LAT’; otherwise, the Noun in close proximity to the question word and following it is returned as the ‘LAT’.
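A compact sketch of these rules (ours; it uses spaCy's tagger and parser rather than the Stanford CoreNLP pipeline used in the paper, and the fallbacks mirror the cases above):

import spacy

nlp = spacy.load("en_core_web_sm")

def lexical_answer_type(question):
    doc = nlp(question)
    q_idx = next((i for i, t in enumerate(doc) if t.tag_ in {"WDT", "WRB", "WP", "WP$"}), None)
    if q_idx is None:
        return None
    q_word = doc[q_idx]
    if q_word.text.lower() in {"when", "who", "why", "whose"}:
        return q_word.text                               # LAT is the question word itself
    if q_word.text.lower() == "how":                     # Case 1: prefer a following adjective
        adj = next((t for t in doc[q_idx + 1:] if t.tag_ == "JJ"), None)
        return adj.text if adj is not None else "How"
    rest = doc[q_idx + 1:]
    window = 3 if rest and rest[0].tag_.startswith("NN") else 5   # Cases 2 and 3
    for tok in rest[:window]:
        if tok.tag_.startswith("NN") and tok.dep_ in {"nsubj", "nsubjpass"}:
            return tok.text                              # a Noun that is also a Subject
    noun = next((t for t in rest if t.tag_.startswith("NN")), None)
    return noun.text if noun is not None else (rest[0].text if rest else None)

print(lexical_answer_type("Which enzyme is targeted by Evolocumab?"))   # expected: 'enzyme'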
As we mentioned earlier, the accuracy of the ‘LAT’ derivation is 75 percent. But clearly the simple logic described above can be improved, as shown in BIBREF9, BIBREF10. Whether this in turn produces improvements in this particular task is an open question.
APPENDIX ::: Proposing Future Experiments
In the current model, we have a shallow neural network with a softmax layer for predicting the answer span. Shallow networks, however, are not good at generalization. In future experiments we would like to create a dense question answering neural network with a softmax layer for predicting the answer span. The main idea is to get contextual word embeddings for the words present in the question and paragraph (context), and feed the contextual word embeddings retrieved from the last layer of BioBERT to this dense question answering network. The dense question answering network would need to be tuned to find the right hyper-parameters. An example of such an architecture is shown in Fig.FIGREF30.
In another experiment, we would like to feed only the contextual word embeddings for the focus word/‘LAT’ and the paragraph/context as input to the question answering neural network, neglecting all embeddings for the question text except that of the focus word/‘LAT’. Our idea for considering the focus word and neglecting the remaining words in the question is that, during the training phase, it would be easier for the model to identify the focus of the question and to map answers against the question’s focus. To validate our assumption, we would like to take sample question answering data, find the cosine distance between the contextual embedding of the focus word and that of the actual answer, and verify whether the cosine distance is comparatively low in most of the cases.
In one more experiment, we would like to add a better version of the ‘LAT’ contextual word embedding as a feature, along with the actual contextual word embeddings for the question text and the context, and feed them as input to the dense question answering neural network. With this experiment, we would like to find out whether the ‘LAT’ feature improves overall answer prediction accuracy. Adding the ‘LAT’ feature this way, instead of feeding the focus word’s word-piece embedding directly to BioBERT (as we did in our above experiments), would not degrade the quality of the contextual word embeddings generated by ‘BioBERT'. Quality contextual word embeddings would lead to efficient transfer learning, and chances are that this would improve the model's answer prediction accuracy.
|
What was their highest MRR score?
|
0.5115
| 6,810
|
qasper
|
8k
|
Introduction
Data annotation is a major bottleneck for the application of supervised learning approaches to many problems. As a result, unsupervised methods that learn directly from unlabeled data are increasingly important. For tasks related to unsupervised syntactic analysis, discrete generative models have dominated in recent years – for example, for both part-of-speech (POS) induction BIBREF0 , BIBREF1 and unsupervised dependency parsing BIBREF2 , BIBREF3 , BIBREF4 . While similar models have had success on a range of unsupervised tasks, they have mostly ignored the apparent utility of continuous word representations evident from supervised NLP applications BIBREF5 , BIBREF6 . In this work, we focus on leveraging and explicitly representing continuous word embeddings within unsupervised models of syntactic structure.
Pre-trained word embeddings from massive unlabeled corpora offer a compact way of injecting a prior notion of word similarity into models that would otherwise treat words as discrete, isolated categories. However, the specific properties of language captured by any particular embedding scheme can be difficult to control, and, further, may not be ideally suited to the task at hand. For example, pre-trained skip-gram embeddings BIBREF7 with small context window size are found to capture the syntactic properties of language well BIBREF8 , BIBREF9 . However, if our goal is to separate syntactic categories, this embedding space is not ideal – POS categories correspond to overlapping interspersed regions in the embedding space, evident in Figure SECREF4 .
In our approach, we propose to learn a new latent embedding space as a projection of pre-trained embeddings (depicted in Figure SECREF5 ), while jointly learning latent syntactic structure – for example, POS categories or syntactic dependencies. To this end, we introduce a new generative model (shown in Figure FIGREF6 ) that first generates a latent syntactic representation (e.g. a dependency parse) from a discrete structured prior (which we also call the “syntax model”), then, conditioned on this representation, generates a sequence of latent embedding random variables corresponding to each word, and finally produces the observed (pre-trained) word embeddings by projecting these latent vectors through a parameterized non-linear function. The latent embeddings can be jointly learned with the structured syntax model in a completely unsupervised fashion.
By choosing an invertible neural network as our non-linear projector, and then parameterizing our model in terms of the projection's inverse, we are able to derive tractable exact inference and marginal likelihood computation procedures so long as inference is tractable in the underlying syntax model. In sec:learn-with-inv we show that this derivation corresponds to an alternate view of our approach whereby we jointly learn a mapping of observed word embeddings to a new embedding space that is more suitable for the syntax model, but include an additional Jacobian regularization term to prevent information loss.
Recent work has sought to take advantage of word embeddings in unsupervised generative models with alternate approaches BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . BIBREF9 build an HMM with Gaussian emissions on observed word embeddings, but they do not attempt to learn new embeddings. BIBREF10 , BIBREF11 , and BIBREF12 extend HMM or dependency model with valence (DMV) BIBREF2 with multinomials that use word (or tag) embeddings in their parameterization. However, they do not represent the embeddings as latent variables.
In experiments, we instantiate our approach using both a Markov-structured syntax model and a tree-structured syntax model – specifically, the DMV. We evaluate on two tasks: part-of-speech (POS) induction and unsupervised dependency parsing without gold POS tags. Experimental results on the Penn Treebank BIBREF13 demonstrate that our approach improves the basic HMM and DMV by a large margin, leading to the state-of-the-art results on POS induction, and state-of-the-art results on unsupervised dependency parsing in the difficult training scenario where neither gold POS annotation nor punctuation-based constraints are available.
Model
As an illustrative example, we first present a baseline model for Markov syntactic structure (POS induction) that treats a sequence of pre-trained word embeddings as observations. Then, we propose our novel approach, again using Markov structure, that introduces latent word embedding variables and a neural projector. Lastly, we extend our approach to more general syntactic structures.
Example: Gaussian HMM
We start by describing the Gaussian hidden Markov model introduced by BIBREF9 , which is a locally normalized model with multinomial transitions and Gaussian emissions. Given a sentence of length INLINEFORM0 , we denote the latent POS tags as INLINEFORM1 , observed (pre-trained) word embeddings as INLINEFORM2 , transition parameters as INLINEFORM3 , and Gaussian emission parameters as INLINEFORM4 . The joint distribution of data and latent variables factors as:
DISPLAYFORM0
where INLINEFORM0 is the multinomial transition probability and INLINEFORM1 is the multivariate Gaussian emission probability.
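To make the baseline concrete, the following is a minimal sketch (ours, using the hmmlearn library rather than the authors' implementation; the random array stands in for real pre-trained skip-gram embeddings):

import numpy as np
from hmmlearn.hmm import GaussianHMM

# Stand-in for 100-dimensional skip-gram embeddings of 50 sentences of 20 tokens each;
# in practice these come from the trained word2vec model described later.
X = np.random.randn(1000, 100)
lengths = [20] * 50

# 45 latent states (one per induced tag) with diagonal-covariance Gaussian emissions.
hmm = GaussianHMM(n_components=45, covariance_type="diag", n_iter=50)
hmm.fit(X, lengths)
induced_tags = hmm.predict(X, lengths)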
While the observed word embeddings do inform this model with a notion of word similarity – lacking in the basic multinomial HMM – the Gaussian emissions may not be sufficiently flexible to separate some syntactic categories in the complex pre-trained embedding space – for example the skip-gram embedding space as visualized in Figure SECREF4 where different POS categories overlap. Next we introduce a new approach that adds flexibility to the emission distribution by incorporating new latent embedding variables.
Markov Structure with Neural Projector
To flexibly model observed embeddings and yield a new representation space that is more suitable for the syntax model, we propose to cascade a neural network as a projection function, deterministically transforming the simple space defined by the Gaussian HMM to the observed embedding space. We denote the latent embedding of the INLINEFORM0 word in a sentence as INLINEFORM1 , and the neural projection function as INLINEFORM2 , parameterized by INLINEFORM3 . In the case of sequential Markov structure, our new model corresponds to the following generative process:
For each time step INLINEFORM0 ,
[noitemsep, leftmargin=*]
Draw the latent state INLINEFORM0
Draw the latent embedding INLINEFORM0
Deterministically produce embedding
INLINEFORM0
The graphical model is depicted in Figure FIGREF6 . The deterministic projection can also be viewed as sampling each observation from a point mass at INLINEFORM0 . The joint distribution of our model is: DISPLAYFORM0
where INLINEFORM0 is a conditional Gaussian distribution, and INLINEFORM1 is the Dirac delta function centered at INLINEFORM2 : DISPLAYFORM0
General Structure with Neural Projector
Our approach can be applied to a broad family of structured syntax models. We denote latent embedding variables as INLINEFORM0 , discrete latent variables in the syntax model as INLINEFORM1 ( INLINEFORM2 ), where INLINEFORM3 are conditioned to generate INLINEFORM4 . The joint probability of our model factors as:
DISPLAYFORM0
where INLINEFORM0 represents the probability of the syntax model, and can encode any syntactic structure – though, its factorization structure will determine whether inference is tractable in our full model. As shown in Figure FIGREF6 , we focus on two syntax models for syntactic analysis in this paper. The first is Markov-structured, which we use for POS induction, and the second is DMV-structured, which we use to learn dependency parses without supervision.
The marginal data likelihood of our model is: DISPLAYFORM0
While the discrete variables INLINEFORM0 can be marginalized out with dynamic program in many cases, it is generally intractable to marginalize out the latent continuous variables, INLINEFORM1 , for an arbitrary projection INLINEFORM2 in Eq. ( EQREF17 ), which means inference and learning may be difficult. In sec:opt, we address this issue by constraining INLINEFORM3 to be invertible, and show that this constraint enables tractable exact inference and marginal likelihood computation.
Learning & Inference
In this section, we introduce an invertibility condition for our neural projector to tackle the optimization challenge. Specifically, we constrain our neural projector with two requirements: (1) INLINEFORM0 and (2) INLINEFORM1 exists. Invertible transformations have been explored before in independent components analysis BIBREF14 , gaussianization BIBREF15 , and deep density models BIBREF16 , BIBREF17 , BIBREF18 , for unstructured data. Here, we generalize this style of approach to structured learning, and augment it with discrete latent variables ( INLINEFORM2 ). Under the invertibility condition, we derive a learning algorithm and give another view of our approach revealed by the objective function. Then, we present the architecture of a neural projector we use in experiments: a volume-preserving invertible neural network proposed by BIBREF16 for independent components estimation.
Learning with Invertibility
For ease of exposition, we explain the learning algorithm in terms of Markov structure without loss of generality. As shown in Eq. ( EQREF17 ), the optimization challenge in our approach comes from the intractability of the marginalized emission factor INLINEFORM0 . If we can marginalize out INLINEFORM1 and compute INLINEFORM2 , then the posterior and marginal likelihood of our Markov-structured model can be computed with the forward-backward algorithm. We can apply Eq. ( EQREF14 ) and obtain : INLINEFORM3
By applying the change of variables rule to the integration, which allows the integration variable INLINEFORM0 to be replaced by INLINEFORM1 , the marginal emission factor can be computed in closed form when the invertibility condition is satisfied: DISPLAYFORM0
where INLINEFORM0 is a conditional Gaussian distribution, INLINEFORM1 is the Jacobian matrix of function INLINEFORM2 at INLINEFORM3 , and INLINEFORM4 represents the absolute value of its determinant. This Jacobian term is nonzero and differentiable if and only if INLINEFORM5 exists.
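For concreteness, in standard change-of-variables notation this closed form reads (our reconstruction from the surrounding description, with $\mu_{z_i}, \Sigma_{z_i}$ the Gaussian emission parameters of tag $z_i$ and $f_\phi$ the neural projector):

$p(x_i \mid z_i) = \mathcal{N}\!\left(f_\phi^{-1}(x_i)\,;\, \mu_{z_i}, \Sigma_{z_i}\right) \cdot \left|\det \dfrac{\partial f_\phi^{-1}(x_i)}{\partial x_i}\right|$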
Eq. ( EQREF19 ) shows that we can directly calculate the marginal emission distribution INLINEFORM0 . Denote the marginal data likelihood of Gaussian HMM as INLINEFORM1 , then the log marginal data likelihood of our model can be directly written as: DISPLAYFORM0
where INLINEFORM0 represents the new sequence of embeddings after applying INLINEFORM1 to each INLINEFORM2 . Eq. ( EQREF20 ) shows that the training objective of our model is simply the Gaussian HMM log likelihood with an additional Jacobian regularization term. From this view, our approach can be seen as equivalent to reversely projecting the data through INLINEFORM3 to another manifold INLINEFORM4 that is directly modeled by the Gaussian HMM, with a regularization term. Intuitively, we optimize the reverse projection INLINEFORM5 to modify the INLINEFORM6 space, making it more appropriate for the syntax model. The Jacobian regularization term accounts for the volume expansion or contraction behavior of the projection. Maximizing it can be thought of as preventing information loss. In the extreme case, the Jacobian determinant is equal to zero, which means the projection is non-invertible and thus information is being lost through the projection. Such “information preserving” regularization is crucial during optimization, otherwise the trivial solution of always projecting data to the same single point to maximize likelihood is viable.
More generally, for an arbitrary syntax model the data likelihood of our approach is: DISPLAYFORM0
If the syntax model itself allows for tractable inference and marginal likelihood computation, the same dynamic program can be used to marginalize out INLINEFORM0 . Therefore, our joint model inherits the tractability of the underlying syntax model.
Invertible Volume-Preserving Neural Net
For the projection we can use an arbitrary invertible function, and given the representational power of neural networks they seem a natural choice. However, calculating the inverse and Jacobian of an arbitrary neural network can be difficult, as it requires that all component functions be invertible and also requires storage of large Jacobian matrices, which is memory intensive. To address this issue, several recent papers propose specially designed invertible networks that are easily trainable yet still powerful BIBREF16 , BIBREF17 , BIBREF19 . Inspired by these works, we use the invertible transformation proposed by BIBREF16 , which consists of a series of “coupling layers”. This architecture is specially designed to guarantee a unit Jacobian determinant (and thus the invertibility property).
From Eq. ( EQREF22 ) we know that only INLINEFORM0 is required for accomplishing learning and inference; we never need to explicitly construct INLINEFORM1 . Thus, we directly define the architecture of INLINEFORM2 . As shown in Figure FIGREF24 , the nonlinear transformation from the observed embedding INLINEFORM3 to INLINEFORM4 represents the first coupling layer. The input in this layer is partitioned into left and right halves of dimensions, INLINEFORM5 and INLINEFORM6 , respectively. A single coupling layer is defined as: DISPLAYFORM0
where INLINEFORM0 is the coupling function and can be any nonlinear form. This transformation satisfies INLINEFORM1 , and BIBREF16 show that its Jacobian matrix is triangular with all ones on the main diagonal. Thus the Jacobian determinant is always equal to one (i.e. volume-preserving) and the invertibility condition is naturally satisfied.
To be sufficiently expressive, we compose multiple coupling layers as suggested in BIBREF16 . Specifically, we exchange the role of left and right half vectors at each layer as shown in Figure FIGREF24 . For instance, from INLINEFORM0 to INLINEFORM1 the left subset INLINEFORM2 is unchanged, while from INLINEFORM3 to INLINEFORM4 the right subset INLINEFORM5 remains the same. Also note that composing multiple coupling layers does not change the volume-preserving and invertibility properties. Such a sequence of invertible transformations from the data space INLINEFORM6 to INLINEFORM7 is also called normalizing flow BIBREF20 .
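A minimal sketch of such a stack of additive coupling layers (ours, in PyTorch; the two-layer rectified coupling function and the layer sizes are assumptions based on the description above):

import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    # One volume-preserving coupling layer: half of the vector passes through
    # unchanged, the other half is shifted by m(.) applied to the unchanged half,
    # so the Jacobian is triangular with ones on the diagonal (|det| = 1).
    def __init__(self, dim, swap=False):
        super().__init__()
        half = dim // 2
        self.swap = swap  # alternate which half stays unchanged across layers
        self.m = nn.Sequential(nn.Linear(half, half), nn.ReLU(), nn.Linear(half, half))

    def forward(self, x):  # direction used in training: observed embedding -> latent space
        x1, x2 = x.chunk(2, dim=-1)
        if self.swap:
            x1, x2 = x2, x1
        y1, y2 = x1, x2 + self.m(x1)
        return torch.cat((y2, y1) if self.swap else (y1, y2), dim=-1)

    def inverse(self, y):  # exact inverse, mapping latent vectors back to the data space
        y1, y2 = y.chunk(2, dim=-1)
        if self.swap:
            y1, y2 = y2, y1
        x1, x2 = y1, y2 - self.m(y1)
        return torch.cat((x2, x1) if self.swap else (x1, x2), dim=-1)

# Stack several layers, exchanging the roles of the two halves at each layer.
projector_inverse = nn.Sequential(*[AdditiveCoupling(100, swap=i % 2 == 1) for i in range(4)])
latent = projector_inverse(torch.randn(8, 100))  # 8 word vectors of dimension 100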
Experiments
In this section, we first describe our datasets and experimental setup. We then instantiate our approach with Markov and DMV-structured syntax models, and report results on POS tagging and dependency grammar induction respectively. Lastly, we analyze the learned latent embeddings.
Data
For both POS tagging and dependency parsing, we run experiments on the Wall Street Journal (WSJ) portion of the Penn Treebank. To create the observed data embeddings, we train skip-gram word embeddings BIBREF7 that are found to capture syntactic properties well when trained with small context window BIBREF8 , BIBREF9 . Following BIBREF9 , the dimensionality INLINEFORM0 is set to 100, and the training context window size is set to 1 to encode more syntactic information. The skip-gram embeddings are trained on the one billion word language modeling benchmark dataset BIBREF21 in addition to the WSJ corpus.
General Experimental Setup
For the neural projector, we employ rectified networks as the coupling function INLINEFORM0 , following BIBREF16 . We use a rectified network with an input layer, one hidden layer, and linear output units; the number of hidden units is set equal to the number of input units. The number of coupling layers is varied over 4, 8, and 16 for both tasks. We optimize the marginal data likelihood directly using Adam BIBREF22 . For both tasks in the fully unsupervised setting, we do not tune the hyper-parameters using supervised data.
Unsupervised POS tagging
For unsupervised POS tagging, we use a Markov-structured syntax model in our approach, which is a popular structure for unsupervised tagging tasks BIBREF9 , BIBREF10 .
Following existing literature, we train and test on the entire WSJ corpus (49208 sentences, 1M tokens). We use 45 tag clusters, the number of POS tags that appear in the WSJ corpus. We train the discrete HMM and the Gaussian HMM BIBREF9 as baselines. For the Gaussian HMM, the mean vectors of the Gaussian emissions are initialized with the empirical mean of all word vectors plus additive noise. We assume a diagonal covariance matrix for INLINEFORM0 and initialize it with the empirical variance of the word vectors. Following BIBREF9 , the covariance matrix is fixed during training. The multinomial probabilities are initialized as INLINEFORM1 , where INLINEFORM2 . For our approach, we initialize the syntax model and Gaussian parameters with the pre-trained Gaussian HMM. The weights of the layers in the rectified network are initialized from a uniform distribution with mean zero and a standard deviation of INLINEFORM3 , where INLINEFORM4 is the input dimension. We evaluate POS tagging performance with both Many-to-One (M-1) accuracy BIBREF23 and V-Measure (VM) BIBREF24 . We found that tagging performance correlates well with training data likelihood, so we use training data likelihood as an unsupervised criterion to select the trained model over 10 random restarts after training for 50 epochs. We repeat this process 5 times and report the mean and standard deviation of performance.
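For reference, the two metrics can be computed roughly as follows (a sketch with our own helper names; scikit-learn's v_measure_score is used for VM):

```python
from collections import Counter, defaultdict
from sklearn.metrics import v_measure_score

def many_to_one_accuracy(pred_clusters, gold_tags):
    """Map each induced cluster to its most frequent gold tag, then score."""
    by_cluster = defaultdict(Counter)
    for c, g in zip(pred_clusters, gold_tags):
        by_cluster[c][g] += 1
    mapping = {c: counts.most_common(1)[0][0] for c, counts in by_cluster.items()}
    correct = sum(mapping[c] == g for c, g in zip(pred_clusters, gold_tags))
    return correct / len(gold_tags)

# pred_clusters and gold_tags are token-level label sequences over the corpus
# m1 = many_to_one_accuracy(pred_clusters, gold_tags)
# vm = v_measure_score(gold_tags, pred_clusters)
```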
We compare our approach with the basic HMM, the Gaussian HMM, and several state-of-the-art systems, including sophisticated HMM variants and clustering techniques with hand-engineered features. The results are presented in Table TABREF32 . Through the introduced latent embeddings and additional neural projection, our approach improves over the Gaussian HMM by 5.4 points in M-1 and 5.6 points in VM. The Neural HMM (NHMM) BIBREF10 is a baseline that also learns word representations jointly. Neither their basic model nor the extended Conv version outperforms the Gaussian HMM. Their best model incorporates another LSTM to model long-distance dependencies and breaks the Markov assumption, yet our approach still achieves a substantial improvement over it without considering more context information. Moreover, our method outperforms the best published result that benefits from hand-engineered features BIBREF27 by 2.0 points on VM.
We found that most tagging errors occur in noun subcategories. Therefore, we perform a one-to-one mapping between gold POS tags and induced clusters and plot the normalized confusion matrix of noun subcategories in Figure FIGREF35 . The Gaussian HMM fails to identify “NN” and “NNS” correctly in most cases, and it often recognizes “NNPS” as “NNP”. In contrast, our approach corrects these errors well.
Unsupervised Dependency Parsing without gold POS tags
For the task of unsupervised dependency parse induction, we employ the Dependency Model with Valence (DMV) BIBREF2 as the syntax model in our approach. DMV is a generative model that defines a probability distribution over dependency parse trees and syntactic categories, generating tokens and dependencies in a head-outward fashion. While, traditionally, DMV is trained using gold POS tags as observed syntactic categories, in our approach, we treat each tag as a latent variable, as described in sec:general-neural.
Most existing approaches to this task are not fully unsupervised since they rely on gold POS tags following the original experimental setup for DMV. This is partially because automatically parsing from words is difficult even when using unsupervised syntactic categories BIBREF29 . However, inducing dependencies from words alone represents a more realistic experimental condition since gold POS tags are often unavailable in practice. Previous work that has trained from words alone often requires additional linguistic constraints (like sentence internal boundaries) BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , acoustic cues BIBREF33 , additional training data BIBREF4 , or annotated data from related languages BIBREF34 . Our approach is naturally designed to train on word embeddings directly, thus we attempt to induce dependencies without using gold POS tags or other extra linguistic information.
Following previous work, we use sections 02-21 of the WSJ corpus as training data and evaluate on section 23. We remove punctuation and train the models on sentences of length INLINEFORM0 ; “head-percolation” rules BIBREF39 are applied to obtain gold dependencies for evaluation. We train the basic DMV, the extended DMV (E-DMV) BIBREF35 , and the Gaussian DMV (which treats POS tags as unknown latent variables and generates the observed word embeddings directly conditioned on them, following a Gaussian distribution) as baselines. The basic DMV and E-DMV are trained with Viterbi EM BIBREF40 on unsupervised POS tags induced from our Markov-structured model described in sec:pos. The multinomial parameters of the syntax model in both the Gaussian DMV and our model are initialized with the pre-trained DMV baseline. Other parameters are initialized in the same way as in the POS tagging experiment. Directed dependency accuracy (DDA) is used for evaluation, and we report accuracy on sentences of length INLINEFORM1 and on all lengths. We train the parser until the training data likelihood converges, and report the mean and standard deviation over 20 random restarts.
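Directed dependency accuracy itself is a simple token-level count; a sketch, assuming each sentence is represented as a list of predicted head indices aligned with the gold heads:

```python
def directed_dependency_accuracy(pred_heads, gold_heads, max_len=None):
    """pred_heads/gold_heads: lists of per-sentence head-index lists
    (0 = root). If max_len is given, only sentences up to that length count."""
    correct = total = 0
    for pred, gold in zip(pred_heads, gold_heads):
        if max_len is not None and len(gold) > max_len:
            continue
        correct += sum(p == g for p, g in zip(pred, gold))
        total += len(gold)
    return correct / total

# dda_10 = directed_dependency_accuracy(pred, gold, max_len=10)
# dda_all = directed_dependency_accuracy(pred, gold)
```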
Our model directly observes word embeddings and does not require gold POS tags during training. Thus, results from related work trained on gold tags are not directly comparable. However, to measure how these systems might perform without gold tags, we run three recent state-of-the-art systems in our experimental setting: UR-A E-DMV BIBREF36 , Neural E-DMV BIBREF11 , and CRF Autoencoder (CRFAE) BIBREF37 . We use unsupervised POS tags (induced from our Markov-structured model) in place of gold tags. We also train basic DMV on gold tags and include several state-of-the-art results on gold tags as reference points.
As shown in Table TABREF39 , our approach is able to improve over the Gaussian DMV by 4.8 points on length INLINEFORM0 and 4.8 points on all lengths, which suggests the additional latent embedding layer and neural projector are helpful. The proposed approach yields, to the best of our knowledge, state-of-the-art performance without gold POS annotation and without sentence-internal boundary information. DMV, UR-A E-DMV, Neural E-DMV, and CRFAE suffer a large decrease in performance when trained on unsupervised tags – an effect also seen in previous work BIBREF29 , BIBREF34 . Since our approach induces latent POS tags jointly with dependency trees, it may be able to learn POS clusters that are more amenable to grammar induction than the unsupervised tags. We observe that CRFAE underperforms its gold-tag counterpart substantially. This may largely be a result of the model's reliance on prior linguistic rules that become unavailable when gold POS tag types are unknown. Many extensions to DMV can be considered orthogonal to our approach – they essentially focus on improving the syntax model. It is possible that incorporating these more sophisticated syntax models into our approach may lead to further improvements.
Sensitivity Analysis
In the above experiments we initialize the structured syntax components with the pre-trained Gaussian or discrete baseline, which proves to be a useful technique for training our deep models. We further study the results with fully random initialization. For the POS tagging experiment, we report the results in Table TABREF48 . While the performance with 4 layers is comparable to the pre-trained Gaussian initialization, deeper projections (8 or 16 layers) result in a dramatic drop in performance. This suggests that the structured syntax model with very deep projections is difficult to train from scratch, and that a simpler projection might be a good compromise in the random initialization setting.
In contrast to the Markov-structured model used in the POS tagging experiments, our parsing model appears quite sensitive to initialization. For example, the directed accuracy of our approach on sentences of length INLINEFORM0 is below 40.0 with random initialization. This is consistent with previous work that has noted the importance of careful initialization for DMV-based models, such as the commonly used harmonic initializer BIBREF2 . However, it is not straightforward to apply the harmonic initializer for DMV directly in our model without some kind of pre-training, since we do not observe gold POS tags.
We investigate the effect of the choice of pre-trained embeddings on performance. To this end, we additionally include results using fastText embeddings BIBREF41 , which, in contrast with skip-gram embeddings, include character-level information. We set the context window size to 1 and the dimensionality to 100 as in the skip-gram training, while keeping the other parameters at their defaults. These results are summarized in Table TABREF50 and Table TABREF51 . While fastText embeddings lead to reduced performance with our model, our approach still yields an improvement over the Gaussian baseline in the new observed embedding space.
Qualitative Analysis of Embeddings
We perform qualitative analysis to understand how the latent embeddings help induce syntactic structure. First we filter out low-frequency words and punctuation in WSJ, and visualize the remaining words (10k) with t-SNE BIBREF42 under different embeddings. We assign each word its most likely gold POS tag in WSJ and color the points according to these gold tags.
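The visualization step could be reproduced roughly as sketched below; the t-SNE settings (perplexity, initialization) are our assumptions rather than reported values, and words, vectors, and word2tag are assumed to be prepared beforehand:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# words: list of the ~10k kept words; vectors: (n_words, d) embedding matrix;
# word2tag: each word's most frequent gold POS tag in WSJ.
coords = TSNE(n_components=2, perplexity=30, init="random",
              random_state=0).fit_transform(vectors)

tags = sorted({word2tag[w] for w in words})
colors = plt.cm.tab20(np.linspace(0, 1, len(tags)))
for tag, color in zip(tags, colors):
    idx = [i for i, w in enumerate(words) if word2tag[w] == tag]
    plt.scatter(coords[idx, 0], coords[idx, 1], s=2, color=color, label=tag)
plt.legend(markerscale=4, fontsize=6)
plt.show()
```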
For our Markov-structured model, we have displayed the embedding space in Figure SECREF5 , where the gold POS clusters are well-formed. Further, we present five example target words and their five nearest neighbors in terms of cosine similarity. As shown in Table TABREF53 , the skip-gram embedding captures both semantic and syntactic aspects to some degree, yet our embeddings are able to focus especially on the syntactic aspects of words, in an unsupervised fashion without using any extra morphological information.
In Figure FIGREF54 we depict the latent embeddings learned with the DMV-structured syntax model. Unlike the Markov structure, the DMV structure maps a large subset of singular and plural nouns to the same overlapping region. However, two clusters of singular and plural nouns are in fact separated. We inspect the two clusters and the overlapping region in Figure FIGREF54 : it turns out that the nouns in the separated clusters are words that can appear as subjects and for which, therefore, verb agreement is important to model. In contrast, the nouns in the overlapping region are typically objects. This demonstrates that the latent embeddings focus on aspects of language that are specifically important for modeling dependency, without ever having seen examples of dependency parses. Some previous work has deliberately created embeddings to capture different notions of similarity BIBREF43 , BIBREF44 , but it relies on extra morphology or dependency annotations to guide the embedding learning; our approach provides a potential alternative for creating new embeddings that are guided by a structured syntax model, using only unlabeled text corpora.
Related Work
Our approach is related to flow-based generative models, which were first described in NICE BIBREF16 and have recently received more attention BIBREF17 , BIBREF19 , BIBREF18 . This line of work mostly adopts simple (e.g. Gaussian) fixed priors and does not attempt to learn interpretable latent structures. Another related class of generative models is variational auto-encoders (VAEs) BIBREF45 , which optimize a lower bound on the marginal data likelihood and can be extended to learn latent structures BIBREF46 , BIBREF47 . Compared with flow-based models, VAEs remove the invertibility constraint but sacrifice the merits of exact inference and exact log-likelihood computation, which potentially results in optimization challenges BIBREF48 . Our approach can also be viewed in connection with generative adversarial networks (GANs) BIBREF49 , a likelihood-free framework for learning implicit generative models. However, it is non-trivial for a gradient-based method like GANs to propagate gradients through discrete structures.
Conclusion
In this work, we define a novel generative approach to leverage continuous word representations for unsupervised learning of syntactic structure. Experiments on both POS induction and unsupervised dependency parsing tasks demonstrate the effectiveness of our proposed approach. Future work might explore more sophisticated invertible projections, or recurrent projections that jointly transform the entire input sequence.
|
What datasets do they evaluate on?
|
Wall Street Journal (WSJ) portion of the Penn Treebank
| 4,327
|
qasper
|
8k
|
Introduction
Knowledge Base Question Answering (KBQA) systems answer questions by obtaining information from KB tuples BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . For an input question, these systems typically generate a KB query, which can be executed to retrieve the answers from a KB. Figure 1 illustrates the process used to parse two sample questions in a KBQA system: (a) a single-relation question, which can be answered with a single $<$ head-entity, relation, tail-entity $>$ KB tuple BIBREF6 , BIBREF7 , BIBREF2 ; and (b) a more complex case, where some constraints need to be handled for multiple entities in the question. The KBQA system in the figure performs two key tasks: (1) entity linking, which links $n$ -grams in questions to KB entities, and (2) relation detection, which identifies the KB relation(s) a question refers to.
The main focus of this work is to improve the relation detection subtask and further explore how it can contribute to the KBQA system. Although general relation detection methods are well studied in the NLP community, such studies usually do not take the end task of KBQA into consideration. As a result, there is a significant gap between general relation detection studies and KB-specific relation detection. First, in most general relation detection tasks, the number of target relations is limited, normally smaller than 100. In contrast, in KBQA even a small KB, like Freebase2M BIBREF2 , contains more than 6,000 relation types. Second, relation detection for KBQA often becomes a zero-shot learning task, since some test instances may have unseen relations in the training data. For example, the SimpleQuestions BIBREF2 data set has 14% of the golden test relations not observed in golden training tuples. Third, as shown in Figure 1 (b), for some KBQA tasks like WebQuestions BIBREF0 , we need to predict a chain of relations instead of a single relation. This increases the number of target relation types and the sizes of candidate relation pools, further increasing the difficulty of KB relation detection. Owing to these reasons, KB relation detection is significantly more challenging compared to general relation detection tasks.
This paper improves KB relation detection to cope with the problems mentioned above. First, in order to deal with unseen relations, we propose to break relation names into word sequences for question-relation matching. Second, noticing that original relation names can sometimes help to match longer question contexts, we propose to build both relation-level and word-level relation representations. Third, we use deep bidirectional LSTMs (BiLSTMs) to learn different levels of question representations in order to match the different levels of relation information. Finally, we propose a residual learning method for sequence matching, which makes model training easier and results in more abstract (deeper) question representations, thus improving hierarchical matching.
In order to assess how the proposed improved relation detection could benefit the KBQA end task, we also propose a simple KBQA implementation composed of two-step relation detection. Given an input question and a set of candidate entities retrieved by an entity linker based on the question, our proposed relation detection model plays a key role in the KBQA process: (1) Re-ranking the entity candidates according to whether they connect to highly confident relations detected from the raw question text by the relation detection model. This step is important for dealing with the ambiguities normally present in entity linking results. (2) Finding the core relation (chain) for each topic entity selected from a much smaller candidate entity set after re-ranking. The above steps are followed by an optional constraint detection step, when the question cannot be answered by a single relation (e.g., multiple entities in the question). Finally the highest-scored query from the above steps is used to query the KB for answers.
Our main contributions include: (i) An improved relation detection model by hierarchical matching between questions and relations with residual learning; (ii) We demonstrate that the improved relation detector enables our simple KBQA system to achieve state-of-the-art results on both single-relation and multi-relation KBQA tasks.
Background: Different Granularity in KB Relations
Previous research BIBREF4 , BIBREF20 formulates KB relation detection as a sequence matching problem. However, while the questions are natural word sequences, how to represent relations as sequences remains a challenging problem. Here we give an overview of two types of relation sequence representations commonly used in previous work.
(1) Relation Name as a Single Token (relation-level). In this case, each relation name is treated as a unique token. The problem with this approach is that it suffers from low relation coverage due to the limited amount of training data, and thus cannot generalize well to a large number of open-domain relations. For example, in Figure 1 , when treating relation names as single tokens, it will be difficult to match the questions to the relation names “episodes_written” and “starring_roles” if these names do not appear in the training data – their relation embeddings $\mathbf {h}^r$ will be random vectors and thus not comparable to the question embeddings $\mathbf {h}^q$ .
(2) Relation as Word Sequence (word-level). In this case, the relation is treated as a sequence of words from the tokenized relation name. This representation generalizes better, but suffers from the lack of global information from the original relation names. For example in Figure 1 (b), when doing only word-level matching, it is difficult to rank the target relation “starring_roles” higher than the incorrect relation “plays_produced”. This is because the incorrect relation contains the word “plays”, which is more similar to the question (containing the word “play”) in the embedding space. On the other hand, if the target relation co-occurs with questions related to “tv appearance” in training, by treating the whole relation as a token (i.e. relation id), we could better learn the correspondence between this token and phrases like “tv show” and “play on”.
The two types of relation representation contain different levels of abstraction. As shown in Table 1 , the word-level representation focuses more on local information (words and short phrases), while the relation-level representation focuses more on global information (long phrases and skip-grams) but suffers from data sparsity. Since both levels of granularity have their own pros and cons, we propose a hierarchical matching approach for KB relation detection: for a candidate relation, our approach matches the input question to both word-level and relation-level representations to get the final ranking score. Section "Improved KB Relation Detection" gives the details of our proposed approach.
Improved KB Relation Detection
This section describes our hierarchical sequence matching with residual learning approach for relation detection. In order to match the question to different aspects of a relation (with different abstraction levels), we address the following three problems in learning question/relation representations.
Relation Representations from Different Granularity
We provide our model with both types of relation representation: word-level and relation-level. Therefore, the input relation becomes $\mathbf {r}=\lbrace r^{word}_1,\cdots ,r^{word}_{M_1}\rbrace \cup \lbrace r^{rel}_1,\cdots ,r^{rel}_{M_2}\rbrace $ , where the first $M_1$ tokens are words (e.g. {episode, written}), and the last $M_2$ tokens are relation names, e.g., {episode_written} or {starring_roles, series} (when the target is a chain as in Figure 1 (b)). We transform each token above to its word embedding, then use two BiLSTMs (with shared parameters) to get their hidden representations $[\mathbf {B}^{word}_{1:M_1}:\mathbf {B}^{rel}_{1:M_2}]$ (each row vector $\mathbf {\beta }_i$ is the concatenation of the forward/backward representations at $i$ ). We initialize the relation-sequence LSTM with the final state representations of the word sequence, as a back-off for unseen relations. We apply one max-pooling over these two sets of vectors to get the final relation representation $\mathbf {h}^r$ .
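A rough PyTorch sketch of such a relation encoder is given below (module and variable names are ours; embedding lookup, padding, and batching are assumed to be handled elsewhere):

```python
import torch
import torch.nn as nn

class RelationEncoder(nn.Module):
    """Shared BiLSTM over word-level and relation-level tokens; the
    relation-level pass is initialized from the word-level final states,
    and a max-pool over all hidden states gives the relation vector h_r."""
    def __init__(self, emb_dim, hidden_dim):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden_dim, bidirectional=True,
                            batch_first=True)

    def forward(self, word_embs, rel_embs):
        # word_embs: (B, M1, emb_dim); rel_embs: (B, M2, emb_dim)
        word_out, (h_n, c_n) = self.lstm(word_embs)
        # back off to the word-sequence final state for unseen relation names
        rel_out, _ = self.lstm(rel_embs, (h_n, c_n))
        all_states = torch.cat([word_out, rel_out], dim=1)   # (B, M1+M2, 2H)
        h_r, _ = all_states.max(dim=1)                        # max-pooling
        return h_r
```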
Different Abstractions of Questions Representations
From Table 1 , we can see that different parts of a relation could match different contexts of question texts. Usually relation names could match longer phrases in the question and relation words could match short phrases. Yet different words might match phrases of different lengths.
As a result, we hope the question representations could also comprise vectors that summarize various lengths of phrase information (different levels of abstraction), in order to match relation representations of different granularity. We deal with this problem by applying deep BiLSTMs on questions. The first-layer of BiLSTM works on the word embeddings of question words $\mathbf {q}=\lbrace q_1,\cdots ,q_N\rbrace $ and gets hidden representations $\mathbf {\Gamma }^{(1)}_{1:N}=[\mathbf {\gamma }^{(1)}_1;\cdots ;\mathbf {\gamma }^{(1)}_N]$ . The second-layer BiLSTM works on $\mathbf {\Gamma }^{(1)}_{1:N}$ to get the second set of hidden representations $\mathbf {\Gamma }^{(2)}_{1:N}$ . Since the second BiLSTM starts with the hidden vectors from the first layer, intuitively it could learn more general and abstract information compared to the first layer.
Note that the first(second)-layer of question representations does not necessarily correspond to the word(relation)-level relation representations; instead, either layer of question representations could potentially match either level of relation representations. This raises the difficulty of matching between different levels of relation/question representations; the following section gives our proposal for dealing with this problem.
Hierarchical Matching between Relation and Question
Now we have question contexts of different lengths encoded in $\mathbf {\Gamma }^{(1)}_{1:N}$ and $\mathbf {\Gamma }^{(2)}_{1:N}$ . Unlike the standard usage of deep BiLSTMs that employs the representations in the final layer for prediction, here we expect that two layers of question representations can be complementary to each other and both should be compared to the relation representation space (Hierarchical Matching). This is important for our task since each relation token can correspond to phrases of different lengths, mainly because of syntactic variations. For example in Table 1 , the relation word written could be matched to either the same single word in the question or a much longer phrase be the writer of.
We could perform the above hierarchical matching by computing the similarity between each layer of $\mathbf {\Gamma }$ and $\mathbf {h}^r$ separately and taking the (weighted) sum of the two scores. However, this does not give a significant improvement (see Table 2 ). Our analysis in Section "Relation Detection Results" shows that this naive method suffers from training difficulty, evidenced by the fact that the converged training loss of this model is much higher than that of a single-layer baseline model. This is mainly because (1) deep BiLSTMs do not guarantee that the two levels of question hidden representations are comparable, so training usually falls into local optima where one layer has good matching scores and the other always has a weight close to 0; and (2) the training of deeper architectures is itself more difficult.
To overcome the above difficulties, we adopt the idea from Residual Networks BIBREF23 for hierarchical matching, by adding shortcut connections between the two BiLSTM layers. We propose two ways of such Hierarchical Residual Matching: (1) Connecting each $\mathbf {\gamma }^{(1)}_i$ and $\mathbf {\gamma }^{(2)}_i$ , resulting in $\mathbf {\gamma }^{\prime }_i=\mathbf {\gamma }^{(1)}_i + \mathbf {\gamma }^{(2)}_i$ for each position $i$ . Then the final question representation $\mathbf {h}^q$ becomes a max-pooling over all $\mathbf {\gamma }^{\prime }_i$ s, $1 \le i \le N$ . (2) Applying max-pooling on $\mathbf {\Gamma }^{(1)}_{1:N}$ and $\mathbf {\Gamma }^{(2)}_{1:N}$ to get $\mathbf {h}^{(1)}_{max}$ and $\mathbf {h}^{(2)}_{max}$ , respectively, then setting $\mathbf {h}^q = \mathbf {h}^{(1)}_{max} + \mathbf {h}^{(2)}_{max}$ . Finally we compute the matching score of $\mathbf {r}$ given $\mathbf {q}$ as the similarity between $\mathbf {h}^q$ and $\mathbf {h}^r$ .
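The two variants could be sketched as follows (our notation: gamma1 and gamma2 are the hidden-state matrices of the two question BiLSTM layers for one question, h_r is the relation vector from above; cosine similarity is used as the scoring function here, which is an assumption rather than a detail stated in this section):

```python
import torch.nn.functional as F

def score_residual_states(gamma1, gamma2, h_r):
    """Variant 1: add the two layers position-wise, then max-pool."""
    h_q, _ = (gamma1 + gamma2).max(dim=0)        # gamma*: (N, 2H)
    return F.cosine_similarity(h_q, h_r, dim=0)

def score_residual_pooled(gamma1, gamma2, h_r):
    """Variant 2: max-pool each layer separately, then add the pooled vectors."""
    h1, _ = gamma1.max(dim=0)
    h2, _ = gamma2.max(dim=0)
    return F.cosine_similarity(h1 + h2, h_r, dim=0)
```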
Intuitively, the proposed method should benefit from hierarchical training since the second layer is fitting the residues from the first layer of matching, so the two layers of representations are more likely to be complementary to each other. This also ensures the vector spaces of two layers are comparable and makes the second-layer training easier.
During training we adopt a ranking loss to maximize the margin between the gold relation $\mathbf {r}^+$ and other relations $\mathbf {r}^-$ in the candidate pool $R$ .
$$l_{\mathrm {rel}} = \max \lbrace 0, \gamma - s_{\mathrm {rel}}(\mathbf {r}^+; \mathbf {q}) + s_{\mathrm {rel}}(\mathbf {r}^-; \mathbf {q})\rbrace \nonumber $$ (Eq. 12)
where $\gamma $ is a constant parameter. Fig 2 summarizes the above Hierarchical Residual BiLSTM (HR-BiLSTM) model.
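In code, the ranking loss is a hinge over scored relation pairs; a sketch (the margin value is illustrative, and summing over a pool of negatives is our choice, whereas the equation above is written for a single negative):

```python
import torch

def ranking_loss(s_pos, s_neg_list, gamma=0.5):
    """Hinge loss between the gold relation score and each negative candidate.
    gamma = 0.5 is an illustrative value, not the paper's tuned margin."""
    loss = 0.0
    for s_neg in s_neg_list:
        loss = loss + torch.clamp(gamma - s_pos + s_neg, min=0.0)
    return loss
```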
Another way of performing hierarchical matching is to rely on an attention mechanism, e.g. BIBREF24 , to find the correspondence between different levels of representations. This performs below the HR-BiLSTM (see Table 2 ).
KBQA Enhanced by Relation Detection
This section describes our KBQA pipeline system. We make minimal efforts beyond the training of the relation detection model, making the whole system easy to build.
Following previous work BIBREF4 , BIBREF5 , our KBQA system takes an existing entity linker to produce the top- $K$ linked entities, $EL_K(q)$ , for a question $q$ (“initial entity linking”). Then we generate the KB queries for $q$ following the four steps illustrated in Algorithm "KBQA Enhanced by Relation Detection" .
Algorithm (KBQA with two-step relation detection). Input: a question $q$ and the top- $K$ entity candidates $EL_K(q)$ . Output: the top query tuple $(\hat{e},\hat{r}, \lbrace (c, r_c)\rbrace )$ . Step 1, Entity Re-Ranking (first-step relation detection): use the raw question text as input for a relation detector to score all relations in the KB that are associated with the entities in $EL_K(q)$ ; use the relation scores to re-rank $EL_K(q)$ and generate a shorter list $EL^{\prime }_{K^{\prime }}(q)$ containing the top- $K^{\prime }$ entity candidates (Section "Entity Re-Ranking" ). Step 2, Relation Detection: detect relation(s) using the reformatted question text in which the topic entity is replaced by a special token “ $<$ e $>$ ” (Section "Relation Detection" ). Step 3, Query Generation: combine the scores from steps 1 and 2, and select the top pair $(\hat{e},\hat{r})$ (Section "Query Generation" ). Step 4, Constraint Detection (optional): compute the similarity between $q$ and any neighbor entity $c$ (connected by a relation $r_c$ ) of the entities along the core chain, and add the high-scoring $c$ and $r_c$ to the query (Section "Constraint Detection" ).
Compared to previous approaches, the main difference is that we have an additional entity re-ranking step after the initial entity linking. We include this step because we have observed that entity linking sometimes becomes a bottleneck in KBQA systems. For example, on SimpleQuestions the best reported linker achieves only 72.7% top-1 accuracy on identifying topic entities. This is usually due to the ambiguity of entity names; e.g. in Fig 1 (a), there are a TV writer and a baseball player named “Mike Kelley”, who are impossible to distinguish with entity-name matching alone.
Having observed that different entity candidates usually connect to different relations, here we propose to help entity disambiguation in the initial entity linking with relations detected in questions.
Sections "Entity Re-Ranking" and "Relation Detection" elaborate how our relation detection help to re-rank entities in the initial entity linking, and then those re-ranked entities enable more accurate relation detection. The KBQA end task, as a result, benefits from this process.
Entity Re-Ranking
In this step, we use the raw question text as input for a relation detector to score all relations in the KB with connections to at least one of the entity candidates in $EL_K(q)$ . We call this step relation detection on an entity set, since it does not work on a single topic entity as in the usual setting. We use the HR-BiLSTM as described in Sec. "Improved KB Relation Detection" . For each question $q$ , after generating a score $s_{rel}(r;q)$ for each relation using the HR-BiLSTM, we use the top $l$ best-scoring relations ( $R^{l}_q$ ) to re-rank the original entity candidates. Concretely, for each entity $e$ and its associated relations $R_e$ , given the original entity linker score $s_{linker}$ and the score of the most confident relation $r\in R_q^{l} \cap R_e$ , we sum these two scores to re-rank the entities:
$$s_{\mathrm {rerank}}(e;q) =& \alpha \cdot s_{\mathrm {linker}}(e;q) \nonumber \\ + & (1-\alpha ) \cdot \max _{r \in R_q^{l} \cap R_e} s_{\mathrm {rel}}(r;q).\nonumber $$ (Eq. 15)
Finally, we select the top $K^{\prime } < K$ entities according to the score $s_{rerank}$ to form the re-ranked list $EL_{K^{\prime }}^{\prime }(q)$ .
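Putting the re-ranking together, a sketch of the scoring and selection might look as follows (our data layout; the fallback when an entity shares no relation with the top-l list is an assumption):

```python
def rerank_entities(cands, rel_scores, top_l, alpha, k_prime):
    """cands: list of (entity, linker_score, relations); rel_scores: dict
    mapping relation -> s_rel(r; q). Keeps the K' best entities by s_rerank."""
    top_rels = set(sorted(rel_scores, key=rel_scores.get, reverse=True)[:top_l])
    scored = []
    for entity, s_linker, relations in cands:
        overlap = top_rels & set(relations)
        # fallback of 0.0 when the intersection is empty is our assumption
        s_rel = max(rel_scores[r] for r in overlap) if overlap else 0.0
        scored.append((alpha * s_linker + (1 - alpha) * s_rel, entity))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [e for _, e in scored[:k_prime]]
```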
We use the same example in Fig 1 (a) to illustrate the idea. Given the input question in the example, a relation detector is very likely to assign high scores to relations such as “episodes_written”, “author_of” and “profession”. Then, according to the connections of entity candidates in KB, we find that the TV writer “Mike Kelley” will be scored higher than the baseball player “Mike Kelley”, because the former has the relations “episodes_written” and “profession”. This method can be viewed as exploiting entity-relation collocation for entity linking.
Relation Detection
In this step, for each candidate entity $e \in EL_K^{\prime }(q)$ , we use the question text as the input to a relation detector to score all the relations $r \in R_e$ that are associated with the entity $e$ in the KB. Because we have a single topic entity as input in this step, we perform the following question reformatting: we replace the candidate $e$ 's entity mention in $q$ with a token “ $<$ e $>$ ”. This helps the model better distinguish the relative position of each word with respect to the entity. We use the HR-BiLSTM model to predict the score of each relation $r \in R_e$ : $s_{rel} (r;e,q)$ .
Query Generation
Finally, the system outputs the $<$ entity, relation (or core-chain) $>$ pair $(\hat{e}, \hat{r})$ according to:
$$s(\hat{e}, \hat{r}; q) =& \max _{e \in EL_{K^{\prime }}^{^{\prime }}(q), r \in R_e} \left( \beta \cdot s_{\mathrm {rerank}}(e;q) \right. \nonumber \\ &\left.+ (1-\beta ) \cdot s_{\mathrm {rel}} (r;e,q) \right), \nonumber $$ (Eq. 19)
where $\beta $ is a hyperparameter to be tuned.
Constraint Detection
Similar to BIBREF4 , we adopt an additional constraint detection step based on text matching. Our method can be viewed as entity linking on a KB sub-graph. It contains two steps: (1) Sub-graph generation: given the top-scored query generated by the previous three steps, for each node $v$ (the answer node or a CVT node as in Figure 1 (b)), we collect all the nodes $c$ connected to $v$ through any relation $r_c$ , and generate a sub-graph associated with the original query. (2) Entity linking on sub-graph nodes: we compute a matching score between each $n$ -gram in the input question (without overlapping the topic entity) and the entity name of $c$ (except for the nodes in the original query) by taking into account the maximum overlapping sequence of characters between them (see Appendix A for details and B for special rules dealing with date/answer type constraints). If the matching score is larger than a threshold $\theta $ (tuned on the training set), we add the constraint entity $c$ (and $r_c$ ) to the query by attaching it to the corresponding node $v$ on the core-chain.
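The n-gram-to-entity-name matching could be approximated with a longest-common-substring score as sketched below (difflib is used for the character match; the normalization and threshold handling are our assumptions, and the special date/answer-type rules from Appendix B are not reproduced):

```python
from difflib import SequenceMatcher

def char_overlap_score(ngram, entity_name):
    """Length of the longest common character block, normalized by the
    longer string length (one plausible normalization, not the paper's)."""
    a, b = ngram.lower(), entity_name.lower()
    match = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
    return match.size / max(len(a), len(b), 1)

def detect_constraints(question_ngrams, neighbors, theta):
    """neighbors: list of (c, r_c) sub-graph nodes; keep pairs whose best
    n-gram match exceeds the threshold theta."""
    kept = []
    for c, r_c in neighbors:
        if max(char_overlap_score(g, c) for g in question_ngrams) > theta:
            kept.append((c, r_c))
    return kept
```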
Experiments
Task Introduction & Settings
We use the SimpleQuestions BIBREF2 and WebQSP BIBREF25 datasets. Each question in these datasets is labeled with the gold semantic parse. Hence we can directly evaluate relation detection performance independently as well as evaluate on the KBQA end task.
SimpleQuestions (SQ): It is a single-relation KBQA task. The KB we use consists of a Freebase subset with 2M entities (FB2M) BIBREF2 , in order to compare with previous research. yin2016simple also evaluated their relation extractor on this data set and released their proposed question-relation pairs, so we run our relation detection model on their data set. For the KBQA evaluation, we also start with their entity linking results. Therefore, our results can be compared with their reported results on both tasks.
WebQSP (WQ): A multi-relation KBQA task. We use the entire Freebase KB for evaluation purposes. Following yih-EtAl:2016:P16-2, we use S-MART BIBREF26 entity-linking outputs. In order to evaluate the relation detection models, we create a new relation detection task from the WebQSP data set. For each question and its labeled semantic parse: (1) we first select the topic entity from the parse; and then (2) select all the relations and relation chains (length $\le $ 2) connected to the topic entity, and set the core-chain labeled in the parse as the positive label and all the others as the negative examples.
We tune the following hyper-parameters on development sets: (1) the size of hidden states for LSTMs ({50, 100, 200, 400}); (2) learning rate ({0.1, 0.5, 1.0, 2.0}); (3) whether the shortcut connections are between hidden states or between max-pooling results (see Section "Hierarchical Matching between Relation and Question" ); and (4) the number of training epochs.
For both the relation detection experiments and the second-step relation detection in KBQA, we have entity replacement first (see Section "Relation Detection" and Figure 1 ). All word vectors are initialized with 300- $d$ pretrained word embeddings BIBREF27 . The embeddings of relation names are randomly initialized, since existing pre-trained relation embeddings (e.g. TransE) usually support limited sets of relation names. We leave the usage of pre-trained relation embeddings to future work.
Relation Detection Results
Table 2 shows the results on two relation detection tasks. The AMPCNN result is from BIBREF20 , which yielded state-of-the-art scores by outperforming several attention-based methods. We re-implemented the BiCNN model from BIBREF4 , where both questions and relations are represented with the word hash trick on character tri-grams. The baseline BiLSTM with relation word sequence appears to be the best baseline on WebQSP and is close to the previous best result of AMPCNN on SimpleQuestions. Our proposed HR-BiLSTM outperformed the best baselines on both tasks by margins of 2-3% (p $<$ 0.001 and 0.01 compared to the best baseline BiLSTM w/ words on SQ and WQ respectively).
Note that using only relation names instead of words results in a weaker baseline BiLSTM model. The model yields a significant performance drop on SimpleQuestions (91.2% to 88.9%). However, the drop is much smaller on WebQSP, and it suggests that unseen relations have a much bigger impact on SimpleQuestions.
The bottom of Table 2 shows ablation results of the proposed HR-BiLSTM. First, hierarchical matching between questions and both relation names and relation words yields improvement on both datasets, especially for SimpleQuestions (93.3% vs. 91.2/88.8%). Second, residual learning helps hierarchical matching compared to weighted-sum and attention-based baselines (see Section "Hierarchical Matching between Relation and Question" ). For the attention-based baseline, we tried the model from BIBREF24 and its one-way variations, where the one-way model gives better results. Note that residual learning significantly helps on WebQSP (80.65% to 82.53%), while it does not help as much on SimpleQuestions. On SimpleQuestions, even removing the deep layers only causes a small drop in performance. WebQSP benefits more from residual and deeper architecture, possibly because in this dataset it is more important to handle larger scope of context matching.
Finally, on WebQSP, replacing BiLSTM with CNN in our hierarchical matching framework results in a large performance drop. Yet on SimpleQuestions the gap is much smaller. We believe this is because the LSTM relation encoder can better learn the composition of chains of relations in WebQSP, as it is better at dealing with longer dependencies.
Next, we present empirical evidence showing why our HR-BiLSTM model achieves the best scores. We use WebQSP for the analysis. First, we hypothesize that training of the weighted-sum model usually falls into local optima, since deep BiLSTMs do not guarantee that the two levels of question hidden representations are comparable. This is evidenced by the fact that during training one layer usually gets a weight close to 0 and is thus ignored. For example, one run gives us weights of -75.39/0.14 for the two layers (we take the exponential for the final weighted sum). It also gives much lower training accuracy (91.94%) compared to HR-BiLSTM (95.67%), suffering from training difficulty.
Second, compared to our deep BiLSTM with shortcut connections, we hypothesize that for KB relation detection, training deep BiLSTMs is more difficult without shortcut connections. Our experiments suggest that a deeper BiLSTM does not always result in higher training accuracy: a two-layer BiLSTM converges to 94.99%, even lower than the 95.25% achieved by a single-layer BiLSTM. Since under our setting the two-layer model contains the single-layer model as a special case (so it could potentially fit the training data better), this result suggests that the deep BiLSTM without shortcut connections suffers more from training difficulty.
Finally, we hypothesize that HR-BiLSTM is more than a combination of two BiLSTMs with residual connections, because it encourages the hierarchical architecture to learn different levels of abstraction. To verify this, we replace the deep BiLSTM question encoder with two single-layer BiLSTMs (both on words) with shortcut connections between their hidden states. This decreases test accuracy to 76.11%. It gives similar training accuracy to HR-BiLSTM, indicating a more serious over-fitting problem. This shows that the residual and deep structures both contribute to the good performance of HR-BiLSTM.
KBQA End-Task Results
Table 3 compares our system with two published baselines (1) STAGG BIBREF4 , the state-of-the-art on WebQSP and (2) AMPCNN BIBREF20 , the state-of-the-art on SimpleQuestions. Since these two baselines are specially designed/tuned for one particular dataset, they do not generalize well when applied to the other dataset. In order to highlight the effect of different relation detection models on the KBQA end-task, we also implemented another baseline that uses our KBQA system but replaces HR-BiLSTM with our implementation of AMPCNN (for SimpleQuestions) or the char-3-gram BiCNN (for WebQSP) relation detectors (second block in Table 3 ).
Compared to the baseline relation detector (3rd row of results), our method, which includes an improved relation detector (HR-BiLSTM), improves the KBQA end task by 2-3% (4th row). Note that, in contrast to previous KBQA systems, our system does not use joint inference or a feature-based re-ranking step; nevertheless it still achieves better or comparable results relative to the state-of-the-art.
The third block of the table details two ablation tests for the proposed components in our KBQA systems: (1) Removing the entity re-ranking step significantly decreases the scores. Since the re-ranking step relies on the relation detection models, this shows that our HR-BiLSTM model contributes to the good performance in multiple ways. Appendix C gives the detailed performance of the re-ranking step. (2) In contrast to the conclusion in BIBREF4 , constraint detection is crucial for our system. This is probably because our joint performance on topic entity and core-chain detection is more accurate (77.5% top-1 accuracy), leaving a huge potential (77.5% vs. 58.0%) for the constraint detection module to improve.
Finally, like STAGG, which uses multiple relation detectors (see yih2015semantic for the three models used), we also try to use the top-3 relation detectors from Section "Relation Detection Results" . As shown on the last row of Table 3 , this gives a significant performance boost, resulting in a new state-of-the-art result on SimpleQuestions and a result comparable to the state-of-the-art on WebQSP.
Conclusion
KB relation detection is a key step in KBQA and is significantly different from general relation extraction tasks. We propose a novel KB relation detection model, HR-BiLSTM, that performs hierarchical matching between questions and KB relations. Our model outperforms previous methods on KB relation detection tasks and allows our KBQA system to achieve state-of-the-art results. For future work, we will investigate the integration of our HR-BiLSTM into end-to-end systems. For example, our model could be integrated into the decoder in BIBREF31 to provide better sequence prediction. We will also investigate new emerging datasets like GraphQuestions BIBREF32 and ComplexQuestions BIBREF30 to handle more characteristics of general QA.
|
On which benchmarks they achieve the state of the art?
|
SimpleQuestions, WebQSP
| 4,524
|
qasper
|
8k
|
Current address: Division of Brain Sciences, Department of Medicine, Imperial College London, London, United Kingdom.
In a variety of species, reduced food intake, and in particular protein or amino acid (AA) restriction, extends lifespan and healthspan. However, the underlying epigenetic and/or transcriptional mechanisms are largely unknown, and dissection of specific pathways in cultured cells may contribute to filling this gap. We have previously shown that, in mammalian cells, deprivation of essential AAs (methionine/cysteine or tyrosine) leads to the transcriptional reactivation of integrated silenced transgenes, including plasmid and retroviral vectors and latent HIV-1 provirus, by a process involving epigenetic chromatin remodeling and histone acetylation. Here we show that the deprivation of methionine/cysteine also leads to the transcriptional upregulation of endogenous retroviruses, suggesting that essential AA starvation affects the expression not only of exogenous non-native DNA sequences, but also of endogenous anciently-integrated and silenced parasitic elements of the genome. Moreover, we show that the transgene reactivation response is highly conserved in different mammalian cell types, and it is reproducible with deprivation of most essential AAs. The General Control Non-derepressible 2 (GCN2) kinase and the downstream integrated stress response represent the best candidates mediating this process; however, by pharmacological approaches, RNA interference and genomic editing, we demonstrate that they are not implicated. Instead, the response requires MEK/ERK and/or JNK activity and is reproduced by ribosomal inhibitors, suggesting that it is triggered by a novel nutrient-sensing and signaling pathway, initiated by translational block at the ribosome, and independent of mTOR and GCN2. Overall, these findings point to a general transcriptional response to essential AA deprivation, which affects the expression of non-native genomic sequences, with relevant implications for the epigenetic/transcriptional effects of AA restriction in health and disease.
Copyright: © 2018 De Vito et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper and its Supporting Information files. RNAseq data are available in the ArrayExpress database under the accession number E-MTAB-6452.
Funding: This study was funded by the Ajinomoto Innovation Alliance Program, (AIAP; https://www.ajinomoto.com/en/rd/AIAP/index.html#aiap) (to M.V.S and D.G), which is a joint research initiative of Ajinomoto Co., Inc., Japan. One of the authors [M.B.] is an employee of Ajinomoto Co., and his specific roles are articulated in the ‘author contributions’ section. The commercial funder provided support in the form of salary for author [M.B.] and some of the necessary research materials (medium for cell culture), but did not have any additional role in the study design, data collection and analysis, or preparation of the manuscript, and the authors had unrestricted access to the data. Due to a confidentiality agreement, the commercial funder participated only in the decision to publish the data obtained during the study, without any restriction.
Competing interests: This study was funded by Ajinomoto Co., Inc., Japan and one of the authors [M.B.] is an employee of this commercial funder. No other employment or consultancy relationships exist with the commercial funder, and no patents, products in development, or marketed products result from this study. The authors declare that no competing interests exist and that the commercial affiliation of one of the authors does not alter the adherence of authors to all PLOS ONE policies on sharing data and materials.
In animals, excessive, insufficient, or imbalanced nutrient availability is known to strongly impact on phenotype and health, both short and long-term, and across generations [1, 2]. In particular, studies in yeast, animal models and humans have shown that reduced food intake, reducing either overall calories, or only sugars, proteins, or even single amino acids (AA), such as Methionine (Met), may extend lifespan and healthspan, and reduce the risk of cancer and other age-related diseases [3–9]. In addition, fasting or specific AA deprivation have shown potential therapeutic applications, owing to their ability to directly reduce the growth of some tumor types [10, 11], sensitize cancer cells to chemo- or immunotherapy [12, 13], and allow efficient hematopoietic stem cell engraftment . However, little is known about the specific processes and molecular mechanisms mediating the roles of nutrient restriction in human health and longevity.
A properly balanced diet in metazoans contains optimal amounts of a subset of AA, which cannot be synthetized de novo and are therefore named essential amino acids (EAAs). In humans these include Met, Histidine (His), Isoleucine (Ile), Leucine (Leu), Lysine (Lys), Phenylalanine (Phe), Threonine (Thr), Tryptophan (Trp), and Valine (Val), while a few others are considered as semi-essential, such as Glutamine (Gln) and Tyrosine (Tyr) [15, 16]. Consistently, EAA deprivation triggers a cell-autonomous adaptive response, characterized by extensive metabolic and gene expression modifications, implementing biosynthetic, catabolic, and plasma membrane transport processes, aimed at reconstituting the full AA complement [17, 18]. The best known and conserved pathways responding to AA deprivation are triggered by mechanistic Target of Rapamycin Complex 1 (mTORC1) and General amino acid Control Non-derepressible 2 (GCN2) protein kinases [15, 19, 20]. Activation of mTORC1 requires in particular the presence of Gln, Arg and Leu, but also Met , which activate the kinase through sensors mainly acting upstream of Rag GTPases at lysosomal membranes . In turn, mTORC1 promotes cell growth, proliferation and anabolism upon activation, and translational attenuation and autophagy upon inhibition [19, 20].
By contrast, GCN2 is activated by deprivation of any individual EAA, by means of its histidyl-tRNA synthetase-related domain, which binds uncharged tRNAs accumulating during AA limitation [23, 24]. Upon activation, GCN2 phosphorylates and inhibits its only known downstream target, namely the eukaryotic Initiation Factor 2 α (eIF2α), thereby initiating the Integrated Stress Response (ISR). This leads to attenuation of general translation, and induction of a transcriptional/translational program, aimed at increasing stress resistance and restoring cell homeostasis, by upregulating a specific subset of genes, including Activating Transcription Factor 4 (ATF4) and C/EBP-Homologous Protein (CHOP) [25–27]. Thus, inhibition of mTORC1 and activation of GCN2 by AA restriction cooperate to attenuate general translation at the initiation step, increase catabolism and turnover, and enhance stress resistance to promote adaptation . However, how these processes eventually induce protective mechanisms against the alterations associated with aging, which include pervasive epigenetic and transcriptional changes [28, 29], remains largely unknown.
We previously reported the unexpected observation that prolonged deprivation of either Tyr, or of both Methionine and Cysteine (Met/Cys), triggers the selective and reversible reactivation of exogenous transcriptional units, including plasmids, retroviral vectors and proviruses, integrated into the genome and transcriptionally repressed by defensive mechanisms against non-native DNA sequences [30, 31]. This phenomenon was observed both in HeLa epithelial and ACH-2 lymphocytic human cells, and was independent of the transgene or provirus (Ocular Albinism type 1, OA1; Green Fluorescent Protein, GFP; Lysosomal-Associated Membrane Protein 1, LAMP1; Human Immunodeficiency Virus-1, HIV-1), or of the exogenous promoter driving their transcription, either viral (cytomegalovirus, CMV; Long Terminal Repeat, LTR) or human (Phospho-Glycerate Kinase 1, PGK1; Elongation Factor-1α, EF-1α) . Furthermore, this transgene reactivation response was not reproduced by serum starvation, activation of p38, or pharmacological inhibitors of mTOR (PP242 or rapamycin), sirtuins and DNA methylation. By contrast, it was induced by pan histone deacetylase (HDAC) inhibitors, and by selective inhibitors of class II HDACs . Consistently, we found that the mechanism responsible involves epigenetic modifications at the transgene promoter, including reduced nucleosome occupancy and increased histone acetylation, and is mediated in part by reduced expression of a class II HDAC, namely HDAC4 .
These findings indicate that AA deprivation induces a specific epigenetic and transcriptional response, affecting the expression of newly-integrated exogenous transgenes and proviruses, and suggesting that endogenous sequences sharing similar structural and functional features may represent a transcriptional target as well [30, 31]. In particular, transposable elements, such as LTR-retrotransposons (or endogenous retroviruses, ERVs), are genomic “parasites” anciently-integrated into the genome, and silenced by epigenetic mechanisms of mammalian cells against the spreading of mobile elements, eventually becoming "endogenized" during evolution [32, 33]. This raises the question of whether their expression is also sensitive to AA restriction. In addition, it remains unclear whether or not the transgene reactivation response is related to specific AA deprivations, and most importantly which is the AA sensing/signaling pathway involved, in particular whether the GCN2 kinase is implicated. Thus, here we used the reactivation of silenced transgenes in cultured cells, as a model to investigate a novel molecular pathway induced by imbalanced EAA starvation, implicated in the epigenetic/transcriptional regulation of exogenous non-native DNA sequences and possibly of other endogenous anciently-integrated genomic elements.
HeLa human epithelial carcinoma, HepG2 human hepatocellular carcinoma and C2C12 mouse skeletal muscle cells were maintained in DMEM containing glutaMAX (Invitrogen) and supplemented with 10% FBS (Sigma), 100 U/ml penicillin G (Invitrogen), 100 mg/ml streptomycin (Invitrogen), at 37°C in a 5% CO2 humidified atmosphere. Cell lines carrying integrated and partially silenced transgenes were also maintained in 600–1000 μg/ml G418.
The C2C12 cell line was provided by ATCC. HeLa and HepG2 cells were obtained by Drs. F. Blasi and G. Tonon at San Raffaele Scientific Institute, Milan, Italy, respectively, and were authenticated by Short Tandem Repeat (STR) profiling, using the Cell ID System kit (Promega), according to the manufacturer’s instructions. Briefly, STR-based multiplex PCR was carried out in a final volume of 25 μL/reaction, including 5 μL Cell ID Enzyme Mix 5X, 2.5 μL Cell ID Primer Mix 10X and 3 ng of template DNA. The thermal cycling conditions were: 1 cycle at 96°C for 2 min, followed by 32 cycles at 94°C for 30 sec, 62°C for 90 sec, and 72°C for 90 sec, and 1 cycle at 60°C for 45 sec. The following STR loci were amplified: AMEL, CSF1PO, D13S317, D16S539, D21S11, D5S818, D7S820, TH01, TPOX, vWA. Fragment length analysis of STR-PCR products was performed by Eurofins Genomics, using standard procedures of capillary electrophoresis on the Applied Biosystems 3130 XL sequencing machine, and assessment of the STR profile was performed at the online STR matching analysis service provided at http://www.dsmz.de/fp/cgi-bin/str.html.
Stable cell clones, expressing myc-tagged human OA1 (GPR143) or GFP transcripts, were generated using pcDNA3.1/OA1myc-His or pcDNA3.1/EGFP vectors . Briefly, HeLa, HepG2 and C2C12 cells were transfected using FuGENE 6 (Roche) and selected with 800, 1000, and 650 μg/ml of G418 (Sigma), respectively, which was maintained thereafter to avoid loss of plasmid integration. G418-resistant clones were isolated and analyzed for protein expression by epifluorescence and/or immunoblotting.
Full DMEM-based medium, carrying the entire AA complement, and media deprived of Met/Cys (both AAs), Met (only), Cys (only), Alanine (Ala), Thr, Gln, Val, Leu, Tyr, Trp, Lys and His were prepared using the Nutrition free DMEM (cat.#09077–05, from Nacalai Tesque, Inc., Kyoto, Japan), by adding Glucose, NaHCO3, and either all 20 AAs (for full medium) or 18–19 AAs only (for deprivations of two-one AAs). Single AAs, Glucose, and NaHCO3 were from Sigma. Further details and amounts utilized are indicated in S1 Table. All media were supplemented with 10% dialyzed FBS (Invitrogen), 100 U/ml penicillin G (Invitrogen), 100 mg/ml streptomycin (Invitrogen), and G418 as required. HBSS was from Invitrogen. Cells were seeded at 10–30% of confluency; cells to be starved for 48 h were plated 2–3 times more confluent compared to the control. The following day, cells were washed and cultured in the appropriate medium, with or without EAA, for 24–48 h.
L-Histidinol (HisOH), PP242, Integrated Stress Response Inhibitor (ISRIB), SP600125, Cycloheximide (CHX) were from Sigma; Salubrinal was from Tocris Bioscience; U0126 was from Promega. Drugs were used at the following final concentrations: HisOH at 4–16 mM; PP242 at 1–3 μM; ISRIB at 100 nM; SP600125 at 20 μM in HepG2 cells and 50 μM in HeLa cells; Cycloheximide (CHX) at 50 ug/ml in HepG2 cells and 100 ug/ml in HeLa cells; Salubrinal at 75 μM; U0126 at 50 μM. Vehicle was used as mock control. Treatments with drugs to be tested for their ability to inhibit transgene reactivation (ISRIB, SP600125 and U0126) were initiated 1h before the subsequent addition of L-Histidinol (ISRIB) or the subsequent depletion of Met/Cys (SP600125 and U0126).
Total RNA was purified using the RNeasy Mini kit (Qiagen), according to the manufacturer’s instructions. RNA concentration was determined with a Nanodrop 8000 Spectrophotometer (Thermo Scientific). Equal amounts (1 μg) of RNA from HeLa, HepG2 and C2C12 cells were reverse transcribed using the SuperScript First-Strand Synthesis System for RT-PCR (Invitrogen) with oligo-dT primers, and diluted to 5 ng/μl. The cDNA (2 μl) was amplified by real-time PCR using SYBR green Master Mix on a Light Cycler 480 (Roche), according to the manufacturer’s instructions. The thermal cycling conditions were: 1 cycle at 95°C for 5 min, followed by 40–45 cycles at 95°C for 20 sec, 56°C for 20 sec and 72°C for 20 sec. The sequences, efficiencies and annealing temperatures of the primers are provided in S2 Table. Data were analyzed with Microsoft Excel using the efficiency-corrected formula (Etarget)^ΔCt,target(control − sample) / (Ereference)^ΔCt,reference(control − sample). Reference genes for normalization were ARPC2 (actin-related protein 2/3 complex, subunit 2) for HeLa and HepG2 cells, and Actb (actin beta) for C2C12 cells, unless otherwise indicated.
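As an illustration of this efficiency-corrected (Pfaffl-type) calculation, a minimal Python sketch is shown below; the efficiencies and Ct values are hypothetical placeholders, not measured data.

def relative_expression(e_target, e_reference,
                        ct_target_control, ct_target_sample,
                        ct_ref_control, ct_ref_sample):
    # ratio = E_target^dCt_target(control - sample) / E_reference^dCt_reference(control - sample)
    return (e_target ** (ct_target_control - ct_target_sample)) / \
           (e_reference ** (ct_ref_control - ct_ref_sample))

# Hypothetical example: OA1 normalized to ARPC2, with near-optimal primer efficiencies
print(relative_expression(1.95, 2.00, 28.4, 25.1, 20.2, 20.3))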
siRNAs (Mission esiRNA, 200 ng/μL; Sigma) against ATF4 and GCN2 were designed against the target sequences NM_001675 and NM_001013703, respectively. Cells seeded in 6-well plates were transfected with 1 μg of siRNAs and 5 μL of Lipofectamine 2000 (Invitrogen), following the manufacturer’s instructions, at day 1 post-plating for ATF4 and at days 1 and 2 post-plating for GCN2. At day 2 (ATF4) or 3 (GCN2) post-plating, cells were washed and cultured in medium in the absence or presence of 4 mM HisOH for 6 h. siRNAs against RLuc (Sigma), targeting Renilla Luciferase, were used as negative control. For CRISPR/Cas9 experiments, we used the “all-in-one Cas9-reporter” vector expressing GFP (Sigma), which is characterized by a single-vector format including the Cas9 protein expression cassette and the gRNA (guide RNA). GFP is co-expressed from the same mRNA as the Cas9 protein, enabling tracking of transfection efficiency and enrichment of transfected cells by fluorescence-activated cell sorting (FACS). The human U6 promoter drives gRNA expression, and the CMV promoter drives Cas9 and GFP expression. The oligonucleotide sequences for the three gRNAs targeting GCN2 exon 1 or 6 are listed in S2 Table. We transfected HeLa and HepG2 cells with these plasmids individually (one plasmid, one guide) and sorted the GFP-positive, transfected cells by FACS. Screening of GCN2-KO clones was performed by western blotting. In the case of HepG2-OA1 cells, two rounds of selection were necessary to obtain three GCN2-KO clones by using a guide RNA against exon 1. Compared to the original HepG2-OA1 cell line and to the clone resulting from the first round of selection (185#27), the selected clones E23, F22 and F27 showed a very low amount, if any, of residual GCN2 protein (see results).
Genomic DNA of HeLa and HepG2 cells was purified using DNeasy Blood and Tissue kit (Qiagen), according to the manufacturer’s instructions. DNA concentration was determined by Nanodrop 8000 Spectrophotometer (Thermo Scientific). PCR conditions for amplification of GCN2 exon 1 and 6 were as follows: 1 cycle at 94°C for 5 min, followed by 35 cycles at 94°C for 40 sec, 56°C for 40 sec, and 72°C for 40 sec; and a final extension step of 5 min at 72°C. The primer sequences are provided in S2 Table.
For OA1, western immunoblotting was carried out as described. For GCN2, cells were lysed in RIPA buffer, boiled at 95°C for 5 min and resolved on a 7.5% polyacrylamide gel; immunoblotting was then performed following standard procedures. Primary Abs were as follows: anti-human OA1, previously developed by our group in rabbits; anti-GCN2 (Cell Signaling, Cat. #3302).
Statistical analyses were performed using Microsoft Excel for Mac (version 15.32, Microsoft) for Student’s t-test, or GraphPad Prism (version 5.0d for Mac, GraphPad Software, Inc.) for one-way analysis of variance (ANOVA), followed by Dunnett’s or Tukey’s multiple comparisons post-tests. The t-test was used when only two means, typically sample versus control, were compared, as specified in the figure legends. One-way ANOVA was used for multiple comparisons, followed by either a Dunnett’s (to compare every mean to a control mean) or a Tukey’s (to compare every mean with every other mean) post-test, setting the significance level at 0.05 (95% confidence intervals). Both tests compare the difference between means to the amount of scatter, quantified using information from all the groups. Specifically, Prism computes the Tukey-Kramer test, allowing for unequal sample sizes. P values in the figures generally refer to the comparison between a sample and the control (full medium/mock), and are indicated as follows: *P<0.05, **P<0.01, ***P<0.001. Comparisons not involving the control are similarly indicated, by a horizontal line at the top of the graphs encompassing the two samples under analysis. Additional details regarding the specific experiments are reported in the Figure Legends.
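For illustration, a minimal Python sketch of the same workflow (one-way ANOVA followed by a Tukey post-test), using scipy and statsmodels instead of Prism, is given below; the group values are invented placeholders, not experimental data.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical fold-change values for three conditions (placeholders)
full = np.array([1.00, 1.05, 0.95])
met_cys = np.array([3.2, 2.8, 3.5])
trp = np.array([1.1, 1.3, 0.9])

f_stat, p_anova = stats.f_oneway(full, met_cys, trp)      # one-way ANOVA
values = np.concatenate([full, met_cys, trp])
groups = ["full"] * 3 + ["-Met/Cys"] * 3 + ["-Trp"] * 3
print(p_anova)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))       # Tukey's post-test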
To examine the expression behavior of genomic repeats upon AA starvation, we performed a transcriptomic analysis, taking advantage of an intramural sequencing facility. HeLa-OA1 cells were cultured in normal medium (for 6, 30 and 120 hours) or in the absence of Met/Cys (for 6, 15, 30, 72 and 120 hours). Total RNA was prepared using Trizol (Sigma) to preserve transcripts of both small and long sizes (from Alu, of about 0.3 kb, to Long Interspersed Nuclear Elements, LINEs, and ERVs, up to 6–8 kb long), treated with DNase to avoid contamination with genomic DNA, and processed for NGS sequencing using the Ovation RNA-Seq System V2 protocol on a HiSeq 2000 instrument. Raw sequence data (10–20 M reads/sample) were aligned to the human genome (build hg19) with SOAPSplice. Read counts over repeated regions, defined by the RepeatMasker track from the UCSC genome browser, were obtained using the bedtools suite. Normalization factors and read dispersion (d) were estimated with edgeR; variation of abundance over time was analyzed using the maSigPro package, fitting with a negative binomial distribution (Θ = 1/d, Q = 0.01), with a cutoff on the stepwise regression fit of r2 = 0.7. Read counts were transformed to RPKM for visualization purposes. The OA1 transgene and HDAC4, which are progressively up- and downregulated during starvation, respectively, were used as internal controls.
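As a sketch of the final quantification step, the following Python snippet converts raw read counts over repeat intervals to RPKM; interval names, lengths and counts are hypothetical placeholders.

import pandas as pd

# Hypothetical read counts per repeat interval (rows) and per sample (columns)
counts = pd.DataFrame({"ctrl_6h": [120, 45], "starv_6h": [180, 40]},
                      index=["Alu_locus1", "ERV_locus1"])
lengths_bp = pd.Series([300, 6000], index=counts.index)   # interval lengths (bp)
library_sizes = counts.sum(axis=0)                         # total mapped reads per sample

# RPKM = counts / (interval length in kb) / (library size in millions)
rpkm = counts.div(lengths_bp / 1e3, axis=0).div(library_sizes / 1e6, axis=1)
print(rpkm)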
For genomic repeat analysis, reads belonging to repetitive elements were classified according to RepeatMasker and assigned to repeat classes (total number in the genome = 21), families (total number in the genome = 56) and finally subfamilies (total number in the genome = 1396), each including a variable number of genomic loci (from a few hundred for endogenous retroviruses, up to several thousand for Alu). Repeat subfamilies were then clustered according to their expression pattern in starved vs control cells by maSigPro, using default parameters, and repeat classes or families significantly enriched in each cluster, compared to all genomic repeats, were identified by applying a Fisher exact test (using scipy.stats, a statistical module of Python). Alternatively, differentially expressed repeat subfamilies were identified by averaging three time points of starvation (15-30-72 h) and controls. Repeats significantly up- or downregulated (104 and 77, respectively) were selected based on a P value <0.05 (unpaired two-tailed Student’s t-test, assuming equal variance), and analyzed for class enrichment by a Fisher exact test as described above.
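A minimal sketch of such a class-enrichment test with scipy.stats.fisher_exact is shown below; the 2x2 counts are placeholders chosen only to illustrate the contingency-table layout.

from scipy.stats import fisher_exact

# Rows: LTR vs non-LTR subfamilies; columns: inside vs outside the upregulated cluster
ltr_in_cluster, ltr_total = 40, 580
other_in_cluster, other_total = 64, 816
table = [[ltr_in_cluster, ltr_total - ltr_in_cluster],
         [other_in_cluster, other_total - other_in_cluster]]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(odds_ratio, p_value)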
For gene set enrichment analysis of Met/Cys-deprived vs control HeLa cells, differentially expressed genes were selected considering three time points of starvation (15-30-72 h) and controls, based on a P value <0.05 (unpaired two-tailed Student’s t-test, assuming equal variance) and a fold change >2. This led to a total of 2033 differentially expressed genes, 996 upregulated and 1037 downregulated. The enrichment analysis was performed separately for up- and downregulated genes, or with all differentially expressed genes together (both), using the KEGG database. The analysis was performed with correction for the background of all expressed genes (about 13600 genes showing an average expression over the 3 starvation and 3 control samples of at least 5 counts) and by using default parameters (adjusted P value and q-value cut-offs of <0.05 and 0.2, respectively). Differentially expressed genes were also selected considering all starvation time points, as with genomic repeats, by maSigPro using default parameters and a fold change of at least 1.5, leading to similar enrichment results (not shown). RNAseq gene expression data are available in the ArrayExpress database under the accession number E-MTAB-6452.
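The selection criterion can be sketched in a few lines of Python (unpaired two-tailed t-test assuming equal variance, plus a two-fold change cutoff); the expression values below are placeholders, not real counts.

import numpy as np
from scipy import stats

# Hypothetical per-gene expression (rows: genes; columns: 3 starvation or 3 control samples)
starved = np.array([[50.0, 62.0, 55.0], [10.0, 12.0, 9.0]])
control = np.array([[20.0, 24.0, 22.0], [11.0, 10.0, 12.0]])

t_stat, p_val = stats.ttest_ind(starved, control, axis=1, equal_var=True)
fold_change = starved.mean(axis=1) / control.mean(axis=1)
selected = (p_val < 0.05) & ((fold_change > 2) | (fold_change < 0.5))
print(selected)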
To provide proof-of-principle that AA starvation may affect the expression of transposable elements, we performed an RNAseq analysis of the previously described HeLa-OA1 cells, carrying an integrated and partially silenced OA1 transgene. Since the reactivation of the transgene by starvation is a progressive phenomenon, we performed a time-course experiment, in which each time point represents one biological sample, rather than a biological triplicate of a single time point. To this aim, cells were cultured either in normal medium, or in the absence of Met/Cys, for different time points (6, 15, 30, 72 and 120 hours), resulting in the progressive upregulation of the OA1 transgene during starvation (Fig 1A and 1B), consistent with previously published results. The expression of genomic repeats was determined according to the RepeatMasker annotation and classification into classes, families and subfamilies. Repeat species were then subjected to differential expression and enrichment analyses in starved vs control conditions. Out of 1396 annotated repeat subfamilies, 172 species displayed a differential expression profile during starvation.
Fig 1. Exogenous transgene and endogenous retroviruses are upregulated in Met/Cys-deprived HeLa cells.
(A,B) Exogenous integrated transgene (OA1) mRNA abundance in HeLa-OA1 cells, cultured in Met/Cys-deprived medium for the indicated time points, and analyzed by RNAseq (A) or RT-qPCR (B), compared to full medium. Data represent RPKM (A), or mean ± SD of 2 technical replicates, expressed as fold change vs. control (full medium at 6 h = 1) (B). (C) Clustering of 172 genomic repeat subfamilies, differentially expressed upon starvation, according to their expression profile. (D) Class distribution of repeat subfamilies belonging to differential expression clusters, compared to all genomic repeat subfamilies (first column). Class DNA includes DNA transposons; SINE includes Alu; LINE includes L1 and L2; LTR includes endogenous retroviruses and solitary LTRs; Satellite includes centromeric, acrosomal and telomeric satellites; Others includes SVA, simple repeats, snRNAs, and tRNAs. LTR-retroelements are significantly enriched among repeats that are upregulated upon starvation, while LINEs are significantly enriched among repeats that are downregulated. *P<0.05, ***P<0.001 (Fisher exact test).
As shown in Fig 1C, the clustering of differentially expressed repeats, according to their expression pattern, reveals profiles comparable to the behavior of the transgene in the same conditions, i.e. upregulation upon starvation and no change in regular medium (Clusters 1 and 2). In particular, Cluster 1 contains sequences that, similarly to the OA1 transgene, are progressively upregulated upon starvation (Fig 1A and 1C), while Cluster 2 contains sequences that are upregulated at early time points. Interestingly, repeat families that are significantly enriched in these two clusters belong mostly to the group of LTR-retrotransposons, including ERV1, ERVK, ERVL, ERVL-MaLR and other LTR sequences (Fig 1D; S1A and S2A Figs). By contrast, DNA transposons (such as TcMar-Tigger) and L1 non-LTR retrotransposons are enriched among repeats that are downregulated during starvation, particularly at late time points (Clusters 3 and 4) (Fig 1D; S1A and S2A Figs). Consistent results were obtained by selecting significantly up- or downregulated genomic repeats (overall 181 species), based on their average expression over three time points of starvation (15-30-72 h, when the transgene upregulation is more homogeneous) and controls, and on a P value <0.05 (S1B and S2B Figs). These findings suggest that EAA starvation induces genome-wide effects involving repetitive elements, and that, among major repeat classes, it upregulates in particular the expression of ERVs.
In addition, to obtain a general overview of the main gene pathways changing their expression together with the transgene during AA starvation, we performed gene expression and enrichment analyses of regular genes, by considering three time points of starvation (15-30-72 h) and controls. Differentially expressed genes were selected based on a P value <0.05 and a fold change between means of at least 2, and analyzed with the EnrichR tool. As shown in Fig 2 and S1 File, enrichment analyses against the KEGG and Reactome databases reveal a predominance of downregulated pathways, namely ribosome and translation, proteasome, AA metabolism, oxidative phosphorylation and other pathways related to mitochondrial functions, which are affected in Huntington's, Alzheimer's and Parkinson's diseases (http://www.genome.jp/kegg/pathway.html). In particular, a large fraction of ribosomal protein mRNAs is downregulated upon Met/Cys starvation (Fig 2A and 2C; S1 File), consistent with the notion that their genes, despite being scattered throughout the genome, are coordinately expressed in a variety of conditions. This reduced expression may depend on multiple pathways that control ribosome biogenesis in response to external stimuli, including the downregulation of Myc activity, the downregulation of mTORC1 [42, 44], or possibly the activation of the ISR, as described in yeast. By contrast, upregulated genes show a significant enrichment for transcription and gene expression (Fig 2B). Similar results were obtained with the Gene Ontology Biological Process (GO-BP) database (S1 File), overall indicating a general downregulation of translation and metabolism, and upregulation of transcription, during the time interval of Met/Cys starvation corresponding to the transgene upregulation.
Fig 2. Gene set enrichment analysis of Met/Cys-deprived HeLa cells.
Differentially expressed genes between three time points of starvation (15-30-72 h) and controls were selected based on a P value <0.05 and a fold change of at least 2, leading to a total of 996 upregulated and 1037 downregulated genes. The enrichment analysis was performed separately for up- and downregulated genes, using the EnrichR tool and the KEGG (A) and REACTOME (B, C) databases. Ranking is based on the combined score provided by EnrichR, and categories are displayed up to 20 items with an adjusted P value <0.05. No significant categories were found with upregulated genes against the KEGG database. All data are shown in S1 File. The enrichment analysis using all differentially expressed genes together did not reveal any additional enriched process.
To characterize the pathway leading to the reactivation of silenced transgenes, we used HeLa-OA1 and HeLa-GFP cells, as described. In addition, to test cell types relevant for AA metabolism, such as liver and muscle, we generated clones of HepG2 human hepatoma and C2C12 mouse skeletal muscle cells, stably transfected with plasmids carrying the OA1 and GFP transgenes, respectively (HepG2-OA1 and C2C12-GFP cells; endogenous OA1 is not expressed in any of these cell types). In all cases, the integrated transgenes are under the control of the CMV promoter in the context of a pcDNA3.1 plasmid, are partially silenced, and can be efficiently upregulated by HDAC inhibitors (trichostatin A, TSA; ref. and S3A, S3B and S4A Figs), indicating that their expression is controlled at least in part by epigenetic mechanisms, as previously described.
To establish whether the reactivation response results from the shortage of specific AAs only, such as Met/Cys, or is triggered by any EAA deprivation, we cultured HeLa-OA1, HeLa-GFP, HepG2-OA1 and C2C12-GFP cells for 24–48 hours with a battery of media deprived of EAAs or semi-EAAs, including Met/Cys, Thr, Gln, Val, Leu, Tyr, Trp, Lys and His. As negative controls, cells were cultured in full medium, carrying the entire AA complement, and in a medium deprived of Ala, a non-essential AA. The expression of the transgene transcript was then evaluated by RT-qPCR. As shown in Fig 3, and in S3C and S4B Figs, most EAA deficiencies induced reactivation of the OA1 or GFP transgenes in all four cell lines, with the notable exception of Trp deprivation, which consistently resulted in no or minimal reactivation of the transgenes. Indeed, despite some variability, Met/Cys, as well as Thr, Val, Tyr and His deprivation, always gave an efficient response, while Leu, Gln and Lys elicited evident responses in some cases, but not in others. Depletion of Phe gave results comparable to Tyr deprivation; however, it significantly altered multiple reference genes used for normalization and was therefore eventually omitted from the analysis (not shown). Finally, in the above experiments we used a combined Met/Cys deficiency, to avoid the potential sparing of Met by Cys and for consistency with our previous studies. Nevertheless, the analysis of single Met or Cys starvation, at both the protein and transcript levels, revealed an exclusive role of Met deprivation in transgene reactivation, consistent with the notion that Cys is not an EAA (S3D and S3E Fig).
Fig 3. EAA deprivation induces reactivation of silent transgenes in HeLa and HepG2 cells.
Relative transgene (OA1) and CHOP mRNA abundance in HeLa-OA1 (A) and HepG2-OA1 (B) cells, cultured in various AA-deprived media for 48 h and 24 h, respectively, compared to full medium. Mean ± SEM of 3 independent experiments. Data are expressed as fold change vs. control (full medium = 1). *P<0.05, **P<0.01, ***P<0.001 (one way ANOVA, followed by Dunnett’s post-test vs. full medium).
Collectively, these results indicate that transgene reactivation by EAA starvation is reproducible with most EAAs, shared by different cell types (epithelium, liver, and skeletal muscle), and conserved in different mammalian species (human, mouse).
mTORC1 inhibition and GCN2 activation trigger the best-known signaling pathways responding to AA starvation. We previously showed that inhibition of mTORC1 is not sufficient to reproduce transgene reactivation in HeLa cells. By contrast, the involvement of GCN2 and the ISR, including the downstream effectors ATF4 and CHOP, has never been tested. In addition, this pathway has typically been assessed in transient assays, lasting a few hours, which may not be comparable with the prolonged starvation conditions necessary to reactivate transgene expression (at least 15–24 h). Thus, we tested whether CHOP expression was upregulated upon incubation of HeLa-OA1, HepG2-OA1 and C2C12-GFP cells in media deprived of different EAAs for 24–48 h.
As shown in Fig 3 and S4B Fig, we found that CHOP expression is increased in all EAA-starvation conditions, but not in the absence of Ala, in all tested cell lines. Similar, yet less pronounced, results were obtained with ATF4, consistent with the notion that activation of this transcription factor is mainly mediated by translational upregulation (not shown) [15, 26]. However, the upregulation of CHOP does not quantitatively parallel that of the transgene, nor does it appear sufficient to induce it. In fact, CHOP is highly upregulated even upon Trp starvation, which consistently results in no or minimal reactivation of the transgenes (compare CHOP with OA1 or GFP expression; Fig 3 and S4B Fig). Thus, while the ISR appears widely activated upon EAA starvation, the upregulation of its downstream effector CHOP only partly correlates with transgene reactivation and may not be sufficient to induce it.
The activation of the ISR upon AA starvation suggests that GCN2 may be involved in the transgene reactivation response. Therefore, we tested whether direct pharmacological activation of this kinase is sufficient to trigger transgene reactivation similarly to starvation. In addition, we used pharmacological inhibitors of mTOR to corroborate, in the other cell lines under study, the previous negative results obtained in HeLa cells. To this aim, HeLa-OA1 or -GFP, HepG2-OA1 and C2C12-GFP cells were cultured in the presence of different concentrations of PP242 (mTOR inhibitor) or L-Histidinol (GCN2 activator, which inhibits tRNA-His charging by histidyl-tRNA synthetase), either alone or in combination, for 24 h, compared to Met/Cys-deprived and full medium. As shown in Fig 4 and S5 Fig, while inhibition of mTORC1 consistently leads to minor or no effects, in agreement with previous findings, treatment with L-Histidinol results in efficient reactivation of the transgene in HepG2-OA1 and C2C12-GFP cells, but not in HeLa cells.
Fig 4. mTOR inhibition and GCN2 activation differently affect transgene expression in HeLa and HepG2 cells.
Relative transgene (OA1) and CHOP mRNA abundance in HeLa-OA1 (A) and HepG2-OA1 (B) cells, cultured in Met/Cys-deprived medium, or in the presence of PP242 (mTOR inhibitor; 1–3 μM) or L-Histidinol (HisOH, GCN2 activator; 4–16 mM), either alone or in combination for 24–48 h, compared to full medium. Mean ± SEM of 4 (A) or 3 (B) independent experiments. Data are expressed as fold change vs. control (full medium = 1). *P<0.05, **P<0.01, ***P<0.001 (one way ANOVA, followed by Dunnett’s post-test vs. full medium). PP-1 and PP-3, PP242 at 1 and 3 μM, respectively; HisOH-4 and HisOH-16, L-Histidinol at 4 and 16 mM, respectively.
Specifically, L-Histidinol is not effective in HeLa-OA1 and HeLa-GFP cells, either alone or in combination with PP242 (Fig 4A and S5A Fig), or when using different concentrations of the drug, with or without serum (not shown). In these cells, L-Histidinol also appears unable to trigger the ISR, as indicated by the lack of CHOP upregulation, possibly due to their different sensitivity to the drug. These findings are consistent with previous reports describing the use of L-Histidinol in HeLa cells under conditions of low His concentration in the culture medium, which would resemble AA starvation in our system and may therefore not be applicable. Thus, even though the amount of the amino alcohol was adapted to exceed that of the amino acid by 20 to 80 times, as described, HeLa cells may be resistant or able to compensate.
In contrast, in other cell types, L-Histidinol has been utilized in regular DMEM, to mimic the AA response triggered by DMEM lacking His [48, 49]. Consistently, in HepG2-OA1 cells, L-Histidinol is sufficient to elicit extremely high levels of transgene reactivation, and its combination with PP242 results in additive or even synergistic effects, possibly due to an indirect effect of mTOR inhibition on GCN2 activity (Fig 4B) [50, 51]. Similarly, C2C12-GFP cells efficiently reactivate the transgene upon treatment with L-Histidinol, but not with PP242 (S5B Fig). However, differently from HepG2-OA1 cells, simultaneous treatment of C2C12-GFP cells with L-Histidinol and PP242 does not lead to synergistic effects. Consistent with stimulation of the ISR, CHOP and, to a lesser extent, ATF4 are upregulated by L-Histidinol in both cell lines, yet their expression levels show only an incomplete correlation with those of the transgene (Fig 4B, S5B Fig, and not shown).
The finding that GCN2 activation by L-Histidinol is sufficient to reactivate the transgenes in both HepG2-OA1 and C2C12-GFP cells pointed to this kinase, and to the downstream ISR, as the pathway possibly involved in the EAA starvation response. Thus, we investigated whether the ISR is sufficient to trigger upregulation of the OA1 transgene in HepG2-OA1 cells by pharmacological means. As CHOP expression does not correspond quantitatively to transgene reactivation and is not sufficient to induce it, we tested the role of the core upstream event of the ISR, namely the phosphorylation of eIF2α, which can be induced by pharmacological treatments independently of GCN2 (Fig 5A). To this aim, we used Salubrinal, a specific phosphatase inhibitor that blocks both the constitutive and ER stress-induced phosphatase complexes acting on eIF2α, thereby increasing its phosphorylation. We found that, while the ISR is activated upon Salubrinal treatment, as shown by increased CHOP expression, it does not induce OA1 transgene reactivation (Fig 5B).
Fig 5. The ISR is neither sufficient nor necessary to induce transgene reactivation in HepG2 cells.
(A) Schematic representation of GCN2 activation by AA starvation, resulting in phosphorylation of eIF2α and initiation of the downstream ISR. In addition to GCN2, the ISR may be activated by other eIF2α kinases (PKR, HRI and PERK; not shown in the picture). (B) Relative transgene (OA1) and CHOP mRNA abundance in HepG2-OA1 cells treated for 24 h with Salubrinal (a drug that induces the ISR by inhibiting the dephosphorylation of eIF2α; 75 μM), compared to full medium. Mean ± range of two experiments. Data are expressed as fold change vs. control (DMEM = 1). *P<0.05 (paired two-tailed Student’s t-test vs. control). (C) Relative transgene (OA1) and CHOP mRNA abundance in HepG2-OA1 cells treated for 6 h with L-Histidinol (HisOH, GCN2 activator; 4 mM), in the absence or presence of ISRIB (a drug that bypasses the phosphorylation of eIF2α, inhibiting triggering of the ISR; 100 nM). Mean ± range of two experiments. Data are expressed as fold change vs. control (DMEM = 1). **P<0.01, ***P<0.001 (one way ANOVA, followed by Tukey’s post-test; P values refer to comparisons vs. control, unless otherwise indicated). (D) Relative transgene (OA1) and ATF4 mRNA abundance in HepG2-OA1 cells transfected with control (CTRL) or anti-ATF4 siRNAs, and incubated in the presence or absence of L-Histidinol (HisOH, GCN2 activator; 4 mM) for 6 h. Mean ± range of two experiments. Data are expressed as fold change vs. control (w/o HisOH = 1, top; control siRNA = 1, bottom). *P<0.05 (one way ANOVA, followed by Tukey’s post-test; P values refer to comparisons vs. control, unless otherwise indicated).
To test whether the ISR is necessary to trigger the transgene response to L-Histidinol, we used the chemical compound ISRIB, which inhibits the activation of the ISR even in the presence of phosphorylated eIF2α, likely by boosting the activity of eIF2B, the guanine-nucleotide exchange factor (GEF) for eIF2 [53, 54]. HepG2-OA1 cells were stimulated with L-Histidinol, either in the presence or absence of ISRIB. As shown in Fig 5C, while the expression of CHOP is inhibited by ISRIB, as expected, the reactivation of the OA1 transgene is not affected. In addition, knockdown of ATF4, the most proximal downstream effector of eIF2α, by siRNAs does not interfere with the reactivation of the OA1 transgene by L-Histidinol (Fig 5D). Together, these data suggest that eIF2α phosphorylation and the downstream ISR pathway are neither sufficient nor necessary to induce transgene reactivation.
To definitively establish whether GCN2 is necessary to trigger the transgene reactivation response to EAA starvation, we directly suppressed its expression by CRISPR/Cas9-mediated knock-out (KO). We generated three independent GCN2-KO clones from the parental HeLa-OA1 cell line, using three different guide RNAs, two against exon 1 (clones 183#11 and 185#5) and one against exon 6 (clone 239#1) of the GCN2 gene. Genomic characterization confirmed the presence of mutations on both alleles of exon 1 of the GCN2 gene in clone 183#11, and on both alleles of exon 6 in clone 239#1; by contrast, clone 185#5 showed multiple alleles in exon 1, consistent with the presence of two cell populations, and was not characterized further at the genomic level (S6 Fig). None of these clones express GCN2 at the protein level, as shown by immunoblotting (Fig 6A). To test the ability of GCN2-KO cells to respond to EAA starvation, parental HeLa-OA1 cells and the three GCN2-KO clones were cultured in media deprived of Met/Cys or Thr (corresponding to the most effective treatments in this cell line; see Fig 3A) for 24–48 h, and transgene expression was assessed by RT-qPCR. We found that the reactivation of the OA1 transgene is neither abolished nor reduced by KO of GCN2, thus excluding that this kinase is necessary for the response to EAA starvation in HeLa-OA1 cells (Fig 6B and 6C).
Fig 6. GCN2 knockout does not interfere with transgene reactivation in HeLa cells.
(A) Immunoblotting of protein extracts from the HeLa-OA1 parental cell line and GCN2-KO clones 183#11, 185#5 and 239#1, immunodecorated with anti-GCN2 antibody. Arrow, GCN2 specific band. Ponceau staining was used as loading control. (B, C) Relative transgene (OA1) mRNA abundance in HeLa-OA1 cells and GCN2-KO clones, cultured in Met/Cys (B) or Thr (C) deprived medium for 24 h or 48 h, respectively, compared to full medium. Mean ± SD of 3 technical replicates from 1 experiment. Data are expressed as fold change vs. control (full medium = 1). Since independent clones may display variable reactivation responses (e.g. due to different levels of transgene expression in basal conditions), the results are not shown as means of the three clones, but as separate replicates.
Similarly, we generated GCN2-KO clones from the parental HepG2-OA1 cell line by the same strategy. Using a guide RNA against exon 1 of the GCN2 gene, we obtained three independent GCN2-KO clones, namely E23, F22 and F27. Genomic characterization confirmed the presence of mutations on both alleles of exon 1 of the GCN2 gene in clone F27 (S7 Fig), and all three clones showed a very low amount, if any, of residual GCN2 protein, compared to the original HepG2-OA1 cell line (Fig 7A). To assess the ability of GCN2-KO cells to reactivate the transgene upon starvation, we cultured parental HepG2-OA1 cells and the three GCN2-KO clones in media deprived of Met/Cys or His (corresponding to the most effective treatments in this cell line; see Fig 3B) for 24 h, and evaluated transgene expression by RT-qPCR. As shown in Fig 7B and 7C, we found that the reactivation of the OA1 transgene is neither abolished nor reduced by KO of GCN2, as in HeLa cells. To further confirm this result, we knocked down GCN2 by RNA interference (RNAi), and incubated the cells with or without L-Histidinol for 6 h. As shown in Fig 8, treatment of HepG2-OA1 cells with L-Histidinol results in efficient transgene reactivation, even upon significant GCN2 downregulation at both the mRNA and protein levels. Taken together, these data strongly support the conclusion that GCN2 is not necessary for transgene reactivation in response to EAA starvation, either in HeLa or in HepG2 cells.
Fig 7. GCN2 knockout does not interfere with transgene reactivation in HepG2 cells.
(A) Immunoblotting of protein extracts from the HepG2-OA1 parental cell line and GCN2-KO clones 185#27, E23, F22, F27, immunodecorated with anti-GCN2 antibody. Clone 185#27 results from the first round of selection, and was used to generate clones E23, F22, F27. Arrow, GCN2 specific band. For GCN2 protein quantification, Ponceau staining was used as loading control and data are expressed as fold change vs. parental cell line (= 1). (B, C) Relative transgene (OA1) mRNA abundance in HepG2-OA1 cells and GCN2-KO clones, cultured in Met/Cys (B) or His (C) deprived medium for 24 h, compared to full medium. Mean ± SD of 3 technical replicates from 1 experiment.
\section{Introduction}
Despite the rise of graphene and other 2D materials, semiconducting single-walled carbon nanotubes (SWNT) are still regarded as strong candidates for the next generation of high-performance ultrascaled transistors~\cite{Cao_IBM_2015,IBM_2017,3D_CNT_FET} as well as for opto-electronic devices~\cite{Review_Avouris,CNT_photonics} such as chip-scale electronic-photonic platforms~\cite{Pernice_2016} or low-threshold near-infrared tunable micro-lasers~\cite{Graf_2017}.
Engineering a quantum dot (QD) along a (suspended) semiconducting SWNT foreshadows promising opportunities in the field of quantum information processing and sensing through recently proposed schemes such as detection and manipulation of single spins via coupling to vibrational motion~\cite{Palyi_2012}, optomechanical cooling~\cite{Wilson_Rae_2012} as well as all optical manipulation of electron spins~\cite{Galland_all_optical_2008}. Furthermore, the quasi one-dimensional geometry of SWNTs allows for defining tunable p-n junctions induced by electrostatic doping through local gates~\cite{Buchs_JAP,tunable_pn_2011}. Combining a well-defined QD within such a p-n junction structure could constitute a crucial building-block for the realization of highly desirable electrically driven, on-demand single photon emitters operating at telecom wavelength, based $e.g.$ on a turnstile device architecture~\cite{turnstile_1994,turnstile_1999}.
In practice, QDs in carbon nanotubes have been reported predominantly for two different confinement structures: i) Engineered tunneling barriers at metal-nanotube contacts~\cite{Pablo04nat} and/or by gate electrodes, used \emph{e.g.} to manipulate single electron spins~\cite{Laird:2015}, ii) Unintentional localization potentials stemming from environmental disorder~\cite{Hofmann_2016}, allowing for single-photon emission mediated by localization of band-edge excitons to QD states~\cite{CNT_photonics,Hoegele_2008,Walden_Newman_2012,Hofmann_2013,Pernice_2016_2}. Both types of structures are usually operated at cryogenic temperature due to small energy scales ranging from a few to few tens of millielectronvolts.
\\
\indent Another technique for achieving confinement in SWNTs makes use of artificial defects such as covalently bound oxygen or aryl functionalization groups on the side walls of semiconducting SWNTs, inducing deep exciton trap states allowing for single-photon emission at room temperature~\cite{Htoon_2015,tunable_QD_defects}. Also, carrier confinement between defect pairs acting as strong scattering centers has been reported for mechanically induced defects~\cite{Postma_SET} as well as for ion-induced defects with reported level spacings up to 200 meV in metallic SWNTs~\cite{Buchs_PRL}. The latter technique, combined with recent progress in controlling defects structure and localization~\cite{Robertson_2012,Yoon_2016,Laser_writing_2017} offers a high potential for engineering a broad set of SWNT-based quantum devices operating at room temperature.
\\
\indent Here, we demonstrate confinement of electrons and holes in sub-10 nm QD structures defined by ion-induced defect pairs along the axis of semiconducting SWNTs. Using low temperature scanning tunneling microscopy and spectroscopy (STM/STS), bound states with level spacings of the order of 100 meV and larger are resolved in energy and space. By solving the one-dimensional Schr\"odinger equation over a piecewise constant potential model, the effects of asymmetric defect scattering strength as well as the influence of the Au(111) substrate, such as terrace edges, on the bound states structure are remarkably well reproduced. By means of ab-initio calculations based on density functional theory and Green's functions, we find that single (SV) and double vacancies (DV) as well as chemisorbed nitrogen ad-atoms are good candidates to produce QDs with the experimentally observed features. These simulations also allow us to study the scattering profile as a function of energy for different defect combinations.
\section{Experimental section}
The experiments have been performed in a commercial (Omicron) low temperature STM setup operating at $\sim5$~K in ultra high vacuum. Topography images have been recorded in constant current mode with a grounded sample, using mechanically cut Pt/Ir tips. Differential conductance $dI/dV$ spectra, proportional in first approximation to the local density of states (LDOS)~\cite{Tersoff85}, have been recorded using a lock-in amplifier technique. The spatial evolution of the LDOS along a nanotube axis is obtained from $dI/dV(x,V)$ maps built from a series of equidistant $dI/dV$ spectra. Spatial extent mismatches between topography images and consecutive $dI/dV(x,V)$ maps have been systematically corrected~\cite{Buchs_Ar}, and the metallic nature of the tip has been systematically checked on the gold substrate to prevent any tip artefacts before recording STM and/or STS data sets.
\\
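As an illustration of how such maps are assembled, the following minimal Python sketch stacks a series of equidistant $dI/dV$ spectra into a two-dimensional $dI/dV(x,V)$ array; the array sizes and the random spectra are placeholders, not measured data.
\begin{verbatim}
import numpy as np

# Hypothetical input: n_x spectra of n_V points each, recorded at equidistant
# positions x along the nanotube axis (placeholder values only)
n_x, n_V = 60, 401
bias = np.linspace(-1.0, 1.0, n_V)                    # sample-tip bias (V)
x = np.linspace(0.0, 18.0, n_x)                       # position along the tube (nm)
spectra = [np.random.rand(n_V) for _ in range(n_x)]   # stand-in for lock-in dI/dV data

didv_map = np.vstack(spectra)                         # shape (n_x, n_V): dI/dV(x,V) map
print(didv_map.shape)
\end{verbatim}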
\indent Nanotube samples were made of extremely pure high-pressure CO conversion (HiPCo) SWNTs~\cite{Smalley01} with a diameter distribution centered around 1 nm, FWHM $\sim$ 0.3 nm~\cite{Buchs_conf}. The measured intrinsic defect density was below one defect every 200 nm. SWNTs were deposited on atomically flat Au(111) surfaces from a 1,2-dichloroethane suspension, followed by an in-situ annealing process~\cite{Buchs_APL_07,Buchs_Ar}.
\\
\indent Local defects in SWNTs have been created in-situ by exposure to: (i) Medium energy $\sim$ 200 eV argon ions (Ar$^{+}$) produced by an ion gun \cite{Buchs_Ar,Buchs_PRL}, (ii) Low energy (a few eV) nitrogen ions (N$^{+}$) produced by a 2.45 GHz ECR plasma source~\cite{Buchs_APL_07,Buchs_NJP_07}. In both cases, the exposure parameters have been calibrated to reach an average defect separation along the SWNTs of about 10 nm~\cite{Buchs_Ar,Buchs_APL_07}.
\section{Results and discussion}
\subsection{Experimental LDOS patterns}
\begin{figure}
\includegraphics[width=8cm]{Figure_1.pdf}
\caption{\label{exp_data_1} (a)-(b) 3D topography images (processed with WSXM~\cite{WSXM}) of SWNT I with Ar$^{+}$ ions-induced defects, with sample-tip bias voltage ($V_\mathrm{S}$) 1 V and tunneling current $I_\mathrm{S}$ 0.1 nA. (c) Corresponding $dI/dV(x,V)$ map recorded along the horizontal dashed lines in (b), with $V_\mathrm{S}=1$ V, $I_\mathrm{S}=0.2$ nA. Spatial resolution $\sim$ 0.3 nm. (d) 3D topography image of SWNT II with N$^{+}$ ions-induced defects, with $V_\mathrm{S}=1$ V, $I_\mathrm{S}=128$ pA. (e) Corresponding $dI/dV(x,V)$ map recorded along the horizontal dashed lines in (d), with $V_\mathrm{S}=1.5$ V, $I_\mathrm{S}=0.3$ nA. Spatial resolution $\sim$ 0.2 nm.}
\end{figure}
In Fig.~\ref{exp_data_1} (a) and (b), we show 3D STM images of the same semiconducting SWNT (referred to as SWNT I in the following) with Ar$^{+}$ ion-induced defect sites labeled $d1-d5$. Panel (d) shows a 3D STM image of a second semiconducting SWNT (referred to as SWNT II) with N$^{+}$ ion-induced defect sites labeled $d6-d7$. In both cases, defect sites typically appear as hillock-like protrusions with an apparent height ranging from 0.5~{\AA} to 4~{\AA} and an apparent lateral extension varying between 5~{\AA} and 30~{\AA}~\cite{Buchs_NJP_07,Buchs_Ar,Thesis_Buchs}.
\\
\indent The resulting $dI/dV(x,V)$ maps recorded along the horizontal dashed lines drawn in the STM images (b) and (d) are displayed in panels (c) and (e) of Fig.~\ref{exp_data_1}, respectively. Defect signatures in the LDOS are characterized in both cases by deep in-gap states at the defect positions. This is consistent with the expected defect structures, $i.e.$ mainly SVs, DVs and combinations thereof for collisions with Ar$^{+}$ ions~\cite{Buchs_Ar}, and bridgelike N ad-atoms for collisions with N$^{+}$ ions~\cite{Thesis_Buchs,Nitrogen_prb_07}. Note that gap states at energy levels $\sim$~0.2 eV and $\sim$~0.05 eV in panels (c) and (e), respectively, are shifted to the right of $d3$ by about 1 nm and to the right of $d6$ by about 2 nm. This indicates the presence of intrinsic or ion-induced defects on the lateral or bottom side wall of the SWNTs~\cite{Kra01prb}, not visible in the topographic images. These defects are labelled $d3'$ and $d6'$, respectively.
\\
\begin{figure}
\includegraphics[width=12cm]{Figure_2.pdf}
\caption{\label{exp_data_Ar} (a)-(b) QD I detailed $dI/dV(x,V)$ maps in conduction and valence bands. Lower subpanels contain QD states linecut profiles and stationary wave-like fits in left and right QD parts. Right subpanels contain experimental energy dispersion relation data sets $k_\mathrm{n}(E_\mathrm{n})$ and tight-binding calculations. (c)-(d) Resulting LDOS calculated from a one-dimensional piecewise constant potential model featuring potential barriers and a potential step (gray area), with position of the potential step: 5.09 nm from the right barrier's center, potential step height: $U_\mathrm{C}=V_\mathrm{L}-V_\mathrm{R}=60$ meV, barrier heights: $V_\mathrm{d3'}=1$ eV, $V_\mathrm{d4}=0.85$ eV, barrier widths: $a_\mathrm{d3'}=a_\mathrm{d4}=3.4$ nm. Valence band: $V_\mathrm{d3'}=-0.4$ eV, $a_\mathrm{d3'}=a_\mathrm{d4}=2.5$ nm, $V_\mathrm{d4}=-0.4$ eV. $E_\mathrm{g}$ stands for bandgap energy.}
\end{figure}
\begin{figure}
\includegraphics[width=12cm]{Figure_3.pdf}
\caption{\label{exp_data_N} (a) QD II detailed $dI/dV(x,V)$ map. Lower subpanels contain QD states linecut profiles and stationary wave-like fits in the left and right QD parts. Right subpanel contains experimental energy dispersion relation data sets $k_\mathrm{n}(E_\mathrm{n})$ and tight-binding calculations. (b) Resulting LDOS calculated from a one-dimensional piecewise constant potential model featuring potential barriers and a potential step (gray area) with position of the potential step: 4.7 nm from the right barrier's center, potential step height: $U_\mathrm{C}=V_\mathrm{L}-V_\mathrm{R}=60$ meV, barrier heights: $V_\mathrm{d6'}=0.6$ eV, $V_\mathrm{d7}=0.6$ eV, barrier widths: $a_\mathrm{d6'}=1.5$ nm, $a_\mathrm{d7}=2.6$ nm.}
\end{figure}
\indent Remarkably, the $dI/dV(x,V)$ maps in Fig.~\ref{exp_data_1} exhibit several broad discrete states in the conduction bands of SWNT I and II (white dashed boxes in panels (c) and (e), respectively) and in the valence band of SWNT I (white dashed box in panel (c)), characterized by a modulation of the $dI/dV$ signals in the spatial direction between pairs of consecutive defect sites $d3'-d4$ and $d6'-d7$. Enlarged plots of these boxed regions are displayed in Fig.~\ref{exp_data_Ar}(a)-(b) and Fig.~\ref{exp_data_N}(a) for SWNTs I and II, respectively. In the conduction bands, cross-sectional curves recorded along the black horizontal dashed lines labelled m1--m3 in Fig.~\ref{exp_data_Ar}(a) and m1--m4 in Fig.~\ref{exp_data_N}(a) are plotted below the LDOS panels. These clearly reveal one to three and one to four spatially equidistant maxima, respectively. The number of maxima increases with increasing $\left|V_\mathrm{bias}\right|$ and the measured level spacings between consecutive discrete states are of the order of 100 meV and larger in both cases. This indicates that defect sites $d3'-d4$ and $d6'-d7$, separated by 12.1 nm and 11.2 nm, respectively, act as strong scattering centers able to confine carriers in semiconducting SWNTs~\cite{Buchs_PRL,Bercioux_prb_2011}. Such intrananotube QD structures will be referred to as QD I (in SWNT I) and QD II (in SWNT II) in the following. We estimated the level spacings in the conduction band of QD I at 98 meV (m1-m2) and 116 meV (m2-m3). For QD II, we measured 122 meV (m1-m2), 185 meV (m2-m3) and 210 meV (m3-m4).
\\
\indent In the valence band of SWNT I, discrete states with level spacings of the order of 80-90 meV, with one clear maximum at level m-1, can also be distinguished between defect sites $d3'-d4$ in Fig.~\ref{exp_data_Ar}(b). The discretization of the states indicates that this QD structure also confines holes. Discrete states starting from m-2 and lower show less well defined structures compared to the conduction band states. In the case of SWNT II, no clear discrete states are observed in the valence band (see supplementary information). These observations are most probably the result of an energy-dependent scattering strength of the defects, $d3'$-$d4$ and $d6'$-$d7$ respectively, leading here to a weaker confinement in the valence band. Such energy dependence is well known for metallic SWNTs~\cite{Chico96,vac_2007,mayrhofer:2011,Bockrath_Science01} and is corroborated by our ab-initio calculations. Note that mixing effects with defect states and substrate-induced effects~\cite{substrate_effects} cannot be ruled out.
\\
\indent Another remarkable feature in the LDOS is the strong spatial asymmetry of the lowest energy states m1 and m-1 in QD I and m1 in QD II. In QD I, m1 is shifted to the right side of the dot while m-1 is shifted to the left side. Higher states m2 and m3 show more symmetry in terms of position of the maxima relative to the center of the QD. In QD II, m1 is shifted to the right side of the QD. We attribute the observed lowest energy states asymmetry (for electrons as well as for holes) in part to their strong sensitivity to weak potential modulations within the QD structure (as we will show in section \ref{1D}). For QD I, this assertion is supported by the observation of a 0.25 nm high Au(111) terrace edge located around the center of the QD, leading to a supported-suspended interface (see white dashed lines in Fig.~\ref{exp_data_1}(b) and more topographic details in Fig.~S2(a)-(d) in supplementary information). Such configurations have been reported to induce a rigid shift in the SWNT bands~\cite{Clair_2011}, for instance here a down-shift in the right side of QD I corresponding to the "suspended" portion between two terraces. In QD II, we attribute the spatial shift of m1 to a potential modulation induced by a layer of disordered impurities, most probably residua from the 1,2-dichloroethane suspension, lying between the gold substrate and the SWNT (see Fig.~\ref{exp_data_1}(d) and Fig.~S2(e)-(h) in supplementary information).
\\
\indent Also, the LDOS in QD I and II (Fig.~\ref{exp_data_Ar}(a) and Fig.~\ref{exp_data_N}(a), respectively) reveals asymmetric patterns with curved stripes oriented from top left to bottom right for QD I and from bottom left to top right for QD II. These are characteristic signatures for defect pairs with different scattering strengths~\cite{Bercioux_prb_2011,Buchs_PRL}. For instance here, the left defect in QD I ($d3'$) has a larger scattering strength than the right one ($d4$), while the right defect in QD II ($d7$) has a larger scattering strength than the left one ($d6'$).
\\
\indent The exact atomic structure of the defects could in principle be determined from a comparison of $dI/dV$ spectra with simulated first-principle LDOS signatures of expected defect types. In reality, this is hampered by the large number of possible geometries to simulate, including complex multiple defect structures~\cite{Buchs_Ar}, together with the large unit cells of the semiconducting chiral SWNTs studied here.
\\
\subsection{1D piecewise constant potential model}
\label{1D}
To better understand the physical origins of the non-trivial signatures of the quantized states, we model the experimental $dI/dV$ maps by solving the time-independent one-dimensional Schr\"odinger equation over a piecewise constant potential model of QD I and QD II. The scattering centers are approximated by semi-transparent rectangular tunneling barriers leading to a square confinement potential~\cite{Laird:2015}. This is supported by previous results on defect-induced confinement in metallic SWNTs using the same experimental conditions~\cite{Buchs_PRL} and is consistent with the ab-initio simulations presented later in this work. The potential modulation within the QD is approximated by a potential step. The resulting potential geometries are illustrated with gray shaded areas in Fig.~\ref{exp_data_Ar}(c) and (d) and Fig.~\ref{exp_data_N}(b). Dispersion relations $E(k)$ can be extracted experimentally from the quantized-state wavefunctions by measuring the energy and corresponding momenta in the left and right sides of the QDs. The wavevectors $k$ are determined using stationary wave-like fitting functions~\cite{Buchs_PRL}, displayed with dashed red curves in Figs.~\ref{exp_data_Ar}(a)-(b) and \ref{exp_data_N}(a). From this procedure, the potential step height and position can be estimated (see supplementary information). The experimental data sets $E(k)$ are plotted in the right panels of Figs.~\ref{exp_data_Ar}(a) and \ref{exp_data_N}(a) together with dispersion relations from a third-nearest neighbor tight-binding calculation closely approximating ab-initio results~\cite{Reich_TB_2002}. These chirality-dependent tight-binding dispersion relations, calculated within an extended Brillouin zone resulting from the defect-induced breaking of the translation invariance~\cite{Bercioux_prb_2011}, are used in the Hamiltonian of our one-dimensional model. Taking into account the measured chiral angle, diameter distribution~\cite{Buchs_conf} and measured bandgaps, we find the best match with chiralities $(7,6)$ for QD I and $(11,1)$ for QD II (see supplementary information).
\\
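As a minimal illustration of this procedure, the following Python sketch solves the one-dimensional Schr\"odinger equation by finite differences for a double-barrier square potential with a potential step, using a single parabolic effective mass instead of the full tight-binding dispersion; all parameters (effective mass, barrier and step values, grid) are illustrative placeholders, not fitted values.
\begin{verbatim}
import numpy as np

hbar2_2m = 3.81e-2 / 0.2   # hbar^2/(2 m*) in eV nm^2, assuming m* = 0.2 m_e

# Piecewise constant potential: left barrier | stepped well | right barrier
x = np.linspace(0.0, 20.0, 2000)
dx = x[1] - x[0]
V = np.zeros_like(x)
V[x < 3.4] = 1.0                      # left barrier height (eV)
V[x > 16.6] = 0.85                    # right barrier height (eV)
V[(x >= 3.4) & (x < 10.0)] += 0.06    # 60 meV potential step in the left half

# Finite-difference Hamiltonian and its lowest eigenstates
main = 2.0 * hbar2_2m / dx**2 + V
off = -hbar2_2m / dx**2 * np.ones(len(x) - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
energies, states = np.linalg.eigh(H)
print(energies[:3])                   # lowest bound-state energies (eV)
\end{verbatim}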
\indent Once chiralities together with potential step heights and positions are optimized, one can fit the height and width of the rectangular tunneling barriers in order to reproduce the experimental level spacings and general LDOS patterns. On qualitative grounds, a symmetric double-barrier system results in the formation of spatially symmetric discrete bound states. Increasing both barrier heights simultaneously shifts the bound state energy levels and level spacings up. This leads to sharper bound states as the confinement in the QD is made stronger, thus increasing the lifetime of the confined electrons. Increasing the barrier thickness at constant inner edge separation does not affect the level spacings much but further sharpens the bound states. Any asymmetry introduced by a change in the width or height of one single barrier leads to broader bound states. The presence of a potential step modifies the LDOS by lifting the levels of the bound states, with a more pronounced effect on the lower states. In QD I and II, the center of each barrier is aligned with the center of the gap states ($d3'$-$d4$ for QD I and $d6'$-$d7$ in QD II) and the width ratio is kept proportional to the ratio of the spatial extent of the gap states. Thus, by increasing the width of the barriers, we decrease the length of the QD, leading to larger level spacings, and vice versa. The experimental level spacings can then be approximated by tuning both barrier widths in the same ratio and the heights individually, knowing that the scattering strength of $d3'$ ($d7$) is larger than that of $d4$ ($d6'$) according to the observed asymmetry in the LDOS described above \footnote{The transmission probability through a rectangular tunneling barrier is given by $T=\left( 1+\frac{V^{2}\sinh^{2}\left( a \cdot \sqrt{2m^{*}(V-E)}/\hbar \right)}{4E(V-E)} \right)^{-1}$, where $V$ and $a$ are respectively the barrier height and width. For the argument in the $\sinh$ sufficiently small such that $\sinh(x)\simeq x$, it can be shown that $a$ and $V$ can be coupled such that the transmission probability becomes a function of the area under the barrier $A=a\cdot V$, with $T=\left( 1+ \frac{m^{*}A^{2}}{2\hbar^{2}E} \right)^{-1}$. In our case, this condition is not satisfied and thus the barrier geometries are tuned empirically to fit the experimental level spacings.}.
\\
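The transmission probability quoted in the footnote can be evaluated directly; the short Python sketch below computes $T(E)$ for a rectangular barrier with the QD I conduction-band barrier parameters and an assumed effective mass (illustrative only, not a fit to the data).
\begin{verbatim}
import numpy as np

hbar = 6.582e-16                                  # eV s
m_star = 0.2 * 9.109e-31 / 1.602e-19 * 1e-18      # assumed m* = 0.2 m_e, in eV s^2/nm^2

def transmission(E, V, a):
    # T(E) through a rectangular barrier of height V (eV) and width a (nm), for E < V
    kappa = np.sqrt(2.0 * m_star * (V - E)) / hbar           # 1/nm
    return 1.0 / (1.0 + (V**2 * np.sinh(kappa * a)**2) / (4.0 * E * (V - E)))

for E in (0.1, 0.2, 0.3):
    print(E, transmission(E, V=1.0, a=3.4))
\end{verbatim}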
\indent For QD I, we find a good match in the conduction band for the barrier heights $V_\mathrm{d3'}=1$ eV and $V_\mathrm{d4}=0.85$ eV, widths $a_\mathrm{d3'}=a_\mathrm{d4}=$ 3.4 nm, and potential step $V_\mathrm{L}-V_\mathrm{R}=60$ meV. With these parameters, the spatial profile of the obtained quantized states (see lower subpanels in Fig.~\ref{exp_data_Ar}(a) and (c)) reproduces the experimental modulation features remarkably well. Also, the simulated LDOS displays a pattern with curved stripes oriented from top left to bottom right, as observed experimentally, due to a left barrier with a larger scattering strength. In the valence band, although modes m-2 and lower do not show a well defined structure in the spatial direction, thinner barriers with dimensions $a_\mathrm{d3'/d4}=2.5$ nm, $V_\mathrm{d3'/d4}=-0.4$ eV, leading to a slightly longer QD length (9.6 nm compared to 8.7 nm in the conduction band) can reproduce the measured level spacings very well.
\\
\indent For QD II, we observed that the measured energy levels are overestimated by a factor $\alpha\sim1.29$, presumably due to a voltage division effect induced by the impurity layer mentioned above (see details in supplementary information). We find good agreement with the experimental LDOS with the parameters: $V_\mathrm{d6'}=V_\mathrm{d7}\simeq$ 0.47 eV, $a_\mathrm{d6'}=1.5$ nm, $a_\mathrm{d7}=2.6$ nm and $U_\mathrm{C}=V_\mathrm{L}-V_\mathrm{R}\simeq 47$ meV. Note that in Fig.~\ref{exp_data_N}(b) the barrier and potential heights are multiplied by $\alpha$ to allow a direct comparison with the experimental LDOS. The simulated LDOS shows a pattern with curved stripes oriented from bottom left to top right, as observed experimentally, due to a right barrier exhibiting a larger scattering strength. Also, the spatial profile of the obtained bound states (see lower subpanels in Fig.~\ref{exp_data_N}(a) and (b)) reproduces the experimental features quite well. Note also that one can distinguish an isolated state in the experimental LDOS at an energy level between m1 and m2, approximately in the middle of the QD. This state, which prevented an accurate fit of state m2 in the right part of the QD, is attributed to a spatial feature visible in the STM topography image in Fig.~\ref{exp_data_1}(d) (see also supplementary information, Fig.~S2(f)), probably a physisorbed impurity which does not affect the LDOS significantly.
\\
\subsection{Ab-initio calculations}
\begin{figure}
\includegraphics[width=16cm]{Figure_4.pdf}
\caption{\label{num_data} (a)-(c) LDOS ab-initio simulations of a semiconducting $(16,0)$ SWNT with combinations of vacancies defects separated by 11.1 nm. Subpanels display QD state linecut profiles. (d) Tight-binding (black curve) and ab-initio dispersion relations (green circles) for a pristine $(16,0)$ SWNT with $E_\mathrm{n}(k_\mathrm{n})$ data sets extracted from (a)-(c). (e)-(g) LDOS ab-initio simulations of a semiconducting $(17,0)$ SWNT with combinations of N ad-atoms and vacancies defects separated by 10.7 nm. (h) Tight-binding (black curve) and ab-initio dispersion relations (green circles) for a pristine $(17,0)$ SWNT with $E_\mathrm{n}(k_\mathrm{n})$ data sets extracted from (e)-(g).}
\end{figure}
In order to elucidate the physical nature of the electron/hole confining scattering centers, we performed ab-initio simulations based on a combination of density functional theory~\cite{pbe,paw,vasp_paw,VASP2}, maximally localized Wannier orbitals~\cite{transportwannier90} and Green's functions (see supplementary information). Without loss of generality, we have simulated short unit cell semiconducting zigzag SWNTs with different combinations of the most probable defect structures. Results for vacancy defects, likely induced by 200 eV Ar$^{+}$ ions and separated by about 11 nm in a $(16,0)$ SWNT, are shown in Fig.~\ref{num_data}(a)-(c) with DV-DV, DV-SV and SV-SV pairs, respectively. The LDOS displays midgap states at the defect positions, as expected, as well as defect states in the valence band~\cite{Buchs_Ar}. Most importantly, clear quantized states with a number of maxima increasing with energy are observed between the defects in the conduction band, emphasizing the ability of SVs and DVs to confine carriers. For the asymmetric configuration DV-SV, one can distinguish faint curved stripe patterns oriented from top left to bottom right, indicating a larger scattering strength for DVs compared to SVs. This is consistent with observations in transport experiments~\cite{Gomez05nm}. On the other hand, the patterns in the valence band strongly depend on the defect types. Discrete states can be distinguished for the DV-DV case, with m-2 being mixed with defect states. For the DV-SV case, clear curved stripe patterns oriented from bottom left to top right indicate again a stronger scattering strength for DVs. Also, broader states are observed, indicating that the scattering strength of DVs and SVs is weaker in the valence band compared to the conduction band.
\\
\indent More insight on the energy dependent scattering strength for each defect pair configuration can be obtained by extracting the wavevector $k_\mathrm{n}(E_\mathrm{n})$ for each resonant state. This data set is plotted in Fig.~\ref{num_data}(d) for the conduction and valence bands together with the $(16,0)$ dispersion relations calculated from the third-nearest neighbor TB model and from the ab-initio calculation for the pristine nanotube. A first observation is the excellent agreement between TB and ab-initio results, further validating the method used in Figs.~\ref{exp_data_Ar}(a)-(b) and ~\ref{exp_data_N}(a). The vertical dashed lines indicate the limiting $k_\mathrm{n,\infty}=\frac{\pi \cdot n}{L}$ values corresponding to the closed system (infinite hard walls potential) with $L=11.1$ nm being the defect-defect distance. In the conduction band, we find that $k_\mathrm{n}(E_\mathrm{n})=\frac{\pi \cdot n}{L_\mathrm{eff}(n)} < k_\mathrm{n,\infty}$, indicating that the effective lengths $L_\mathrm{eff}(n)$ of the QD are larger than $L$ ($i.e.$ the resonant states wavefunctions are characterized by penetrating evanescent modes inside the defect scattering potential), as expected for an open system. The shortest $L_\mathrm{eff}(n)$ are obtained for the DV-DV configuration with 12.1 nm (m1), 13.1 nm (m2) and 12.9 nm (m3), which we attribute to wider scattering potential profiles for DVs compared to SVs. In the valence band, we find that $k_\mathrm{n}(E_\mathrm{n})=\frac{\pi \cdot n}{L_\mathrm{eff}(n)} > k_\mathrm{n,\infty}$, with $L_\mathrm{eff}(n)$ values between 7.9 nm (DV-DV, m-1) and 9.66 nm (DV-SV, m-2). We attribute this pronounced QD shortening to wider scattering potential profiles of both DVs and SVs in the valence band, probably due to mixing with wide spread defect states in the valence band.
\\
\indent Ab-initio calculations for different defect pair combinations containing at least one N ad-atom, $i.e.$ N-DV, N-SV and N-N, are presented in Fig.~\ref{num_data}(e)-(h) for a $(17,0)$ SWNT, along with details of the defect geometries. Remarkably, clear QD states are generated for all three configurations, underlining the potential of N ad-atoms to confine carriers in semiconducting SWNTs and thus to generate intrananotube QDs.
\\
\indent To assess the scattering strengths of the different defects, we calculated the energy dependent conductance in addition to the LDOS for the different combinations of QD-defining scattering defects on the $(16,0)$ and $(17,0)$ SWNTs (see supplementary information). In general, we observe a strong conductance modulation, of the order of 30-40\% with respect to the pristine SWNT, for all three tested defects (double vacancies, single vacancies and chemisorbed C-N), with the DVs having the largest scattering strength in both the conduction and valence bands.
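\\
\indent A qualitative picture of how a pair of defects modulates the conductance can be obtained from a standard Landauer/non-equilibrium Green's function calculation on a toy one-dimensional wire (again with hypothetical parameters, not the Wannier-based transport setup of the supplementary information):
\begin{verbatim}
import numpy as np

t, N, V = -1.0, 60, 2.5   # hopping, scattering-region length, defect potential
d1, d2 = 15, 45           # defect positions (hypothetical)

H = np.diag(np.full(N - 1, t), 1) + np.diag(np.full(N - 1, t), -1)
H[d1, d1] = H[d2, d2] = V

def surface_gf(E, eta=1e-6):
    """Retarded surface Green's function of a semi-infinite 1D chain."""
    z = E + 1j * eta
    sq = np.sqrt(z * z - 4 * t * t + 0j)
    g = (z - sq) / (2 * t * t)
    return g if g.imag <= 0 else (z + sq) / (2 * t * t)

def transmission(E, eta=1e-6):
    """Landauer transmission T(E) = Tr[Gamma_L G Gamma_R G^dagger]."""
    sigma_L = np.zeros((N, N), complex)
    sigma_R = np.zeros((N, N), complex)
    g_s = surface_gf(E)
    sigma_L[0, 0] = t * t * g_s
    sigma_R[-1, -1] = t * t * g_s
    G = np.linalg.inv((E + 1j * eta) * np.eye(N) - H - sigma_L - sigma_R)
    gamma_L = 1j * (sigma_L - sigma_L.conj().T)
    gamma_R = 1j * (sigma_R - sigma_R.conj().T)
    return np.trace(gamma_L @ G @ gamma_R @ G.conj().T).real

energies = np.linspace(-1.9, 1.9, 400)
T = np.array([transmission(E) for E in energies])
# Comparing T with the defect-free case (V = 0) gives the relative conductance
# suppression; sharp resonances correspond to the quantized states of the dot.
\end{verbatim}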
\\
\indent Note that the choice of the zigzag SWNT chiralities in the two ab-initio scenarios is motivated by the different effective masses of the two chiralities ($m^{*}_{(17,0)}>m^{*}_{(16,0)}$), a difference typical of the chirality families $(3n-1,0)$ and $(3n-2,0)$~\cite{ZZ_families}. Taking advantage of recent reports on SWNT chirality control~\cite{chirality_control_EMPA,chirality_control_chinese,chirality_chemistry}, this property could be used in practice to design QDs with different level spacings for the same QD length. From an application point of view, however, QDs generated by DVs will have far superior stability at room temperature due to their high migration barrier above 5 eV ($\sim$~1 eV for a single vacancy)~\cite{Kra06vm}. This value drops by at least 2 eV for N ad-atoms, depending on their chemisorption configuration~\cite{Nitrogen_prb_07,Yma05nitr}.
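\\
\indent To illustrate how the effective mass enters the achievable level spacing, one can use a simple hard-wall, effective-mass estimate (with hypothetical effective masses and dot length chosen for illustration only, not our computed values):
\begin{verbatim}
import numpy as np

HBAR = 1.0545718e-34   # J s
ME   = 9.109383e-31    # electron mass in kg
EV   = 1.602177e-19    # J per eV

def level_spacing(n, L_nm, m_eff):
    """Spacing E_{n+1} - E_n of hard-wall states E_n = (hbar*pi*n/L)^2 / (2 m*)."""
    L = L_nm * 1e-9
    E = lambda m: (HBAR * np.pi * m / L) ** 2 / (2 * m_eff * ME) / EV
    return E(n + 1) - E(n)

# Hypothetical effective masses (in units of m_e) for two zigzag chiralities:
for label, m_eff in [("(16,0)-like", 0.06), ("(17,0)-like", 0.09)]:
    print(label, f": dE_12 = {level_spacing(1, 10.0, m_eff) * 1e3:.0f} meV")
# The heavier chirality yields a smaller level spacing for the same dot length.
\end{verbatim}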
\\
\indent Our ab-initio simulations do not take into account any substrate effects. In the experimental case, the carriers can decay through the substrate, thus limiting their lifetime. This leads to state broadening, measured to lie between about 60 meV and 120 meV in QDs I and II, while the quantized-state widths in the ab-initio simulations vary between about 5 meV and 45 meV. This suggests that a better contrast of the experimental quantized states, especially in the valence band, could be achieved by lowering the nanotube-substrate interaction through $e.g.$ the insertion of atomically thin insulating NaCl films~\cite{Ruffieux_Nature_2016}. This would allow one to gain more insight into the electronic structure of the QDs as well as into the associated scattering physics at the confining defects~\cite{Buchs_PRL}.
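\\
\indent As a rough, textbook estimate (not part of our simulations), the measured widths translate into carrier lifetimes via $\tau \simeq \hbar/\Gamma$: the experimental broadenings of 60-120 meV correspond to $\tau \approx 5$-$11$ fs, whereas the simulated widths of 5-45 meV correspond to $\tau \approx 15$-$130$ fs, consistent with a substrate-limited lifetime in the experiment.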
\section{Conclusions and outlook}
In summary, using low-temperature STM/STS measurements supported by an analytical model and ab-initio simulations, we have demonstrated that intrananotube quantum dots with confined electron and hole states, characterized by energy level spacings well above thermal broadening at room temperature, can be generated in semiconducting SWNTs by structural defects such as vacancies and di-vacancies, as well as by nitrogen ad-atoms. These results, combined with recent progress in the type and spatial control of defect formation~\cite{Robertson_2012,Yoon_2016,Laser_writing_2017} as well as in chirality control~\cite{tunable_QD_defects}, hold high potential for applications in the design of SWNT-based quantum devices. These include $e.g.$ electrically driven single-photon emitters operating at room temperature and at telecom wavelengths. In this context, the observation of quantum confinement effects in the light emitted by cut, sub-10 nm, semiconducting SWNTs~\cite{Dai_2008} can be seen as an additional motivation for investigating the optical properties of our ``QD with leads'' building blocks. This would include $e.g.$ studying optical transition selection rules for different types and configurations of defect pairs~\cite{sel_rules_2006}, combined with experimental studies such as photoluminescence~\cite{Lefebvre06} and $g^{(2)}$ correlation measurements~\cite{Hofmann_2013} in suspended SWNT devices, as well as photocurrent imaging~\cite{Buchs_Nat_comm} and spectroscopy~\cite{Gabor_2009}.
\section*{Acknowledgements}
The authors thank Ethan Minot, Lee Aspitarte, Jhon Gonzalez, Andres Ayuela, Omjoti Dutta and Arkady Krasheninnikov for fruitful discussions.
The work of DB is supported by Spanish Ministerio de Econom\'ia y Competitividad (MINECO) through the project FIS2014-55987-P and by the (LTC) QuantumChemPhys. LM acknowledges support from the BMBF-project WireControl (FKZ16ES0294) and computing time for the supercomputers JUROPA and JURECA at the J\"ulich Supercomputer Centre (JSC).
\clearpage
\section*{References}