Large Language Model Based Solutions How To Deliver Value With Cost Effective Generative AI Applications 1st Edition Shreyas Subramanian
CONTENTS AT A GLANCE
INTRODUCTION xix
CHAPTER 1 Introduction 1
CHAPTER 2 Tuning Techniques for Cost Optimization 29
CHAPTER 3 Inference Techniques for Cost Optimization 49
CHAPTER 4 Model Selection and Alternatives 89
CHAPTER 5 Infrastructure and Deployment Tuning Strategies 123
CONCLUSION 163
INDEX 181
Large Language Model–Based Solutions
HOW TO DELIVER VALUE WITH COST-EFFECTIVE
GENERATIVE AI APPLICATIONS
Shreyas Subramanian
Copyright © 2024 by John Wiley & Sons Inc. All rights reserved.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means,
electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of
the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through
payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923,
(978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission
should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201)
748-6011, fax (201) 748-6008, or online at www.wiley.com/go/permission.
Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its
affiliates in the United States and other countries and may not be used without written permission. All other trademarks are
the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in
this book.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this
book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book
and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be
created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not
be suitable for your situation. You should consult with a professional where appropriate. Further, readers should be aware
that websites listed in this work may have changed or disappeared between when this work was written and when it is read.
Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not
limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our Customer Care
Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.
If you believe you’ve found a mistake in this book, please bring it to our attention by emailing our reader support team at
[email protected] with the subject line “Possible Book Errata Submission.”
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in
electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.
ABOUT THE AUTHOR
Dr. Shreyas Subramanian has been at the forefront of driving revolutionary advancements in machine
learning (ML) and artificial intelligence that resonate with businesses and researchers alike. With a PhD in
aerospace engineering from Purdue University, Dr. Subramanian currently serves as a principal data scientist at
Amazon, a position held by few people worldwide. His prolific research record includes 26 academic papers
and six patents, with significant citations to date. His two previous books in the field of AI have sold thousands
of copies, with his latest book, Applied Machine Learning and High-Performance Computing, ranking among the
top 50 AI books sold on Amazon and one of the few books bridging the gap between HPC and AI. His
earlier AWS AI certification guide was ranked the ninth bestseller in the AI category worldwide.
With a rich and extensive career, Dr. Subramanian has championed the development and application of AI/
ML models while carving a distinct leadership path within Amazon. His achievements range from implementing
AI/ML solutions for use cases in core verticals, including manufacturing, aerospace, automotive, financial services,
and healthcare, to fundamental Artificial Intelligence research. Particularly noteworthy is his role as the creator of
the open-source ML package ezsmdeploy, which simplifies the deployment of models on the cloud to a single-line
API call and has garnered more than 50,000 downloads so far. Most recently, Dr. Subramanian has been involved
in helping train generative large language models like ChatGPT for customers of Amazon Web Services in a
cost-efficient way. This speaks volumes about his influence in democratizing ML and fostering a community of
practitioners.
Dr. Subramanian’s PhD dissertation focused on developing algorithms for complex aerospace systems
design problems. Since then, he has published several seminal papers on topics such as evolutionary algorithms,
surrogate modeling, distributed optimization, deep learning, and language modeling. Dr. Subramanian’s compre-
hensive expertise extends to academia and industry, where he has served as a reviewer for prominent journals
and conferences, contributing to the academic community. Recently, Dr. Subramanian won the Best Presentation
Award at the Pattern Recognition and Machine Learning 2023 conference for his work on a novel scheduler for
faster language model training. He has also been an invited judge and session chair for major conferences such as
IEEE, INFORMS, and AIAA.
Dr. Subramanian’s research has attracted significant interest from government funding agencies. He was
invited to serve on five NSF review panels on artificial intelligence to evaluate proposals worth up to $2 million
in Small Business Innovation Research grants for startups and small businesses. One of Dr. Subramanian’s signifi-
cant contributions lies in his ability to secure funding for pioneering projects in topics related to applied machine
learning. His skill in proposal writing secured more than $4.6 million in funding from NASA while he was the
director of research at a NASA subcontractor, where he helped identify and solve problems related to aviation
safety using AI/ML tools on the cloud. Dr. Subramanian exemplifies leadership in the AI research community with
elite academic credentials and impactful real-world contributions. He was recently nominated and selected to be
an IEEE senior member, a distinction held by only 8% of IEEE’s 400,000+ members worldwide.
In his current role as a principal data scientist at Amazon, Dr. Subramanian’s contributions have led to sub-
stantial cost savings for numerous businesses. His efforts in architecting, building, and scaling large ML models
have resulted in remarkable annual savings of hundreds of thousands of dollars for clients. Moreover, his guid-
ance has led to the success of end-to-end advanced driver assistance systems (ADASs) and self-driving car pro-
jects, underpinning the vital intersection of ML and automotive technology, which is currently considered a key
milestone in the field of AI. At Amazon, Dr. Subramanian leads a team of machine learning solutions architects
and researchers across several projects. Internally at Amazon, several of his ideas have been incorporated into
new product features for Amazon’s machine learning services. By identifying areas of cost optimization within
machine learning operations, Dr. Subramanian has collectively saved millions of dollars for clients. For example,
he reduced production costs by 8% per quarter for one of the world’s largest contract manufacturers, saving
millions of dollars. In another instance, Dr. Subramanian reduced the cost of tuning a large number of models
for a customer by more than 99%, from hundreds of thousands of dollars per year to just dozens.
This extreme interest in applying cost optimization principles to “do more with less” has led to this book on
optimizing performance with cost in the era of large language models.
Dr. Subramanian continues publishing cutting-edge papers in the field of AI, filing high-value patents, writing
books with a unique viewpoint, and speaking at major AI conferences.
ABOUT THE TECHNICAL EDITOR
Rabi Jay is a renowned expert in digital transformation and enterprise AI, boasting more than 15 years of rich
experience in guiding businesses through the complexities of technology-driven change. His expertise encom-
passes a wide range of areas, including AI-driven martech innovation, platform modernization, enterprise asset
consolidation, and efficiency enhancement through automated workflows. Jay’s proficiency is further reinforced
by an impressive array of certifications spanning AWS, Azure, SAP, ITIL, TOGAF, and SAFe Agile, demonstrating
his comprehensive understanding of both the technical and strategic aspects of digital transformation.
Beyond his technical acumen, Jay has demonstrated exceptional leadership and global strategic insight as a
global alliance manager with Deloitte. He skillfully leads large-scale, multinational projects across diverse sectors
such as retail, food, consumer products, aerospace, and software technology. As a VP of digital transformation,
he championed an integrated practice using human-centered design, AI platforms, and change management built
upon the principles of design thinking and process reengineering. An accomplished author and speaker, Jay has
contributed significantly to the thought leadership on AI and cloud technologies, with notable books including
SAP NetWeaver Portal Technology: The Complete Reference (McGraw-Hill, 2008) and Enterprise AI in the
Cloud: A Practical Guide to Deploying End-to-End Machine Learning and ChatGPT Solutions (Wiley, 2024).
His LinkedIn newsletter, “Enterprise AI Transformation: Playbook for Professionals and Businesses to Implement
AI,” is a testament to his passion for sharing knowledge and best practices in generative AI, cloud adoption, and
AI implementation. Outside his professional pursuits, Jay is an avid traveler, golfer, ping-pong enthusiast, and
dedicated self-development coach with a keen interest in yoga and meditation.
CONTENTS
INTRODUCTION xix
CHAPTER 1: INTRODUCTION 1
CHAPTER 2: TUNING TECHNIQUES FOR COST OPTIMIZATION 29
CHAPTER 3: INFERENCE TECHNIQUES FOR COST OPTIMIZATION 49
CHAPTER 4: MODEL SELECTION AND ALTERNATIVES 89
CHAPTER 5: INFRASTRUCTURE AND DEPLOYMENT TUNING STRATEGIES 123
CONCLUSION 163
INDEX 181
INTRODUCTION
[FIGURE 1: Google Trends chart of interest over time for the term Generative AI worldwide]
Generative AI (GenAI) and LLMs represent two interlinked domains within artificial intelligence, both focusing
on content generation but from slightly different angles. GenAI encompasses a broader category of AI technolo-
gies aimed at creating original content. While LLMs excel at text processing and production, GenAI places a
broader emphasis on creativity and content generation across different mediums. Understanding the distinctions
and potential synergies between these two areas is crucial to fully harness the benefits of AI in various applica-
tions, ranging from automated customer service and content creation to more complex tasks such as code genera-
tion and debugging. This field has seen rapid advancements, enabling enterprises to automate intelligence across
multiple domains and significantly accelerate innovation in AI development. On the other hand, LLMs, being a
subset of GenAI, are specialized in processing and generating text. They have demonstrated remarkable capabili-
ties, notably in natural language processing tasks and beyond, with a substantial influx of research contributions
propelling their success.
The proliferation of LLMs and GenAI applications has been fueled by both competitive advancements and col-
laborative efforts within the AI community, with various stakeholders including tech giants, academic institutions,
and individual researchers contributing to the rapid progress witnessed in recent years. In the following sections,
we will talk about the importance of cost optimization in this era of LLMs, explore a few case studies of success-
ful companies in this area, and describe the scope of the rest of the book.
Conducting a thorough cost-value assessment to rank and prioritize GenAI implementations based on potential
impact, cost, and complexity can lead to better financial management and realization of ROI in GenAI initiatives.
Lastly, the most common pattern seen today is for “model providers” to invest heavily up front and recoup their
costs by offering an API, and for “model consumers” to minimize their costs by consuming GenAI model APIs
without any up-front investment or even data of their own.
Computational demands: LLMs like GPT-3 or BERT require substantial computational resources for training
and inference. The high computational demands translate to increased operational costs and energy consumption,
which may create barriers, especially for small to medium-sized enterprises (SMEs) with limited resources.
Opportunity: The challenge of computational demands opens the door for innovation in developing more effi-
cient algorithms, hardware accelerators, and cloud-based solutions that can reduce the cost and energy footprint
of operating LLMs.
Model complexity: The complexity of LLMs, both in terms of architecture and the amount of training data
required, presents challenges in achieving cost optimization. The model’s size often correlates with its perfor-
mance, with larger models generally delivering better results at the expense of increased costs.
Opportunity: This challenge catalyzes the exploration and adoption of techniques such as model prun-
ing, quantization, and knowledge distillation that aim to reduce model size while retaining or even enhancing
performance.
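To make one of these techniques concrete, the following is a minimal pure-Python sketch of symmetric post-training int8 quantization. It is illustrative only: the function names are our own, and production systems would rely on framework support (for example, PyTorch's quantization utilities) rather than hand-rolled code.

```python
def quantize_int8(weights):
    """Map float weights onto the int8 range [-127, 127] with a single
    shared scale (symmetric post-training quantization)."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0  # size of one quantization step
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float weights: w is roughly q * scale."""
    return [q * scale for q in quantized]

weights = [0.42, -1.27, 0.0, 0.9]
q, scale = quantize_int8(weights)  # q == [42, -127, 0, 90]
restored = dequantize(q, scale)
# Each restored weight is within one quantization step of the original,
# while the stored values now fit in 8 bits instead of 32.
assert all(abs(w - r) <= scale for w, r in zip(weights, restored))
```

The cost saving comes from the 4x reduction in memory and bandwidth per weight; the price is the small, bounded rounding error seen above.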
Data privacy and security: Handling sensitive data securely is a paramount concern, especially in sectors such as
healthcare and finance. The cost of ensuring data privacy and security while training and deploying LLMs can be
significant.
Opportunity: The necessity for robust data privacy and security solutions fosters innovation in privacy-pre-
serving techniques, such as federated learning, differential privacy, and encrypted computation.
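As a toy illustration of one of these techniques, the sketch below adds Laplace noise to a counting query, the core mechanism of differential privacy. The function names are our own, and a real deployment would use a vetted library rather than this sketch.

```python
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise as the difference of two
    independent exponential samples with mean `scale`."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(values, predicate, epsilon=1.0):
    """Differentially private count. A counting query has sensitivity 1,
    so Laplace noise with scale 1/epsilon gives epsilon-DP; smaller
    epsilon means more noise and stronger privacy."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [34, 17, 52, 41, 15]
noisy = dp_count(ages, lambda a: a >= 18, epsilon=0.5)
# `noisy` is the true count (3) plus Laplace noise with scale 2, so the
# presence of any single record can no longer be inferred from the output.
```

The cost trade-off mentioned in the text shows up directly here: stronger privacy (smaller epsilon) degrades utility, so tuning epsilon is itself an optimization problem.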
Scalability: Scaling GenAI applications to accommodate growing data and user demands without a proportional
increase in costs is a formidable challenge.
Opportunity: This challenge drives the advancement of scalable architectures and technologies that allow for
efficient scaling, such as microservices, container orchestration, and serverless computing.
Model generalizability and domain adaptation: Achieving high performance on domain-specific tasks often
requires fine-tuning LLMs with additional data, which can be cost-intensive.
Opportunity: This creates a niche for developing techniques and frameworks that facilitate efficient domain
adaptation and transfer learning, enabling cost-effective customization of LLMs for various domain-specific
applications.
Evolving regulatory landscape: The regulatory landscape surrounding AI and data usage is continually evolving,
potentially incurring compliance costs.
Opportunity: The dynamic regulatory environment stimulates the development of adaptable AI systems and
compliance monitoring tools that can mitigate the risks and costs associated with regulatory compliance.
Each of these challenges, while posing hurdles, concurrently lays the groundwork for innovation and advance-
ments that can significantly contribute to cost optimization in GenAI applications with large foundational mod-
els. The confluence of these challenges is an important factor in propelling the field of GenAI forward, fostering
the development of cost-effective, efficient, and robust GenAI packages, software, and solutions. The myriad
of factors contributing to the high costs in the development, deployment, and operation of GenAI and LLMs
necessitates a structured approach toward cost optimization to ensure the sustainable adoption and scalability of
these transformative technologies. This book dives into the details of what makes GenAI applications powerful
but costly and highlights several aspects of balancing performance with cost to ensure the success of organiza-
tions that make use of large foundational models. Next, we will look at a few case studies as motivation for the
rest of the book.
AI practitioners: For those immersed in the day-to-day endeavor of building, tuning, and deploying AI models,
this book offers a collection of strategies and techniques for cost optimization, helping to maximize the value and
impact of your work while minimizing expenditure.
Researchers: Academics and researchers delving into the frontiers of GenAI and large language models will find
a structured discourse on the economic aspects that underpin the practical deployment of research findings. This
book aims to bridge the chasm between academic exploration and real-world application, shedding light on cost-
effectiveness as a critical vector.
Engineers: Engineers standing at the confluence of software, hardware, and AI will discover a wealth of knowl-
edge on how to architect, implement, and optimize systems for cost efficiency while harnessing the potential of
large language models.
Educators and students: Educators aiming to equip students with a holistic understanding of GenAI will find this
book a valuable resource. Similarly, students aspiring to delve into this exciting domain will garner a pragmatic
understanding of the cost dynamics involved.
Tech enthusiasts: If you are captivated by the unfolding narrative of AI and its potential to shape the future, this
book offers a lens through which you can appreciate the economic dimensions that are integral to making this
promise a reality.
Policy makers: Those engaged in shaping the policy framework around AI and data utilization will find insightful
discussions on the cost considerations that are imperative for fostering a sustainable and inclusive AI ecosystem.
Decision-makers: For decision-makers steering the strategic direction of organizations, this book provides a lucid
understanding of the economic landscape of GenAI applications. It elucidates the cost implications, risks, and
opportunities that accompany the journey toward leveraging GenAI for business advantage.
In essence, this book caters to a large and diverse readership, aiming to engender a nuanced understanding of cost
optimization in the realm of GenAI and large language models. Through a blend of technical exposition, real-
world case studies, and strategic insights, it seeks to foster an informed dialogue and pragmatic action toward
cost-effective and responsible AI deployment.
SUMMARY
This chapter introduced the world of GenAI and LLMs and highlighted the importance of cost optimization.
It presented three micro case studies to help you further understand what it takes for even large, well-funded
organizations to achieve scale while controlling costs.
1
Introduction
WHAT’S IN THIS CHAPTER?
Neural networks consist of layers of interconnected nodes that transmit signals between input and output. By training on huge amounts of data, these
multilayered networks could potentially learn representations and patterns too complex for humans to hard-code
using rules.
In the 2010s, deep learning finally enabled a breakthrough in AI capabilities. With sufficient data and comput-
ing power, deep neural networks achieved remarkable accuracy in perception tasks such as image classification
and speech recognition. However, these systems were narrow in scope, focused on pattern recognition in specific
domains. Another challenge was that they required massive labeled datasets for supervised training. Obtaining
such rich annotation at scale for complex cognitive tasks proved infeasible.
This is where self-supervised generative modeling opened new possibilities. By training massive neural network
models to generate representations from unlabeled data itself, systems could learn powerful feature representa-
tions. Self-supervised learning could scale more easily by utilizing the abundant digital data available on the
Internet and elsewhere. Language modeling emerged as a promising approach, where neural networks are trained
to predict the next word in a sequence of text.
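The next-word-prediction objective can be illustrated with a toy bigram model that simply counts which word follows which. This is a deliberately tiny stand-in for the neural versions described here, not how an LLM is actually implemented.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count next-word frequencies: a minimal stand-in for the
    next-word-prediction objective used to pretrain language models."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for cur, nxt in zip(words, words[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently observed next word after `word`."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = ["the cat sat", "the cat ran", "the dog sat"]
model = train_bigram(corpus)
assert predict_next(model, "the") == "cat"  # seen twice vs. once for "dog"
```

A neural language model replaces the count table with a learned function of the entire preceding context, but the training signal, predicting the next token, is the same.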
GPT-3 exhibited zero-shot and few-shot learning capabilities never observed before, stunning the AI community. Without any gradient
updates or fine-tuning, GPT-3 could perform NLP tasks from just task descriptions and a few examples. As such,
GPT-3 highlighted the power of scale in language models. Its surprising effectiveness motivated intense research
interest in training even larger models. This led to the exploration of LLMs with model parameters in the trillion+
range. Startups such as Anthropic and public efforts such as PaLM, Gopher, and LLaMA pushed model scale
drastically with significant investments in the space. Several tech companies and startups are now using (and
training their own) LLMs with hundreds of billions or even a trillion plus parameters. Models like PaLM, Flan,
LaMDA, and LLaMA have demonstrated the scalability of language modeling objectives using the transformer
architecture. At the time of this writing, Anthropic has developed Claude, one of the first LLMs openly released
with conversational abilities rivaling GPT-3.
You can see that all the models mentioned are related, much like the Tree of Life. In other words, anatomical
similarities and differences in a phylogenetic tree are similar to the architectural similarities found in language
models. For example, Figure 1.1 shows the evolutionary tree of LLMs and highlights some of the most popular
models used in production so far. The models that belong to the same branch are more closely related, and the
vertical position of each model on the timeline indicates when it was released. The transformer models are repre-
sented by colors other than gray: decoder-only models like GPT, OPT and their derivatives, encoder-only models
like BERT, and the encoder-decoder models T5 and Switch are shown in separate main branches. As mentioned
earlier, models have successively “grown” larger. Interestingly, this is visually and objectively similar to the evolu-
tion of intelligent species, as shown in Figure 1.2. A deeper comparison is out of the scope of this book, but for
more information on either of these evolutionary trees, refer to the links in the captions.
Increasing the model size, compute, and data seems to unlock new abilities in LLMs, which exhibit impressive
performance on question answering, reasoning, and text generation with simple prompting techniques. By train-
ing LLMs to generate code, models such as AlphaCode and Codex display proficient coding skills. LLMs can
chat, translate, summarize, and even write mathematical proofs aided by suitable prompting strategies.
The key shift from pre-trained language models (PLMs) to LLMs is that scale seems to bring about qualitative transitions beyond incremental
improvements. LLMs display certain emergent capabilities such as few-shot learning, chain of reasoning, and
instruction following not observed in smaller models. These abilities emerge suddenly once model scale crosses a
sufficient threshold, defying smooth scaling trends.
LLMs entail a paradigm shift in AI from narrowly specialized systems to versatile, general-purpose models. Lead-
ing experts feel recent LLMs display signs of approaching human-level artificial general intelligence. From sta-
tistical to neural networks, the steady progress in language modeling scaled up by orders of magnitude has been
the missing link enabling this rapid advancement toward more human-like flexible intelligence. The astounding
capabilities of GPT-3 highlighted the power of scale in language models. This has led to intense research interest
in developing even larger LLMs with model parameters in the trillion range. The assumption is that bigger is bet-
ter when it comes to language AI. Scaling model size along with compute and data seems to unlock new abilities
and performance improvements.
Overview of GenAI Applications and Large Language Models
The largest LLMs have shown the ability to perform human-level question answering and reasoning in many
domains without any fine-tuning. With proper prompting techniques like chain of thought, they can solve com-
plex arithmetic, logical, and symbolic reasoning problems. LLMs can intelligently manipulate symbols, numbers,
concepts, and perform multistep inferences when presented with the right examples.
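A chain-of-thought prompt is, mechanically, just careful string assembly: each worked example shows its reasoning before its answer, and the final question invites the model to do the same. The template below is our own illustration of the pattern, not a fixed standard.

```python
def chain_of_thought_prompt(question, examples):
    """Assemble a few-shot chain-of-thought prompt. Each example is a
    (question, reasoning, answer) triple; the reasoning precedes the
    answer to nudge the model toward step-by-step inference."""
    parts = []
    for q, reasoning, answer in examples:
        parts.append(f"Q: {q}\nA: {reasoning} The answer is {answer}.")
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

examples = [
    ("If I have 3 apples and buy 2 more, how many do I have?",
     "Start with 3, add 2, giving 5.", "5"),
]
prompt = chain_of_thought_prompt("What is 4 + 7?", examples)
assert "Let's think step by step." in prompt
```

Because prompting changes only the input text, not the model weights, it is one of the cheapest levers discussed in this book for improving reasoning performance.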
But of course, language generation is the main area where LLMs’ capabilities have taken a huge leap. LLMs
can generate fluent, coherent, and human-like text spanning news articles, poetry, dialogue, code, mathematical
proofs, and more. The creativity and versatility displayed in conditional and unconditioned text generation are
remarkable. Few-shot prompting allows controlling attributes such as length, style, content, etc. Text-to-image
generation has also made rapid progress leveraging LLMs. The exponential growth in model parameters has
been matched by growth in computing power and dataset availability. Modern GPU clusters, the emergence of model
parallelism techniques, and optimized software libraries have enabled training LLMs with trillions of parameters.
Massive text corpora for pre-training are sourced from the Internet and digitization initiatives.
All this has fueled tremendous excitement and optimism about the future of AI. LLMs display a form of algo-
rithmic and statistical intelligence to solve many problems automatically given the right data. Leading AI experts
believe rapid recent progress is bringing us closer to artificial general intelligence than before. Large language
models may be the missing piece that enables machines to learn concepts, infer chains of reasoning, and solve
problems by formulating algorithms like humans.
LLMs still have major limitations. They are expensive and difficult to put into production, prone to hallucina-
tion, lack common sense, and struggle with complex symbolic reasoning. Model capabilities are also severely
constrained by the training data distribution. LLMs can propagate harmful biases, generate toxic outputs, and
be manipulated in dangerous ways. There are rising concerns around AI ethics, governance, and risks that merit
careful consideration. Responsible development of AI aligned with human values is necessary. However, we
already see several generative AI (GenAI) applications with these LLMs at their core! GenAI heralds a paradigm
shift from narrow analytical intelligence toward creative and versatile systems. GenAI applications powered by
models such as GPT-3, PaLM, and Claude are displaying remarkable abilities previously thought impossible
for machines.
Conversational agents and chatbots: One of the most popular applications of LLMs is conversational agents and
chatbots. Systems like Anthropic’s Claude and Google’s LaMDA leverage the language generation skills of LLMs
to conduct natural conversations. They can answer questions, offer advice, and discuss open-ended topics.