- coverpage
- Title Page
- Credits
- About the Author
- About the Reviewer
- www.PacktPub.com
- Customer Feedback
- Preface
- What this book covers
- What you need for this book
- Who this book is for
- Conventions
- Reader feedback
- Customer support
- Downloading the example code
- Downloading the color images of this book
- Errata
- Piracy
- Questions
- Planning for Ceph
- What is Ceph?
- How does Ceph work?
- Ceph use cases
- Replacing your storage array with Ceph
- Performance
- Reliability
- The use of commodity hardware
- Specific use cases
- OpenStack- or KVM-based virtualization
- Large bulk block storage
- Object storage
- Object storage with custom application
- Distributed filesystem - web farm
- Distributed filesystem - SMB file server replacement
- Infrastructure design
- SSDs
- Consumer
- Prosumer
- Enterprise SSDs
- Enterprise - read intensive
- Enterprise - general usage
- Enterprise - write intensive
- Memory
- CPU
- Disks
- Networking
- 10G networking requirement
- Network design
- OSD node sizes
- Failure domains
- Price
- Power supplies
- How to plan a successful Ceph implementation
- Understanding your requirements and how they relate to Ceph
- Defining goals so that you can gauge if the project is a success
- Choosing your hardware
- Training yourself and your team to use Ceph
- Running a PoC to determine if Ceph meets the requirements
- Following best practices to deploy your cluster
- Defining a change management process
- Creating a backup and recovery plan
- Summary
- Deploying Ceph
- Preparing your environment with Vagrant and VirtualBox
- System requirements
- Obtaining and installing VirtualBox
- Setting up Vagrant
- The ceph-deploy tool
- Orchestration
- Ansible
- Installing Ansible
- Creating your inventory file
- Variables
- Testing
- A very simple playbook
- Adding the Ceph Ansible modules
- Deploying a test cluster with Ansible
- Change and configuration management
- Summary
- BlueStore
- What is BlueStore?
- Why was it needed?
- Ceph's requirements
- Filestore limitations
- Why is BlueStore the solution?
- How BlueStore works
- RocksDB
- Deferred writes
- BlueFS
- How to use BlueStore
- Upgrading an OSD in your test cluster
- Summary
- Erasure Coding for Better Storage Efficiency
- What is erasure coding?
- K+M
- How does erasure coding work in Ceph?
- Algorithms and profiles
- Jerasure
- ISA
- LRC
- SHEC
- Where can I use erasure coding?
- Creating an erasure-coded pool
- Overwrites on erasure code pools with Kraken
- Demonstration
- Troubleshooting the 2147483647 error
- Reproducing the problem
- Summary
- Developing with Librados
- What is librados?
- How to use librados?
- Example librados application
- Example of the librados application with atomic operations
- Example of the librados application that uses watchers and notifiers
- Summary
- Distributed Computation with Ceph RADOS Classes
- Example applications and the benefits of using RADOS classes
- Writing a simple RADOS class in Lua
- Writing a RADOS class that simulates distributed computing
- Preparing the build environment
- RADOS class
- Client librados applications
- Calculating MD5 on the client
- Calculating MD5 on the OSD via RADOS class
- Testing
- RADOS class caveats
- Summary
- Monitoring Ceph
- Why it is important to monitor Ceph
- What should be monitored
- Ceph health
- Operating system and hardware
- SMART stats
- Network
- Performance counters
- PG states - the good, the bad, and the ugly
- The good
- The active state
- The clean state
- Scrubbing and deep scrubbing
- The bad
- The inconsistent state
- The backfilling, backfill_wait, recovering, and recovery_wait states
- The degraded state
- Remapped
- The ugly
- The incomplete state
- The down state
- The backfill_toofull state
- Monitoring Ceph with collectd
- Graphite
- Grafana
- collectd
- Deploying collectd with Ansible
- Sample Graphite queries for Ceph
- Number of Up and In OSDs
- Showing most deviant OSD usage
- Total number of IOPS across all OSDs
- Total MBps across all OSDs
- Cluster capacity and usage
- Average latency
- Custom Ceph collectd plugins
- Summary
- Tiering with Ceph
- Tiering versus caching
- How Ceph's tiering functionality works
- What is a bloom filter?
- Tiering modes
- Writeback
- Forward
- Read-forward
- Proxy
- Read-proxy
- Use cases
- Creating tiers in Ceph
- Tuning tiering
- Flushing and eviction
- Promotions
- Promotion throttling
- Monitoring parameters
- Tiering with erasure-coded pools
- Alternative caching mechanisms
- Summary
- Tuning Ceph
- Latency
- Benchmarking
- Benchmarking tools
- Fio
- Sysbench
- Ping
- iPerf
- Network benchmarking
- Disk benchmarking
- RADOS benchmarking
- RBD benchmarking
- Recommended tunings
- CPU
- Filestore
- VFS cache pressure
- WBThrottle and/or nr_requests
- Filestore queue throttling
- filestore_queue_low_threshhold
- filestore_queue_high_threshhold
- filestore_expected_throughput_ops
- filestore_queue_high_delay_multiple
- filestore_queue_max_delay_multiple
- PG Splitting
- Scrubbing
- OP priorities
- The Network
- General system tuning
- Kernel RBD
- Queue Depth
- ReadAhead
- PG distributions
- Summary
- Troubleshooting
- Repairing inconsistent objects
- Full OSDs
- Ceph logging
- Slow performance
- Causes
- Increased client workload
- Down OSDs
- Recovery and backfilling
- Scrubbing
- Snaptrimming
- Hardware or driver issues
- Monitoring
- iostat
- htop
- atop
- Diagnostics
- Extremely slow performance or no IO
- Flapping OSDs
- Jumbo frames
- Failing disks
- Slow OSDs
- Investigating PGs in a down state
- Large monitor databases
- Summary
- Disaster Recovery
- What is a disaster?
- Avoiding data loss
- What can cause an outage or data loss?
- RBD mirroring
- The journal
- The rbd-mirror daemon
- Configuring RBD mirroring
- Performing RBD failover
- RBD recovery
- Lost objects and inactive PGs
- Recovering from a complete monitor failure
- Using Ceph's object store tool
- Investigating asserts
- Example assert
- Summary