The current examples have been great for picking out generic things on web pages, but the typical use case requires understanding the page format and then getting something specific off the page, like a table of data. You typically use the pretty-print facility (`prettify()`) to view the page contents and then issue a series of tedious calls to march down through the structure and pull out the text. The existing bs4 docs and examples are fine for that.
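For example, here is a minimal sketch of that hand-coded style, pulling the rows out of the first table on a page. The URL and table layout are hypothetical; the calls are standard bs4:

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical URL; any page with a plain <table> would do.
url = "https://example.com/report.html"
html = requests.get(url, timeout=10).text

soup = BeautifulSoup(html, "html.parser")
# print(soup.prettify())  # inspect the structure first to locate the table

table = soup.find("table")  # grab the first table on the page
rows = []
for tr in table.find_all("tr"):
    # A row may mix <th> (header) and <td> (data) cells.
    cells = [cell.get_text(strip=True) for cell in tr.find_all(["th", "td"])]
    rows.append(cells)

for row in rows:
    print(row)
```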
You can get quite far with a very specific problem and easily code up something that grabs what you need off a page using these hand-coded methods, but the code is probably going to be throwaway and will break as soon as the target URL changes the format of its pages.
However ... this is NOT what a true "ScreenScraper" app does. True ScreenScrapers build on bs4 and provide templates for various kinds of web pages, automatically stripping out all the crap like inline ads, sidebars, and the rest of the "visual cruft" while returning only the "content". The best ones can identify the main images the page refers to and the body of text that is the page's main subject. There is a project/product called "readability" that did this and was ported to Python, but the port stopped being updated when readability became a commercial product. You can still find the early Python code, which uses a version of bs, at: https://github.com/gfxmonk/python-readability
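The flavor of that approach looks something like this. The sketch below uses the `Document` class from the readability-lxml fork, which descends from the same code; the exact API of the gfxmonk port may differ, and the URL is hypothetical:

```python
import requests
from readability import Document  # pip install readability-lxml

# Hypothetical article URL.
url = "https://example.com/some-news-article"
html = requests.get(url, timeout=10).text

doc = Document(html)
print(doc.title())    # best guess at the article's title
print(doc.summary())  # cleaned-up HTML of just the main content, cruft removed
```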
The big players in this area are obviously companies like Google, Bing, and Facebook, which have extremely sophisticated methods for dissecting web pages and getting at the real "content". The rest of the universe seems to have moved on to using a web API instead, one that just hands you the required data in response to a special URL syntax.
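To show the contrast, here is a minimal sketch of the API style with an entirely hypothetical endpoint and parameters: you build a URL, and the server hands back structured data with no HTML parsing at all:

```python
import requests

# Hypothetical endpoint and parameters; real services publish their own URL syntax.
url = "https://api.example.com/v1/reports"
params = {"year": 2013, "format": "json"}

resp = requests.get(url, params=params, timeout=10)
resp.raise_for_status()

data = resp.json()  # already-structured data, no scraping required
print(data)
```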